Patent 11863438
DESCRIPTION OF EMBODIMENTS

On some networks, CPEs belonging to different tenants do not need to receive or send routes from each other. However, based on a network deployment requirement, even CPEs belonging to a same tenant may be unable to directly receive or send routes from each other; instead, the CPEs send routes to and receive routes from a specific network node such as an RR. For example, as shown in FIG. 1, CPE1 and CPE2 cannot directly receive or send a route from each other. When a routing path between CPE1 and CPE2 needs to be established, CPE1 generally first sends routing information to the RR, and the RR then sends the routing information to CPE2. In this way, the routing path between CPE1 and CPE2 is established. CPE1 and CPE2 belong to the same tenant, and the other CPEs belong to another tenant.

Generally, the RR is connected to a plurality of CPEs belonging to different tenants. Therefore, when the RR receives routing information sent by a CPE, the RR parses a routing feature in the routing information and determines, from a plurality of network nodes adjacent to the RR, a network node (which may be a CPE or another RR) belonging to the tenant corresponding to the routing information. Specifically, the RR compares the routing feature obtained by parsing with the routing feature of each route that the RR can advertise to another network node, and determines the network node corresponding to a route whose routing feature is consistent, so that the RR can send the routing information to the determined network node. However, this manner has a relatively complex implementation process: the routing feature needs to be parsed from the routing information and then compared to determine the network node that needs to receive the routing information. Besides, there is generally a relatively large quantity of network nodes adjacent to the RR, so there is also a large quantity of routes whose routing features the RR needs to compare. This takes a long time and degrades the performance of the network nodes.

Based on this, to resolve the foregoing technical problem, in this embodiment of this application, a tenant identifier corresponding to the received routing information may be used to determine the network node that needs to receive the routing information. This shortens the time needed to determine the network node and improves the performance of the network nodes. Specifically, a tenant identifier may be preset for the tenant to which each network node on a network belongs. Network nodes belonging to a same tenant correspond to a same tenant identifier, and different tenants correspond to different tenant identifiers. Because network nodes belonging to different tenants usually do not need to receive and send routes to each other, after receiving routing information sent by a second network node adjacent to the first network node, the first network node may advertise the route only to a third network node that has the same tenant identifier as the second network node, and does not need to advertise the route to network nodes corresponding to other tenant identifiers. Compared with the manner in which the routing feature in the routing information is parsed and compared to determine the network node that needs to receive the routing information, this manner is simpler.
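The contrast described above can be made concrete with a small sketch. The following Python fragment is illustrative only: the peer names, the `advertised_features` and `peer_tenant` tables, and the function names are assumptions made for this example rather than structures defined in this embodiment.

```python
# Illustrative comparison of two ways an RR might pick the peers that should
# receive a route. All data structures here are hypothetical.

# Approach 1: parse the routing feature and compare it against the routing
# feature advertised toward every adjacent node (slow when there are many peers).
def peers_by_feature(route_feature, advertised_features):
    # advertised_features: peer name -> routing feature the RR may advertise to it
    return [peer for peer, feature in advertised_features.items()
            if feature == route_feature]

# Approach 2: look up the preset tenant identifier of the sending peer and
# advertise only to peers configured with the same tenant identifier.
def peers_by_tenant(sending_peer, peer_tenant):
    # peer_tenant: peer name -> preset tenant identifier
    tenant_id = peer_tenant[sending_peer]
    return [peer for peer, tid in peer_tenant.items()
            if tid == tenant_id and peer != sending_peer]

if __name__ == "__main__":
    peer_tenant = {"CPE1": "tenant1", "CPE2": "tenant1", "CPE3": "tenant2"}
    # The RR received routing information from CPE1; only CPE2 shares its tenant.
    print(peers_by_tenant("CPE1", peer_tenant))   # ['CPE2']
```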
The time period required for the first network node to advertise the route may be shortened, and performance of the first network node is improved. For example, one of scenarios in this embodiment of this application may be applied to an SD-WAN network topology structure shown inFIG.2. CPE1and CPE2belong to a tenant1, and a routing path may be established by using an RR1. A network formed by the CPE1and the CPE2that belong to the tenant1may also be referred to as a network slice1, for example, a slice1. In other words, different tenant identifiers may represent different network slices. CPE3, CPE4, and CPE5belong to a tenant2, and a routing path may be established by using the RR1and/or an RR2. A network formed by the CPE3, the CPE4, and the CPE5that belong to the tenant2may also be referred to as a network slice2, for example, a slice2. CPE6belongs to a tenant3. If the CPE1currently needs to establish the routing path with the CPE2, the CPE1may advertise a route to the RR1, in other words, send routing information to the RR1. After receiving the routing information, the RR1may determine that the routing information sent by the CPE1corresponds to a tenant identifier of the tenant1. Then, the RR1may determine the CPE2belonging to the tenant1, and send routing information to the determined CPE2, so that the routing path is established between the CPE1and the CPE2by using the RR1. In a process of establishing the routing path between the CPE1and the CPE2, the RR1determines, based on the tenant identifier of the tenant1, the CPE2belonging to the tenant1, and further advertises a route only to the CPE2. In this way, the RR1may determine, in a relatively short time, the CPE2to which the route needs to be advertised. This improves performance of the network node. It may be understood that the foregoing scenario is merely an example of a scenario provided in this embodiment of this application, and this embodiment of this application is not limited to this scenario. With reference to the accompanying drawings, the following describes in detail specific implementations of the routing information sending method in this embodiment of this application by using embodiments. FIG.3is a schematic flowchart of a routing information sending method according to an embodiment of this application. The method may specifically include the following steps. S301: A first network node receives routing information sent by a second network node. In this embodiment, the first network node may be a node on a network, for example, a router or a switch on the network, that can support a routing connection between the second network node and a third network node, and the second network node may be a device belonging to a tenant, or a node that is on the network and that can support establishment of a routing connection between the device and another network node. The first network node is adjacent to the second network node. For example, when this embodiment is applied to the application scenario shown inFIG.2, the second network node may be the CPE1, and the first network node adjacent to the second network node is the RR1that supports establishment of a routing connection between the CPE1and the CPE2. Certainly, the second network node may also be the RR1, and the RR1may support the CPE3in separately establishing a routing connection to the CPE4and the CPE5. In this case, the first network node adjacent to the second network node is the RR2that can also support the routing connection. 
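To keep the FIG. 2 example concrete while the method steps are walked through, the tenant (network slice) membership seen by RR1 could be pictured as a simple mapping such as the one below. The structure and identifiers are illustrative assumptions, not a configuration format defined by the embodiment.

```python
# Hypothetical tenant/slice membership as seen by RR1 in the FIG. 2 topology:
# tenant identifier -> CPEs that belong to that tenant (network slice).
slices_on_rr1 = {
    "tenant1": {"CPE1", "CPE2"},          # network slice 1
    "tenant2": {"CPE3", "CPE4", "CPE5"},  # network slice 2 (also served by RR2)
    "tenant3": {"CPE6"},                  # network slice 3
}

def same_slice_peers(cpe):
    """Return the CPEs in the same slice as `cpe`, excluding `cpe` itself."""
    for members in slices_on_rr1.values():
        if cpe in members:
            return members - {cpe}
    return set()

# When CPE1 advertises a route to RR1, RR1 only needs to consider CPE2.
assert same_slice_peers("CPE1") == {"CPE2"}
```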
S302: The first network node determines that the routing information corresponds to a tenant identifier.

Generally, devices between which a routing connection needs to be established belong to a same tenant, and devices between which no routing connection needs to be established belong to different tenants. Based on this, in this embodiment, a corresponding tenant identifier may be established for each tenant, and the tenant identifier may be used to distinguish among a plurality of tenants served by the first network node. Correspondingly, routing information sent by different second network nodes belonging to the same tenant corresponds to the same tenant identifier, and routing information sent by second network nodes belonging to different tenants corresponds to different tenant identifiers.

In an actual application, devices of a tenant on the network may be sliced. Specifically, devices that are on the network and that belong to the same tenant may be grouped into a same network slice. In this case, the network slice includes the devices that belong to the same tenant, and the network slice corresponds to the tenant identifier of the tenant. In other words, devices of different tenants belong to different network slices, and different network slices correspond to different tenant identifiers.

In an example implementation of determining the tenant identifier, the first network node may configure, in advance by using configuration information of a tenant identifier, the tenant identifier for the second network node adjacent to the first network node. Specifically, the first network node may configure the tenant identifier for the second network node by associating the tenant identifier with a session between the first network node and the second network node. The tenant represented by the tenant identifier is a tenant served by the first network node or the tenant to which the second network node belongs. In this way, after receiving, over a first session between the first network node and the second network node, the routing information sent by the second network node, the first network node may determine the tenant identifier corresponding to the first session. In an example, the first session may be a BGP (border gateway protocol) session between the first network node and the second network node. During specific implementation, a correspondence between a tenant identifier and a first session between a first network node and a second network node may be pre-established. In this way, after receiving the routing information sent by the second network node, the first network node may first determine the first session that transmits the routing information, and then determine, based on the pre-established correspondence between the first session and the tenant identifier, the tenant identifier corresponding to the first session.

In an actual application, in addition to determining the tenant identifier based on the session, the tenant identifier may also be determined directly based on the second network node. Specifically, in another example implementation of determining the tenant identifier, a correspondence between a second network node and a tenant identifier may be pre-established, and the correspondence may be configured on the first network node.
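A minimal sketch of S302 under stated assumptions: the two correspondences are held in ordinary dictionaries, a BGP session is identified here by its peer address, and an RR peer may map to several tenant identifiers while a CPE peer maps to exactly one (this case is discussed further below). None of these names or structures are mandated by the embodiment.

```python
# Hypothetical correspondences configured in advance on the first network node.
# first session (identified here by the peer address) -> tenant identifier(s)
session_to_tenants = {
    "10.0.0.1": {"tenant1"},             # BGP session with a CPE: exactly one tenant
    "10.0.0.9": {"tenant1", "tenant2"},  # BGP session with an RR: possibly several
}
# Alternative correspondence keyed directly by the sending (second) network node.
node_to_tenants = {
    "CPE1": {"tenant1"},
    "RR2": {"tenant1", "tenant2"},
}

def tenants_for_routing_info(session=None, sender=None):
    """Return the tenant identifier(s) corresponding to received routing information.

    Either the first session that carried the routing information or the identity
    of the second network node may be used, depending on which correspondence
    has been configured.
    """
    if session is not None:
        return session_to_tenants.get(session, set())
    return node_to_tenants.get(sender, set())

print(tenants_for_routing_info(session="10.0.0.1"))  # {'tenant1'}
print(tenants_for_routing_info(sender="RR2"))        # {'tenant1', 'tenant2'}
```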
In this way, after receiving the routing information, the first network node may first determine the network node that sent the routing information, that is, the second network node, and then determine, based on the pre-established correspondence between the second network node and the tenant identifier, the tenant identifier corresponding to the second network node. It may be understood that, in the actual application, the first network node may receive routing information sent by a plurality of network nodes. A correspondence between each network node and the tenant identifier of the tenant to which the network node belongs may be pre-established, and the correspondences between the plurality of network nodes and the tenant identifiers are configured on the first network node.

It should be noted that, in different application scenarios, there may be one or more tenant identifiers determined by the first network node based on the received routing information. Specifically, when the second network node is a CPE, because the CPE belongs to only one tenant, after receiving the routing information sent by the CPE, the first network node may determine, based on the correspondence between the tenant identifier and the first session between the CPE and the first network node, that the routing information corresponds to only one tenant identifier, in other words, the tenant identifier of the tenant to which the CPE belongs. When the second network node is an RR, because the RR may serve a plurality of tenants, the first network node may be unable to accurately determine, based only on the routing information sent by the second network node, the specific tenant whose devices need the routing connection. Based on this, if the first network node serves only one tenant, there is still only one tenant identifier determined based on the routing information. However, if the first network node serves a plurality of tenants, the first network node may determine a plurality of tenant identifiers based on the routing information. The tenant corresponding to each determined tenant identifier is served by the first network node. In addition to the tenant identifier of a target tenant, the plurality of tenant identifiers further include tenant identifiers of other tenants. The target tenant is the tenant to which the devices that need the routing connection belong. It may be understood that although the first network node cannot determine, based only on the received routing information, the specific tenant whose devices need the routing connection, that tenant is necessarily a tenant served by the first network node. Therefore, when the tenant identifiers of the plurality of tenants served by the first network node are all determined, the routing connections that are finally established can include the routing connection that needs to be established between the devices of the target tenant. In some possible implementations, the plurality of determined tenant identifiers may be the tenant identifiers corresponding to the tenants jointly served by the first network node and the second network node, or certainly may be the tenant identifiers corresponding to all tenants served by the first network node.

S303: The first network node determines that the third network node belongs to the tenant corresponding to the tenant identifier.
After determining the tenant identifier, the first network node may further determine, based on the determined tenant identifier, the third network node belonging to the tenant corresponding to the tenant identifier. The third network node may be a device belonging to the tenant corresponding to the tenant identifier, or a node that is on the network and that can support establishment of the routing connection between a device of the tenant and another network node. The first network node is adjacent to the third network node. For example, when this embodiment is applied to the application scenario shown inFIG.2, if the first network node is the RR1, the third network node adjacent to the first network node may be the CPE2that needs to establish the route connection to the CPE1, and the second network node is the CPE1; or the third network node may be the RR2that supports establishment of a route connection between the CPE3and the CPE4, and between the CPE3and the CPE5, and the second network node is the CPE3. In an example, the first network node may determine the third network node based on a second session corresponding to the determined tenant identifier. During specific implementation, a correspondence between a session and a tenant identifier may be pre-established. The session is a session between the first network node and a network node that needs to receive the routing information. A tenant corresponding to the tenant identifier is a tenant jointly served by the first network node and the another network node. In this way, after determining the tenant identifier, the first network node may determine, based on the established correspondence between the session and the tenant identifier, the second session corresponding to the tenant identifier, and then may determine the third network node based on the second session. The second session is a session between the first network node and the third network node. In an example, the second session may be specifically the BGP session. S304: The first network node sends the routing information to the third network node in response to determining that the third network node belongs to the tenant corresponding to the tenant identifier. Because the determined third network node and the first network node belong to the tenant corresponding to the same tenant identifier, the first network node may send the routing information to the third network node, to continue to establish a routing path based on a route established between the first network node and the second network node. It should be noted that, processes in which the first network node sends the routing information to third network nodes of the different tenants are independent of each other, in other words, when the first network node sends the routing information to a third network node belonging to a tenant corresponding to a tenant identifier A, this does not affect sending, by the first network node, the routing information to a third network node belonging to a tenant corresponding to a tenant identifier B. Based on this, in some possible implementations, execution modules that are independent of each other may be disposed based on the tenants served by the first network node. An execution module of each tenant corresponds to a tenant identifier of the tenant, and the execution modules of the different tenants correspond to the different tenant identifiers. 
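Steps S303 and S304 can be sketched as a lookup over pre-established second sessions followed by a send. The session table, the peer addresses, and the `send` callback below are hypothetical stand-ins chosen for this illustration; the per-tenant execution modules mentioned above are discussed next.

```python
# Hypothetical correspondence between tenant identifiers and second sessions:
# tenant identifier -> BGP sessions (peer addresses) between the first network
# node and third network nodes serving that tenant.
tenant_to_second_sessions = {
    "tenant1": ["10.0.1.2"],              # session with CPE2
    "tenant2": ["10.0.1.3", "10.0.2.1"],  # sessions with CPE4/CPE5 or with RR2
}

def advertise(routing_information, tenant_id, send):
    """S303/S304: find the third network nodes for `tenant_id` and send to them.

    `send(session, info)` stands in for whatever actually transmits the routing
    information over a BGP session; it is not defined by this sketch.
    """
    for second_session in tenant_to_second_sessions.get(tenant_id, []):
        send(second_session, routing_information)

# Example: print instead of really sending anything.
advertise({"prefix": "192.0.2.0/24"}, "tenant1",
          send=lambda sess, info: print(f"advertise {info} on session {sess}"))
```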
When the first network node needs to send the routing information to a third network node of a tenant belonging to a tenant identifier, an execution module that is on the first network node and that corresponds to the tenant identifier may send the routing information. In this way, when the first network node determines the plurality of tenant identifiers based on the received routing information, the plurality of execution modules that are on the first network node and that correspond to the plurality of tenant identifiers may concurrently send the routing information to third network nodes belonging to tenants corresponding to the different tenant identifiers, to improve processing efficiency of the first network node. In an example, an execution module corresponding to each tenant identifier may include a central processing unit, a thread, a process, or the like on the first network node. In an actual application, after determining the tenant identifier based on the received routing information, the first network node may further generate a routing entry based on the routing information, and may add the routing entry to a routing information base corresponding to the tenant identifier. In some possible implementations, the routing information base may be stored on the first network node, so that when receiving a packet sent by the device of the tenant corresponding to the tenant identifier, the first network node can forward the packet to a corresponding next network node based on the routing entry stored in the routing information base. In this way, the packet can be forwarded. It may be understood that because the routing connection may be implemented, by using the first network node and the second network node, or by using the first network node and the another network node, between devices of the tenant corresponding to the tenant identifier, in addition to the routing entry generated based on the routing information sent by the second network node, a routing entry generated based on routing information sent by the another network node adjacent to the first network node may be stored in the routing information base corresponding to the tenant identifier. Further, different routing information bases may be set for the different tenants. In this way, routing entries of the different tenants are prevented from being stored in one routing information base, to reduce a quantity of routing entries that are in the routing information base and that need to be searched for, to reduce a time period for searching for the routing entries during packet forwarding, and to improve forwarding efficiency. In the actual application, the first network node may determine the plurality of tenant identifiers based on the routing information sent by the second network node, so that the first network node sends the routing information to third network nodes corresponding to the plurality of tenant identifiers. Based on this, in some possible implementations, to reduce a scale of routes advertised by the first network node, a corresponding filtering policy may be set on the first network node. 
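Before the filtering policy is detailed, the per-tenant execution modules and per-tenant routing information bases described above can be sketched as follows. Python threads and a dictionary of lists are stand-ins chosen for this illustration; the embodiment leaves the concrete form of the execution modules (CPU, thread, or process) and of the routing information base open.

```python
import threading
from collections import defaultdict

# Hypothetical per-tenant routing information bases: tenant identifier -> routing entries.
ribs = defaultdict(list)

def add_routing_entry(tenant_id, routing_information):
    # Generate a routing entry from the routing information and store it only in
    # the routing information base of the corresponding tenant.
    ribs[tenant_id].append({"entry": routing_information})

def send_for_tenant(tenant_id, routing_information):
    # Stand-in for the execution module of one tenant; sending for different
    # tenants does not interfere, so these calls may run concurrently.
    print(f"[{tenant_id}] advertising {routing_information}")

def advertise_concurrently(tenant_ids, routing_information):
    threads = [threading.Thread(target=send_for_tenant, args=(tid, routing_information))
               for tid in tenant_ids]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

add_routing_entry("tenant1", "192.0.2.0/24 via CPE1")
advertise_concurrently({"tenant1", "tenant2"}, "192.0.2.0/24")
```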
When determining the plurality of tenant identifiers based on the routing information, the first network node may determine, according to the filtering policy, a specific tenant to which the devices belong, where the routing connection is established between the devices, in other words, the first network node may determine, from the plurality of tenant identifiers, a tenant identifier of the tenant to which the devices belong, where the routing connection is established between the devices, so that the first network node advertises the route only to the third network node corresponding to the tenant identifier, to reduce the scale of the routes advertised by the first network node. Specifically,FIG.4is a schematic flowchart of still another routing information sending method according to an embodiment of this application. The method may specifically include the following steps. S401: A first network node receives routing information sent by a second network node. S402: The first network node determines the routing information corresponds to a tenant identifier. In this embodiment, the step S401and the step S402are similar to the step S301and the step S302in the foregoing embodiment. For a specific implementation process of the step S401and the step S402, refer to related descriptions in the foregoing embodiment. Details are not described herein again. S403: The first network node parses a routing feature in the routing information. S404: The first network node determines, from one or more tenant identifiers based on the parsed routing feature, the tenant identifier corresponding to the routing information. In this embodiment, if the first network node determines the plurality of tenant identifiers based on the received routing information, in an example, the first network node may establish a correspondence between a first session and a plurality of tenant identifiers. The first session is a session between the first network node and the second network node. The plurality of tenant identifiers may be filtered according to a preset filtering policy for the tenant identifiers, to determine the tenant identifier corresponding to the routing information, in other words, determine a tenant identifier of a tenant to which devices belong, where a routing connection is established between the devices. In a specific implementation example, the first network node may determine whether the plurality of tenant identifiers are determined based on the routing information. If the plurality of tenant identifiers are determined based on the routing information, the first network node may parse the routing feature in the routing information. In an example, the first network node may specifically parse one or more features: a community attribute, an extended community attribute, a prefix address of a route, and an autonomous system path that are of the routing information. Then, the first network node may compare the routing feature obtained by parsing with a routing feature corresponding to each tenant identifier, and determine a tenant identifier corresponding to the same routing feature as the tenant identifier corresponding to the routing information. 
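Under the assumptions below, the filtering policy of S403 and S404 could be expressed as a comparison of the parsed routing features (community attribute, extended community attribute, route prefix, autonomous system path) against a per-tenant feature table. The field names and the table are illustrative, not part of the embodiment.

```python
# Hypothetical per-tenant routing features configured as part of the filtering policy.
tenant_features = {
    "tenant1": {"community": "65000:100"},
    "tenant2": {"community": "65000:200"},
}

def parse_routing_feature(routing_information):
    # S403: parse one or more features (community attribute, extended community
    # attribute, prefix address, autonomous system path) from the routing information.
    return {key: routing_information[key]
            for key in ("community", "ext_community", "prefix", "as_path")
            if key in routing_information}

def select_tenant_id(routing_information, candidate_tenant_ids):
    # S404: keep only the tenant identifier whose configured features all match
    # the routing feature obtained by parsing.
    feature = parse_routing_feature(routing_information)
    for tenant_id in candidate_tenant_ids:
        expected = tenant_features.get(tenant_id, {})
        if expected and all(feature.get(k) == v for k, v in expected.items()):
            return tenant_id
    return None

info = {"prefix": "192.0.2.0/24", "community": "65000:100", "as_path": [65001]}
print(select_tenant_id(info, {"tenant1", "tenant2"}))  # tenant1
```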
In this way, after the tenant identifier corresponding to the routing information is filtered from the plurality of tenant identifiers, when advertising the route, the first network node needs to advertise the route only to a third network node belonging to a tenant corresponding to the tenant identifier, and does not need to advertise the route to a network node belonging to a tenant corresponding to another tenant identifier, to reduce a scale of routes advertised by the first network node. S405: The first network node determines that the third network node belongs to the tenant corresponding to the determined tenant identifier. S406: The first network node sends the routing information to the third network node in response to determining that the third network node belongs to the tenant corresponding to the tenant identifier. In this embodiment, based on the foregoing embodiment, the filtering policy for the tenant identifiers is set, so that when the first network node determines the plurality of tenant identifiers based on the routing information, the tenant identifier corresponding to the routing information can be filtered from the plurality of tenant identifiers by using the routing feature of the routing information. Therefore, when advertising the route, the first network node needs to advertise the route only to the third network node belonging to the tenant corresponding to the tenant identifier, and does not need to advertise the route to the third network node belonging to the tenant corresponding to the another tenant identifier, to reduce the scale of the routes advertised by the first network node. In an actual application, the second network node may be a device (for example, CPE) belonging to a tenant, or may be a node (for example, an RR) that is on a network and that supports the routing connection. Therefore, this embodiment of this application may be applied to at least two different scenarios. To describe the technical solutions in the embodiments of this application in more detail, the following describes the technical solutions in the embodiments of this application with reference to two specific scenarios. With reference to an application scenario in which the second network node is CPE of a tenant, an embodiment of this application provides a routing information sending method. The first network node is an RR.FIG.5is a schematic flowchart of another routing information sending method according to an embodiment of this application. The method may specifically include the following steps. S501: An RR receives routing information sent by CPE. S502: The RR determines the routing information corresponds to a tenant identifier. In this embodiment, a correspondence between a first session and a tenant identifier may be pre-established. The first session may be a BGP session between the CPE and the RR. In this way, after receiving the routing information sent by the CPE, the RR may find, based on the established correspondence between a first session and a tenant identifier, the tenant identifier corresponding to the first session between the CPE and the RR, and use the tenant identifier as the tenant identifier corresponding to the routing information. Certainly, the determined tenant identifier is also a tenant identifier of a tenant to which the CPE belongs. S503: The RR determines that a third network node belongs to the tenant corresponding to the tenant identifier. 
In this embodiment, a correspondence between a second session and a tenant identifier may be pre-established. The second session may be a BGP session between the RR and the third network node. In this way, after determining the tenant identifier, the RR may determine, by using the correspondence, the second session corresponding to the tenant identifier, and further determine, based on the second session, the third network node that has the second session with the RR. S504: The RR sends the routing information to the third network node in response to determining that the third network node belongs to the tenant corresponding to the tenant identifier. In this embodiment, the RR may determine the third network node by using the tenant identifier corresponding to the received routing information, in other words, determine a network node that needs to receive the routing information. This can shorten a time period required for the network node to advertise a route, and improve performance of the network node. In the foregoing embodiment with reference to an application scenario, the second network node is used as a device belonging to a tenant, to describe the technical solution in this embodiment of this application. In some other application scenarios, the second network node may alternatively be a node that is on a network and that supports a routing connection. It is assumed that the first network node is an RR1, and the second network node is an RR2. In this application scenario, the RR1may determine a plurality of tenant identifiers based on routing information sent by the RR2. To reduce a scale of routes advertised by the RR1, a corresponding filtering policy may be set on the RR1, and the filtering policy is used to filter the plurality of tenant identifiers determined by the RR1. With reference to an application scenario in which the RR2is a node supporting a routing connection, an embodiment of this application provides a routing information sending method.FIG.6is a schematic flowchart of still another routing information sending method according to an embodiment of this application. The method may specifically include the following steps. S601: An RR1receives routing information sent by an RR2. S602: The RR1determines the routing information corresponds to a tenant identifier. S603: The RR1parses a routing feature in the routing information. S604: The RR1determines, from a plurality of tenant identifiers based on the parsed routing feature, a tenant identifier corresponding to the routing information, and uses the tenant identifier as a target tenant identifier. In this embodiment, the RR1may determine the plurality of tenant identifiers based on the received routing information. In this case, the RR1may filter the plurality of tenant identifiers according to a filtering policy that is for the tenant identifiers and that is preset on the RR1, to determine a tenant identifier of a tenant to which devices belong, where a routing connection needs to be established between the devices. In a specific implementation, the RR1may determine whether the plurality of tenant identifiers are determined based on the routing information. If the plurality of tenant identifiers are determined based on the routing information, the RR1may parse the routing feature in the routing information. The RR1may determine the plurality of tenant identifiers based on a correspondence between an established first session and a plurality of tenant identifiers. 
In an example, one or more of the following features of the routing information may be parsed: a community attribute, an extended community attribute, a prefix address of a route, and an autonomous system path. Then, the RR1 may compare the routing feature obtained by parsing with the routing feature corresponding to each tenant identifier, and determine the tenant identifier corresponding to the same routing feature as the tenant identifier corresponding to the routing information. In this way, after the tenant identifier corresponding to the routing information is filtered from the plurality of tenant identifiers, when advertising a route, the RR1 needs to advertise the route only to a third network node belonging to the tenant corresponding to that tenant identifier, and does not need to advertise the route to a network node belonging to a tenant corresponding to another tenant identifier, to reduce a scale of routes advertised by the RR1.

S605: The RR1 determines that the third network node belongs to the tenant corresponding to the determined tenant identifier.

S606: The RR1 sends the routing information to the third network node in response to determining that the third network node belongs to the tenant corresponding to the tenant identifier.

In this embodiment, based on the foregoing embodiment, the filtering policy for the tenant identifiers is set on the RR1, so that when the RR1 determines the plurality of tenant identifiers based on the routing information, the tenant identifier corresponding to the routing information can be filtered from the plurality of tenant identifiers by using the routing feature of the routing information. Therefore, when advertising the route, the RR1 needs to advertise the route only to the third network node belonging to the tenant corresponding to the tenant identifier, and does not need to advertise the route to a third network node belonging to a tenant corresponding to another tenant identifier, to reduce the scale of the routes advertised by the RR1.

In addition, an embodiment of this application further provides a routing information sending apparatus. FIG. 7 is a schematic structural diagram of a routing information sending apparatus according to an embodiment of this application. The apparatus 700 may specifically include: a receiving module 701, configured to receive routing information sent by a second network node; a first determining module 702, configured to determine that the routing information corresponds to a tenant identifier; a second determining module 703, configured to determine that a third network node belongs to a tenant corresponding to the tenant identifier; and a sending module 704, configured to send the routing information to the third network node in response to determining that the third network node belongs to the tenant corresponding to the tenant identifier.

In some possible implementations, the first determining module 702 is specifically configured to determine that a first session corresponds to the tenant identifier. The first network node receives the routing information through the first session. The first session is a border gateway protocol session between the first network node and the second network node. Because there is the first session between the first network node and the second network node, a correspondence between a first session and a tenant identifier may be pre-established.
In this way, if the first network node receives, through the first session, the routing information sent by the second network node, the first network node may determine, based on the established correspondence, the tenant identifier corresponding to the first session. Because the routing information received by the first network node is received through the first session, the determined tenant identifier corresponds to the routing information. In some possible implementations, the first determining module702is specifically configured to determine that the second network node sending the routing information belongs to the tenant corresponding to the tenant identifier. In this implementation, a correspondence between a second network node and a tenant identifier may be pre-established. In this way, after receiving the routing information, the first network node may determine, based on the correspondence, the tenant identifier corresponding to the second network node sending the routing information. It may be understood that when the second network node corresponds to the tenant identifier, it indicates that the second network node belongs to the tenant corresponding to the tenant identifier. In some possible implementations, the apparatus700further includes:a generating module, configured to generate a routing entry based on the routing information; andan adding module, configured to add the routing entry to a routing information base corresponding to the tenant identifier. Because the routing entry is generated and stored based on the routing information, when receiving a packet sent by a device of the tenant corresponding to the tenant identifier, the first network node can forward the packet to a corresponding next network node based on the routing entry stored in the routing information base. In this way, the packet can be forwarded. Further, different tenants can correspond to different routing information bases. In this way, routing entries of the different tenants are prevented from being stored in one routing information base, to reduce a quantity of routing entries that are in the routing information base and that need to be searched for, to reduce a time period for searching for a routing entry during packet forwarding, and to improve forwarding efficiency. In some possible implementations, the second determining module703is specifically configured to determine that a second session corresponds to the tenant identifier. The second session is a border gateway protocol session between the first network node and the third network node. In this implementation, a correspondence between a second session and a tenant identifier may be pre-established. In this way, after determining the tenant identifier, the first network node may determine, based on the established correspondence, the second session corresponding to the tenant identifier, and determine, based on the second session, the third network node that forms the second session with the first network node. In this way, the third network node belonging to the tenant corresponding to the tenant identifier may be determined based on the second session. In some possible implementations, the sending module704is specifically configured to send the routing information to the third network node by using an execution module corresponding to the tenant identifier. 
Different tenant identifiers on the first network node correspond to different execution modules, and the execution module includes a central processing unit, a thread, or a process. Because processes in which the first network node sends the routing information to third network nodes of different tenants are independent of each other, when the first network node determines a plurality of tenant identifiers based on the received routing information, a plurality of execution modules that are on the first network node and that correspond to the plurality of tenant identifiers may concurrently send the routing information to the third network nodes belonging to tenants corresponding to the different tenant identifiers, to improve processing efficiency of the first network node. In some possible implementations, the second network node is a customer premises equipment CPE, and the tenant identifier includes one tenant identifier. Alternatively, the second network node is a route reflector RR, and the tenant identifier includes one or more tenant identifiers. It may be understood that if the second network node is CPE of a tenant, because the CPE belongs to only one tenant, and corresponds to a tenant identifier of only one tenant, the first network node may determine, based on routing information sent by the CPE, the tenant identifier of the tenant to which the CPE belongs, and there is also only one determined tenant identifier. If the second network node is a node RR that is on a network and that supports a routing connection, because the RR may serve a plurality of tenants, in other words, the RR may belong to the plurality of tenants, and therefore correspond to tenant identifiers of the plurality of tenants, the first network node determines, based on the routing information sent by the RR, the tenant identifiers of the plurality of tenants, and there may be a plurality of determined tenant identifiers. Certainly, if the RR serves only one tenant, there may also be only one determined tenant identifier. In some possible implementations, the first determining module702includes:a parsing unit, configured to parse a routing feature in the routing information; anda determining unit, configured to determine, from the one or more tenant identifiers based on the routing feature, the tenant identifier corresponding to the routing information. In this implementation, if the first network node determines the plurality of tenant identifiers based on the routing information, the first network node may filter, according to a preset filtering policy, the tenant identifier corresponding to the routing information from the plurality of tenant identifiers. In this way, when advertising the route, the first network node needs to advertise the route only to the third network node belonging to the tenant corresponding to the tenant identifier, and does not need to advertise the route to a network node belonging to a tenant corresponding to another tenant identifier, to reduce a scale of the routes advertised by the first network node. In some possible implementations, the routing feature includes any one or more of the following: a prefix address, a community attribute, an extended community attribute, and an autonomous system path that are of the routing information. The foregoing describes the routing information sending apparatus provided in this embodiment of this application. 
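Purely as an illustration of how the modules of apparatus 700 fit together (the embodiment describes them functionally, not as software), a thin Python wrapper might look like the sketch below; every lookup table it relies on is a hypothetical placeholder.

```python
class RoutingInformationSender:
    """Illustrative grouping of receiving module 701 through sending module 704."""

    def __init__(self, session_to_tenant, tenant_to_peers):
        self.session_to_tenant = session_to_tenant  # first session -> tenant identifier
        self.tenant_to_peers = tenant_to_peers      # tenant identifier -> third network nodes

    def on_routing_information(self, first_session, routing_information):
        # Receiving module 701 delivers the routing information here.
        tenant_id = self.session_to_tenant[first_session]      # first determining module 702
        for peer in self.tenant_to_peers.get(tenant_id, []):   # second determining module 703
            self.send(peer, routing_information)               # sending module 704

    def send(self, peer, routing_information):
        print(f"send {routing_information} to {peer}")

apparatus = RoutingInformationSender({"10.0.0.1": "tenant1"}, {"tenant1": ["CPE2"]})
apparatus.on_routing_information("10.0.0.1", {"prefix": "192.0.2.0/24"})
```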
For a specific implementation, refer to the foregoing description in the embodiment of the routing information sending method corresponding to FIG. 3. An effect achieved is consistent with that in the foregoing method embodiment. Details are not described herein again.

In the foregoing embodiment, the routing information sending apparatus in the embodiments of this application is described from a perspective of a functional entity. The following describes in detail a routing information sending device in the embodiments of this application from a perspective of hardware processing. The device includes a processor, a memory, a communications interface, and a bus. The processor, the communications interface, and the memory communicate with each other by using the bus. The communications interface is configured to receive and send data. The memory is configured to store an instruction. The processor is configured to execute the instruction in the memory, to perform the following operations: receiving routing information sent by a second network node; determining that the routing information corresponds to a tenant identifier; determining that a third network node belongs to a tenant corresponding to the tenant identifier; and sending the routing information to the third network node in response to determining that the third network node belongs to the tenant corresponding to the tenant identifier.

In some possible implementations, that the first network node determines that the routing information corresponds to the tenant identifier includes: the first network node determines that a first session corresponds to the tenant identifier, and the first network node receives the routing information through the first session, where the first session is a border gateway protocol session between the first network node and the second network node.

In some possible implementations, that the first network node determines that the routing information corresponds to the tenant identifier includes: the first network node determines that the second network node sending the routing information belongs to the tenant corresponding to the tenant identifier.

In some possible implementations, that the first network node determines that the routing information corresponds to the tenant identifier further includes: the first network node generates a routing entry based on the routing information, and adds the routing entry to a routing information base corresponding to the tenant identifier.

In some possible implementations, that the first network node determines that the third network node belongs to the tenant corresponding to the tenant identifier includes: the first network node determines that a second session corresponds to the tenant identifier, where the second session is a border gateway protocol session between the first network node and the third network node.

In some possible implementations, that the first network node sends the routing information to the third network node includes: the first network node sends the routing information to the third network node by using an execution module corresponding to the tenant identifier, where different tenant identifiers on the first network node correspond to different execution modules, and the execution module includes a central processing unit, a thread, or a process.
In some possible implementations, the second network node is a customer premises equipment (CPE), and the tenant identifier includes one tenant identifier. Alternatively, the second network node is a route reflector (RR), and the tenant identifier includes one or more tenant identifiers.

In some possible implementations, that the first network node determines that the routing information corresponds to the tenant identifier includes: the first network node parses a routing feature in the routing information, and determines, from the one or more tenant identifiers based on the routing feature, the tenant identifier corresponding to the routing information.

In some possible implementations, the routing feature includes any one or more of the following features of the routing information: a prefix address, a community attribute, an extended community attribute, and an autonomous system path.

The foregoing describes the routing information sending device provided in this embodiment of this application. For a specific implementation, refer to the foregoing description in the embodiment of the routing information sending method corresponding to FIG. 2. An effect achieved is consistent with that in the foregoing method embodiment. Details are not described herein again.

The following describes the device in detail. Referring to FIG. 8, a device 800 includes a receiver 801, a transmitter 802, a processor 803, and a memory 804 (there may be one or more processors 803 in the device 800, and one processor is used as an example in FIG. 8). The communications interface may include the receiver 801 and the transmitter 802. In some embodiments of this application, the receiver 801, the transmitter 802, the processor 803, and the memory 804 may be connected by using the bus or in another manner. In FIG. 8, a connection by using the bus is used as an example.

The memory 804 may include a read-only memory and a random access memory, and provides instructions and data to the processor 803. A part of the memory 804 may further include a non-volatile random access memory (NVRAM). The memory 804 stores an operating system and operation instructions, an executable module or a data structure, or a subset thereof, or an extended set thereof. The operation instructions may include various operation instructions for implementing various operations. The operating system may include various system programs, to implement various basic services and process hardware-based tasks.

The processor 803 controls an operation of the device 800, and the processor 803 may also be referred to as a central processing unit (CPU). In a specific application, the components are coupled together by using a bus system. In addition to a data bus, the bus system includes a power bus, a control bus, and a status signal bus. However, for clear description, the various types of buses in the figure are marked as the bus system. The methods disclosed in the foregoing embodiments of this application may be applied to the processor 803, or may be implemented by the processor 803. The processor 803 may be an integrated circuit chip and has a signal processing capability. In an implementation process, the steps of the foregoing methods can be implemented by using a hardware integrated logic circuit in the processor 803, or by using instructions in a form of software.
The processor803may be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logical device, a discrete gate or transistor logic device, or a discrete hardware component. It may implement or perform the methods, the steps, and logical block diagrams that are disclosed in the embodiments of this application. The general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. Steps of the methods disclosed with reference to the embodiments of this application may be directly executed and accomplished by a hardware decoding processor, or may be executed and accomplished by using a combination of hardware and software modules in the decoding processor. A software module may be located in a mature storage medium in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory804, and a processor803reads information in the memory804and completes the steps in the foregoing methods in combination with hardware of the processor. The receiver801may be configured to: receive an input digit or character information, and generate signal input related to a related setting and function control of the network device800. The transmitter802may include a display device such as a display screen. The transmitter802may be configured to output the digit or character information through an external interface. In this embodiment of this application, the processor803is configured to perform the following operations:receiving routing information sent by a second network node;determining the routing information corresponds to a tenant identifier; anddetermining that a third network node belongs to a tenant corresponding to the tenant identifier; andsending the routing information to the third network node in response to determining that the third network node belongs to the tenant corresponding to the tenant identifier. In addition, an embodiment of this application further provides a computer-readable storage medium. The computer-readable storage medium stores an instruction, and when the instruction is run on a computer or a processor, the computer or the processor is enabled to perform the foregoing routing information sending method. In addition, an embodiment of this application further provides a computer program product including an instruction. When the computer program product runs on a computer or a processor, the computer or the processor is enabled to perform the foregoing routing information sending method. “First” in names such as “first network node” and “first determining module” mentioned in the embodiments of this application is merely used as a name identifier, and does not represent first in a sequence. The rule is also applicable to “second”, “third”, and the like. From the foregoing descriptions of the implementations, a person skilled in the art may clearly understand that some or all steps of the methods in the embodiments may be implemented by software in addition to a universal hardware platform. Based on such an understanding, the technical solutions of this application may be implemented in a form of a software product. 
The software product may be stored in a storage medium, such as a read-only memory (ROM)/RAM, a magnetic disk, or an optical disc, and includes a plurality of instructions for instructing a computer device (which may be a personal computer, a server, or a network communications device such as a router) to perform the methods described in the embodiments or some parts of the embodiments of this application. The embodiments in this specification are all described in a progressive manner, for same or similar parts in the embodiments, refer to these embodiments, and each embodiment focuses on a difference from other embodiments. Especially, an apparatus embodiment is basically similar to a method embodiment, and therefore is described briefly; for related parts, refer to partial descriptions in the method embodiment. The described apparatus embodiment is merely an example. The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one position, or may be distributed on a plurality of network units. Some or all the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. A person of ordinary skill in the art may understand and implement the embodiments of the present disclosure without creative efforts. The foregoing descriptions are merely example implementations of this application, but are not intended to limit the protection scope of this application.
Patent 11863439
DESCRIPTION OF EMBODIMENTS To make objectives, technical solutions, and advantages of this application clearer, the following further describes embodiments of this application in detail with reference to the accompanying drawings. Before a method for application identification according to the embodiments of this application is described, an application scenario of the embodiments of this application is first described. A key service and a non-key service usually exist in private networks such as an enterprise network and a campus network. When the non-key service occupies relatively large bandwidth, the key service occupies relatively small bandwidth. In this case, small bandwidth may affect quality of the key service. In addition, a key service of an enterprise is usually a service corresponding to a private application of the enterprise. Therefore, the private application of the enterprise usually needs to be identified, so that a network administrator of the enterprise can configure some policies that can improve quality of the key service, and correspondingly the quality of the key service is improved. For example, for the private application of the enterprise, bandwidth required for normal running of the private application may be ensured. For a public network application, traffic-limiting processing may be performed. Traffic-limiting processing is not performed on a data flow corresponding to the private application, and traffic-limiting processing is performed on a data flow corresponding to the public network application. This improves the quality of the key service. It is clear that this application is applied to a scenario that is described above and in which a traffic-limiting policy is used to ensure the quality of the key service. This application may further be applied to another scenario. Other scenarios are not listed one by one in this application. FIG.1is a diagram of an architecture of an application identification system according to an embodiment of this application. The system includes a plurality of clients101, one network device102, and a plurality of servers103. Each client101is connected to the network device102in a wired or wireless manner for communication. Each server103is also connected to the network device102in a wired or wireless manner for communication. An application is installed on any one of the plurality of clients101. When the client101runs the application, data flows are generated. In this case, the client101may send the data flows to the network device102. When receiving the data flows, the network device102may process the data flows, to identify an application corresponding to the data flows. Then, when the data flows are transmitted to the server103, the server103may process the data flows in response to an operation of the client101. The application installed on the client101may be a private application, or may be a public network application. The private application is an application used within an enterprise, and the public network application is an application that can be used by anyone. For example, the private application may be an application used for communication within an enterprise, and the public network application may be an application used for communication between the enterprise and an external service. 
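As a toy illustration of the policy described above (not a mechanism defined in this embodiment), a network device could apply traffic limiting only to flows whose identified application is not in the private-application set. The application names and the `rate_limit` callback below are hypothetical.

```python
# Hypothetical policy: identified private applications are exempt from traffic limiting.
PRIVATE_APPS = {"erp-internal", "crm-internal"}   # illustrative application names only

def handle_flow(app_name, flow, rate_limit):
    if app_name in PRIVATE_APPS:
        return flow                 # key service: forward without limiting
    return rate_limit(flow)         # public network application: limit traffic

# Example with a stand-in rate limiter.
print(handle_flow("erp-internal", "flow-A", rate_limit=lambda f: f + " (limited)"))
print(handle_flow("video-site", "flow-B", rate_limit=lambda f: f + " (limited)"))
```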
The client101may be any electronic product that can perform human-computer interaction with a user in one or more manners such as a keyboard, a touchpad, a touchscreen, a remote controller, a voice interaction device, or a handwriting device, for example, a personal computer (PC), a mobile phone, a smartphone, a personal digital assistant (PDA), a wearable device, a pocket personal computer (PPC), a tablet, a smart head unit, a smart television, a smart speaker, or the like. The network device102may be a core switch, an access switch, a router, or another device. The server103may be a server, a server cluster including a plurality of servers, or a cloud computing service center. InFIG.1, only three clients and three servers are used to describe the application identification system. This does not constitute a limitation on this embodiment of this application. In addition, the method for application identification provided in the embodiments of this application may be applied to identifying both a private application of an enterprise and a public network application. FIG.2is a diagram of a computer device according to an embodiment of this application. The computer device may be the client101, the network device102, or the server103shown inFIG.1. The computer device includes at least one processor201, a communications bus202, a memory203, and at least one communications interface204. The processor201may be a general-purpose central processing unit (CPU), a network processor (NP), a microprocessor, or may be one or more integrated circuits configured to implement the solutions of this application, for example, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. The communications bus202is used to transmit information between the foregoing components. The communications bus202may be classified into an address bus, a data bus, a control bus, or the like. For ease of representation, only one thick line is used to represent the bus in the figure, but this does not mean that there is only one bus or only one type of bus. The memory203may be a read-only memory (ROM) or another type of static storage device that can store static information and/or instructions, or a random access memory (RAM) or another type of dynamic storage device that can store information and/or instructions. Alternatively, the memory203may be an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or another compact disc storage, an optical disc storage (including a compact optical disc, a laser disc, an optical disc, a digital versatile disc, a Blu-ray disc, and the like), a magnetic disk storage medium or another magnetic storage device, or any other medium that can be configured to carry or store expected program code in a form of an instruction or a data structure and that can be accessed by the network device. However, the memory203is not limited thereto. The memory203may exist independently, and be connected to the processor201through the communications bus202. Alternatively, the memory203may be integrated with the processor201. The communications interface204is configured to communicate with another device or a communications network. The communications interface204includes a wired communications interface, or may include a wireless communications interface. 
The wired communications interface may be, for example, an Ethernet interface. The Ethernet interface may be an optical interface, an electrical interface, or a combination thereof. The wireless communications interface may be a wireless local area network (WLAN) interface, a cellular network communications interface, a combination thereof, or the like. In an embodiment, the processor201may include one or more CPUs, for example, a CPU0and a CPU1shown inFIG.2. In an embodiment, the computer device may include a plurality of processors, for example, the processor201and a processor205shown inFIG.2. Each of the processors may be a single-core processor (single-CPU) or may be a multi-core processor (multi-CPU). The processor herein may be one or more devices, circuits, and/or processing cores configured to process data (for example, a computer program instruction). In an embodiment, the computer device may further include an output device206and an input device207. The output device206communicates with the processor201, and may display information in a plurality of manners. For example, the output device206may be a liquid crystal display (LCD), a light emitting diode (LED) display device, a cathode ray tube (CRT) display device, or a projector. The input device207communicates with the processor201, and may receive an input from a user in a plurality of manners. For example, the input device207may be a mouse, a keyboard, a touchscreen device, or a sensing device. In some embodiments, the memory203is configured to store program code210for executing the solutions of this application, and the processor201may execute the program code210stored in the memory203. For example, the computer device may implement a method for application identification provided in the following embodiment inFIG.3by using the processor201and the program code210in the memory203. FIG.3is a flowchart of a method for application identification according to an embodiment of this application. The method is applied to the network device in the application identification system shown inFIG.1. The method includes the following steps. Step301: A network device separately extracts features from a plurality of data flows to obtain a flow table and a domain name table, where the flow table includes a plurality of flow entries, each of the plurality of flow entries includes a 5-tuple and a flow start time point, the domain name table includes a plurality of domain name entries, and each of the plurality of domain name entries includes a source IP address, a destination domain name, a destination IP address, and a domain name type. Usually, one data flow may include one or more packets, and the one or more packets have a same 5-tuple. In other words, one or more packets with a same 5-tuple may form one data flow. In addition, the destination domain name is a domain name corresponding to the destination IP address, and when the destination domain name is unique, the domain name type is also unique. Therefore, the network device may separately extract the 5-tuple, the destination domain name, and the domain name type in any packet included in each data flow, and determine a receiving time point of a first packet in each data flow as the flow start time point. 
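As a rough illustration of the grouping just described, the following Python sketch builds flow entries keyed by 5-tuple (recording the first packet's receiving time point as the flow start time point) and collects domain name entries. The packet format, field names, and the build_tables function are assumptions made for illustration only; they are not part of the embodiments. The paragraphs that follow describe the resulting flow table and domain name table in more detail.

```python
from collections import OrderedDict

def build_tables(packets):
    """Group parsed packets into flow entries (keyed by 5-tuple) and
    domain name entries, as a rough illustration of step 301.

    Each packet is assumed to be a dict such as:
      {"src_ip": "IP01", "src_port": 1001, "dst_ip": "IP001",
       "dst_port": 443, "proto": 6, "ts": 1.0,
       "domain": "abpd-jap.xxx.com", "dns_type": "A.name"}
    The DNS-related fields are present only for packets that carry them.
    """
    flow_table = OrderedDict()   # 5-tuple -> flow start time point
    domain_table = []            # list of domain name entries

    for pkt in packets:
        five_tuple = (pkt["src_ip"], pkt["src_port"],
                      pkt["dst_ip"], pkt["dst_port"], pkt["proto"])
        # The flow start time point is the receiving time of the first
        # packet seen for this 5-tuple in the current collection window.
        if five_tuple not in flow_table:
            flow_table[five_tuple] = pkt["ts"]

        if pkt.get("domain"):
            domain_table.append({
                "src_ip": pkt["src_ip"],
                "dst_domain": pkt["domain"],
                "dst_ip": pkt["dst_ip"],
                "dns_type": pkt.get("dns_type", "A.name"),
            })

    flow_entries = [
        {"src_ip": k[0], "src_port": k[1], "dst_ip": k[2],
         "dst_port": k[3], "proto": k[4], "start": ts}
        for k, ts in flow_table.items()
    ]
    return flow_entries, domain_table
```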
Then, the network device may obtain the source IP address and the destination IP address from the 5-tuple, to generate the flow table based on the 5-tuple and the flow start time point of each data flow, and generate the domain name table based on the source IP address, the destination domain name, the destination IP address, and the domain name type of each data flow. Because the 5-tuple includes a source IP address, a source port, a destination IP address, a destination port, and a protocol number, the network device may obtain the source IP address and the destination IP address of the data flow from the 5-tuple. For example, it is assumed that a client currently needs to send a packet to a server. A source IP address and a source port of the packet are an IP address and a port of the client, a destination IP address and a destination port are an IP address and a port of the server, and a protocol number is a number of a transmission protocol used for communication between the client and the server. Based on the foregoing description, the flow start time point of each data flow is the receiving time point of the first packet in each data flow. However, the first packet in each data flow may not be the first packet of the entire data flow, but the first packet among the packets received during the current feature extraction. For example, currently features need to be separately extracted from all data flows collected between 1:30 and 2:00. It is assumed that the first packet of a data flow A is received at 1:00, and that between 1:30 and 2:00 the first packet of the data flow A is received at 1:31. In this case, a flow start time point of the data flow A is 1:31. The domain name type has two formats: A.name and C.name. The A.name is used to resolve a host name or a domain name to an IP address. The C.name resolves a plurality of host names or domain names to another domain name, and then resolves that domain name to an IP address, where the IP address is the same as the IP address to which the A.name resolves. In other words, a plurality of C.names are equivalent to branches of one A.name. In this embodiment of this application, the network device may further set a trigger condition for extracting features. When the trigger condition is met, the network device separately extracts the features from the plurality of data flows. For example, when collecting data flows, the network device may determine whether a data volume of currently collected data flows reaches a data volume threshold. When the data volume of the currently collected data flows reaches the data volume threshold, the network device may separately extract features from the collected data flows. For example, the network device sets the data volume threshold to 200 M. When collecting data flows, the network device may determine whether a data volume of currently collected data flows reaches 200 M. If the data volume of the currently collected data flows reaches 200 M, the network device separately extracts features from the collected data flows. For another example, the network device may collect statistics on a time difference between a collection start time point and a current time point. When the time difference reaches a first time threshold, the network device separately extracts features from a plurality of collected data flows. For example, the network device sets the first time threshold to 30 minutes, and may collect statistics on a time difference between a collection start time point and a current time point. 
If the time difference reaches 30 minutes, the network device separately extracts features from collected data flows. The data volume threshold and the first time threshold may be set based on a requirement. In some embodiments, after the flow table and the domain name table are obtained, the flow table and the domain name table may further be preprocessed. For example, for the flow table, the network device may deduplicate and combine the flow entries in the flow table, and then delete a flow entry with incomplete information, to obtain a preprocessed flow table. For the domain name table, the network device may deduplicate the domain name entries in the domain name table, select a domain name entry whose domain name type is A.name, and then delete a domain name entry with incomplete information, to obtain a preprocessed domain name table. For example, the flow table and the domain name table that are obtained through feature extraction may be shown in the following Table 1 and Table 2. In Table 1, a first flow entry and a second flow entry are duplicated. In this case, the first flow entry or the second flow entry may be deleted. In addition, a 5-tuple of the second flow entry is the same as a 5-tuple of a third flow entry, but a flow start time point of the second flow entry is different from a flow start time point of the third flow entry. It is assumed that the flow start time point of the second flow entry is earlier than the flow start time point of the third flow entry. In this case, the second flow entry may be used as a combined flow entry, and the third flow entry may be deleted. In addition, after the flow entries are deduplicated and combined, a flow entry with incomplete information further needs to be deleted from the flow table. Refer to Table 1. 5-tuples of a sixth flow entry, an eighth flow entry, and an eleventh flow entry in Table 1 are incomplete, and therefore the three flow entries may be deleted. At this point, preprocessing of the flow table may be completed, and the preprocessed flow table shown in Table 3 may be obtained. In Table 2, a first domain name entry and a second domain name entry are duplicated. In this case, the first domain name entry or the second domain name entry may be deleted. In addition, in Table 2, domain name types of a third domain name entry, a fifth domain name entry, and a seventh domain name entry are C.name. A domain name entry whose domain name type is C.name is equivalent to a branch of a domain name entry whose domain name type is A.name. Therefore, in the domain name table, a domain name entry whose domain name type is A.name may be reserved, and a domain name entry whose domain name type is C.name may be deleted. This reserves data flows with distinct features for flow behavior feature analysis, so that the plurality of services can be determined more accurately. In addition, after the foregoing processing is performed on the domain name table, a domain name entry with incomplete information further needs to be deleted from the domain name table. Refer to Table 2. A sixth domain name entry and an eighth domain name entry in Table 2 have incomplete information, and the two domain name entries may be deleted. At this point, preprocessing of the domain name table may be completed, and the preprocessed domain name table shown in Table 4 may be obtained. 
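The worked example appears in Tables 1 to 4 below. Before that, the following minimal Python sketch illustrates the deduplication, combination, and filtering rules just described; it is a rough, non-authoritative illustration, and the entry format, field names, and function names are assumptions for illustration only.

```python
def preprocess_flow_table(flow_entries):
    """Deduplicate/combine flow entries that share a 5-tuple (keeping the
    earliest flow start time point) and drop entries with missing fields."""
    required = ("src_ip", "src_port", "dst_ip", "dst_port", "start")
    combined = {}
    for e in flow_entries:
        if not all(e.get(f) for f in required):
            continue  # incomplete information -> delete the entry
        key = (e["src_ip"], e["src_port"], e["dst_ip"], e["dst_port"])
        if key not in combined or e["start"] < combined[key]["start"]:
            combined[key] = e
    return list(combined.values())

def preprocess_domain_table(domain_entries):
    """Deduplicate domain name entries, keep only A.name entries and
    drop entries with missing fields."""
    required = ("src_ip", "dst_domain", "dst_ip", "dns_type")
    seen, result = set(), []
    for e in domain_entries:
        if not all(e.get(f) for f in required) or e["dns_type"] != "A.name":
            continue
        key = (e["src_ip"], e["dst_domain"], e["dst_ip"])
        if key not in seen:
            seen.add(key)
            result.append(e)
    return result
```

Applied to entries shaped like the rows of Table 1 and Table 2, these two helpers would produce tables analogous to Table 3 and Table 4.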
TABLE 1
5-tuple
Source IP    Source port    Destination IP    Destination port    Flow start time point
IP01         Port01         IP001             Port001             T1
IP01         Port01         IP001             Port001             T1
IP01         Port01         IP001             Port001             T2
IP02         Port02         IP002             Port002             T3
IP03         Port03         IP003             Port003             T4
IP04                        IP004             Port004             T5
IP05         Port05         IP005             Port005             T6
IP06         Port06                           Port006             T7
IP07         Port07         IP007             Port007             T8
IP08         Port08         IP008             Port008             T9
IP09         Port09         IP009                                 T10
IP10         Port10         IP010             Port010             T11
IP11         Port11         IP011             Port011             T12
. . .        . . .          . . .             . . .               . . .

TABLE 2
Source IP    Destination domain name    Destination IP    Domain name type
IP01         abpd-jap.xxx.com           IP001             A.name
IP01         abpd-jap.xxx.com           IP001             A.name
IP02         acnd-jap.xxx.com           IP002             C.name
IP03         abnc-jp.xxx.com            IP003             A.name
IP04         afnd-hx.xxx.com            IP004             C.name
IP05                                    IP005             A.name
IP06         acjd-jap.xxx.com           IP006             C.name
IP07         abed-jap.xxx.com                             A.name
IP08         abnd-hx.xxx.com            IP008             A.name
IP09         abnt-jap.xxx.com           IP009             A.name
IP10         asnd-jp.xxx.com            IP011             A.name
IP11         atnd-jap.xxx.com           IP012             A.name
IP12         abyd-jnp.xxx.com           IP013             A.name
. . .        . . .                      . . .             . . .

TABLE 3
5-tuple
Source IP    Source port    Destination IP    Destination port    Flow start time point
IP01         Port01         IP001             Port001             T1
IP02         Port02         IP002             Port002             T3
IP03         Port03         IP003             Port003             T4
IP05         Port05         IP005             Port005             T6
IP07         Port07         IP007             Port007             T8
IP08         Port08         IP008             Port008             T9
IP10         Port10         IP010             Port010             T11
IP11         Port11         IP011             Port011             T12
. . .        . . .          . . .             . . .               . . .

TABLE 4
Source IP    Destination domain name    Destination IP    Domain name type
IP01         abpd-jap.xxx.com           IP001             A.name
IP03         abnc-jp.xxx.com            IP003             A.name
IP08         abnd-hx.xxx.com            IP008             A.name
IP09         abnt-jp.xxx.com            IP009             A.name
IP10         asnd-jp.xxx.com            IP011             A.name
IP11         atnd-jap.xxx.com           IP012             A.name
IP12         abyd-jnp.xxx.com           IP013             A.name
. . .        . . .                      . . .             . . .

An operation of preprocessing the flow table and the domain name table is optional. In other words, subsequent steps may be implemented according to the flow table and the domain name table that are not preprocessed, or may be implemented according to the preprocessed flow table and the preprocessed domain name table. Implementation processes are similar. In this embodiment of this application, the subsequent steps are described by using the preprocessed flow table and the preprocessed domain name table as an example. In other words, the flow table and the domain name table mentioned in the subsequent steps are the preprocessed flow table and the preprocessed domain name table. Although the flow table and the domain name table may be obtained through step301, the amount of data in the flow table and the domain name table is large, the quantity of flows is large, and the flow behavior features are complex. Therefore, it is difficult to identify an application corresponding to the data flow from the flow table and the domain name table. However, an application usually includes a group of services, and one service includes one IP address and one port identifier. Therefore, an application corresponding to the data flow may be identified in a step-by-step manner. Step302is first performed, that is, flow behavior feature analysis is performed according to the flow table, to obtain a plurality of services. Then, step303is performed, that is, the plurality of services are clustered, to obtain a plurality of application types. Finally, a label corresponding to the application type is determined, and an application corresponding to the data flow is identified. Step302: The network device performs flow behavior feature analysis according to the flow table, to obtain the plurality of services, where each service includes one IP address and one port identifier. In some embodiments, step302may be implemented by using the following steps (1) to (4). (1) Determine a port with a loop according to the flow table, to obtain a loop port set. 
In some embodiments, for each port in the flow table, the network device may obtain a same-side IP address set and a peer-side IP address set of the port according to the flow table. The network device determines an intersection between the same-side IP address set and the peer-side IP address set of the port, to obtain a plurality of IP addresses. The network device determines, according to the flow table, a total flow quantity of data flows that correspond to the plurality of IP addresses and that are in all data flows passing through the port, and uses the determined total flow quantity as a first total flow quantity. If the first total flow quantity is greater than a first threshold, the network device determines a total flow quantity of all data flows passing through the port, and uses the determined total flow quantity as a second total flow quantity. If a ratio of the first total flow quantity to the second total flow quantity is greater than a second threshold, the network device determines that the port is a port with a loop. Then, ports with loops in the flow table may form the loop port set. An implementation in which the network device determines, according to the flow table, the total flow quantity of all data flows passing through the port, and determines, according to the flow table, the total flow quantity of data flows that correspond to the plurality of IP addresses and that are in all data flows passing through the port may be: selecting a flow entry at which the port is located from the flow table, collecting statistics on a quantity of selected flow entries, and determining the quantity of selected flow entries as the total flow quantity of all data flows passing through the port; and then collecting statistics on a quantity of flow entries that are in the selected flow entries and whose source IP address or destination IP address is any one of the plurality of IP addresses, and determining the quantity of flow entries as the total flow quantity of data flows that correspond to the plurality of IP addresses and that are in all data flows passing through the port. For example, when the port is a source port, the network device selects a flow entry whose source port is the port from the flow table, collects statistics on a quantity of selected flow entries, and determines the quantity of selected flow entries as a total flow quantity of all data flows passing through the port. Then, the network device collects statistics on a quantity of flow entries that are in the selected flow entries and whose source IP address or destination IP address is any one of the plurality of IP addresses, and determines the quantity of flow entries as a total flow quantity of data flows that correspond to the plurality of IP addresses and that are in all data flows passing through the port. The same-side IP address set of the port is a set of IP addresses on a same side as the port, and the peer-side IP address set of the port is a set of IP addresses on different sides from the port. For example, the port is a source port. A same-side IP address set of the port is a set of source-end IP addresses, and a peer-side IP address set of the port is a set of destination-end IP addresses. Similarly, it is assumed that the port is a destination port. A same-side IP address set of the port is a set of destination-end IP addresses, and a peer-side IP address set of the port is a set of source-end IP addresses. The first threshold and the second threshold may be set based on a requirement. 
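As a non-authoritative illustration of this loop-port check, the following Python sketch applies the intersection and ratio tests described above to one port taken on one side of the flow entries. The function name, the entry format, the side parameter, and the default thresholds (20 and 0.2, matching the worked example that follows) are assumptions made for illustration only.

```python
def is_loop_port(flow_entries, port, side="dst",
                 first_threshold=20, second_threshold=0.2):
    """Check whether `port` (taken on the given side, e.g. as a destination
    port) is a port with a loop, following the rules described above.

    Each flow entry is assumed to be a dict with keys
    src_ip, src_port, dst_ip, dst_port.
    """
    same_ip = side + "_ip"                      # IP on the same side as the port
    peer_ip = "src_ip" if side == "dst" else "dst_ip"
    same_port = side + "_port"

    # All flow entries passing through the port on that side.
    through = [e for e in flow_entries if e[same_port] == port]
    if not through:
        return False

    same_side_ips = {e[same_ip] for e in through}
    peer_side_ips = {e[peer_ip] for e in through}
    both_sides = same_side_ips & peer_side_ips   # intersection of the two sets

    # First total flow quantity: flows through the port whose source or
    # destination IP address is one of the intersection addresses.
    first_total = sum(1 for e in through
                      if e["src_ip"] in both_sides or e["dst_ip"] in both_sides)
    if first_total <= first_threshold:
        return False

    # Second total flow quantity: all flows passing through the port.
    second_total = len(through)
    return first_total / second_total > second_threshold
```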
For example, the first threshold may be 20, and the second threshold may be 0.2. Usually, the plurality of IP addresses determined by using the intersection between the same-side IP address set and the peer-side IP address set of the port are IP addresses of both the source-end device and the destination-end device. In this case, the port may be considered as a potential port with a loop. To further verify whether the port is a port with a loop, the first total flow quantity may be determined. If the first total flow quantity is greater than the first threshold, the second total flow quantity and the ratio of the first total flow quantity to the second total flow quantity may further be determined. If the ratio is greater than the second threshold, it may indicate that the source-end IP addresses and the destination-end IP addresses of most data flows passing through the port are the same, that is, the source-end devices and the destination-end devices of most data flows passing through the port are same devices, and the port may further be determined as a port with a loop. For example, for the destination port Port001 in Table 3, there are 100 flow entries in Table 3 whose destination ports are Port001, in which 30 flow entries have same destination IP addresses and source IP addresses. Then, it is determined that the first total flow quantity is 30, and the second total flow quantity is 100. It is assumed that the network device sets the first threshold to 20 and the second threshold to 0.2 based on a requirement. In this case, it may be determined that the first total flow quantity 30 is greater than the first threshold 20. Therefore, the ratio of the first total flow quantity to the second total flow quantity further needs to be determined. The ratio is 0.3 and the ratio is greater than the second threshold 0.2. Therefore, the port Port001 is determined as a port with a loop. Similarly, for the source port Port02 in Table 3, there are 150 flow entries in Table 3 whose source ports are Port02, in which 50 flow entries have same source IP addresses and destination IP addresses. Then, it is determined that the first total flow quantity is 50, and the second total flow quantity is 150. In this case, it may be determined that the first total flow quantity 50 is greater than the first threshold 20. Therefore, the ratio of the first total flow quantity to the second total flow quantity further needs to be determined. The ratio is 0.3 and the ratio is greater than the second threshold 0.2. Therefore, the port Port02 is determined as a port with a loop. Further, if the first total flow quantity is not greater than the first threshold, or the ratio of the first total flow quantity to the second total flow quantity is not greater than the second threshold, it may be determined that the port is not a port with a loop. (2) Determine a single-client access service set and a first multi-client access service set based on the loop port set and according to the flow table. Each service in the single-client access service set is accessed by a single client, an IP address and a port of the service belong to a same side, and the port does not belong to the loop port set. Each service in the first multi-client access service set is accessed by a plurality of clients, an IP address and a port of the service belong to a same side, and the port does not belong to the loop port set. 
In some embodiments, the network device may determine, according to the flow table, a service that is accessed by a single client and whose IP address and port belong to a same side, and a service that is accessed by a plurality of clients and whose IP address and port belong to a same side, to obtain a potential single-client access service set and a first potential multi-client access service set. The network device removes, from the potential single-client access service set, a service at which a port in the loop port set is located, to obtain the single-client access service set. The network device removes, from the first potential multi-client access service set, a service at which a port in the loop port set is located, to obtain the first multi-client access service set. An implementation in which the network device determines, according to the flow table, the service that is accessed by a single client and whose IP address and port belong to the same side, and the service that is accessed by a plurality of clients and whose IP address and port belong to the same side may be: determining a plurality of target services according to the flow table, where each target service corresponds to an IP address and a port in one flow entry that belong to a same side, and each target service corresponds to a plurality of data flows; determining, for each of the plurality of target services, whether a port of the target service is randomly generated; if the port of the target service is randomly generated, determining whether a quantity of same-side ports corresponding to the IP address of the target service is greater than a third threshold; if the quantity of same-side ports corresponding to the IP address of the target service is greater than the third threshold, determining whether a quantity of peer-side ports corresponding to the target service is greater than a fourth threshold; and if the quantity of peer-side ports corresponding to the target service is greater than the fourth threshold, determining that the target service is the service that is accessed by a plurality of clients and whose IP address and port belong to the same side; or if the quantity of peer-side ports corresponding to the target service is not greater than the fourth threshold, determining whether a peer-side IP address of the target service is unique; and if the peer-side IP address of the target service is unique, determining that the target service is the service that is accessed by a single client and whose IP address and port belong to the same side. The target service corresponds to an IP address and a port in one flow entry that belong to a same side, and the target service corresponds to the plurality of data flows. In other words, in the flow table, if the plurality of data flows correspond to an IP address and a port in one flow entry that belong to the same side, the IP address and the port may be used as the target service. For example, if the plurality of data flows in the flow table correspond to a destination IP address and a destination port in one flow entry, the destination IP address and the destination port may be determined as the target service. Similarly, if the plurality of data flows in the flow table correspond to a source IP address and a source port in one flow entry, the source IP address and the source port may be determined as the target service. 
In some embodiments, an implementation in which the network device determines whether the target service corresponds to the plurality of data flows may be: determining a quantity of flow entries in the flow table in which the target service is located; and if the determined quantity of flow entries is greater than a fifth threshold, determining that the target service corresponds to the plurality of data flows; or if the determined quantity of flow entries is not greater than a fifth threshold, determining that the target service does not correspond to the plurality of data flows. The third threshold, the fourth threshold, and the fifth threshold may be set based on a requirement. For example, the third threshold may be 20, the fourth threshold may be 5, and the fifth threshold may be 10. The target service corresponds to an IP address and a port in one flow entry that belong to a same side, and the target service corresponds to the plurality of data flows, that is, the IP address and the port of the target service may be an IP address and a port of the server. In other words, the target service may be the service that is accessed by a plurality of clients and whose IP address and port belong to the same side. Therefore, in this case, to further determine whether the target service is the service that is accessed by a plurality of clients and whose IP address and port belong to the same side, whether the port of the target service is randomly generated may further be determined. In some embodiments, an implementation of determining whether the port of the target service is randomly generated may be: determining whether a port number of the port of the target service is greater than 1024; and if the port number of the port of the target service is less than 1024, determining that the port is a well-known port, that is, the port is not randomly generated. If the port number of the port of the target service is greater than 1024, it is determined that the port is randomly generated. If the port of the target service is randomly generated, whether the target service is the service that is accessed by a plurality of clients and whose IP address and port belong to the same side cannot be determined, and whether the quantity of same-side ports corresponding to the IP address of the target service is greater than the third threshold further needs to be determined. When determining whether the quantity of same-side ports corresponding to the IP address of the target service is greater than the third threshold, the network device needs to first determine the quantity of same-side ports corresponding to the IP address of the target service. In some embodiments, an implementation of determining the quantity of same-side ports corresponding to the IP address of the target service may be: selecting a flow entry in which the IP address of the target service is located from the flow table; determining the quantity of same-side ports that correspond to the IP address of the target service and that are in selected flow entries; and using the determined quantity of ports as the quantity of same-side ports corresponding to the IP address of the target service. For example, based on the foregoing description, the target service may include a source IP address and a source port, or may include a destination IP address and a destination port. 
When the target service includes a source IP address and a source port, an implementation of determining the quantity of same-side ports corresponding to the IP address of the target service may be: selecting a flow entry in which the IP address of the target service is located from the flow table; determining a quantity of source ports in selected flow entries; and using the determined quantity of source ports as the quantity of same-side ports corresponding to the IP address of the target service. When the target service includes a destination IP address and a destination port, an implementation of determining the quantity of same-side ports corresponding to the IP address of the target service may be: selecting a flow entry in which the IP address of the target service is located from the flow table; determining a quantity of destination ports in selected flow entries; and using the determined quantity of destination ports as the quantity of same-side ports corresponding to the IP address of the target service. A quantity of ports of the server is usually small, but a quantity of clients accessing the server is relatively large. Therefore, when determining that the quantity of same-side ports corresponding to the IP address of the target service is greater than the third threshold, the network device may further determine whether the quantity of peer-side ports corresponding to the target service is greater than the fourth threshold. When determining whether the quantity of peer-side ports corresponding to the target service is greater than the fourth threshold, the network device needs to first determine the quantity of peer-side ports corresponding to the target service. In some embodiments, an implementation of determining the quantity of peer-side ports corresponding to the target service may be: selecting a flow entry in which the target service is located from the flow table; determining a quantity of IP addresses that are in selected flow entries and that belong to different sides as the target service; and using the determined quantity of IP addresses as the quantity of peer-side ports corresponding to the target service. For example, when the target service includes a source IP address and a source port, an implementation of determining the quantity of peer-side ports corresponding to the target service may be: selecting a flow entry in which the target service is located from the flow table; determining a quantity of destination IP addresses in selected flow entries; and using the determined quantity of destination IP addresses as the quantity of peer-side ports corresponding to the target service. When the target service includes a destination IP address and a destination port, an implementation of determining the quantity of peer-side ports corresponding to the target service may be: selecting a flow entry in which the target service is located from the flow table; determining a quantity of source IP addresses in selected flow entries; and using the determined quantity of source IP addresses as the quantity of peer-side ports corresponding to the target service. When determining that the quantity of peer-side ports corresponding to the target service is greater than the fourth threshold, the network device determines that the target service is the service that is accessed by a plurality of clients and whose IP address and port belong to the same side. 
Based on the foregoing implementations, the service that is accessed by a plurality of clients and whose IP address and port belong to the same side can be accurately determined. When it is determined that the quantity of peer-side ports corresponding to the target service is not greater than the fourth threshold, it indicates that the target service may be accessed by a single client, and the IP address and the port of the target service may be the IP address and the port of the server. In this case, whether the peer-side IP address of the target service is unique may be determined. If the peer-side IP address of the target service is unique, it may be directly determined that the target service is the service that is accessed by a single client and whose IP address and port belong to the same side. For example, a target service including a destination IP address and a destination port is IP001+Port001. It is assumed that the destination port Port001 is randomly generated. Then, a quantity of destination ports corresponding to the IP address IP001 may be determined. It is assumed that the quantity of destination ports corresponding to the IP address IP001 is 25, and the third threshold set by the network device is 20. Because the quantity of destination ports corresponding to the IP address IP001 is greater than 20, a quantity of source ports corresponding to the target service may further be determined. It is assumed that the target service IP001+Port001 has 10 flow entries whose source IP addresses and source ports are not exactly the same in the flow table, and the fourth threshold set by the network device is 5. In this case, it may be determined that the quantity of source ports corresponding to the target service is 10 and is greater than 5. Therefore, it is determined that the target service IP001+Port001 is a service that is accessed by a plurality of clients and whose IP address and port belong to the same side. If the target service IP001+Port001 has 3 flow entries whose source IP addresses and source ports are not exactly the same in the flow table, it may be determined that the quantity of source ports corresponding to the target service is 3 and is less than 5. Then, it is determined whether the source IP addresses of the 3 flow entries are the same. If the source IP addresses of the 3 flow entries are the same, and the source ports are different, it is determined that the target service IP001+Port001 is a service that is accessed by a single client and whose IP address and port belong to the same side. Further, well-known ports are ports reserved by the server. Therefore, if the port of the target service is not randomly generated, that is, the port of the target service is a well-known port, it may be directly determined that the target service is the service that is accessed by a plurality of clients and whose IP address and port belong to the same side. Further, if the quantity of same-side ports corresponding to the IP address of the target service is not greater than the third threshold, it is determined that the target service is the service that is accessed by a plurality of clients and whose IP address and port belong to the same side. Alternatively, if the peer-side IP address of the target service is not unique, it is determined that the target service is neither the service that is accessed by a single client and whose IP address and port belong to the same side, nor the service that is accessed by a plurality of clients and whose IP address and port belong to the same side. 
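Putting the checks of step (2) together, the following Python sketch classifies one target service whose quantity of data flows has already been found to exceed the fifth threshold. It is an illustrative reading of the description, not the patented implementation: the function name, the entry format with numeric ports, the 1024 boundary for well-known ports, and the use of distinct peer-side endpoints as the "quantity of peer-side ports" are assumptions.

```python
def classify_target_service(flow_entries, ip, port, side="dst",
                            third_threshold=20, fourth_threshold=5):
    """Classify a target service (an IP address and a port on the same side)
    following the checks of step (2). Returns "multi", "single", or None.

    Flow entries are assumed to be dicts with numeric ports and the keys
    src_ip, src_port, dst_ip, dst_port.
    """
    same_ip, same_port = side + "_ip", side + "_port"
    peer = "src" if side == "dst" else "dst"
    peer_ip, peer_port = peer + "_ip", peer + "_port"

    # A port number below 1024 is treated as a well-known (non-random)
    # server port, so the service is taken as a multi-client access service.
    if port < 1024:
        return "multi"

    # Quantity of same-side ports corresponding to the service's IP address.
    same_side_ports = {e[same_port] for e in flow_entries if e[same_ip] == ip}
    if len(same_side_ports) <= third_threshold:
        return "multi"

    # Flow entries of the service itself and its distinct peer-side
    # endpoints (one reading of the "quantity of peer-side ports").
    service_entries = [e for e in flow_entries
                       if e[same_ip] == ip and e[same_port] == port]
    peer_endpoints = {(e[peer_ip], e[peer_port]) for e in service_entries}
    if len(peer_endpoints) > fourth_threshold:
        return "multi"

    # Few peers: a unique peer-side IP address indicates single-client access.
    peer_ips = {e[peer_ip] for e in service_entries}
    return "single" if len(peer_ips) == 1 else None
```

Services for which the function returns "multi" or "single" would then be filtered against the loop port set to form the first multi-client access service set and the single-client access service set.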
(3) Determine a second multi-client access service set based on the loop port set and the first multi-client access service set and according to the flow table. In some embodiments, the network device may determine, based on the first multi-client access service set and according to the flow table, a service that is accessed by a plurality of clients and whose IP address and port belong to different sides, to obtain a second potential multi-client access service set; and remove, from the second potential multi-client access service set, a service at which a port in the loop port set is located, to obtain the second multi-client access service set. An implementation in which the network device may determine, based on the first multi-client access service set and according to the flow table, the service that is accessed by a plurality of clients and whose IP address and port belong to different sides may be: determining an IP address and a port that are in a same flow entry in the flow table and that belong to different sides as one reference service, to obtain a plurality of reference services; determining, for each of the plurality of reference services, whether the reference service corresponds to a plurality of data flows; if the reference service corresponds to the plurality of data flows, determining whether a port of the reference service is randomly generated; if the port of the reference service is not randomly generated, determining whether the port of the reference service is included in the first multi-client access service set; if the port of the reference service is not included in the first multi-client access service set, determining whether an IP address of the reference service is included in the source IP addresses in the domain name table; and if the IP address of the reference service is not included in the source IP addresses in the domain name table, determining that the reference service is the service that is accessed by a plurality of clients and whose IP address and port belong to different sides. The reference service also includes an IP address and a port. Different from the target service, the IP address and the port of the reference service belong to different sides. For example, a destination IP address and a source port in a same flow entry may form a reference service. Alternatively, a source IP address and a destination port in a same flow entry may form a reference service. In some embodiments, an implementation in which the network device determines whether the reference service corresponds to the plurality of data flows may be: determining a quantity of flow entries in the flow table in which the reference service is located; and if the determined quantity of flow entries is greater than a fifth threshold, determining that the reference service corresponds to the plurality of data flows; or if the determined quantity of flow entries is not greater than the fifth threshold, determining that the reference service does not correspond to the plurality of data flows. For an operation of determining, by the network device, whether the port of the reference service is randomly generated, refer to the foregoing operation of determining whether the port of the target service is randomly generated. Details are not described in this embodiment of this application. In some cases, in the flow table obtained through feature extraction, the source IP address and the destination IP address may be reversed. 
Consequently, the source IP address included in the reference service may be incorrectly used as a destination IP address, or the destination IP address is incorrectly used as a source IP address. In this case, it cannot be determined whether the IP address and port of the reference service are an IP address and a port that are in a same flow entry and that belong to different sides. However, the source IP addresses in the domain name table are the correct source IP addresses. Therefore, if it is determined that the IP address of the reference service is not included in the source IP addresses in the domain name table, the reference service is determined as the service that is accessed by a plurality of clients and whose IP address and port belong to different sides. Further, if the reference service does not correspond to the plurality of data flows, the port of the reference service is randomly generated, the port of the reference service is included in the first multi-client access service set, or the IP address of the reference service is included in the source IP addresses in the domain name table, it may be determined that the reference service is not the service that is accessed by a plurality of clients and whose IP address and port belong to different sides. (4) Combine the first multi-client access service set, the second multi-client access service set, and the single-client access service set, to obtain the plurality of services. Because each of the first multi-client access service set, the second multi-client access service set, and the single-client access service set includes one or more services, the plurality of services belonging to the server can be obtained after the first multi-client access service set, the second multi-client access service set, and the single-client access service set are combined. Step303: The network device clusters the plurality of services according to the flow table and the domain name table, to obtain a plurality of application types. In some embodiments, step303may be implemented by using the following steps (1) to (6). (1) Perform time-correlated clustering on the plurality of services according to the flow table and the domain name table, to obtain a time-correlated clustering result. In some embodiments, the network device may obtain, according to the flow table, a flow start time point in a flow entry in which each of the plurality of services is located. The network device determines a time difference between every two services in the plurality of services based on the obtained flow start time point. The network device determines time correlation between every two services in the plurality of services based on the determined time difference. The network device selects, from the plurality of services based on the time correlation between every two services in the plurality of services, a service that meets a time correlation condition. The network device generates a similarity matrix based on the time correlation between the selected services. The network device determines a spectral clustering result of the plurality of services through spectral cluster analysis based on the similarity matrix. The network device determines a similarity between every two services in the plurality of services according to the domain name table. The network device determines the time-correlated clustering result based on the similarity between every two services in the plurality of services and the spectral clustering result. 
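As a compact, illustrative sketch of the first part of this pipeline, the following Python code derives pairwise time correlation from per-service flow start time points and builds the 0/1 similarity matrix used for spectral clustering. The maximal-clique selection, the spectral cluster analysis, and the domain-name similarity described in the following paragraphs are deliberately omitted; the 60-second default threshold, the use of one representative flow start time point per service, and the function name are assumptions for illustration only.

```python
import numpy as np

def time_correlation_matrix(service_start_times, second_time_threshold=60.0):
    """Build the 0/1 similarity matrix from per-service flow start time
    points (in seconds), as an illustration of the description above.

    `service_start_times` maps a service identifier, e.g. ("IP001", 443),
    to a representative flow start time point. Two services are treated as
    time-correlated when their time difference is below the threshold.
    """
    services = list(service_start_times)
    n = len(services)
    sim = np.eye(n)  # an element corresponding to the same service is 1
    for i in range(n):
        for j in range(i + 1, n):
            diff = abs(service_start_times[services[i]]
                       - service_start_times[services[j]])
            if diff < second_time_threshold:
                sim[i, j] = sim[j, i] = 1.0
    return services, sim
```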
An implementation in which the network device determines the time correlation between every two services in the plurality of services based on the determined time difference may be: for any two services in the plurality of services, determining whether the time difference between the two services is less than a second time threshold; and if the time difference between the two services is less than the second time threshold, determining that the two services are time-correlated; or if the time difference between the two services is not less than the second time threshold, determining that the two services are not time-correlated. For any two other services, whether the two services are time-correlated may also be determined according to the foregoing method. The second time threshold may be set based on a requirement. This is not limited in this embodiment of this application. In some embodiments, an implementation in which the network device selects, from the plurality of services based on the time correlation between every two services in the plurality of services, the service that meets the condition may be: generating an undirected graph based on the time correlation between every two services in the plurality of services, where the undirected graph includes a plurality of nodes that are in a one-to-one correspondence with the plurality of services and edges corresponding to two services that are time-correlated, and the edge is used to connect two nodes corresponding to two services that are time-correlated; determining a maximal clique in the undirected graph, where the maximal clique is a connected region including a largest quantity of nodes after nodes are connected by using edges; and determining a service corresponding to a node in the maximal clique as the service that meets the time correlation condition. FIG.4is a diagram of an undirected graph according to an embodiment of this application. For example, the plurality of services are a service A to a service G. The service A is time-correlated with the service B, the service B is time-correlated with the service C, the service C is separately time-correlated with the service A and the service D, the service D is time-correlated with the service A, the service E is time-correlated with the service F, and the service E is time-correlated with the service G. Then, the undirected graph shown inFIG.4may be generated. In the undirected graph, a node A corresponding to the service A is connected to a node B corresponding to the service B to form an edge, the node B corresponding to the service B is connected to a node C corresponding to the service C to form an edge, the node C corresponding to the service C is separately connected to the node A corresponding to the service A and to a node D corresponding to the service D to form two edges, the node D corresponding to the service D is connected to the node A corresponding to the service A to form an edge, a node E corresponding to the service E is connected to a node F corresponding to the service F to form an edge, and the node E corresponding to the service E is connected to a node G corresponding to the service G to form an edge. In the undirected graph, the connected region including a largest quantity of nodes after nodes are connected by using edges is a connected region including the node A, the node B, the node C, and the node D. The connected region is the maximal clique in the foregoing undirected graph. 
In this case, the service A, the service B, the service C, and the service D are determined as services that meet the time correlation condition. In some embodiments, an implementation in which the network device generates the similarity matrix based on the time correlation between the selected services may be: for every two selected services, if the two services are time-correlated, determining that an element corresponding to the two services in the similarity matrix is 1; or if the two services are not time-correlated, determining that an element corresponding to the two services in the similarity matrix is 0. An element corresponding to a same service in the similarity matrix is 1. In some other embodiments, based on the foregoing maximal clique, an implementation in which the network device generates the similarity matrix based on the time correlation between the selected services may be: for every two selected services, if nodes corresponding to the two services in the maximal clique are connected to form an edge, determining that an element corresponding to the two services in the similarity matrix is 1; or if nodes corresponding to the two services in the maximal clique are not connected to form an edge, determining that an element corresponding to the two services in the similarity matrix is 0. For example, if the service A is time-correlated with the service B, an element corresponding to the service A and the service B in the similarity matrix is 1. If the service B is time-correlated with the service C, an element corresponding to the service B and the service C in the similarity matrix is 1. If the service C is time-correlated with the service A, an element corresponding to the service C and the service A in the similarity matrix is 1. If the service C is time-correlated with the service D, an element corresponding to the service C and the service D in the similarity matrix is 1. In this case, a similarity matrix generated based on the time correlation between the service A, the service B, the service C, and the service D is:

$$\begin{bmatrix} L_{AA} & L_{AB} & L_{AC} & L_{AD} \\ L_{BA} & L_{BB} & L_{BC} & L_{BD} \\ L_{CA} & L_{CB} & L_{CC} & L_{CD} \\ L_{DA} & L_{DB} & L_{DC} & L_{DD} \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \end{bmatrix}$$

The foregoing similarity matrix is a matrix of n rows and n columns, that is, n services are selected from the plurality of services based on the time correlation between every two services in the plurality of services. The foregoing spectral cluster analysis is clustering the n services based on the similarity matrix. The spectral cluster analysis may be used to determine which services in the n services can be clustered into one type. For an implementation of spectral cluster analysis, refer to a related technology. In some embodiments, an implementation in which the network device determines the similarity between every two services in the plurality of services according to the domain name table may be: determining, according to the domain name table, a domain name corresponding to an IP address of each of the plurality of services; and determining the similarity between domain names corresponding to IP addresses of every two services in the plurality of services, to obtain the similarity between every two services in the plurality of services. 
For any two services in the plurality of services, an implementation of determining the similarity between domain names corresponding to IP addresses of the two services may be: determining, according to the domain name table, the domain names corresponding to the IP addresses of the two services; performing word segmentation on the domain name corresponding to the IP address of each service, to obtain all words in the domain name corresponding to each IP address; removing a word used as a domain name suffix from all words in the domain name corresponding to each IP address, to obtain a word group of the domain name corresponding to each IP address; and determining intersection over union of word groups of the domain names corresponding to the IP addresses of the two services, and determining the similarity between the domain names corresponding to the IP addresses of the two services based on the determined intersection over union. Domain name suffixes of most domain names may be the same. Therefore, a similarity between words without the domain name suffixes can accurately reflect the similarity between the two domain names. The intersection over union of every two word groups is a ratio of a quantity of intersection elements to a quantity of union elements in the two word groups. For example, a domain name corresponding to an IP address of a service 1 is abnd-jap.xxx.com, and a word group of the domain name may include abnd, jap, and xxx. A domain name corresponding to an IP address of a service 2 is abnd-hx.xxx.com, and a word group of the domain name may include abnd, hx, and xxx. In the two word groups, intersection elements are abnd and xxx, and union elements are abnd, jap, hx, and xxx. A quantity of intersection elements in the two word groups is 2, and a quantity of union elements in the two word groups is 4. Therefore, intersection over union of the word groups of domain names corresponding to IP addresses of the two services is 2/4. Because spectral cluster analysis divides a plurality of relatively similar services into one type, a spectral clustering result may include a plurality of types of services, and each type of services includes a plurality of relatively similar services. In other words, the spectral clustering result may include a plurality of first service sets, and each first service set includes a plurality of relatively similar services. However, after the spectral cluster analysis, services that are more similar to each other may be distributed across two different service sets, and spectral cluster analysis may not have been performed on some services. Therefore, the network device determines the time-correlated clustering result based on the similarity between every two services in the plurality of services and the spectral clustering result. In some embodiments, an implementation in which the network device determines the time-correlated clustering result based on the similarity between every two services in the plurality of services and the spectral clustering result may be: for each service in each of the plurality of first service sets included in the spectral clustering result, determining a similarity between the service and another service in the same first service set as the service based on the similarity between every two services in the plurality of services; and if the similarity between the service and another service in the same first service set as the service is not greater than a similarity threshold, removing the service from the first service set. 
After the foregoing processing manner is performed on each service in each first service set included in the spectral clustering result, a plurality of second service sets may be obtained. For each removed service, a similarity between the service and each service in each second service set may be determined based on the similarity between every two services in the plurality of services. If the similarity between the service and each service in one of the second service sets is greater than the similarity threshold, the service is added to the second service set. Services on which spectral cluster analysis is not performed are clustered by using the foregoing processing manner performed on a removed service, to obtain the time-correlated clustering result. The similarity threshold may be set based on a requirement. This is not limited in this embodiment of this application. Optionally, in this embodiment of this application, before performing time-correlated clustering on the plurality of services according to the flow table and the domain name table, the network device may further process the flow table again based on the plurality of services determined in step302. In some embodiments, a flow entry whose destination IP address and destination port are not an IP address and a port of any one of the plurality of services in the flow table may be deleted, and/or an entry whose source IP address and destination IP address are reversed may be corrected, and an entry whose source port and destination port are reversed may be corrected. (2) Select a periodic service from the plurality of services according to the flow table, to obtain a periodic clustering result. In some embodiments, for each of the plurality of services, the network device obtains, according to the flow table, a flow start time point of the plurality of data flows of the service accessed by a same client. The network device determines a time difference between every two adjacent flow start time points based on a sequence of the flow start time points of the plurality of data flows of the accessed service. The network device determines whether a periodicity of the service is a strong periodicity through a Fourier transform based on the determined time difference. If the periodicity of the service is a strong periodicity, the network device determines that the service is a periodic service. In some embodiments, an implementation in which the network device obtains, according to the flow table, the flow start time point of the plurality of data flows of the service accessed by the same client may be: selecting a flow entry whose destination IP address and destination port are an IP address and a port of the service from the flow table; determining a quantity of flow entries in which a source IP address of the selected flow entry is located; obtaining a flow start time point from a flow entry in which the source IP address, corresponding to a largest quantity of flow entries, is located; and determining the obtained flow start time point as the flow start time point of the plurality of data flows of the service accessed by the same client. For example, for a service IP008+Port008, in the flow table, there are 20 flow entries whose destination IP address and destination port are an IP address and a port of the service. 
In the 20 flow entries, if a quantity of flow entries in which a source IP address IP08 is located is 15, and a quantity of flow entries whose source IP address is IP01 is 5, a flow start time point in the flow entry in which the source IP address IP08 is located may be obtained, and the obtained flow start time point is determined as the flow start time point of the plurality of data flows of the service accessed by the same client. In some embodiments, an implementation of determining whether the periodicity of the service is a strong periodicity through a Fourier transform based on the determined time difference may be: establishing a coordinate system by using a quantity of determined time differences as a horizontal axis and by using the determined time differences as a vertical axis; drawing the determined time differences into the coordinate system, to obtain discrete signals; performing a Fourier transform on the discrete signals, to determine whether a quantity of peak values in transformed signals is less than a sixth threshold; and if the quantity of peak values is less than the sixth threshold, determining that the periodicity of the service is a strong periodicity; or if the quantity of peak values is not less than the sixth threshold, determining that the periodicity of the service is not strongly periodic. The sixth threshold may be set based on a requirement. This is not limited in this embodiment of this application. (3) Obtain a plurality of first services and a plurality of second services from the plurality of services according to the domain name table. The plurality of first services are services that are accessed by a plurality of clients and that correspond to domain names, and the plurality of second services include a service that is accessed by a plurality of clients and that does not correspond to domain names and a service that is accessed by a single client. It can be learned from step (4) in step302that the plurality of services include a service accessed by a plurality of clients and a service accessed by a single client, and each domain name entry in the domain name table includes the source IP address, the destination domain name, the destination IP address, and the domain name type. Therefore, for the services accessed by a plurality of clients, whether each of the plurality of services corresponds to a domain name may be determined from the domain name table, and then a service that is accessed by a plurality of clients and corresponds to a domain name is selected from the plurality of services, to obtain the plurality of first services. In addition, the service that is accessed by a plurality of clients and does not correspond to a domain name can be selected. (4) Perform semantic-correlated clustering on the plurality of first services, to obtain a semantic-correlated clustering result. In some embodiments, the network device may cluster the plurality of first services based on domain name semantic correlation, to obtain a plurality of first clustering results. The network device combines the plurality of first clustering results based on domain name similarity between the plurality of first clustering results, to obtain a plurality of second clustering results. The network device clusters an unclustered service and the plurality of second clustering results based on domain name semantic correlation between the unclustered service in the plurality of first services and each second clustering result, to obtain the semantic-correlated clustering result. 
An implementation of clustering the plurality of first services based on the domain name semantic correlation may be: obtaining, from the plurality of first services, a plurality of third services that are accessed by a plurality of clients and correspond to a unique domain name, and obtaining, from the plurality of first services, a plurality of fourth services that are accessed by a plurality of clients and correspond to a plurality of domain names; obtaining combinable domain names from the domain names corresponding to the plurality of fourth services, and combining the combinable domain names to obtain a plurality of first domain names; removing a number and a symbol from each first domain name, and removing a number and a symbol from the domain name corresponding to each third service, to obtain a plurality of second domain names; clustering services corresponding to the plurality of second domain names based on semantic correlation of the plurality of second domain names; and clustering services corresponding to domain names that are not combinable based on semantic correlation of the domain names that are not combinable in the domain names corresponding to the plurality of fourth services. Because the plurality of first services are services accessed by a plurality of clients, for each first service, a domain name corresponding to the first service may be determined from the domain name table. If the domain name corresponding to the first service is unique, the first service is determined as a third service that is accessed by a plurality of clients and that corresponds to a unique domain name. If the domain name corresponding to the first service is not unique, the first service is determined as a fourth service that is accessed by a plurality of clients and that corresponds to a plurality of domain names. In some embodiments, an implementation of obtaining the combinable domain names from the domain names corresponding to the plurality of fourth services, and combining the combinable domain names to obtain the plurality of first domain names may be: obtaining a plurality of domain names with a same level-1 domain name from the domain names corresponding to the plurality of fourth services; performing word segmentation on the obtained plurality of domain names to obtain all words in each domain name; removing a word belonging to the level-1 domain name from all words in each domain name to obtain a word group of each domain name; determining intersection over union of obtained word groups of the plurality of domain names and if the intersection over union is greater than a first intersection over union threshold, determining that in the domain names corresponding to the plurality of fourth services, the plurality of domain names with the same level-1 domain name are combinable domain names; determining an intersection of the obtained word groups of the plurality of domain names; and adding the intersection of the plurality of word groups in front of the level-1 domain name to obtain a combined domain name, namely, the first domain name. For example, the domain names corresponding to the plurality of fourth services are abpd-jap.xxx.com, acnd-jap.xxx.com, and abed-jap.xxx.com. Word segmentation is performed on the three domain names, to obtain words abpd, jap, xxx, and com in the domain name abpd-jap.xxx.com, words acnd, jap, xxx, and com in the domain name acnd-jap.xxx.com, and words abed, jap, xxx, and com in the domain name abed-jap.xxx.com. 
Words xxx and com that belong to the level-1 domain name are removed from all words in each domain name, to obtain a word group {abpd, jap} of the domain name abpd-jap.xxx.com, a word group {acnd, jap} of the domain name acnd-jap.xxx.com, and a word group {abed, jap} of the domain name abed-jap.xxx.com. An intersection element in the three word groups is jap, and a quantity of intersection elements is 1. Union elements are abpd, acnd, abed, and jap, and a quantity of union elements is 4. In this case, intersection over union of the three word groups may be determined to be 1/4. It is assumed that the intersection over union is greater than the first intersection over union threshold. It is determined that the three domain names are combinable domain names. Then, an intersection of the obtained word groups of the plurality of domain names is determined. The intersection of the plurality of word groups is added in front of the level-1 domain name, to obtain a combined domain name, that is, the first domain name is jap.xxx.com. The semantic correlation of the plurality of second domain names may also be determined based on intersection over union of corresponding word groups, and second domain names whose intersection over union is greater than a second intersection over union threshold are further divided into one type, to implement clustering of the services corresponding to the plurality of second domain names. For a manner of determining the intersection over union of the word groups corresponding to the plurality of second domain names, refer to the foregoing manner. Details are not described in this embodiment of this application. The semantic correlation of the domain names that are not combinable in the domain names corresponding to the plurality of fourth services may also be determined based on intersection over union of corresponding word groups. Then, domain names that are not combinable and whose intersection over union is greater than a third intersection over union threshold are further divided into one type, to implement clustering of the services corresponding to the domain names that are not combinable. For a manner of determining the intersection over union of the word groups corresponding to the domain names that are not combinable in the domain names corresponding to the plurality of fourth services, refer to the foregoing manner. Details are not described in this embodiment of this application. The first intersection over union threshold, the second intersection over union threshold, and the third intersection over union threshold may be set based on a requirement, and the first intersection over union threshold, the second intersection over union threshold, and the third intersection over union threshold may be the same, or may be different. (5) Perform client-similarity clustering on the plurality of second services, to obtain a client-similarity clustering result. In some embodiments, the network device may determine an IP address of a client accessing each of the plurality of second services; and cluster the plurality of second services based on intersection over union of the IP addresses of the clients accessing each second service, to obtain the client-similarity clustering result. For example, the network device may divide the second services whose intersection over union of the IP addresses of the clients of the second services is greater than a fourth intersection over union threshold into one type, to obtain the client-similarity clustering result. 
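The word-segmentation and intersection-over-union computation in the example above can be reproduced with the following sketch. The tokenization on "." and "-", the way the level-1 domain name is supplied, and the threshold value are assumptions for illustration rather than a definitive implementation.

```python
import re

def word_group(domain, level1):
    """Split a domain name into words and drop the words of its level-1 domain."""
    words = set(re.split(r"[.\-]", domain))
    return words - set(level1.split("."))

def iou(a, b):
    """Intersection over union of two word groups (or any two sets)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def combine_domains(domains, level1, first_iou_threshold=0.2):
    """Combine domain names that share a level-1 domain when the intersection
    over union of their word groups exceeds the first threshold.

    Returns the combined (first) domain name, or None if not combinable.
    """
    groups = [word_group(d, level1) for d in domains]
    union = set().union(*groups)
    inter = set.intersection(*groups)
    if not union or len(inter) / len(union) <= first_iou_threshold:
        return None
    # Add the intersection of the word groups in front of the level-1 domain.
    return ".".join(sorted(inter)) + "." + level1

domains = ["abpd-jap.xxx.com", "acnd-jap.xxx.com", "abed-jap.xxx.com"]
print(combine_domains(domains, "xxx.com"))  # jap.xxx.com (intersection over union = 1/4)
```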
For example, a client of a second service A has three IP addresses: IP01, IP02, and IP03, and a client of a second service B also has three IP addresses: IP01, IP03, and IP05. Two IP addresses of the client of the second service A are the same as two IP addresses of the client of the second service B. Therefore, intersection elements between the IP addresses of the client of the second service A and the IP addresses of the client of the second service B are {IP01, IP03}, union elements between the IP addresses of the client of the second service A and the IP addresses of the client of the second service B are {IP01, IP02, IP03, IP05}, and intersection over union between the IP addresses of the client of the second service A and the IP addresses of the client of the second service B is 2/4. It is assumed that the intersection over union between the IP addresses of the client of the second service A and the IP addresses of the client of the second service B is greater than the fourth intersection over union threshold. The two services are clustered, to obtain a client-similarity clustering result. (6) Merge the time-correlated clustering result, the periodic clustering result, the semantic-correlated clustering result, and the client-similarity clustering result, to obtain the plurality of application types. In some embodiments, the network device may determine intersection over union between every two clustering results, and combine two clustering results between which intersection over union is greater than a fifth intersection over union threshold. After all clustering results are processed by using the foregoing manner, the plurality of application types may be obtained. The intersection over union between every two clustering results is intersection over union of services in the two clustering results, that is, a ratio of a quantity of intersection services to a quantity of union services in the two clustering results. Each clustering result includes a plurality of application types. Therefore, after the foregoing four clustering results are combined, the plurality of application types may be obtained, and each application type may correspond to a plurality of services. In addition, the fourth intersection over union threshold and the fifth intersection over union threshold may be set based on a requirement, and the fourth intersection over union threshold and the fifth intersection over union threshold may be the same, or may be different. The fourth intersection over union threshold, the fifth intersection over union threshold, the first intersection over union threshold, the second intersection over union threshold, and the third intersection over union threshold may be the same, or may be different. Step304: The network device determines a label corresponding to each of the plurality of application types, where the label is used to identify an application to which a data flow belongs. In some embodiments, the network device may divide the plurality of application types into a first application group, a second application group, and a third application group. A service included in each application type in the first application group corresponds to a domain name, a service included in each application type in the second application group does not correspond to a domain name, and each application type in the third application group corresponds to a service that is not clustered. 
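As a concrete illustration of the client-similarity clustering in step (5) and the worked example above, the sketch below groups second services whose client IP address sets have an intersection over union greater than the fourth intersection over union threshold. The greedy single-pass grouping strategy and the threshold value are assumptions made for illustration.

```python
def iou(a, b):
    """Intersection over union of two sets of client IP addresses."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def cluster_by_client_similarity(service_clients, fourth_iou_threshold=0.4):
    """service_clients: dict mapping a second service to the set of IP
    addresses of the clients accessing it. Services whose client IP sets
    have an intersection over union greater than the threshold are put in
    the same cluster (a simple greedy grouping, assumed for illustration)."""
    clusters = []  # each cluster: [list of services, union of client IPs]
    for service, clients in service_clients.items():
        for cluster in clusters:
            if iou(clients, cluster[1]) > fourth_iou_threshold:
                cluster[0].append(service)
                cluster[1] |= clients
                break
        else:
            clusters.append([[service], set(clients)])
    return [members for members, _ in clusters]

services = {
    "A": {"IP01", "IP02", "IP03"},
    "B": {"IP01", "IP03", "IP05"},  # intersection over union with A is 2/4
    "C": {"IP08", "IP09"},
}
print(cluster_by_client_similarity(services))  # [['A', 'B'], ['C']]
```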
The network device determines a label corresponding to each application type based on the domain name corresponding to the service included in each application type in the first application group, and determines a label corresponding to each application type in the second application group and a label corresponding to each application type in the third application group. In some embodiments, an implementation in which the network device determines the label corresponding to each application type based on the domain name corresponding to the service included in each application type in the first application group may be: for each application type in the first application group, determining level-1 domain names in domain names corresponding to services included in the application type; and if all determined level-1 domain names are the same, determining an enterprise name corresponding to the level-1 domain name, and using the enterprise name as the label corresponding to the application type; or if the determined level-1 domain names are different, determining an enterprise name corresponding to a level-1 domain name with a largest proportion in the level-1 domain names, and using the enterprise name as the label corresponding to the application type. For example, an application type in the first application group includes 20 services, and a level-1 domain name in a domain name corresponding to each service is xxx.com. It is assumed that an enterprise name corresponding to the level-1 domain name is xxx. Then, xxx is used as a label corresponding to the application type. For another example, an application type in the first application group includes 50 services, level-1 domain names in domain names corresponding to 40 services are xxx.com, and level-1 domain names in domain names corresponding to the other 10 services are scmd.com. It is assumed that an enterprise name corresponding to the level-1 domain name xxx.com is xxx, and an enterprise name corresponding to the level-1 domain name scmd.com is scmd. Because a proportion of the level-1 domain names xxx.com is the largest, xxx may be used as a label corresponding to the application type. In some embodiments, an implementation in which the network device determines the label corresponding to each application type in the second application group may be: for each application type in the second application group, determining, based on the service included in the application type, a service at which the well-known port is located, a service belonging to the loop port set, a service whose source port and destination port are the same, and a service belonging to the second multi-client access service set; determining whether a quantity of services is greater than a seventh threshold; and if the quantity of services is not greater than the seventh threshold, generating the label corresponding to the application type based on a first target character and a first format; or if the quantity of services is greater than the seventh threshold, generating the label corresponding to the application type based on a first target character and a second format. The first target character may be set based on a requirement. For example, the first target character is NN. The first format and the second format may also be set based on a requirement. For example, the first format may be: first target character/{IP01+Port001, IP02+Port002, IP01+Port003}, and the second format may be: first target character/{IP: 001, 002, 003, 004}. 
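The label-determination rules just described can be sketched as follows. The enterprise-name lookup table, the helper names, and the threshold value are assumptions for illustration; only the decision rules (majority level-1 domain name for the first application group, and the seventh-threshold switch between the first format and the second format) follow the description.

```python
from collections import Counter

def label_for_first_group(level1_domains, enterprise_names):
    """Label for an application type whose services correspond to domain names.

    level1_domains: level-1 domain names of the services in the application type.
    enterprise_names: assumed lookup table from level-1 domain name to enterprise name.
    """
    # Use the enterprise name of the level-1 domain name with the largest proportion.
    most_common_level1, _ = Counter(level1_domains).most_common(1)[0]
    return enterprise_names[most_common_level1]

def label_for_unnamed_group(services, target_char, seventh_threshold=3):
    """Label for an application type in the second or third application group.

    services: list of (ip, port) pairs for the selected services.
    target_char: 'NN' for the second group or 'UKN' for the third group.
    """
    if len(services) <= seventh_threshold:
        # First format: target character plus the IP+port of each service.
        body = ", ".join(f"{ip}+{port}" for ip, port in services)
    else:
        # Second format: target character plus only the port identifiers.
        body = "IP: " + ", ".join(port.removeprefix("Port") for _, port in services)
    return f"{target_char}/{{{body}}}"

print(label_for_first_group(
    ["xxx.com"] * 40 + ["scmd.com"] * 10,
    {"xxx.com": "xxx", "scmd.com": "scmd"}))             # xxx
print(label_for_unnamed_group(
    [("IP01", "Port001"), ("IP02", "Port002")], "NN"))   # NN/{IP01+Port001, IP02+Port002}
```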
For example, an application type in the second application group includes 30 services. It is determined based on the 30 services that services at which the well-known port is located are IP01+Port001 and IP02+Port002, the service belonging to the loop port set is IP12+Port002, the service whose source port and destination port are the same is IP14+Port011, and the service belonging to the second multi-client access service set is IP05+Port005. In this case, a quantity of services may be determined as 5. It is assumed that the quantity 5 of services is not greater than a seventh threshold, and the first target character is NN. It may be determined that the label corresponding to the application type is NN/{IP01+Port001, IP02+Port002, IP12+Port002, IP14+Port011, IP05+Port005}. It is assumed that the quantity 5 of services is greater than the seventh threshold. It may be determined that the label corresponding to the application type is NN/{IP: 001, 002, 002, 011, 005}. In some embodiments, an implementation in which the network device determines the label corresponding to each application type in the third application group may be: for each application type in the third application group, determining whether a quantity of services included in the application type is greater than the seventh threshold; and if the quantity is not greater than the seventh threshold, generating the label corresponding to the application type based on a second target character and a first format; or if the quantity is greater than the seventh threshold, generating the label corresponding to the application type based on a second target character and a second format. The second target character may be set based on a requirement. For example, the second target character is UKN. The first format and the second format may also be set based on a requirement. For example, the first format may be: second target character/{IP01+Port001, IP02+Port002, IP01+Port003}, and the second format may be: second target character/{IP: 001, 002, 003, 004}. After determining the label corresponding to each of the plurality of application types, the network device may further display the label corresponding to each of the plurality of application types. In this embodiment of this application, the network device may perform flow behavior feature analysis according to the flow table to obtain the plurality of services. Each service includes one IP address and one port identifier, and one application may usually include a group of services. Therefore, after the plurality of services are clustered according to the flow table and the domain name table, the plurality of application types may be obtained, where each application type includes a plurality of services and each application type corresponds to one application. In this case, a label corresponding to each of the plurality of application types may be determined. Therefore, the label may be used to identify an application to which a data flow belongs. It may be learned that in this embodiment of this application, an application may be identified based on a flow behavior feature without a traffic feature database. In this case, when a new application appears, the new application may be directly identified based on an IP address and a port of a server accessed by the new application. This improves an application identification rate. FIG.5is a diagram of an apparatus500for application identification according to an embodiment of this application. 
The apparatus500for application identification may be implemented as a part or the entire of a network device by using software, hardware, or a combination thereof. The apparatus500includes an extraction module501, an analysis module502, a clustering module503, and a determining module504. Functions of the extraction module501, the analysis module502, the clustering module503, and the determining module504may be implemented by using the processor in the embodiment inFIG.2. The extraction module501is configured to perform an operation of step301in the embodiment inFIG.3. The analysis module502is configured to perform an operation of step302in the embodiment inFIG.3. The clustering module503is configured to perform an operation of step303in the embodiment inFIG.3. The determining module504is configured to perform an operation of step304in the embodiment inFIG.3. FIG.6is a diagram of the analysis module502ofFIG.5according to an embodiment of this application. The analysis module502includes:a first determining submodule601, configured to determine a port with a loop according to a flow table, to obtain a loop port set;a second determining submodule602, configured to determine a single-client access service set and a first multi-client access service set based on the loop port set and according to the flow table, where each service in the single-client access service set is accessed by a single client, an IP address and a port of the service belong to a same side, the port does not belong to the loop port set, each service in the first multi-client access service set is accessed by a plurality of clients, an IP address and a port of the service belong to a same side, and the port does not belong to the loop port set;a third determining submodule603, configured to determine a second multi-client access service set based on the loop port set and the first multi-client access service set and according to the flow table, where each service in the second multi-client access service set is accessed by a plurality of clients, an IP address and a port of the service belong to different sides, and the port does not belong to the loop port set; anda combination submodule604, configured to combine the first multi-client access service set, the second multi-client access service set, and the single-client access service set, to obtain a plurality of services. Optionally, the first determining submodule601is configured to:obtain, for each port in the flow table, a same-side IP address set and a peer-side IP address set of the port according to the flow table;determine an intersection between the same-side IP address set and the peer-side IP address set of the port, to obtain a plurality of IP addresses;determine, according to the flow table, a total flow quantity of data flows that correspond to the plurality of IP addresses and that are in all data flows passing through the port, and use the determined total flow quantity as a first total flow quantity;if the first total flow quantity is greater than a first threshold, determine a total flow quantity of all data flows passing through the port, and use the determined total flow quantity as a second total flow quantity; andif a ratio of the first total flow quantity to the second total flow quantity is greater than a second threshold, determine that the port is a port with a loop. 
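A minimal sketch of the loop-port check performed by the first determining submodule is shown below. The flow-table representation (a list of entries with a same-side IP address, a peer-side IP address, a port, and a flow count) and the two threshold values are assumptions made for illustration.

```python
def is_loop_port(port, flow_entries, first_threshold=10, second_threshold=0.5):
    """Check whether a port is a port with a loop.

    flow_entries: iterable of dicts with keys 'same_ip', 'peer_ip', 'port'
    and 'flows' (the number of data flows in the entry) — an assumed
    flow-table representation. The two thresholds are assumed values.
    """
    entries = [e for e in flow_entries if e["port"] == port]
    same_side = {e["same_ip"] for e in entries}
    peer_side = {e["peer_ip"] for e in entries}

    # IP addresses that appear in both the same-side set and the peer-side set.
    looped_ips = same_side & peer_side

    # First total flow quantity: data flows that correspond to those IP addresses.
    first_total = sum(e["flows"] for e in entries
                      if e["same_ip"] in looped_ips or e["peer_ip"] in looped_ips)
    if first_total <= first_threshold:
        return False

    # Second total flow quantity: all data flows passing through the port.
    second_total = sum(e["flows"] for e in entries)
    return second_total > 0 and first_total / second_total > second_threshold

def loop_port_set(flow_entries):
    """Collect all ports with a loop, that is, the loop port set."""
    ports = {e["port"] for e in flow_entries}
    return {p for p in ports if is_loop_port(p, flow_entries)}
```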
Optionally, the second determining submodule602is configured to:determine, according to the flow table, a service that is accessed by a single client and whose IP address and port belong to a same side, and a service that is accessed by a plurality of clients and whose IP address and port belong to a same side, to obtain a potential single-client access service set and a first potential multi-client access service set; andselect, from the potential single-client access service set, a service at which a port that is not in the loop port set is located, to obtain the single-client access service set; and select, from the first potential multi-client access service set, a service at which a port that is not in the loop port set is located, to obtain the first multi-client access service set. Optionally, the second determining submodule602is further configured to:determine a plurality of target services according to the flow table, where each target service corresponds to an IP address and a port in one flow entry that belong to a same side, and each target service corresponds to a plurality of data flows;determine, for each of the plurality of target services, whether a port of the target service is randomly generated;if the port of the target service is randomly generated, determine whether a quantity of same-side ports corresponding to the IP address of the target service is greater than a third threshold;if the quantity of same-side ports corresponding to the IP address of the target service is greater than the third threshold, determine whether a quantity of peer-side ports corresponding to the target service is greater than a fourth threshold; andif the quantity of peer-side ports corresponding to the target service is greater than the fourth threshold, determine that the target service is the service that is accessed by a plurality of clients and whose IP address and port belong to the same side; orif the quantity of peer-side ports corresponding to the target service is not greater than the fourth threshold, determine whether a peer-side IP address of the target service is unique; andif the peer-side IP address of the target service is unique, determine that the target service is the service that is accessed by a single client and whose IP address and port belong to the same side. Optionally, the second determining submodule602is further configured to:if the port of the target service is not randomly generated, determine that the target service is the service that is accessed by a plurality of clients and whose IP address and port belong to the same side. Optionally, the second determining submodule602is further configured to:if the quantity of same-side ports corresponding to the IP address of the target service is not greater than the third threshold, determine that the target service is the service that is accessed by a plurality of clients and whose IP address and port belong to the same side. Optionally, the third determining submodule603is configured to:determine, based on the first multi-client access service set and according to the flow table, a service that is accessed by a plurality of clients and whose IP address and port belong to different sides, to obtain a second potential multi-client access service set; andselect, from the second potential multi-client access service set, a service at which a port that is not in the loop port set is located, to obtain the second multi-client access service set.
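The classification sequence used by the second determining submodule for a target service can be written as the following sketch. How a randomly generated port is detected and the threshold values are assumptions; only the order of the checks mirrors the description above.

```python
def classify_target_service(port_is_random, same_side_port_count,
                            peer_side_port_count, peer_ip_is_unique,
                            third_threshold=50, fourth_threshold=5):
    """Return 'multi-client', 'single-client', or None (undetermined) for a
    target service whose IP address and port belong to the same side."""
    if not port_is_random:
        # A fixed (non-random) port indicates a service accessed by many clients.
        return "multi-client"
    if same_side_port_count <= third_threshold:
        # Few same-side ports for this IP address: still multi-client access.
        return "multi-client"
    if peer_side_port_count > fourth_threshold:
        # Many peer-side ports: accessed by a plurality of clients.
        return "multi-client"
    if peer_ip_is_unique:
        # A single peer-side IP address: accessed by a single client.
        return "single-client"
    return None

print(classify_target_service(True, 80, 3, True))    # single-client
print(classify_target_service(False, 10, 1, False))  # multi-client
```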
Optionally, the third determining submodule603is further configured to:determine an IP address and a port that are in a same flow entry in the flow table and that belong to different sides as one reference service, to obtain a plurality of reference services;determine, for each of the plurality of reference services, whether the reference service corresponds to a plurality of data flows;if the reference service corresponds to the plurality of data flows, determine whether a port of the reference service is randomly generated;if the port of the reference service is not randomly generated, determine whether the port of the reference service is included in the first multi-client access service set;if the port of the reference service is not included in the first multi-client access service set, determine whether an IP address of the reference service is included in the source IP addresses in the domain name table; andif the IP address of the reference service is not included in the source IP addresses in the domain name table, determine that the reference service is the service that is accessed by a plurality of clients and whose IP address and port belong to different sides. FIG.7is a diagram of the clustering module503ofFIG.5according to an embodiment of this application. Optionally, the clustering module503includes:a first clustering submodule701, configured to perform time-correlated clustering on the plurality of services according to the flow table and the domain name table, to obtain a time-correlated clustering result;a second clustering submodule702, configured to select a periodic service from the plurality of services according to the flow table, to obtain a periodic clustering result;an obtaining submodule703, configured to obtain a plurality of first services and a plurality of second services from the plurality of services according to the domain name table, where the plurality of first services are services that are accessed by a plurality of clients and that correspond to domain names, and the plurality of second services include a service that is accessed by a plurality of clients and that does not correspond to domain names and a service that is accessed by a single client;a third clustering submodule704, configured to perform semantic-correlated clustering on the plurality of first services to obtain a semantic-correlated clustering result;a fourth clustering submodule705, configured to perform client-similarity clustering on the plurality of second services to obtain a client-similarity clustering result; anda merging submodule706, configured to merge the time-correlated clustering result, the periodic clustering result, the semantic-correlated clustering result, and the client-similarity clustering result, to obtain the plurality of application types. 
Optionally, the first clustering submodule701is configured to:obtain, according to the flow table, a flow start time point in a flow entry in which each of the plurality of services is located;determine a time difference between every two services in the plurality of services based on the obtained flow start time point;determine time correlation between every two services in the plurality of services based on the determined time difference;select, from the plurality of services based on the time correlation between every two services in the plurality of services, a service that meets a time correlation condition;generate a similarity matrix based on the time correlation between the selected services;determine a spectral clustering result of the plurality of services through spectral cluster analysis based on the similarity matrix;determine a similarity between every two services in the plurality of services according to the domain name table; anddetermine the time-correlated clustering result based on the similarity between every two services in the plurality of services and the spectral clustering result. Optionally, the second clustering submodule702is configured to:for each of the plurality of services, obtain, according to the flow table, a flow start time point of the plurality of data flows of the service accessed by a same client;determine a time difference between every two adjacent flow start time points based on a sequence of the flow start time points of the plurality of data flows of the accessed service;determine whether a periodicity of the service is a strong periodicity through a Fourier transform based on the determined time difference; andif the periodicity of the service is a strong periodicity, determine that the service is a periodic service. Optionally, the third clustering submodule704is configured to:cluster the plurality of first services based on domain name semantic correlation to obtain a plurality of first clustering results;combine the plurality of first clustering results based on domain name similarity between the plurality of first clustering results to obtain a plurality of second clustering results; andcluster an unclustered service and the plurality of second clustering results based on domain name semantic correlation between the unclustered service in the plurality of first services and each second clustering result, to obtain the semantic-correlated clustering result. 
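The time-correlated clustering performed by the first clustering submodule can be sketched as below, using scikit-learn's SpectralClustering with a precomputed affinity matrix. The way time correlation is turned into a similarity value (a Gaussian kernel over pairwise start-time differences) and the number of clusters are assumptions made for illustration; only the overall flow of building a similarity matrix and running spectral clustering on it follows the description.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def time_correlated_clusters(start_times, n_clusters=2, scale=10.0):
    """start_times: list of flow start time points (seconds), one per service.
    Builds a similarity matrix from pairwise start-time differences and runs
    spectral clustering on that matrix."""
    t = np.asarray(start_times, dtype=float)
    # Pairwise time differences between every two services.
    diffs = np.abs(t[:, None] - t[None, :])
    # Gaussian kernel as an assumed time-correlation measure: closer start
    # time points give a similarity closer to 1.
    similarity = np.exp(-(diffs / scale) ** 2)
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed",
                                random_state=0).fit_predict(similarity)
    return labels

# Two groups of services whose flows start close together in time.
print(time_correlated_clusters([0, 1, 2, 100, 101, 102]))
```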
Optionally, the third clustering submodule704is further configured to:obtain, from the plurality of first services, a plurality of third services that are accessed by a plurality of clients and correspond to a unique domain name, and obtain, from the plurality of first services, a plurality of fourth services that are accessed by a plurality of clients and correspond to a plurality of domain names;obtain combinable domain names from the domain names corresponding to the plurality of fourth services and combine the combinable domain names to obtain a plurality of first domain names;remove a number and a symbol from each first domain name and remove a number and a symbol from the domain name corresponding to each third service, to obtain a plurality of second domain names;cluster services corresponding to the plurality of second domain names based on semantic correlation of the plurality of second domain names; andcluster services corresponding to domain names that are not combinable based on semantic correlation of the domain names that are not combinable in the domain names corresponding to the plurality of fourth services. Optionally, the fourth clustering submodule705is configured to:determine an IP address of a client accessing each of the plurality of second services; andcluster the plurality of second services based on intersection over union of the IP addresses of the clients accessing each second service, to obtain the client-similarity clustering result. Optionally, the determining module504is configured to:divide the plurality of application types into a first application group, a second application group, and a third application group, where a service included in each application type in the first application group corresponds to a domain name, a service included in each application type in the second application group does not correspond to a domain name, and each application type in the third application group corresponds to a service that is not clustered;determine a label corresponding to each application type based on the domain name corresponding to the service included in each application type in the first application group; anddetermine a label corresponding to each application type in the second application group and a label corresponding to each application type in the third application group. In this embodiment of this application, the network device may perform flow behavior feature analysis according to the flow table to obtain the plurality of services. Each service includes one IP address and one port identifier, and one application may usually include a group of services. Therefore, after the plurality of services are clustered according to the flow table and the domain name table, the plurality of application types may be obtained, where each application type includes a plurality of services and each application type corresponds to one application. In this case, a label corresponding to each of the plurality of application types may be determined. Therefore, the label may be used to identify an application to which a data flow belongs. In this embodiment, an application may be identified based on a flow behavior feature without a traffic feature database. In this case, when a new application appears, the new application may be directly identified based on an IP address and a port of a server accessed by the new application. This improves an application identification rate. 
When the apparatus for application identification provided in the foregoing embodiment identifies an application, division into the foregoing functional modules is merely used as an example for description. During actual application, the foregoing functions may be allocated to different functional modules for implementation based on a requirement. An internal structure of the apparatus is divided into different functional modules, to implement all or some of the functions described above. In addition, the apparatus for application identification provided in the foregoing embodiment and the method for application identification embodiments pertain to a same concept. For a specific implementation process thereof, refer to the method embodiments. Details are not described herein again. All or some of the embodiments may be implemented by using software, hardware, firmware, or any combination thereof. When the software is used to implement the embodiments, all or some of the embodiments may be implemented in a form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are fully or partially generated. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. The computer instruction may be stored in a computer-readable storage medium or may be transmitted from a computer-readable storage medium to another computer-readable storage medium. For example, the computer instruction may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless manner. The computer-readable storage medium may be any usable medium accessible by the computer, or may be a data storage device, such as a server or a data center, integrating one or more usable media. It should be understood that “a plurality of” in this specification means two or more than two. In description of this application, “/” means “or” unless otherwise specified. For example, A/B may represent A or B. In this specification, “and/or” describes only an association relationship for describing associated objects and represents that three relationships may exist. For example, A and/or B may represent the following three cases: Only A exists, both A and B exist, and only B exists. In addition, to clearly describe the technical solutions in the embodiments of this application, terms such as “first” and “second” are used in embodiments of this application to distinguish between same items or similar items that have basically same functions and purposes. A person skilled in the art may understand that the terms such as “first” and “second” do not limit a quantity and an execution sequence, and the terms such as “first” and “second” do not indicate a definite difference. The foregoing descriptions are embodiments provided in this application, but are not intended to limit this application. Any modification, equivalent replacement, or improvement made without departing from the principle of this application shall fall within the protection scope of this application.
DESCRIPTION OF EMBODIMENTS The following describes the embodiments with reference to the accompanying drawings in the embodiments. Embodiments provide a method for forwarding a packet in a network and a network device based on the method, to replicate a first packet in the network, obtain a plurality of second packets, and forward the plurality of second packets to a same network device over a plurality of different parallel forwarding paths. The network device stores only a second packet that first reaches the network device, and discards a second packet in the plurality of second packets except the second packet that first reaches the network device. This improves reliability of packet forwarding. The method and the network device are based on a same inventive concept. Because the method and the network device resolve problems by using similar principles, cross reference may be made between the implementations of the network device and the method. Repeated parts are not described again. FIG.1shows an example application scenario according to an embodiment. In the application scenario, a network device R1, a network device R2, a network device R3, a network device R4, a network device R5, and a network device R6constitute a physical network. Alternatively, the physical network in this embodiment may include only the network device R2, the network device R3, the network device R4, the network device R5, and the like. An existence form of the physical network is not limited in this embodiment. In some embodiments, the physical network may be a data center network, a wireless network, a deterministic network (DetNet), a segment routing (SR) network, or the like. A first network device in the embodiments may be the network device R2inFIG.1, a second network device may be the network device R5inFIG.1, and there are a plurality of forwarding paths between the first network device and the second network device. For example, in a network architecture inFIG.1, the network device R2may reach the network device R5through the network device R3. In addition, the network device R2may reach the network device R5through the network device R4. In other words, there are two forwarding paths between the network device R2and the network device R5. It may be understood that, there may be another forwarding path between the network device R2and the network device R5. In this embodiment, an example in which there are only two forwarding paths is used for description. In some embodiments, for one of the forwarding paths, for example, a forwarding path R2-R3-R5, the network device R2may reach the network device R5through an intermediate network device (namely, the network device R3inFIG.1) used for forwarding. Alternatively, the network device R2may reach the network device R5through two or more intermediate network devices used for forwarding. For example, after a packet reaches the network device R3, the network device R3forwards the packet to a network device R7, and the network device R7forwards the packet to the network device R5. The network devices R1to R6each may be a router or a switch, or a forwarder in a network architecture of software-defined networking (SDN). 
In this embodiment, after receiving a first packet, the first network device (for example, R2) generates a plurality of second packets when determining that the first packet includes first indication information used to instruct the first network device to generate the plurality of second packets, and separately forwards the plurality of second packets to the second network device (for example, R5) over different forwarding paths. The second network device stores a second packet that is in the plurality of second packets and that first reaches the second network device, and discards a second packet in the plurality of second packets except the second packet that first reaches the second network device. In the foregoing packet forwarding mode, even if network links of some of the plurality of forwarding paths are faulty, receiving of the second packet by the second network device is not affected. Therefore, this improves reliability of packet forwarding. With reference to the application scenario shown inFIG.1, referring toFIG.2, an embodiment provides a schematic flowchart of a method for forwarding a packet. The method includes the following steps. S10: A first network device receives a first packet, where the first packet includes first indication information, payload data, and a packet sequence number of the first packet in a data flow corresponding to the first packet. In one embodiment, the first indication information is used to instruct the first network device to generate a plurality of second packets based on the first packet. The payload data is user data that needs to be transmitted. The packet sequence number is a number of the first packet in a corresponding data flow. For example, the data flow corresponding to the first packet includes a plurality of packets, and each of the plurality of packets are numbered in a sending sequence. The number may be the packet sequence number. A packet sequence number of a packet is not changed in a process in which the packet is forwarded and re-encapsulated. For example, when the first packet is re-encapsulated to obtain the second packet, the packet sequence number is not changed. In other words, the packet sequence number included in the second packet is the same as the packet sequence number included in the first packet. For another example, when the second packet is received by another network device, and is re-encapsulated to obtain a third packet, the packet sequence number is still not changed. In other words, the packet sequence number included in the third packet is the same as the packet sequence number included in the first packet. S11: When determining that the first packet includes the first indication information, the first network device generates the plurality of second packets based on the first packet, where each of the plurality of second packets includes the payload data, the packet sequence number, and second indication information. S12: The first network device separately forwards the plurality of second packets to the second network device over different forwarding paths in a plurality of forwarding paths, where the second indication information is used to instruct the second network device to discard a packet in the plurality of second packets except a packet that first reaches the second network device. S13: The second network device receives the second packet, where the second packet is any one of the plurality of second packets that are generated by the first network device based on the first packet. 
S14: When determining that the second packet includes the second indication information, the second network device searches a packet receiving table to determine whether there is the packet sequence number, where the packet receiving table is used to record a packet sequence number included in the second packet that is in the plurality of second packets and that first reaches the second network device. S15: If the packet sequence number is not in the packet receiving table, the second network device stores the second packet. S16: If the packet sequence number is in the packet receiving table, the second network device discards the second packet. In some embodiments, a manner in which the first network device generates the plurality of second packets based on the first packet may be as follows: The first network device replicates the first packet to obtain a plurality of replicated packets, pops information (such as path information and the first indication information that are carried in the first packet) unnecessary for the second packet out from each of the replicated packets, and then pushes information (such as the second indication information, and path information corresponding to the second packet) necessary for the second packet. Alternatively, the first network device pops information (such as path information and the first indication information that are carried in the first packet) unnecessary for the second packet out from the first packet, replicates a packet obtained after pop processing, to obtain a plurality of replicated packets, and pushes information (such as the second indication information, and path information corresponding to the second packet) necessary for the second packet to each of the replicated packets. In some other embodiments, the first network device pops information (such as path information and the first indication information that are carried in the first packet) unnecessary for the second packet out from the first packet, pushes information (such as the second indication information) common to all of the second packets, replicates a packet obtained after push processing, to obtain a plurality of replicated packets, and pushes information (such as path information corresponding to the second packet) unique to each of the replicated packets to the replicated packet. It should be noted that a manner of generating the plurality of second packets by the first network device based on the first packet is not limited to the foregoing three manners. The foregoing three manners are merely examples for description. It should be noted that the path information corresponding to the second packet refers to path information of a forwarding path corresponding to forwarding of the second packet, and each packet is forwarded over a different forwarding path. Therefore, path information of a forwarding path of each packet is different. For example, one second packet is forwarded over a forwarding path1, and another second packet is forwarded over a forwarding path2. In this case, the path information that is of the forwarding path and that is included in the one second packet is path information of the forwarding path1, and the path information that is of the forwarding path and that is included in the another second packet is path information of the forwarding path2. In some embodiments, the path information of the forwarding path corresponding to each packet may be preconfigured in the first network device. 
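The replication on the first network device (steps S10 to S12) and the packet receiving table on the second network device (steps S14 to S16) can be illustrated end to end with the sketch below. The dictionary-based packet representation, the field names, and the preconfigured forwarding paths are assumptions made for illustration; only the behaviour (one second packet per forwarding path with an unchanged packet sequence number, and keeping only the copy that arrives first) follows the description.

```python
# Assumed per-flow configuration on the first network device: the parallel
# forwarding paths toward the second network device (for example, R2 to R5).
FORWARDING_PATHS = {"flow-1": [["R2", "R3", "R5"], ["R2", "R4", "R5"]]}

def generate_second_packets(first_packet):
    """Steps S10 to S12: one second packet per forwarding path, with the
    packet sequence number copied unchanged from the first packet."""
    if not first_packet.get("replicate"):        # first indication information
        return []
    second_packets = []
    for path in FORWARDING_PATHS[first_packet["flow_id"]]:
        second_packets.append({
            "flow_id": first_packet["flow_id"],
            "seq": first_packet["seq"],          # unchanged packet sequence number
            "payload": first_packet["payload"],
            "redundancy": True,                  # second indication information
            "path": path,                        # path information for this copy
        })
    return second_packets

class SecondNetworkDevice:
    """Steps S14 to S16: store the copy that arrives first and discard the rest."""

    def __init__(self):
        # Packet receiving table: per flow, the sequence numbers already received.
        self.receiving_table = {}

    def receive(self, packet):
        if not packet.get("redundancy"):
            return "forwarded normally"
        seen = self.receiving_table.setdefault(packet["flow_id"], set())
        if packet["seq"] in seen:
            return "discarded"                   # a redundant later copy
        seen.add(packet["seq"])
        return "stored"                          # first copy of this sequence number

first_packet = {"flow_id": "flow-1", "seq": 7, "payload": b"user data", "replicate": True}
r5 = SecondNetworkDevice()
for copy in generate_second_packets(first_packet):
    print(copy["path"], r5.receive(copy))        # first copy stored, second discarded
```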
After generating the plurality of second packets, the first network device searches a local storage device for path information of a forwarding path corresponding to each second packet. For example, the first network device stores path information of each of the plurality of forwarding paths associated with a flow identifier of the data flow corresponding to the first packet. The first packet may further include the flow identifier of the data flow corresponding to the first packet. When obtaining the flow identifier from the first packet through parsing, the first network device may find path information of the plurality of forwarding paths associated with the flow identifier, and encapsulate path information of each of the plurality of forwarding paths in a corresponding second packet. The second packet may also include the flow identifier, so that the second network device searches, based on the flow identifier, for path information of a forwarding path corresponding to the third packet obtained by re-encapsulating the second packet. In a first implementation, the first indication information may include a first label, and the second indication information may include a second label. The first label corresponds to a first function, and the first function is used to instruct the first network device to generate the plurality of second packets. For example, the first label is a replication label. The second label corresponds to a second function, and the second function is used to instruct the second network device to discard the packet in the plurality of second packets except the packet that first reaches the second network device. For example, the second label is a redundancy label. When identifying the first label included at a top of the first packet, the first network device generates the plurality of second packets based on the first packet, and forwards the plurality of second packets to the second network device over different forwarding paths. Labels corresponding to different functions are encapsulated in a packet, so that the network device identifies the label and performs an operation corresponding to the label. This improves operation efficiency. In a second implementation, both the first indication information and the second indication information may include a third label, and the third label is used to uniquely identify the data flow corresponding to the first packet. Because the third label may be used to identify the data flow corresponding to the first packet, the flow identifier may not need to be encapsulated in the first packet and the second packet, to reduce packet overheads. A correspondence between the third label and an operation type needs to be preconfigured in the first network device and the second network device. For example, an operation type corresponding to the third label is configured as a target operation type in the first network device, and the target operation type is used to instruct the first network device to generate the plurality of second packets based on the first packet. In some embodiments, the target operation type is a replication operation type. When receiving the first packet, the first network device obtains, through parsing, that the top of the first packet is the third label, and finds that the target operation type corresponding to the third label is the replication operation type. Therefore, the first network device generates the plurality of second packets based on the first packet. 
The target operation type corresponding to the third label is configured in the second network device. The target operation type is a deletion operation type that is used to instruct the second network device to discard the packet in the plurality of second packets except the packet that first reaches the second network device. In the foregoing manner, the third label may be used to identify the data flow corresponding to the packet, and may also be used as different indication information. Therefore, a flow identifier does not need to be additionally encapsulated in the packet. This reduces packet overheads. In some embodiments, the first label, the second label, and the third label are labels used in an SR network. In a third optional implementation, the method for forwarding a packet in this embodiment may be applied to an SRv6 network. The first indication information may include first function information corresponding to a first address in a destination address field in an IPv6 header of the first packet. The first function information may be extended function information, and is used to instruct the first network device to generate the plurality of second packets. For example, the first function information is a replication function information. The first address matches a network address of the first network device. The first network device generates the plurality of second packets based on the first packet, where each of the packets includes the second indication information, path information of a forwarding path corresponding to the packet, the packet sequence number, and the payload data. The second indication information may be second function information corresponding to a second address in an SRH of the second packet, and the second address matches a network address of the second network device. The second function information may be other extended function information, and is used to instruct the second network device to discard another packet in the plurality of second packets except the packet that first reaches the second network device. For example, the second function information is redundancy deletion function information. As shown inFIG.3a, a main idea of SRv6 programming is to divide an SRv6 local segment identification (local SID) into two parts: LOC (Local) and FUNCT (Function). Each of the two parts occupies 64 bits. The LOC is usually a network segment address through which a current network device can be routed, and the FUNCT usually corresponds to a specific function of a SID. For example, a current available function of the FUNCT is an Endpoint function. In some embodiments, a structure of an SRv6 packet includes an IPv6 header shown inFIG.3band an SRH shown inFIG.3c. When information carried in a destination address field in an IPv6 header of the packet matches an SRv6 local SID of a network device, and a function of the FUNCT is Endpoint, the network device updates information in the destination address field in the IPv6 header by using a corresponding segment list in the SRH of the structure of the packet, further searches a forwarding table for an updated destination address, and forwards the packet based on a search result; otherwise the network device discards the packet. It should be noted that a format of the destination address field in the IPv6 header is the same as a format of the SRv6 local SID inFIG.3a, and a format of each segment list in the SRH is the same as the format of the SRv6 local SID inFIG.3a. 
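As a small illustration of the SID structure of FIG. 3a described above, the following sketch splits a 128-bit SRv6 local SID into its 64-bit LOC part and 64-bit FUNCT part; the integer representation of the SID and the example values are assumptions made for illustration.

```python
def split_srv6_sid(sid128):
    """Split a 128-bit SRv6 local SID (given as an integer) into the 64-bit
    LOC, which is the routable locator, and the 64-bit FUNCT, which
    identifies the function bound to the SID (for example, Endpoint)."""
    loc = sid128 >> 64                 # upper 64 bits: LOC
    funct = sid128 & ((1 << 64) - 1)   # lower 64 bits: FUNCT
    return loc, funct

# Example SID with LOC 0x20010DB800010000 and FUNCT value 0x1.
sid = (0x20010DB800010000 << 64) | 0x1
loc, funct = split_srv6_sid(sid)
print(hex(loc), hex(funct))
```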
In this embodiment, two types of new function information different from the Endpoint function are extended, that is, the first function information and the second function information. The first function information is used to instruct the first network device to generate the plurality of second packets. For example, the first function information is replication function information. The second function information is used to instruct the second network device to discard the packet in the plurality of second packets except the packet that first reaches the second network device. For example, the second function information is the redundancy deletion function information. In some embodiments, when the first network device receives the first packet, an address in the destination address field in the packet header of the first packet matches the network address of the first network device, and the first function information in the destination address field is the replication function information, the first network device replicates the first packet. In addition, the first network device obtains the flow identifier of the data flow corresponding to the first packet, and searches for an SRH corresponding to the flow identifier. The SRH includes the second address and the second function information corresponding to the second address, and the second address matches the network address of the second network device. In addition, the SRH includes path information of a forwarding path of the second packet (that is, network addresses of all network devices on the forwarding path). Function information corresponding to another address (that is, a network address of an intermediate network device on the forwarding path) different from the second address in the SRH is Endpoint. In other words, the intermediate network device updates only a destination address field in the SRH of the second packet, and searches the forwarding table for forwarding. The first network device replaces an SRH of the replicated packet with the SRH obtained through searching, and updates the destination address field in the IPv6 header to obtain the second packet. In some embodiments, if the SID is encapsulated in a manner inFIG.3a, the first packet and the second packet may further include a DetNet SRv6 header, and the DetNet SRv6 header includes the flow identifier and the packet sequence number. In some embodiments, if the SID is encapsulated in a manner inFIG.3d, to be specific, the flow identifier and the packet sequence number are used as parameters of function information and encapsulated in the SID, the first packet and the second packet may not include the DetNet SRv6 header. This reduces packet overheads. The FUN occupies 4 bits, a flow ID occupies 28 bits, and a packet sequence number SN occupies 32 bits. The first function information and the second function information are extended, so that the foregoing method of forwarding a packet can be used in a network supporting an SRv6 protocol. This improves reliability of packet forwarding. The first network device forwards the plurality of second packets to the second network device over the different forwarding paths in the plurality of forwarding paths between the first network device and the second network device. The second network device receives the second packets. It should be noted that the second packets received by the second network device may be different from the second packets sent by the first network device. 
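For the encapsulation manner of FIG. 3d mentioned above, in which the flow identifier and the packet sequence number are carried as parameters of the function information, the 64-bit FUNCT part can be packed and unpacked as in the following sketch (4-bit FUN, 28-bit flow identifier, 32-bit packet sequence number). The ordering of the fields within the 64 bits is an assumption made for illustration.

```python
def pack_funct(fun, flow_id, sn):
    """Pack the 4-bit FUN, the 28-bit flow ID, and the 32-bit packet sequence
    number SN into the 64-bit FUNCT part of an SRv6 SID (field order assumed)."""
    assert 0 <= fun < (1 << 4) and 0 <= flow_id < (1 << 28) and 0 <= sn < (1 << 32)
    return (fun << 60) | (flow_id << 32) | sn

def unpack_funct(funct):
    """Recover the FUN, flow ID, and SN fields from the 64-bit FUNCT part."""
    fun = (funct >> 60) & 0xF
    flow_id = (funct >> 32) & ((1 << 28) - 1)
    sn = funct & 0xFFFFFFFF
    return fun, flow_id, sn

funct = pack_funct(fun=0x2, flow_id=1234, sn=7)
print(hex(funct), unpack_funct(funct))  # the fields round-trip unchanged
```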
For example, there is at least one intermediate network device on the forwarding path between the first network device and the second network device. The intermediate network device re-encapsulates (for example, pops a corresponding MPLS label out) a received packet, and then forwards the re-encapsulated packet. However, the packet re-encapsulated by the intermediate network device still includes the second indication information, the packet sequence number, the payload data, and the like. Essentially, the packet is the same as the second packet sent by the first network device. Therefore, the packet is generally referred to as the second packet in this embodiment. FIG.1is still used as an example for description herein. There are two forwarding paths between the network device R2and the network device R5, and a forwarding path R2-R4-R5is used as an example for description. The network device R2sends a second packet, and the second packet reaches the network device R4. The network device R4performs corresponding encapsulation processing (for example, pops a corresponding MPLS label out or updates information in a destination address field in an IPv6 header) on the second packet, and sends a packet that is obtained after encapsulation processing to the network device R5. In this embodiment, the packet received by the network device R5is still referred to as a second packet, and the second packet is essentially the same as the second packet sent by the network device R2. However, some changes may occur in the packet received by the network device R5due to processing performed by the intermediate network device R4. The second network device parses the second packet and searches, when determining that the second packet includes indication information, a packet receiving table to determine whether there is the packet sequence number included in the second packet. In some embodiments, the indication information is used to instruct the second network device to discard a packet in the plurality of second packets, sent by the first network device, except a packet that first reaches the second network device. The indication information is the same as the second indication information included in the second packet sent by the first network device. The packet receiving table is used to record the packet sequence number included in the second packet that first reaches the second network device. For example, each time the second network device receives a packet, the second network device searches the packet receiving table for a packet sequence number included in the packet. If the packet sequence number is in the packet receiving table, it indicates that the second network device has received a packet including the packet sequence number, and the second network device discards the packet. If the packet sequence number is not in the packet receiving table, it indicates that the second network device has not received a packet including the packet sequence number, and the second network device stores the packet. In some embodiments, the second network device may further forward the packet. In an implementation, the indication information includes a label corresponding to a target function, and the target function is used to instruct the second network device to discard the packet in the plurality of second packets except the packet that first reaches the second network device. 
The label herein is the same as the second label included in the second packet sent by the first network device in the first optional implementation described above. A target function corresponding to the label herein is the same as the second function corresponding to the second label, and details are not described herein. When identifying the label included at the top of the second packet, the second network device stores the second packet that first reaches the second network device and discards the packet in the plurality of second packets except the packet that first reaches the second network device. The label corresponding to the target function is encapsulated in the second packet, so that the second network device identifies the label and performs an operation corresponding to the label. This improves operation efficiency. In another implementation, the indication information may include a label, and the label is used to identify a data flow corresponding to the first packet. The label herein is the same as the third label included in the second packet sent by the first network device in the second optional implementation described above, and details are not described herein. It should be noted that a correspondence between the label and an operation type needs to be configured in the second network device. When finding that the operation type corresponding to the label is a target operation type, the second network device searches the packet receiving table to determine whether there is the packet sequence number included in the second packet. The target operation type is used to instruct the second network device to discard the packet in the plurality of second packets, sent by the first network device, except the packet that first reaches the second network device. In the foregoing manner, the label may not only be used to identify a data flow corresponding to a packet, but also be used as different indication information. Therefore, a flow identifier does not need to be additionally encapsulated in the packet. This reduces packet overheads. In still another implementation, the indication information may include target function information corresponding to a destination address in a destination address field in an IPv6 header of the second packet, and the destination address matches a network address of the second network device. The target function information is used to instruct the second network device to discard the packet in the plurality of second packets, sent by the first network device, except the packet that first reaches the second network device. For a format of the IPv6 header of the second packet, refer to the foregoing third optional implementation. Details are not described herein. It should be noted that, there may be an intermediate network device used for forwarding between the first network device and the second network device. In a forwarding process, the intermediate network device updates, based on an SRH included in the second packet sent by the first network device, the IPv6 header of the second packet sent by the first network device. Therefore, the second packet received by the second network device differs from the second packet sent by the first network device in the IPv6 header. Information in the IPv6 header in the second packet is the same as the second indication information included in the SRH of the second packet sent by the first network device. 
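The packet receiving table behavior described above can be sketched in a few lines of Python, regardless of which of the three forms of indication information the second network device recognizes. The table keys and the packet dictionary fields are placeholders; a real implementation would also bound or age out the recorded sequence numbers.

class PacketReceivingTable:
    # Records, per flow, the sequence numbers of packets that have already reached
    # this node, so that only the copy that arrives first is kept.
    def __init__(self):
        self._seen = {}                      # flow identifier -> set of sequence numbers

    def first_arrival(self, flow_id, sn) -> bool:
        seen = self._seen.setdefault(flow_id, set())
        if sn in seen:
            return False                     # duplicate copy: discard
        seen.add(sn)
        return True                          # first copy: store (and possibly forward)

def on_second_packet(table: PacketReceivingTable, packet: dict, store, discard):
    # Behaviour of the second network device once the indication information has
    # been recognised (by label, configured operation type, or function information).
    if table.first_arrival(packet["flow_id"], packet["sn"]):
        store(packet)
    else:
        discard(packet)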
In some embodiments, the network further includes a third network device, and there is at least one forwarding path between the second network device and the third network device. When not finding, in the packet receiving table, the packet sequence number included in the received second packet, the second network device searches for path information of a forwarding path corresponding to a flow identifier included in the second packet, encapsulates the path information in the second packet to obtain a third packet, and forwards the third packet obtained after encapsulation to the third network device. The path information may include the MPLS label stack and the SRH described in the foregoing embodiment. The target function information is extended, so that the foregoing method of forwarding a packet can be used in a network supporting an SRv6 protocol. This improves reliability of packet forwarding. The following describes the foregoing embodiments by using examples with reference toFIG.4toFIG.7AandFIG.7B. InFIG.4toFIG.7AandFIG.7B, a flow identifier is briefly referred to as a flow ID, and a packet sequence number is briefly referred to as an SN. In some embodiments, with reference to a scenario inFIG.4, an example is used for describing that the first indication information includes a first label and the foregoing second indication information includes a second label in the foregoing description. As shown inFIG.4, the scenario may be a packet forwarding scenario in an MPLS segment routing (SR) network, and payload data may be DetNet payload data. Two fields are extended at a bottom of an SR label stack to form a DetNet header (DetNet MPLS Segment Routing Encapsulation Header). The two fields include the flow identifier (Flow ID) and the packet sequence number (SN). In addition, three SR labels with special meanings, namely, a replication label, a redundancy label, and a DetNet label, are defined. The replication label is used as an instruction for replicating a packet. When a top of a DetNet packet received by a network device is the replication label, the network device replicates the packet, and pushes a corresponding label stack (for example, a redundancy label and an MPLS label stack) to the packet. A DetNet redundancy label is used as an instruction for deleting a redundant packet. When a top of a received DetNet packet is the redundancy label, a flow ID and a sequence Num of the packet are searched for, a packet that is first received is stored, and the redundant packet is discarded. If the packet needs to be further forwarded, a corresponding label stack (for example, a DetNet label and an MPLS label stack) is added to the packet before the packet is forwarded. Then, forwarding is performed. The DetNet label is used to mark that a transmitted packet belongs to a DetNet data flow. The DetNet label has the DetNet header. The first label mentioned in this embodiment may be the foregoing replication label, and the second label may be the foregoing redundancy label. A replication label stack table is configured in a first network device. The replication label stack table is used to describe an association relationship between the flow ID and path information (MPLS label stacks) of a plurality of forwarding paths corresponding to a plurality of second packets, and is used to push a new MPLS label stack to the second packet. The new MPLS label stack is used to indicate a forwarding path of the second packet. 
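For illustration, the following Python sketch models the first network device's handling of the replication label and the replication label stack table, using the label values named in the FIG. 4 walk-through (1001 and 1002); the per-path MPLS labels and the packet fields are made-up placeholders rather than values taken from the embodiments.

REPLICATION_LABEL = 1001     # label values taken from the FIG. 4 walk-through
REDUNDANCY_LABEL = 1002

# Replication label stack table: flow ID -> MPLS label stacks of the forwarding
# paths used for the copies. The path labels below are placeholders.
REPLICATION_LABEL_STACK_TABLE = {
    1: [[16003], [16004]],   # e.g. one stack toward R3, one toward R4
}

def replicate_first_packet(first_packet: dict) -> list:
    # First network device behaviour when the top label is the replication label:
    # pop it, copy the packet once per configured path, and push the redundancy
    # label with the per-path MPLS label stack above it.
    assert first_packet["labels"][0] == REPLICATION_LABEL
    remaining_labels = first_packet["labels"][1:]          # pop the replication label
    second_packets = []
    for path_stack in REPLICATION_LABEL_STACK_TABLE[first_packet["flow_id"]]:
        copy = dict(first_packet)
        copy["labels"] = list(path_stack) + [REDUNDANCY_LABEL] + list(remaining_labels)
        second_packets.append(copy)
    return second_packets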
A convergence label stack table and a packet receiving table are configured in a second network device. The convergence label stack table is used to describe an association relationship between the flow ID and path information of a forwarding path corresponding to a third packet, and the third packet is a packet obtained after re-encapsulation is performed on a second packet that first reaches the second network device. The convergence label stack table is used to push a new MPLS label stack to the third packet, and the new MPLS label stack is used to indicate a forwarding path of the third packet. It should be noted that if the second network device does not further forward the second packet to another network device, the convergence label stack table does not need to be configured in the second network device. The packet receiving table is used to record a flow ID and a sequence Num. If a packet corresponding to a specific sequence number has been received by the second network device, the second network device records the sequence number in the packet receiving table. If the packet corresponding to the sequence number reaches the second network device again, the second network device discards the packet. The second network device may filter, based on the packet receiving table, the plurality of second packets sent by the first network device, and store or forward only the second packet that first reaches the second network device. Referring toFIG.4, the first network device is R2, and the second network device is R5. The network device R1receives a DetNet packet, and encapsulates the DetNet packet, to be specific, encapsulates an input stream ID1, an SN10, and a replication label1001in the DetNet packet, to obtain a first packet. If there is a multihop route between the network device R1and the network device R2, an MPLS label stack used to indicate a forwarding path of the first packet needs to be further encapsulated in the packet. The network device R2receives the first packet sent by the network device R1, parses the first packet, and determines that a top of a label stack of the first packet is a replication label1001. Therefore, the network device R2replicates the first packet, pops the replication label1001out, and pushes a new label stack to obtain two second packets. The new label stack includes a redundancy label1002and an MPLS label stack that is used to indicate a forwarding path of the second packet, where the redundancy label1002is located at a bottom of the MPLS label stack. The network device R2separately sends the obtained two second packets to a network device R3and a network device R4. After receiving the second packets, the network device R3and the network device R4forward the second packets based on an MPLS label at a top of a label stack of the second packet. The network device R5receives the packets that have a sequence Num of 10 and that are separately transmitted from the network device R4and the network device R3. For example, if the packet from the network device R4first arrives, the network device R5updates the packet receiving table and pushes new label stacks including a DetNet label and an MPLS label stack that indicates a subsequent forwarding path. Then, when the packet from the network device R3arrives, R5searches and determines that the packet sequence number SN10is in the packet receiving table, and therefore discards the packet forwarded by the network device R3. 
A packet sent by the network device R5is finally transmitted to a network device R7, and the network device R7performs decapsulation and obtains the payload data. In some embodiments, with reference to a scenario inFIG.5, an example is used for describing that both the foregoing first indication information and the foregoing second indication information are third labels. As shown inFIG.5, the scenario may be a packet forwarding scenario according to an MPLS SR protocol. Payload data may be DetNet payload data. A field of a packet sequence number (SN) is extended at a bottom of an SR label stack. In addition, a DetNet SR label (that is, the third label) is defined, and the DetNet SR label is in a one-to-one correspondence with data flow. A first network device and a second network device determine, by identifying a DetNet SR label, an operation type of an operation performed on the packet. The first network device may be a network device R2inFIG.5, and the second network device may be a network device R5inFIG.5. A DetNet SR label operation table is configured in the first network device and the second network device. The label operation table is used to describe an operation type corresponding to a DetNet SR label. For example, in the first network device, the operation type that corresponds to the DetNet SR label and that is described in the label operation table is a replication operation. In the second network device, the operation type that corresponds to the DetNet SR label and that is described in the label operation table is a redundancy deletion operation. In addition, the second network device configures the packet receiving table. For a description of the packet receiving table, refer to the description inFIG.4.FIG.5uses a DetNet SR label to replace the flow ID inFIG.4, and details are not described herein. Referring toFIG.5, the network device R1receives a DetNet packet, and encapsulates the DetNet packet, to be specific, encapsulates an SN10and a DetNet12(that is, the DetNet SR label) in the DetNet packet, to obtain a first packet. If there is a multihop route between the network device R1and the network device R2, an MPLS label stack used to indicate a forwarding path of the first packet needs to be further encapsulated in the packet. The network device R2receives the first packet sent by the network device R1, parses the first packet, obtains the DetNet12label included in the first packet, and searches the label operation table for a target operation type corresponding to the DetNet12. If the target operation type instructs to perform a replication operation on the first packet, the network device R2replicates the first packet and pushes a new label stack to obtain two second packets. The new label stack includes an MPLS label stack that is used to indicate a forwarding path of the second packet. The network device R2separately sends the obtained two second packets to a network device R3and a network device R4. After receiving the second packets, the network device R3and the network device R4forward the second packets based on an MPLS label at a top of a label stack of the second packet. The network device R5receives the packets separately transmitted from the network device R4and the network device R3, searches the DetNet label operation table, and finds that the target operation type corresponding to the DetNet12label is redundancy deletion. Therefore, the network device R5forwards a packet that is first received, and discards a packet that is repeatedly received. 
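The label operation table variant can be illustrated with a short Python sketch: the same DetNet SR label selects different operations on different devices, so neither a separate flow identifier nor a special-purpose label has to be carried. The operation-name strings and the callback functions below are placeholders for illustration only.

# Per-device DetNet SR label operation tables from the FIG. 5 example: the same
# label (DetNet12) selects replication on R2 and redundancy deletion on R5.
LABEL_OPERATION_TABLE_R2 = {12: "replicate"}
LABEL_OPERATION_TABLE_R5 = {12: "redundancy-delete"}

def handle_by_label(packet, label_operation_table, replicate, deduplicate, forward):
    # The DetNet SR label both identifies the data flow and, via the configured
    # operation type, selects the operation to perform on this device.
    operation = label_operation_table.get(packet["detnet_sr_label"])
    if operation == "replicate":
        return replicate(packet)       # first network device behaviour
    if operation == "redundancy-delete":
        return deduplicate(packet)     # second network device behaviour
    return forward(packet)             # ordinary transit behaviour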
For a specific operation process of the network device R5, refer to the description inFIG.4. Details are not described herein. In some embodiments, with reference to scenarios inFIG.6AandFIG.6BandFIG.7AandFIG.7B, examples are used for describing that the foregoing first indication information includes first function information in a destination address field in an IPv6 header of a first packet and the foregoing second indication information includes second function information in an SRH of a second packet. As shown inFIG.6AandFIG.6BandFIG.7AandFIG.7B, the scenarios may be packet forwarding scenarios according to an SRv6 protocol, and payload data may be DetNet payload data. A first network device may be a network device R2and a second network device may be a network device R5. A packet SRH replication table is configured in the first network device. The packet SRH replication table is used to describe a correspondence between a flow identifier and a plurality of SRHs, and is used to encapsulate a new SRH in a replicated packet, to obtain the second packet. A redundant packet SRH deletion table is configured in the second network device. The redundant packet SRH deletion table is used to describe a correspondence between a flow identifier and a plurality of SRHs, and is used to encapsulate a new SRH in a second packet that is first received by the second network device. Further, a packet receiving table is configured in the second network device, and is used to record a packet sequence number included in the second packet that first reaches the second network device. In addition, two types of function information are extended, that is, replication function information and redundancy deletion function information. The replication function information: When a network device receives an SRv6 packet, a destination address in an IPv6 header of the packet matches a network address of the network device, and function information corresponding to the destination address is the replication function information, the network device replicates the packet, obtains a flow identifier, and searches the packet SRH replication table for an SRH corresponding to the flow identifier. Then, the network device replaces an SRH of the replicated packet with the SRH that corresponds to the flow identifier and that is in the table, updates the destination address field in the IPv6 header to obtain the second packet, and forwards the packet based on the information in a destination address field in an IPv6 header of the second packet. The redundancy deletion function information: When a network device receives an SRv6 packet, a destination address in an IPv6 header of the packet matches a network address of the network device, and function information corresponding to the destination address is the redundancy deletion function information, the network device obtains a flow identifier and a packet sequence number, searches a packet receiving table to determine whether there is the packet sequence number, and discards the packet if the sequence number of the packet is in the packet receiving table. If the packet sequence number is not in the packet receiving table, the network device searches the redundant packet SRH deletion table, replaces an SRH of the received packet with an SRH that corresponds to the flow identifier and that is in the redundant packet SRH deletion table, updates a destination address field in the IPv6 header, and forwards the packet based on information in the destination address field. 
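A Python sketch of the replication function at the first network device is given below (the redundancy deletion side follows the packet receiving table logic sketched earlier). The SRH entries use documentation-prefix addresses and assume the usual SRv6 convention that the segment list is stored in reverse order with Segments Left indexing the active segment; both the table contents and the packet fields are illustrative assumptions.

# Packet SRH replication table on the first network device: flow identifier ->
# SRHs of the redundant forwarding paths.
PACKET_SRH_REPLICATION_TABLE = {
    1: [
        {"segments": ["2001:db8:5::d2", "2001:db8:3::e"], "segments_left": 1},  # via R3
        {"segments": ["2001:db8:5::d2", "2001:db8:4::e"], "segments_left": 1},  # via R4
    ],
}

def srv6_replicate(first_packet: dict) -> list:
    # When the destination address matches this node and its function part is the
    # replication function: copy the packet once per SRH in the table, replace the
    # SRH, and refresh the IPv6 destination address from the active segment.
    second_packets = []
    for srh in PACKET_SRH_REPLICATION_TABLE[first_packet["flow_id"]]:
        copy = dict(first_packet)
        copy["srh"] = {"segments": list(srh["segments"]),
                       "segments_left": srh["segments_left"]}
        copy["ipv6_dst"] = srh["segments"][srh["segments_left"]]
        second_packets.append(copy)
    return second_packets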
The flow identifier and the packet sequence number may be encapsulated in a DetNet SRv6 header of the packet. In other words, the DetNet SRv6 header includes the flow identifier and the packet sequence number. Referring toFIG.6AandFIG.6B, a network device R1encapsulates a packet, adds the DetNet SRv6 header, an SRH and an IPv6 header, to obtain a first packet. The network device R2receives the first packet, parses the first packet, and determines that a destination address in the IPv6 header of the first packet matches a network address of the network device R2and function information corresponding to the destination address in the IPv6 header is a replication function. In this case, the network device R2replicates the packet. The network device R2obtains a flow identifier and a packet sequence number from the DetNet SRv6 header, searches the packet SRH replication table for a corresponding SRH, replaces an SRH of the replicated packet with the searched SRH, and updates the IPv6 header of the replicated packet, to obtain a second packet. The network device R2separately sends the obtained two second packets to a network device R3and a network device R4. Because function information corresponding to the network device R3and the network device R4is Endpoint, the network device R3and the network device R4each only updates a destination address field in an IPv6 header of the packet based on the SRH of the packet, and forwards the packet. In some embodiments, a manner of updating the destination address field of the packet based on the SRH of the packet may be specifically replacing information in the destination address field with a corresponding segment list in the SRH. The network device R5receives the packets that have a sequence Num of 10 and that are separately transmitted from the network device R4and the network device R3. For example, if the packet from the network device R4first arrives, and the network device R5determines that a destination address in a destination address field of the packet matches a network address of the network device R5and function information corresponding to the destination address is the redundancy deletion function information, the network device R5searches and determines that the packet sequence number is not in the packet receiving table. In this case, the network device R5updates the packet receiving table, replaces an SRH of the received packet based on the redundant packet SRH deletion table, updates an IPv6 header of the packet, and forwards the packet. Then, when the packet from the network device R3arrives, R5searches and determines that the packet sequence number SN10is in the packet receiving table, and therefore discards the packet forwarded by the network device R3. At last, the packet sent by the network device R5is transmitted to a network device R7, and the network device R7performs decapsulation and obtains the payload data. The flow identifier and the packet sequence number may further be encapsulated in an SRH. In other words, a segment list is encapsulated in the format shown inFIG.3d. As shown inFIG.7AandFIG.7B, a difference between an encapsulation structure of each packet and an encapsulation structure inFIG.6AandFIG.6Blies in that a DetNet SRv6 header does not need to be added. An operation manner of each network device is the same as that in the embodiment inFIG.6AandFIG.6B. Details are not described herein. Referring toFIG.8, an embodiment provides a first network device800for forwarding a packet in a network. 
The network includes the first network device and a second network device, and there are a plurality of forwarding paths between the first network device and the second network device. The first network device includes a receive unit801, a generation unit802, a forwarding unit803, and a searching unit804. The receive unit801is configured to receive a first packet, where the first packet includes first indication information, payload data, and a packet sequence number of the first packet in a data flow corresponding to the first packet. The generation unit802is configured to generate, when the first network device determines that the first packet comprises the first indication information, a plurality of second packets based on the first packet, where each of the plurality of second packets includes the payload data, the packet sequence number, and second indication information. The forwarding unit803is configured to separately forward the plurality of second packets to the second network device over different forwarding paths in the plurality of forwarding paths, where the second indication information is used to instruct the second network device to discard a packet in the plurality of second packets except a packet that first reaches the second network device. In an example implementation, the first packet further includes a flow identifier of the data flow corresponding to the first packet, and the first network device further includes the searching unit804. The searching unit804is configured to search for path information of each of the plurality of forwarding paths associated with the flow identifier, where one second packet corresponds to one of the plurality of forwarding paths. In an example implementation, the first indication information includes a first label, the second indication information includes a second label, the first label corresponds to a first function, the second label corresponds to a second function, the first function is used to instruct the first network device to generate the plurality of second packets, and the second function is used to instruct the second network device to discard the packet in the plurality of second packets except the packet that first reaches the second network device. The path information of the forwarding path includes a multi-protocol label switching MPLS label stack of the forwarding path. In an example implementation, the first indication information includes a third label, the second indication information includes the third label, and the third label is used to identify the data flow corresponding to the first packet. The path information of the forwarding path includes an MPLS label stack of the forwarding path. The searching unit804is further configured to search for an operation type corresponding to the third label. The generation unit802is configured to: if the operation type corresponding to the third label is a target operation type, generate the plurality of second packets based on the first packet, where the target operation type is used to instruct the first network device to generate the plurality of second packets. In an example implementation, the first indication information includes first function information corresponding to a first address in a destination address field in an Internet Protocol version 6 IPv6 header of the first packet, and the first address matches a network address of the first network device. 
The second packet includes a segment routing header SRH, the SRH includes the second indication information and path information of a forwarding path corresponding to the second packet, the second indication information includes second function information corresponding to a second address of a target segment list in the SRH, and the second address matches a network address of the second network device. In an example implementation, the flow identifier and the packet sequence number are encapsulated in a segment list in the SRH. In some embodiments, the second packet further includes an IPv6-based segment routing protocol SRv6 header, and the flow identifier and the packet sequence number are encapsulated in the SRv6 header. The first network device800may be a router, a switch, or a network device having a forwarding function. The first network device800can implement functions of the first network device in the foregoing embodiment. For a specific execution step, refer to the foregoing method embodiment. Details are not described herein. Referring toFIG.9, an embodiment provides a second network device900for forwarding a packet in a network. The network includes a first network device and the second network device, and there are a plurality of forwarding paths between the first network device and the second network device. The second network device includes a receive unit901, a searching unit902, a storage unit903, and a discarding unit904. The receive unit901is configured to receive a second packet, where the second packet is any one of a plurality of second packets that are generated by the first network device based on a first packet, the second packet includes indication information, payload data carried in the first packet, and a packet sequence number of the first packet in a data flow corresponding to the first packet. The searching unit902is configured to: when the second network device determines that the second packet includes the indication information, search a packet receiving table to determine whether there is the packet sequence number, where the packet receiving table is used to record a packet sequence number included in a second packet that is in the plurality of second packets and that first reaches the second network device. The storage unit903is configured to: if the packet sequence number is not in the packet receiving table, store the second packet. The discarding unit904is configured to: if the packet sequence number is in the packet receiving table, discard the second packet. In an example implementation, the indication information includes a label corresponding to a target function, and the target function is used to instruct the second network device to discard a packet in the plurality of second packets except the packet that first reaches the second network device. In an example implementation, the indication information includes a label, and the label is used to identify the data flow corresponding to the first packet. The searching unit902is specifically configured to: if an operation type corresponding to the label is a target operation type, search the packet receiving table to determine whether there is the packet sequence number, where the target operation type is used to instruct the second network device to discard the packet in the plurality of second packets except the packet that first reaches the second network device. 
In an example implementation, the indication information includes target function information corresponding to a destination address in a destination address field in an Internet Protocol version 6 IPv6 header of the second packet, and the destination address matches a network address of the second network device. In an example implementation, the network further includes a third network device, and the second network device further includes a generation unit905and a forwarding unit906. The generation unit905is configured to generate a third packet based on the second packet, where the third packet includes the payload data and the packet sequence number. The forwarding unit906is configured to forward the third packet to the third network device. The second network device900may be a router, a switch, or a network device having a forwarding function. The second network device can implement functions of the second network device in the foregoing embodiment. For a specific execution step, refer to the foregoing method embodiment. Details are not described herein. Referring toFIG.10, an embodiment provides a network device1000. The network device1000may be a router, a switch, or a network device having a forwarding function. The network device1000can implement functions of the first network device or the second network device in the foregoing method embodiment. The network device1000includes a processor1003, a network interface1002, and a memory1001. The memory may be configured to store program code and data of the network device, and the processor1003is configured to invoke a program instruction in the memory1001to perform the method shown in the foregoing embodiment. For a specific execution step, refer to the foregoing embodiment. Details are not described herein. Referring toFIG.11, an embodiment provides a network device1100. The network device1100may be a router, a switch, or a network device having a forwarding function. The network device1000can implement functions of the first network device or the second network device in the foregoing method embodiment. The network device1100includes a main control board1101and an interface board1102. The main control board1101includes a processor1103and a memory1104. The interface board1102includes a processor1105, a memory1106, and an interface card1107. The main control board1101is coupled to the interface board1102. The memory1104may be configured to store program code of the main control board1101, and the processor1103is configured to invoke the program code in the memory1104to perform a corresponding operation of packet processing. The memory1106may be configured to store program code of the interface board1102, and the processor1105is configured to invoke the program code in the memory1106to perform a corresponding operation of packet receiving or sending. In an example implementation, an inter-process communication IPC control channel is established between the main control board1101and the interface board1102. An embodiment further provides a computer storage medium, configured to store a computer software instruction used by the first network device or the second network device in the embodiment shown inFIG.2, where the computer software instruction includes a program used to perform the method in the foregoing method embodiment. “First” in the first network device in the embodiments is merely used as a name identifier, and does not represent the first in sequence. For the words “second” and “third”, this rule also applies. 
Methods or algorithm steps described in combination with the content disclosed in the present disclosure may be implemented by hardware, or may be implemented by a processor by executing a software instruction. The software instruction may include a corresponding software module. The software module may be stored in a random access memory (RAM), a flash memory, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a hard disk, a removable hard disk, a compact disc, or any other form of storage medium well-known in the art. For example, a storage medium is coupled to a processor, so that the processor can read information from the storage medium and write information into the storage medium. Certainly, the storage medium may alternatively be a component of the processor. The processor and the storage medium may be located in an ASIC. In addition, the ASIC may be located in a core network interface device. Certainly, the processor and the storage medium may exist in the core network interface device as discrete components. A person skilled in the art should be aware that in the foregoing one or more examples, functions described in the present disclosure may be implemented by hardware, software, firmware, or any combination thereof. When the functions are implemented by software, the functions may be stored in a computer-readable medium or transmitted as one or more instructions or code in a computer-readable medium. The computer-readable medium includes a computer storage medium and a communications medium, where the communications medium includes any medium that facilitates transmission of a computer program from one place to another. The storage medium may be any available medium accessible to a general-purpose or special-purpose computer. In the foregoing example implementations, the objectives, technical solutions, and beneficial effects of the present disclosure are further described in detail. It should be understood that the foregoing descriptions are merely specific implementations of the present disclosure, but are not intended to limit the protection scope of the present disclosure. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure shall fall within the protection scope of the present disclosure.
11863441
DETAILED DESCRIPTION This disclosure presents computer-implemented processes for generating data packets in smart network interface controllers (“SNIC”) of hosts.FIG.1shows an example of a data center102. The data center102comprises a management server computer104and any of various computers, such as PC106, on which a virtual-data-center management user interface may be displayed to system administrators and other users. Objects of the physical data center102additionally include server computers, called “hosts,” such as hosts108-111and122-125, mass-storage devices112and126, switches114and116, and a top of rack (“TOR”) switch118. In the example ofFIG.1, the switch114interconnects the hosts108-111and mass-storage devices112, and the switch116interconnects the hosts122-125and the mass-storage devices126. The TOR switch118interconnects the hosts108-111to the hosts122-125, the internet, the virtual-data-center management server104, the PC106, and other server computers and mass-storage appliances of the data center (not shown). Physical data centers may include a multitude of hosts, data storage devices, networking components, and devices connected according to many different types of connection topologies. Virtualization has made a major contribution to enterprises, governments, and other organizations moving data processing and data storage services to data centers. Virtualization provides for the creation of software-based, or virtual, representations of server computers, data-storage devices, and networks. For example, a virtual computer system, known as a virtual machine (“VM”), is a self-contained application and operating system implemented in software. Software components of a distributed application may be executed separately in VMs. A VM may be created or destroyed on demand and may be migrated from one physical server computer to another in a data center. Virtualization has enabled scaling of distributed applications and distributed computing systems to meet changing user demand. FIG.2shows an example architecture of a conventional host200that executes five VMs201-205. The host200includes three fundamental layers: (1) a hardware layer206, (2) a kernel space208, and (3) a user space212. The hardware layer206includes one or more processors214, system memory216, a network interface controller (“NIC”)216, and a mass-storage device218. The hardware layer206also includes other components, including power supplies, internal communication links and busses, specialized integrated circuits, many different types of processor-controlled or microprocessor-controlled peripheral devices and other controllers. Each VM includes an application program or other higher-level computational entity packaged together with an operating system, called a “guest operating system.” Each guest operating system interfaces to the kernel space208rather than to the hardware layer206. The kernel space208includes a virtual-machine-monitor module210(“VMM”) that virtualizes a pool of physical processors and memory in the hardware layer206to create virtual processors, virtual memory, and a virtual network that are available to each of the VMs201-205. The VMM210includes a packet generator220implemented in a kernel module that performs packet generation, a kernel module with NIC drivers223, and, in this example, a kernel module that implements a TCP/IP protocol222that specifies how data is packetized and addressed.
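As a rough illustration of what such a packet generator does, the Python sketch below splits a block of data into payloads and prepends a small header carrying basic routing information (source and destination addresses, total length, and a protocol identifier, as described below). The 11-byte header layout, the 4-byte addresses, and the 1400-byte payload limit are arbitrary choices for this example and do not reflect the actual format used by the packet generator220.

import struct

def generate_packets(data: bytes, src_ip: bytes, dst_ip: bytes,
                     protocol_id: int, max_payload: int = 1400) -> list:
    # Split the data into payloads and prepend a small header carrying the source
    # address, destination address, total packet length, and protocol identifier.
    packets = []
    for offset in range(0, len(data), max_payload):
        payload = data[offset:offset + max_payload]
        header = struct.pack("!4s4sHB", src_ip, dst_ip,
                             len(payload) + 11, protocol_id)
        packets.append(header + payload)
    return packets

# Example: 4000 bytes of application data addressed to another host.
packets = generate_packets(b"x" * 4000, bytes([10, 0, 0, 8]), bytes([10, 0, 1, 5]), 6)
assert len(packets) == 3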
In the example ofFIG.2, the VMs201and202send data to packet generator220, which partitions the data into segments (i.e., payloads) and attaches a header to each payload to form data packets. The header contains routing information, such as the IP address of the host that generated the packet and the IP address of a destination. The destination can be another host in the data center102or a computer system on the internet. The header also contains information about the packet length and a protocol identifier that identifies the networking protocol used to generate the packet. The data packets are sent to the NIC drivers223and then to the NIC216, which places the packets on the data center network (“DCN”). The VMs203-205include packet generators, such as iperf and/or netperf, that generate data packets for the applications running in the VMs203-205. In this example, the packets are sent to a TCP/IP protocol222, then to the NIC drivers223, and finally to the NIC216, which places the packets on the DCN. The flow rate of data packets leaving the host200is determined by where a packet generator is located within the host. A packet generator may run in the user space of a host or in a module of the kernel space of the host. For example, a packet generator running in the user space sends packets to a protocol stack, such as the TCP/IP protocol222, and then directly to a driver of a network interface controller (“NIC”), and finally to the NIC216where the packets are placed on the DCN. By contrast, packets generated by a packet generator running in the kernel space are sent directly to the NIC driver223and then to the NIC216, without using a protocol stack, which prevents the protocol stack from impacting performance of the packet generator. However, generating packets in the user space212or the kernel space208as described above with reference toFIG.2is susceptible to variations in the time delays between data packets sent to the NIC216. This variation is called packet jitter. Packet jitter is created in the kernel space208by variations in the speed of the PCI bus of the host, CPU speed, memory latency, and direct memory access latency. Large disruptions (i.e., latency) in the flow of packets to a destination often lead to artifacts that degrade the quality of data assembled at the destination. In contrast to the conventional approach of placing packet generators in the kernel space or in the user space as illustrated by example inFIG.2, this disclosure presents computer-implemented processes for generating data packets in an SNIC of a host. Placing packet generators in the SNIC eliminates packet jitter due to PCI bus speed, CPU speed, memory latency, and direct memory access latency of the host, thereby generating data packets with a consistent packet flow rate. FIG.3shows an example architecture of a host that contains an SNIC. The host contains multiple central processing units (“CPUs”)302-305, one or more electronic memories308interconnected with the CPUs by a CPU/memory-subsystem bus310or multiple busses, a first bridge312that interconnects the CPU/memory-subsystem bus310with additional busses314and316, or other types of high-speed interconnection media, including multiple, high-speed serial interconnects. The busses or serial interconnections, in turn, connect the CPUs and memory with specialized processors, such as a graphics processor318, and with one or more additional bridges320, which are interconnected to a trusted platform module, a SNIC322, and multiple controllers323-327.
The controllers322-327are connected to the bridge320with high-speed serial links, such as peripheral component interconnect express (“PCIe”) serial expansion busses. The controllers323-327are expansion cards that interface with different types of peripheral devices. The SNIC322is a component that connects the host to a DCN as shown inFIG.1. An example implementation of the SNIC322is described below with reference toFIG.4. The controller327interfaces with a computer-readable medium328. The other controllers323-326can interface with electronic displays, input devices, and other such components, subcomponents, and computational resources. The electronic displays, including visual display screens, audio speakers, and other output interfaces, and the input devices, including mice, keyboards, touch screens, and other such input interfaces, together constitute input and output interfaces that allow the host to interact with human users. The computer-readable medium328is a data-storage device, including electronic memory, an optical or magnetic disk drive, a magnetic tape drive, a USB drive, flash memory, and other such data-storage devices. FIG.4shows an example architecture of the SNIC322shown inFIG.3. The SNIC322comprises a CPU402that is connected to a programmable accelerator404via a high-speed interconnect406mounted on a printed circuit board400. The SNIC322includes memory408that is mounted on the circuit board400and connected to the CPU402. In this example, the CPU402is connected to an RJ45 modular ethernet connector410. The programmable accelerator404is connected to two small form-factor pluggable (“SFP”) connectors412and414that may be used to connect with fiber-optic cables. The circuit board400includes an array of pins416that are inserted into an electrical connector, such as an expansion slot, of a motherboard of the host. The SNIC322includes non-volatile memory that stores virtual device functions418, such as a virtual network adapter that provides high performance in virtual machines (“VMs”) running on the SNIC322. In this example, the CPU402comprises four cores, denoted by core0, core1, core2, and core3, that are connected by a bus interface420. The SNIC322is not limited to a CPU with just four cores. In other implementations, the CPU402may contain as few as two cores, while in still other implementations the CPU402may contain more than four cores. Computer-implemented processes for generating data packets in the SNIC322of a host include a smart packet generator (“spktgen”) controller that runs in the user space of the host.FIG.5Ashows an example architecture of a host500that runs a spktgen controller502. In this example, the host500runs five VMs504-508. The host500includes a hardware layer510, a kernel space512, and a user space514. The hardware layer510includes the processors302-305, memory308, SNIC322, and mass-storage device328. The hardware layer also includes other components (not shown), such as power supplies, internal communication links and busses, specialized integrated circuits, many different types of processor-controlled or microprocessor-controlled peripheral devices and other controllers identified inFIG.3. Each VM includes an application program or other higher-level computational entity packaged together with a guest operating system that interfaces with the kernel space512rather than directly to the hardware layer510.
The kernel space512includes a VMM516that virtualizes the physical processors and memory in the hardware layer510to create virtual processors, virtual memory, and a virtual network that are available to the VMs504-508. The VMM516includes a kernel module that runs the SNIC driver518that sends data generated by the VMs504-508to the SNIC322, where the data is packetized as described below prior to being placed on the DCN. The spktgen controller502is implemented as a script program that provides users, such as a systems administrator, with a command line for entering commands that control how the SNIC322performs packet generation. For example, a systems administrator or systems manager may access the command line of the spktgen controller502via the PC106. FIG.5Bshows another example architecture of a host520that runs the spktgen controller502. In this example, the host520runs five applications522-526. The host520includes the hardware layer510described above with reference toFIG.5A. The host520includes a kernel space528and a user space530. In this example, the user space530comprises the applications522-526and the spktgen controller502. The kernel space528comprises a host operating system532that runs the SNIC driver518described above in a kernel module of the operating system. The spktgen controller502is implemented as a script program that provides users, such as a systems administrator, with a command line for entering commands that control how the SNIC322performs packet generation. For example, a systems administrator or systems manager may access the command line of the spktgen controller502via the PC106. Computer-implemented processes for generating data packets in the SNIC322of hosts are implemented with a spktgen daemon and a spktgen engine that run in a core of the CPU402of the SNIC322. The core that runs the spktgen daemon and the spktgen engine is called the “control core.” The remaining cores of the CPU402execute packet generating protocols in accordance with instructions from the spktgen engine running in the control core. The cores that perform packet generation are called “data cores.” FIG.6shows examples of commands input to the spktgen controller502and shows a spktgen daemon602and a spktgen engine604running in core0 of the CPU402of the SNIC322. For the sake of brevity, other components of the SNIC322shown inFIG.4are not shown inFIG.6. In this example, core0 is designated as the control core for performing the operations of the spktgen daemon602and the spktgen engine604described below with reference toFIG.7. The spktgen daemon602and the spktgen engine604can perform three different modes of packet generation at the data cores as selected by a user. The three modes are called “interactive mode,” “non-interactive performance mode,” and “non-interactive function mode.” The spktgen controller502displays a command prompt in a user interface of the PC106that enables a user to input a user command606that designates the mode and the parameters associated with the mode. The general form of the user command606input at the command prompt is given by:
~$ spktgen mode+parameters
The command “spktgen mode” identifies either an interactive mode or a non-interactive mode performed by the spktgen engine604. The “parameters” portion of the command comprises parameters that the spktgen engine604uses to control how packets are generated at the data cores.
If the user desires to execute a non-interactive mode, the command “spktgen mode” designates one of two sub-modes identified as non-interactive performance mode and non-interactive function mode. FIG.6shows an example format of a non-interactive performance mode command608. The performance mode command608includes a parameter610that identifies the command as non-interactive performance mode, a parameter611that indicates the amount of time for sending packets or total number of packets (i.e., count) to send, and a parameter612designating the number of data cores to use. The command608includes a parameter613designating the type of packet generator to use for packet generation, such as a media access control (“MAC”) packet generator, an IPv4/IPv6 packet generator, a TCP packet generator, or a user datagram protocol (“UDP”) packet generator. The command608also includes a parameter614designating payload length (e.g., number of bytes in the payload) and a parameter615identifying a port number of the SNIC322used to place the data packets onto the DCN. FIG.6shows an example format of a non-interactive function mode command616. The function mode command616includes a parameter618that identifies the command as a non-interactive function mode, an instruction619designating the type of packet generator (e.g., a MAC packet generator, an IPv4 or IPv6 packet generator, a TCP packet generator, or a UDP packet generator), an instruction614designating payload length, and an instruction621identifying a port number of the SNIC322used to place the data packets onto the DCN. FIG.6shows an example format of an interactive mode command622. The interactive mode command622includes a parameter624that identifies the command as an interactive mode, a parameter625that sets the interactive packet generation as iperf or netperf, and a parameter626that represents any of various other parameters, such as packet type, payload length, and port number of the SNIC for placing data packets onto the DCN. FIG.7shows a flow diagram of the operations performed by the spktgen controller502, the SNIC driver518, the spktgen daemon602, and the spktgen engine604. The spktgen controller502is a shell script that enables a user to input a command described above with reference toFIG.6. A user can input an interactive or non-interactive mode command described above with reference toFIG.6. The spktgen controller502receives701the command and sends702the command to the SNIC driver518. The SNIC driver518calls a thread that executes a communication function or API for exchanging information between the host and the SNIC322. The CPU402loads the spktgen daemon602and the spktgen engine604from the memory408into one of the cores, such as core0 in FIG.6. As a result, core0 becomes the control core for the process of generating data packets. The spktgen daemon602extracts the command to determine the mode (i.e., interactive mode, non-interactive performance mode, or non-interactive function mode) and other packet parameters recorded in the command, such as type of data packet generator, packet length, and port number. The spktgen daemon602forwards705the mode and packet parameter instructions to the spktgen engine604. In accordance with the instructions received from the spktgen daemon602, the spktgen engine604creates706threads that run in each of the data cores. For example, the spktgen engine604sets the number of cores to generate packets in accordance with the performance mode command608.
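A possible in-memory representation of a non-interactive performance mode command, together with a trivial parser, is sketched in Python below. The field names and the positional syntax are assumptions for illustration; the embodiments describe the parameters610-615but not an exact command syntax.

from dataclasses import dataclass

@dataclass
class PerformanceModeCommand:
    # In-memory form of a non-interactive performance mode command; the field
    # names are placeholders corresponding to parameters 610-615.
    mode: str                  # "perf", "func", or "interactive"
    duration_or_count: int     # time to send packets, or total packet count
    data_cores: int            # number of data cores to use
    generator: str             # "mac", "ipv4", "ipv6", "tcp", or "udp"
    payload_length: int        # payload length in bytes
    port: int                  # SNIC port used to place packets onto the DCN

def parse_performance_command(argv: list) -> PerformanceModeCommand:
    # e.g. argv = ["perf", "1000", "3", "udp", "512", "0"]
    mode, count, cores, generator, payload_length, port = argv
    return PerformanceModeCommand(mode, int(count), int(cores), generator,
                                  int(payload_length), int(port))

command = parse_performance_command(["perf", "1000", "3", "udp", "512", "0"])
assert command.data_cores == 3 and command.generator == "udp"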
Each thread comprises a set of instructions for generating packets in accordance with the parameters of the command and a transmit (“TX”) function for outputting packets at regular time intervals. The spktgen engine604controls execution of each thread. For each thread, the spktgen engine604creates and sends707a corresponding header to be added to each data packet output from a data core. The spktgen engine604sends708data to the data cores where packets are generated by each data core according to a schedule executed by the spktgen engine604. FIG.8shows an example of the spktgen engine604of the control core0 executing packet generation on data generated by sources within the host500. For example, the data may be generated by any of the applications running in the VMs504-508in the host500. For example, Data1, Data2, and Data3 represent blocks or streams of data generated by three different data sources running in the host500, such as VMs504,505, and506. The spktgen engine604receives the data generated by the three different sources and assigns the data to threads running in each of the data cores: core1, core2, and core3. The spktgen engine604forwards Data1 to a packet generating thread running in core1, forwards Data2 to a packet generating thread running in core2, and forwards Data3 to a packet generating thread running in core3. As shown in the example ofFIG.8, each thread applies a user-selected packet generating protocol and applies a TX function that outputs the data packets in regularly spaced time intervals. For example, the thread in core1 applies a packet generating protocol801to Data1. If the user has entered an interactive mode, the packet generator can be iperf or netperf. On the other hand, if the user has entered a non-interactive mode, the packet generator can be a MAC packet generator, an IPv4 or IPv6 packet generator, a TCP packet generator, or a UDP packet generator. The thread ensures that packets are output from the packet generator801in accordance with the user-selected packet parameters. After the packets have been generated by the packet generator, the thread applies the TX function802to output the data packets Packets1from the CPU402on a port selected by the user in the command. The spktgen engine604can execute any of many different types of scheduling. In one implementation, the spktgen engine604assigns different data sources to certain data cores, such that the data cores generate data packets in parallel. For example, core1 may be used to generate data packets for the data generated by VMs504and505, core2 may be used to generate data packets for the data generated by VMs506and507, and core3 may be assigned to generate data packets for the data generated by VM508. In another implementation, the spktgen engine604assigns packet generation according to a round-robin schedule in which each data core generates packets for a different interval of time.FIG.9shows an example of a round-robin schedule for generating data packets with core1, core2, and core3. Horizontal axis902represents time. Vertical axis904represents the cores. Bars, such as bar908, represent time intervals in which each core generates data packets for the host. In another implementation, the spktgen engine604assigns packet generation according to a round-robin schedule in which each data core generates a fixed number of data packets before packet generation moves to a next data core in the schedule.
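The per-core TX function and the round-robin assignment of data to data cores can be sketched as follows in Python. Threads and queues stand in for the data cores and their input paths, and the fixed sleep interval stands in for the pacing of packet transmission; all names and timing values are illustrative assumptions rather than the actual spktgen engine implementation.

import itertools
import queue
import threading
import time

def tx_loop(core_id: int, packet_source: "queue.Queue", interval_s: float,
            stop: threading.Event):
    # Per-data-core TX function: emit the next generated packet at a fixed
    # interval, which is what keeps the inter-packet spacing constant.
    while not stop.is_set():
        packet = packet_source.get()
        print(f"core{core_id} TX {len(packet)}-byte packet")   # stand-in for an SNIC port
        time.sleep(interval_s)

def round_robin(blocks, core_queues):
    # Hand successive data blocks to the data cores in turn, as in the
    # round-robin schedule of FIG. 9.
    for block, core_queue in zip(blocks, itertools.cycle(core_queues)):
        core_queue.put(block)

core_queues = [queue.Queue() for _ in range(3)]     # stand-ins for core1, core2, core3
stop = threading.Event()
for core_id, core_queue in enumerate(core_queues, start=1):
    threading.Thread(target=tx_loop, args=(core_id, core_queue, 0.001, stop),
                     daemon=True).start()
round_robin([b"payload-%d" % n for n in range(6)], core_queues)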
Round robin scheduling may be used where the SNIC has fewer outputs than data cores in order to prevent congestion and dropped packets at the SNIC connectors. In another implementation, the spktgen engine604assigns one or more data cores to exclusively generate packets for one or more data sources and assigns the remaining data cores to generate data packets according to a round robin schedule.FIG.10shows an example of core1 and core2 generating packets according to a round-robin schedule in which the cores take turns generating packets in alternating time intervals and core3 generates packets in time intervals1001-1003that are not coordinated with the round robin scheduling of core1 and core2. In another implementation, the spktgen engine604uses one or more data cores to output data packets on one SNIC connector while other data cores are used in parallel to generate packets that are output on another SNIC connector. For example, packets generated by core1 and core2 may be output from the SNIC322with the RJ45 connector410while packets are generated by core3 and output from the SNIC with the SFP connectors412and414. Alternatively, packets generated by core1 and core2 may be output from the SNIC322with the SFP connectors412and414while packets generated by core3 are output from the SNIC with the RJ45 connector410. It is appreciated that the previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these embodiments will be apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
22,648
11863443
DETAILED DESCRIPTION The following detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show illustrations, in accordance with exemplary embodiments. These exemplary embodiments, which are also referred to herein as “examples,” are described in enough detail to enable those skilled in the art to practice the present subject matter. The embodiments can be combined, other embodiments can be utilized, or structural, logical, and electrical changes can be made without departing from the scope of what is claimed. The following detailed description is therefore not to be taken in a limiting sense, and the scope is defined by the appended claims and their equivalents. In this document, the terms “a” and “an” are used, as is common in patent documents, to include one or more than one. In this document, the term “or” is used to refer to a nonexclusive “or,” such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. The embodiments disclosed herein may be implemented using a variety of technologies. For example, the methods described herein may be implemented in software executing on a computer system containing one or more computers, or in hardware utilizing either a combination of microprocessors or other specially designed application-specific integrated circuits (ASICs), programmable logic devices, or various combinations thereof. In particular, the methods described herein may be implemented by a series of computer-executable instructions residing on a storage medium, such as a disk drive, or computer-readable medium. The embodiments described herein relate to mechanisms for providing regional network configurations of a virtual overlay Software-Defined Wide Area Network (SD-WAN). Embodiments of the present disclosure provide a unique solution to help enterprises build large scale SD-WAN. This is accomplished by allowing division of a large SD-WAN into regions, each region being a subnetwork of enterprise sites deployed with a specially programmed network computing appliance. Each network appliance at a site belongs to one and only one region. Thus, a Software-Defined Wide Area Network of the present disclosure is composed of multiple interconnected regional subnetworks. In embodiments of the present disclosure, SD-WAN can be utilized to seamlessly meld together private and public communication networks, so that the SD-WAN provider can carry network traffic over the Internet without the use of any ISPs. Public communication networks today typically create point-to-point links, stitched together manually over IPSec, to make the network seem like a private network to an end user. With the use of an SD-WAN fabric, each individual router and network does not need to be manually configured to carry network traffic. Rather, an end user simply transmits to an edge router of the SD-WAN fabric and does not need to worry about the intermediate hops to the destination computer. In this way, a source can transmit traffic over a public Internet (and not have to rely on an expensive ISP connection) in a scalable, secure, seamless, and deployment free manner. Destination sites can be added to or removed from the SD-WAN fabric, without any change necessary from the source side or the destination side of the network traffic.
With the specialized SD-WAN fabric created by the present invention, an entire proprietary SD-WAN fabric may appear as a single neighboring router to a peer computing device outside of the SD-WAN. Embodiments of the present disclosure can be implemented via a physical or virtual network appliance with at least two standard interfaces (vLAN or Native) on different subnetworks. The interfaces can be LAN or WAN interfaces. The network appliance can be configured in a number of different ways. Depending upon the network topology configuration, each network appliance in a region builds either a full mesh, partial mesh or hub and spoke topology with all the other appliances in a region. Each network appliance can play a specific role in the SD-WAN, depending on the topology of the region. For example, in the case of a regional network with hub and spoke topology, each appliance in the region is configured as either a hub or a spoke for the region. The hub network appliances provide an intermediate network hop to connect two different spokes in the same region (intra-region). Inter-region connectivity is also provided by embodiments of the present disclosure. With inter-region connectivity, a hub in one region is connected to a hub in another region. This forms a second level of hub-to-hub network topology (inter-region network topology). For a site connecting from one region to another region, an exemplary SD-WAN path may be: source node→regional hub→hub of the other region→destination node. In embodiments of the present disclosure, regions of an enterprise SD-WAN are configured by the enterprise itself, and are thus customizable. An enterprise may have hundreds or thousands of offices spread across the globe. In a typical network architecture, these sites are connected to one another via MPLS and the enterprise must configure rules about how the sites connect with one another. With the present invention, regions are created within an SD-WAN, and a special purpose computing network appliance belongs to a specific region, creating a logical topology. Hubs operate as interconnect points across regions. Further, forming a virtual network across thousands of sites and forming a full mesh architecture between them is very costly and requires building millions of tunnels to facilitate communications. With the regional SD-WAN architecture disclosed herein, the network burden is reduced. Only hubs of each region need to be fully meshed, significantly reducing the number of tunnels required in the virtual overlay network. With an SD-WAN, data transport is virtualized, including Internet links, LTE, etc. Once the virtual network is created, it serves as a raw network. Embodiments of the present disclosure provide for the creation of user defined topologies on the virtual network. Previously this was only done on an MPLS network. In the present disclosure, MPLS serves as a raw data transfer pipe for an SD-WAN, whereas the virtual network provides intelligence for the data transfer. By dividing the communication network into regions, an enterprise can define applicable business policies region by region. For example, an enterprise may have a business policy that traffic traversing over the Internet in one region is sent to Zscaler cloud security service, while traffic traversing in a second region is sent to a local cloud security service provider (e.g., Symantec) for that second region. In this way, business policies for an enterprise can be customized on a regional basis.
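The region-by-region policy idea above can be pictured with a short, hypothetical Python sketch that maps each region to the cloud security service its Internet-bound traffic is steered to. The region names and the lookup structure are assumptions for illustration, not an orchestrator data model.

# Hypothetical sketch: mapping each region to the cloud security service its
# Internet-bound traffic should be steered to, per the example above.
REGION_SECURITY_POLICY = {
    "us-east": "zscaler",      # region names are made up for illustration
    "eu-west": "symantec",     # this region uses a local cloud security provider
}

def security_service_for(region: str) -> str:
    # Fall back to a default service if a region has no explicit policy.
    return REGION_SECURITY_POLICY.get(region, "zscaler")

print(security_service_for("eu-west"))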
Further, each region of the SD-WAN can have different combinations of transport links with different link qualities and link bandwidths. Regional network architecture allows enterprises to configure link usage region by region. For example, one region can use MPLS links to carry specific application traffic while for the same application, Internet links can be used in the other regions. Similar to a single region (i.e., no region) SD-WAN, the multi-region SD-WAN disclosed herein is centrally orchestrated from a single orchestrator. The orchestrator translates user policies into network configuration policies, assigns hub and spoke roles to each appropriate appliance, and pushes corresponding network policies to all of the appliances in the network. This allows a network administrator to configure regional policies and global network wide policies to meet regional and corporate business goals. In some embodiments, local configuration aspects of the region can be restricted to be managed by regional network administrative teams, while the global aspects of the configuration can be managed by global network administrative teams. Global policies like application to BIO mappings, VRFs (network wide layer-3 segmentation) and end to end security policies are required for the entire network. In the regional network architecture, the global policies are honored by packets carrying sufficient information for inter-region traffic flows. I. System Setup FIG.1illustrates an exemplary system100, within which the present disclosure can be implemented. The exemplary system100includes a first location110, a second location120, and communication networks130A-130D. While four communication networks are depicted in exemplary system100, there can be any number of communication networks, including just one. Additionally, system100can include many locations, though only two are depicted in the exemplary figure for simplicity. In the exemplary embodiment depicted inFIG.1, the first location110includes computers140and a first appliance150. In the first location110, the computers140are linked to the first appliance150. While only one appliance is depicted in first location110, there can be multiple appliances, physical and/or virtual, at first location110. In some embodiments, the first location is a branch location of an enterprise. While not depicted here, first location110can also comprise additional elements such as routers, switches, or any other physical or virtual computing equipment. Computers140may be any type of computing device capable of accessing a communication network, such as a desktop computer, laptop computer, server, mobile phone, tablet, or any other “smart” device configurable for connection to a communication network. The first appliance150comprises hardware and/or software elements configured to receive data and optionally perform any type of processing before transmitting data across one or more communication networks. As illustrated, the first appliance150is configured in-line (or serially) between the computers140and the router160. The first appliance150intercepts network traffic between the computers140and the servers170, in either direction. In other embodiments, the first appliance150can be configured as an additional router, gateway, bridge, or be transparent on some or all interfaces. As a router, for example, the first appliance150appears to the computers140as an extra hop before the router160.
In some embodiments, the first appliance150provides redundant routing or peer routing with the router160. Additionally, the first appliance150may provide failure mechanisms, such as fail-to-open (e.g., no data access) or fail-to-wire (e.g., a direct connection to the router160). If an appliance has multiple interfaces, it can be transparent on some interfaces, or act like a router, or act like a bridge on others. Alternatively, the appliance can be transparent on all interfaces, or appear as a router or bridge on all interfaces. InFIG.1, the first appliance150is linked to a router160, which is coupled to communication networks130A and130B. While only one router160is depicted in exemplary system100, there can be multiple routers, switches, or other equipment (physical or virtual) present in system100, either within the first location110or outside of the first location110. Typically, router160would be located within first location110. In various embodiments, first appliance150may be in communication with communication networks130C and130D directly (on separate interfaces), instead of through router160. While router160is depicted as being connected to two communication networks and first appliance150is also depicted as being connected to two communication networks, a person of ordinary skill in the art would understand that there can be any number of communication networks (including just one communication network) connected to the first location110, either via router160, via first appliance150, or via another computing device. To illustrate that each of the access links is possible but not required in every embodiment, the access links125are shown as dashed lines inFIG.1. The second location120in exemplary system100includes servers170. While the term “server” is used herein, any type of computing device may be used in second location120, as understood by a person of ordinary skill in the art. The server may also be a virtual machine. While not depicted inFIG.1, second location120can optionally include at least one second appliance in addition to, or instead of, servers170. Second location120can also include other components not depicted inFIG.1, such as routers, switches, load-balancers or any other physical or virtual computing equipment. In some embodiments, the second location120is a central location or data center for an enterprise. In other embodiments, the second location120is a data center hosting a public web service or application. The servers170are depicted inFIG.1as being linked to the communication networks130A-130D via destination access links145. In some embodiments, servers170may actually be in communication with one or more of the communication networks through a router, switch, second appliance, or other physical or virtual equipment. Further, while four destination access links145are depicted inFIG.1, for four communication networks (130A-130D), there may actually be fewer (such as just one) or more communication networks connected to second location120. To illustrate that each of the destination access links145is possible but not required in every embodiment, the destination access links145are shown as dashed lines inFIG.1. The communication networks130A-130D comprise hardware and/or software elements that enable the exchange of information (e.g., voice, video and data) between the first location110and the second location120.
Some examples of the communication networks130A-130D are a private wide-area network (WAN), the public Internet, Multiprotocol Label Switching (MPLS) network, and wireless LTE network. Typically, connections from the first location110to the communication networks130A-130D (e.g., from router160and first appliance150) are T1 lines (1.544 Mbps), or broadband connections such as digital subscriber lines (DSL) and cable modems. Other examples are MPLS lines, T3 lines (43.232 Mbps), OC3 (155 Mbps), OC48 (2.5 Gbps), fiber optic cables, or LTE wireless access connection. In various embodiments, each of the communication networks130A-130D may be connected to at least one other communication network via at least one Inter-ISP link155. For example, communication network130A may be connected to communication network130B,130C, and/or130D via one or more inter-ISP links. Data may traverse more than one communications network along a path from first location110to second location120. For example, traffic may flow from the first location110to communication network130A, over inter-ISP link155to communication network130B, and then to the second location120. The router160and first appliance150are optionally connected to the communication networks130A-130D via access links125, sometimes also referred to herein as network access links. The communication networks130A-130D consist of routers, switches, and other internal components that make up provider links135. The provider links135are managed by the network service providers such as an Internet Service Provider (ISP). The second location120can be connected to communication networks130A-130D via destination access links145. Access links125, provider links135, and destination access links145can be combined to make various network paths along which data travels between the first location110and the second location120. The exemplary embodiment ofFIG.1depicts two paths along various provider links135through each communication network. However, as understood by persons of ordinary skill in the art, there can be any number of network paths across one or more communication networks. In addition, communication networks may be in communication with one another via inter-ISP link(s)155. For example, data traveling through communication network130A may also travel through communication network130C before reaching second location120. In various embodiments, data can travel through any one or more of the communication networks130A-130D from first location110to second location120, and vice versa. Generally, an inter-ISP link connects communication networks of different internet service providers, such as a link connecting Verizon LTE wireless network with Comcast broadband network. In some embodiments, an inter-ISP link can connect communication networks from the same internet service provider, such as a link connecting Verizon LTE wireless network with the Verizon Fire network. The first appliance150, along with any other appliances in system100can be physical or virtual. In the exemplary embodiment of a virtual appliance, it can be in a virtual private cloud (VPC), managed by a cloud service provider, such as Amazon Web Services, or others. An appliance in a customer data center can be physical or virtual. Similarly, the second location120may be a cloud service such as Amazon Web Service, Salesforce, or others. 
As discussed herein, the communication networks130A-130D can comprise multiple provider links, made up of routers and switches, connecting networked devices in different locations. These provider links, which together form various paths, are part of one or more core networks, sometimes referred to as an underlay network. In addition to these paths, there can also be tunnels connecting two networked devices. A virtual network, sometimes called an overlay network, can be used to transmit data across an underlay network, regardless of which Service Provider manages the routes or provider links. Data from connected devices can travel over this overlay network, which can consist of any number of tunnels or paths between each location. In an exemplary embodiment, data from computers140at first location110may include voice, video, and data. This information can be transmitted by first appliance150over one or more communication networks130A-130D to second location120. In some embodiments, voice, video, and data may be received and transmitted on separate LAN or vLAN interfaces, and first appliance150can distinguish the traffic based on the LAN/vLAN interface at which the data was received. In some embodiments, the system100includes one or more secure tunnels between the first appliance150and servers170, or optionally a second appliance at the second location. The secure tunnel may be utilized with encryption (e.g., IPsec), access control lists (ACLs), compression (such as header and payload compression), fragmentation/coalescing optimizations, and/or error detection and correction provided by an appliance. In various embodiments, first location110and/or second location120can be a branch location, central location, private cloud network, data center, or any other type of location. In addition, multiple locations can be in communication with each other. As understood by persons of ordinary skill in the art, any type of network topology may be used. The principles discussed herein are equally applicable to multiple first locations (not shown) and to multiple second locations (not shown). For example, the system100may include multiple branch locations and/or multiple central locations coupled to one or more communication networks. System100may also include many sites (first locations) in communication with many different public web services (second locations). Branch location/branch location communication, central location/central location communication, central location/cloud appliance communication, as well as multi-appliance and/or multi-node communication and bi-directional communication are further within the scope of the disclosure. However, for the sake of simplicity,FIG.1illustrates the system100having a single first location110and a single second location120. FIG.2illustrates a block diagram of an appliance250(also referred to herein as a network appliance), in an exemplary implementation of the invention. Appliance250may be similar to appliance220ofFIG.2and first appliance150ofFIG.1, as discussed herein. Each appliance is at a “site”, which may be a branch, a data center, or a virtual instance in a cloud. The appliance250includes a processor210, a memory220, a WAN communication interface230, a LAN communication interface240, and database(s)290. A system bus280links the processor210, the memory220, the WAN communication interface230, the LAN communication interface240, and the database(s)290. 
When deployed in a branch location, line260links the WAN communication interface230to the router160(inFIG.1), and line270links the LAN communication interface240to the computers140inFIG.1. The database(s)290comprises hardware and/or software elements configured to store data in an organized format to allow the processor210to create, modify, and retrieve the data. The hardware and/or software elements of the database(s)290may include storage devices, such as RAM, hard drives, optical drives, flash memory, and magnetic tape. In some embodiments, some appliances comprise identical hardware and/or software elements. Alternatively, in other embodiments, some appliances, such as a second appliance, may include hardware and/or software elements providing additional or specialized processing, communication, and storage capacity. Embodiments of the present invention also allow for centrally assigned policies to be implemented throughout an organization's entire network, to secure and control all WAN traffic for the organization. Software defined WAN (SD-WAN) overlay networks can be created independently from the physical network, and from each other, and in multiple layers. Topology, security, and forwarding rules can be specified independently for each overlay. This design allows for high-scale and secure application segmentation. Each overlay scales automatically as endpoints are added to the SD-WAN fabric, and configuration integrity is maintained as each site maps a local profile into a global overlay. All of the overlay networks, labels, and corresponding ports, subnets and vLANs can be maintained in one or more databases in communication with an orchestrator device, as depicted inFIG.3. The orchestrator310can be hardware and/or software, and be in communication with each of the networked devices, such as the network appliances, as well as in communication with the database(s)320. In exemplary embodiments, the orchestrator310may maintain information regarding the configuration of each appliance at each location (physical or virtual). In this way, the orchestrator310can create, manage and implement policies for network traffic throughout the network of connected appliances. For example, if a higher priority is designated for voice traffic, the orchestrator310can automatically configure the corresponding network appliances at all relevant locations accordingly. By having knowledge of the configuration of each appliance in the network, the orchestrator310can also create and manage tunnels in the enterprise network, including tunnels to carry a particular type of network traffic between each source-destination appliance pair. The orchestrator310can automatically configure the enterprise network by determining which tunnels need to be set up, and automatically creating them based on the network nodes and overlays. The orchestrator310can also configure policies based on the application classification techniques to preferentially steer certain types of applications over one path rather than over another path. In exemplary embodiments, network interfaces of a network appliance250can be designated on the WAN side and LAN side as processing a specific type of traffic, or traffic from specific applications. For example, a first WAN interface may connect to the public Internet, while a second WAN interface connects to an MPLS service. Both WAN interfaces can support encryption and the Internet uplink can be configured for Network Address Translation (NAT). II. 
Regional Overlay Networks FIG.4depicts an example network topology with three regions in a Software Defined Wide Area Network (SD-WAN). While three regions are depicted in the exemplary figure, there may be any number of regions in a single SD-WAN in various deployments. Regardless of the number of regions, the entire SD-WAN may be hidden and encapsulated into a single virtual interface to a computing device outside of the overlay network. In this way, a computing device outside of the overlay network created by the appliances can exchange messages with an appliance and create a state with an appliance of the overlay network without needing to know or understand the complex overlay network connecting the appliances. While not expressly depicted for simplicity, there can be any number of other routers or other computers in the environment than those shown inFIG.4. In the exemplaryFIG.4, West Region470has four appliances: one appliance is configured as a hub420for the region, and three appliances are configured as spokes for the region (spoke405, spoke410, and spoke415). ExemplaryFIG.4also depicts East Region475with four appliances. One appliance is configured as hub425for the region, and three appliances are configured as spokes for the region—spoke430, spoke435, and spoke440. There is also Central Region465depicted in exemplaryFIG.4. One appliance is configured as a hub for the region—hub445. Three appliances are configured as spokes for the region—spoke450, spoke455, and spoke460. While four appliances are depicted in each of these three regions for simplicity, there can be any number of appliances present in each region. However, each region has at least one hub in exemplaryFIG.4, since each region is configured in a hub and spoke topological configuration. Each of the appliances may be part of a centralized data center or a branch center. Each appliance may or may not have a private network connection as well. Further, each appliance may be connected to an orchestrator (such as orchestrator310ofFIG.3), even though not depicted in the exemplary environment ofFIG.4. As discussed herein, the orchestrator has a global view of the whole network across the geographical area, and all of the configurations and deployment can occur at the orchestrator itself. Further, the orchestrator configures the network, such as an exemplary network ofFIG.4. In order to transmit data, first a secure interface channel is created between the different appliances (also referred to herein as an overlay network). A virtual interface is overlaid on the overlay network. In this way, data can be transmitted in a secure manner regardless of the security of the underlying physical network. As discussed herein, a central orchestrator for the SD-WAN can configure each network appliance in each region to act as either a hub for the region or a spoke for the region. The orchestrator can automatically update and reassign appliances to each of these roles in a dynamic fashion. For example, if an appliance configured as a hub in a first region goes down for any reason, a spoke in the region can be reconfigured to take over as a hub for the region. In other embodiments, a hub in a second region can be reconfigured to serve as a hub for the first region. Alternatively, a local or global human network administrator can update and reassign appliances to each of these roles in a dynamic fashion. Network traffic may traverse between each appliance in a region (intra-region), and/or traverse in an inter-region fashion via the hubs. 
That is, each hub in a region is capable of communicating with each hub in a different region, as depicted in exemplaryFIG.4. For example, an exemplary network path for data flow can be: spoke405→hub420→hub445→spoke460. Another exemplary network path for data flow can be: spoke450→hub445→hub420→spoke430. FIG.5depicts another example network topology with three regions in a SD-WAN. While three regions are depicted in the exemplary figure, there may be any number of regions in a single SD-WAN. In the exemplaryFIG.5, West Region570has five appliances. Two appliances are configured as hubs for the region (hub520and hub580), and three appliances are configured as spokes for the region (spoke505, spoke510, and spoke515). ExemplaryFIG.5also depicts East Region575with five appliances. Two appliances are configured as hubs for the region (hub525and hub590), and three appliances are configured as spokes for the region—spoke530, spoke535, and spoke540. There is also Central Region565depicted in exemplaryFIG.5. Two appliances are configured as hubs for the region (hub545and hub585). Three appliances are configured as spokes for the region—spoke550, spoke555, and spoke560. While five appliances are depicted in each of these three regions for simplicity, there can be any number of appliances present in each region. However, each region has at least one hub since each region is configured in a hub and spoke topology. Typically, a hub is a point of failure for a region, which may make it desirable to have two hubs in a region. If one hub fails, the other is still available for connectivity to the region. As discussed herein, a central orchestrator for the SD-WAN can configure each network appliance in each region to act as either a hub for the region or a spoke for the region. The orchestrator can update and reassign appliances to each of these roles in a dynamic fashion. Alternatively, a local or global human network administrator can update and reassign appliances to each of these roles in a dynamic fashion. Network traffic may traverse between each appliance in a region (intra-region), and/or traverse in an inter-region fashion via the hubs. That is, each hub in a region is capable of communicating with each hub in a different region, as depicted in exemplaryFIG.5. Thus, hub525in East Region575is capable of communicating with either hub in West Region570and either hub in Central Region565. As such, hub525is connected to four other hubs in the exemplary environment ofFIG.5. Similarly, each of the hubs inFIG.5is connected to four other hubs. Notably, a hub in one region may typically not communicate with another hub in its same region, to prevent routing loops. An exemplary network path for data flow in this environment can be: spoke505→hub520→hub525→spoke530. Another exemplary network path for data flow can be: spoke510→hub580→hub590→spoke530. FIG.6depicts another example network topology with three regions in a SD-WAN. While three regions are depicted in the exemplary figure, there may be any number of regions in a single SD-WAN. In the exemplaryFIG.6, West Region670has four appliances: one appliance is configured as a hub620for the region, and three appliances are configured as spokes for the region (spoke605, spoke610, and spoke615). ExemplaryFIG.6also depicts East Region675with four appliances. One appliance is configured as hub625for the region, and three appliances are configured as spokes for the region—spoke630, spoke635, and spoke640. There is also Central Region665depicted in exemplaryFIG.6.
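For illustration, the following Python sketch treats the hub and spoke topology ofFIG.4as a small graph and finds the spoke-to-spoke path through the two regional hubs. The adjacency list covers only a subset of the appliances, and the breadth-first search is an assumed illustration of the path shape, not the appliances' forwarding logic.

# Hypothetical sketch: a spoke-to-spoke flow across regions traverses the local
# hub and the remote region's hub, e.g. spoke405 -> hub420 -> hub445 -> spoke460.
from collections import deque

# Adjacency for a subset of the FIG. 4 topology (hubs fully meshed with each other).
links = {
    "spoke405": ["hub420"], "spoke410": ["hub420"], "spoke415": ["hub420"],
    "spoke430": ["hub425"], "spoke450": ["hub445"], "spoke460": ["hub445"],
    "hub420": ["spoke405", "spoke410", "spoke415", "hub425", "hub445"],
    "hub425": ["spoke430", "hub420", "hub445"],
    "hub445": ["spoke450", "spoke460", "hub420", "hub425"],
}

def shortest_path(src, dst):
    # Breadth-first search over the overlay links.
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_path("spoke405", "spoke460"))  # ['spoke405', 'hub420', 'hub445', 'spoke460']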
One appliance is configured as a hub for the region—hub645. Three appliances are configured as spokes for the region—spoke650, spoke655, and spoke660. While four appliances are depicted in each of these three regions for simplicity, there can be any number of appliances present in each region. However, each region has at least one hub since each region is configured in a hub and spoke topology. As discussed herein, a central orchestrator for the SD-WAN can configure each network appliance in each region to act as either a hub for the region or a spoke for the region. The orchestrator can update and reassign appliances to each of these roles in a dynamic fashion. Alternatively, a local or global human network administrator can update and reassign appliances to each of these roles in a dynamic fashion. The network ofFIG.6is configured to provide one hop access to spokes in another region. Also, hub nodes are in a full mesh configuration. For example, the figure depicts hub645of Central Region665in communication with hub620of West Region670via link680and also in communication with hub625of East Region675via link690. Further, hub645is in communication directly with the spokes of West Region670(spoke605, spoke610, and spoke615). Thus, hub645in Central Region665can communicate with any appliance in West Region670via a single hop, whether it be a hub of West Region670or a spoke of West Region670. By building a tunnel (virtual network) directly between a spoke of West Region670and hub645in Central Region, latency of data traffic is reduced. Network traffic may traverse between each appliance in a region (intra-region), and/or traverse in an inter-region fashion. For example, an exemplary network path for data flow can be: spoke605→hub620→hub625→spoke630. Another exemplary network path for data flow can be: spoke650→hub645→spoke615. III. Regional Routing To create regional networks as part of a SD-WAN on a virtual overlay network, specialized software is utilized. A proprietary protocol called subnet sharing protocol is utilized to carry routing information and addresses (such as IP addresses) across all sites of the SD-WAN. To configure subnet sharing protocol for a multi-region SD-WAN, every route needs to have its own identity of which region it is coming from, so it can be properly placed in a correct routing table. The present disclosure is primarily directed to hub and spoke network topologies, or to full-mesh SD-WAN topologies. However, as would be understood by persons of ordinary skill in the art, other network topologies are also within the scope of this disclosure. In a typical scenario, a hub acts as the clearing-house for SD-WAN traffic. Traditionally, a hub advertises a default subnet to each of its spokes. However, with embodiments of the present disclosure, more intelligent routing between hubs can be achieved, and more specificity regarding where each spoke is routed to can be achieved as well. In full-meshed topologies, when a hub is defined, the hub redistributes all learned routes with all peers in the region. Redistribution can also occur between hubs in different regions, but not typically between two hubs in the same region (to prevent routing loops). In embodiments of the present disclosure, each network appliance has a defined role, and the appliance must know explicitly that it is a hub or a spoke for the region. Spoke appliances only share locally learned subnets (configured, auto, or dynamic). 
Hubs can redistribute spoke learned routes to other spokes in the same region or to hubs in other regions. Further, each network appliance explicitly knows in which region it resides. In a defined region, a hub may be configured to communicate with all spokes in the same region via any means (such as a tunnel). While there may be multiple hubs in a region, the hubs in a same region are typically not configured to communicate with one another. That is, while a hub may see all other appliances in its region, it does not share information with another hub in its same region. With multiple regions, subnet sharing can also be enhanced among network appliances. Embodiments of the present disclosure provide for a network appliance to be configured to support redistribution of appliance learned subnet routes across multiple hub and spoke regions. Subnet sharing is one strategy utilized to auto-optimize IP traffic, automatically directing flows to the appropriate tunnel. Auto-optimization strategies reduce the need to create explicit route map entries to optimize traffic. With subnet sharing, each appliance builds a subnet table from entries added automatically by the system or manually by a user. When two appliances are connected by a tunnel, they exchange this information (“learn” it) and use it to route traffic to each other. In exemplary embodiments, subnet sharing takes care of optimizing IP traffic based on the destination IP address alone. A route policy, or a global route policy template can be applied for data traffic flows that are to be sent pass-through (shaped or unshaped), dropped, configured for a specific high-availability deployment, and/or routed based on application, ports, VLAN, DSCP, or ACL (access control list). The Multi-Region Subnet Sharing (MRSS) leverages enhancements from UserSpace Routing (USR) and extends them across multiple hub and spoke regions established using business intent overlays from a central Orchestrator. For more discussion on business intent overlays, please see co-owned U.S. patent application Ser. No. 16/414,774 entitled “Data Transmission via a Virtual Wide Area Network Overlay”, which is hereby incorporated by reference. In various embodiments, spokes in a region do not redistribute any learned routes, regardless of where that route was learned from. This helps to prevent routing loops in the SD-WAN. However, hubs in a region can redistribute valid MRSS learned routes with all of the spokes in the same region, filtering out a local spoke's routes, to prevent routing loops. A hub can also redistribute its region's routes with a hub in a different region or a spoke in a different region if it has a tunnel to that other hub or other spoke. In an exemplary use case, there may be fifty appliances forming a virtual network amongst themselves on a WAN side. As discussed herein, all of these appliances also have a LAN side. There is some routing information on the LAN side. Routes that are not specific to the virtual overlay network are not connected to IP addresses of the appliance interfaces, and are located in the LAN network. These routes from the LAN side need to be learned by each network appliance and advertised on the SD-WAN fabric, so that all appliances in the SD-WAN know that a specific application is in a specific data center. Thus, when any appliance in the SD-WAN desires to access that specific application, the appliance knows the routing information to reach the relevant appliance in the data center that is operating that application. 
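The route-policy idea above (pass-through, drop, or steer to a tunnel based on application, port, VLAN, or DSCP) can be sketched as a first-match rule table; the match fields, rule set, and action strings below are hypothetical and are not the appliance's actual policy engine.

# Hypothetical sketch: first-match route policy evaluation over a flow's
# attributes. Field names and actions are assumptions for illustration only.
POLICIES = [
    {"match": {"application": "voip"}, "action": "tunnel:realtime-overlay"},
    {"match": {"dscp": 46},            "action": "tunnel:realtime-overlay"},
    {"match": {"port": 443},           "action": "pass-through-shaped"},
    {"match": {},                      "action": "tunnel:default-overlay"},  # catch-all
]

def route_action(flow: dict) -> str:
    # Return the action of the first policy whose match fields all agree with the flow.
    for policy in POLICIES:
        if all(flow.get(k) == v for k, v in policy["match"].items()):
            return policy["action"]
    return "drop"

print(route_action({"application": "web", "port": 443, "dscp": 0}))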
Typical network appliances in communication with one another in a SD-WAN may share subnets with a peer device in a single hop. With embodiments of the present disclosure, if subnet sharing is enabled on a network appliance, then all peer learned subnets from devices outside of the network appliance (automatic, configured, or dynamic) are shared with all connected peers, regardless of how many hops away from the network appliance the peer is located. Thus, the total number of routes that are shareable can be up to 30,000 for a large scale SD-WAN deployment. Subnets are shared as a list of subnets with a subnet message header. In implementing subnet sharing protocol, a control packet is shared between two nodes, where a “node” may be an appliance or other computing device not part of the overlay network. The control packet shared between the nodes exchanges routing information. An example control packet header is depicted inFIG.7. As would be understood by persons of ordinary skill in the art, there may be fewer or additional fields than those depicted inFIG.7, in various embodiments.FIG.7depicts a message header comprising fields such as a message type, message length, transaction identifier, system identifier, and umac checksum. In order to compact and conserve space, as many subnets as possible are placed inside a 1K buffer of the header, as depicted in exemplaryFIG.8. Each of these buffers has a umac checksum to identify corruption at the receiver. In exemplary embodiments, the subnets have no transaction code—they are always “add”. A peer detects deleted subnets by flushing out all associated subnets from the sending peer based on its transaction identifier (stamp). That is, all previously learned subnets from that transmitting peer appliance are deleted from the local subnet table of the receiving appliance and replaced with the newly received subnets. Updates can occur as often as every 20 milliseconds with a refresh occurring after 15 minutes of inactivity. The subnet code may transmit to one peer as frequently as every 20 milliseconds. If a change in the subnet table is detected by a network appliance, the transaction value is updated and an update is sent by the network appliance to each peer. If the receiving peer has an older transaction stamp, it replaces its information with the newly received subnets. If the receiving peer has a newer transaction stamp, it takes no action based on the received subnet message. In exemplary embodiments, an orchestrator may direct a network appliance to only send subnet update messages to peer appliances with subnets with older transaction stamps. A route redistributed by a hub typically has its metric increased by a known value. This allows for subnets learned from a spoke peer to take precedence over a hub learned route. The particular known value utilized in embodiments of the present disclosure may be a configurable value. To support MRSS while also supporting older systems and methods of subnet sharing, a unique proprietary message type and messaging format is used by the network appliances. In exemplary embodiments, the new messaging comprises an identifier, region, and role of an appliance along with a message count, total number of records, and a checksum. The message count, total number of records, and checksum aid a remote peer appliance in validating that it received a valid subnet sharing message, and whether it received all routes and all messages that it was sent.
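The transaction-stamp behavior described above can be sketched as follows: an update with a newer stamp flushes and replaces everything previously learned from that sending peer, while an older stamp is ignored. The data structures and field names are assumptions for illustration, not the wire format.

# Hypothetical sketch of transaction-stamp handling on the receiving peer.
subnet_table = {}          # (peer_id, subnet) -> metric
peer_txn_stamp = {}        # peer_id -> last accepted transaction stamp

def handle_subnet_update(peer_id, txn_stamp, subnets):
    if txn_stamp <= peer_txn_stamp.get(peer_id, -1):
        return  # older (or duplicate) stamp: take no action
    # Newer stamp: flush everything previously learned from this peer ...
    for key in [k for k in subnet_table if k[0] == peer_id]:
        del subnet_table[key]
    # ... and install the newly received subnets (subnet records are always "add").
    for subnet, metric in subnets:
        subnet_table[(peer_id, subnet)] = metric
    peer_txn_stamp[peer_id] = txn_stamp

handle_subnet_update("spoke1045", 7, [("10.1.0.0/16", 50)])
handle_subnet_update("spoke1045", 8, [("10.2.0.0/16", 50)])  # replaces the 10.1 entry
print(subnet_table)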
An exemplary message header for the subnet message is depicted inFIG.9. Modified existing header fields are shown in italicized text, and new header fields are shown in underlined text, for ease of identification. As would be understood by persons of ordinary skill in the art, there may be fewer or additional header fields in various embodiments, than what is depicted inFIG.9. The exemplary figure depicts a packet header comprising fields such as a number of subnets, a length of data packet, role of a sending appliance, region of a sending appliance, a system identifier, a number of messages in the data flow, a total number of messages, a total number of subnets, a transaction identifier, and a umac checksum. Region, Hub and Spoke Hub and spoke topology is one of the most commonly deployed Business Intent Overlays (BIO) in Orchestrator. This decreases the number of tunnels needed in the network, compared to a full mesh topology, since the connectivity is limited only between the hub and spoke. For example, five nodes in a network require twenty tunnels in a full mesh configuration, while only eight tunnels are required in a hub and spoke configuration. Embodiments of the present disclosure allow for the grouping of the claimed network appliances into regions where, inside each region, a full mesh or hub and spoke topology can be run. Connections between regions can be limited to only occur between hubs and there can be multiple hubs per region to support redundancy. Hubs are fully meshed. However, since a hub cannot redistribute any of its own learned routes from peer appliances, customers add a default route at each hub so that the spokes send traffic to the hub. However, a hub cannot know how to reach a spoke in another region, and when there are more than two regions involved, the complexity of deciding which hub to send the traffic to increases immensely. Typically in present systems, a peer cannot redistribute a subnet route to other peers. This causes issues with hub and spoke topologies since the hub cannot inform other hubs about which subnet routes it handles, nor can it tell the other spokes connected to it about the subnet routes from a specific spoke. With Multi-Region Subnet Sharing, these issues can be resolved. This control plane feature will allow the automatic redistribution of routes between hub and spoke and hub to hub, which the hubs can use to properly route the tunnel traffic, reducing the complexity of multiple regions. The hub will share the routes it learns from its spokes with the other spokes in its region. However, to avoid creating a routing loop, each spoke will only be given the other spokes' routes; the hub will not reflect a spoke's own routes back to that spoke. As discussed herein, hubs can also share the routes learned from their spokes with hubs in other regions, but not with hubs inside their own region. A hub can share the routes it learns from other hubs with its spokes but cannot share them with other hubs. Spokes do not redistribute any internal learned routes. To support this functionality, an orchestrator configures each appliance such that: a hub must be connected to all spokes in its region; within a region, all appliances know which region they belong to; and all appliances know their role (either a hub or a spoke, with spoke being the default). If a region is using full mesh topology, instead of hub and spoke, then all appliances (sometimes referred to herein as nodes) are considered spokes in the region.
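The redistribution rules summarized above can be expressed as a small predicate, sketched below in Python. Role names and route-provenance tags are assumptions for illustration, and the filter that keeps a spoke's own routes from being reflected back to that spoke is omitted for brevity.

# Hypothetical sketch of the MRSS redistribution rules summarized above.
def may_redistribute(sender_role, route_learned_from, receiver_role,
                     same_region: bool) -> bool:
    # Spokes never redistribute learned routes (they share only local routes).
    if sender_role == "spoke":
        return False
    if route_learned_from == "spoke":
        # A hub shares spoke-learned routes with its own spokes (same region)
        # and with hubs or spokes in other regions, but not with hubs in its region.
        if same_region:
            return receiver_role == "spoke"
        return True
    if route_learned_from == "hub":
        # Routes learned from another hub go only to the hub's own spokes.
        return same_region and receiver_role == "spoke"
    return False

# A hub may pass a spoke-learned route to a hub in another region:
print(may_redistribute("hub", "spoke", "hub", same_region=False))  # True
# ...but not back to a hub in its own region:
print(may_redistribute("hub", "spoke", "hub", same_region=True))   # False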
When a route is redistributed (hub to spoke or hub to hub), a cost will be assigned so that the remote end can make a choice when it has multiple route paths. This may be accomplished by simply increasing the routing metric by a known amount for a particular route. When a route is advertised, a cost of the route is increased because a number of hops is increased. This is due to the fact that embodiments of the present disclosure create a hub-based, second hierarchical topology between hubs. Instead of going from a first appliance to a second appliance directly, the data traffic travels from the first appliance to a first hub, then to a second hub, and then to the second appliance (assuming the two appliances are in different regions). If a cost of going from the first appliance to the second appliance directly is attributed a value of 50, then the cost is increased to 150 when the traffic travels through the two intermediate hubs. In exemplary embodiments, if a same route is received by an appliance from two different peers, the cost value is evaluated to see which is the cheapest (and thus likely the fastest) route. Further, this allows for subnets learned from a spoke peer to take precedence over a hub learned route. That is, a direct route from a spoke is likely to be a better (lower cost, faster) route than a route through a hub. If a same route is learned by an appliance from multiple hubs, the shortest path is likely to be the chosen path. Because each hub increments a routing metric by a known value before sharing, a lower metric is likely to be the shorter (and hence faster) path, and is likely to be the one chosen by an appliance. The specific value amount for the routing metric is relative—the metric is mainly used to differentiate between various routes. Route Types Different policies can be applied to routes, based on how an appliance learns of the route. For example, a “locally learned route” is a route learned on an appliance by any one of the following means: auto route, static route, and dynamic route. An “auto route” is a route added to a local subnet table on the appliance automatically. For example, locally connected subnets are added for each datapath interface. As discussed herein, each appliance has a LAN side network and interfaces configured on the LAN side. Subnets and IP addresses assigned to a LAN interface of an appliance are “auto” routes of a “locally learned route”. A “static route” is a route that is manually added via configuration by an orchestrator or administrative user, and is stored in a local configuration database of an appliance. A “dynamic route” is a route that is learned via a known routing protocol, such as BGP or OSPF. An “enterprise learned route” is a route learned via subnet sharing. An “enterprise spoke learned route” is a route learned from a spoke at its hub. An “enterprise hub learned route” is a route learned from a hub. Subnet Table In exemplary embodiments, each appliance has a subnet table, either stored locally on a hard drive, or stored in a centrally accessible networked location. If the option is selected, an appliance can automatically include local subnets when it shares its subnet information with other appliances. For example, the local subnet(s) for the appliance's interfaces can be added to the subnet table. A local subnet is one that includes one of the appliance IP addresses. If the option is deselected, the system doesn't create entries in its subnet table for the appliance's local subnets.
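The metric arithmetic in the example above (a direct cost of 50 growing to 150 across two intermediate hubs) can be sketched as follows; the increment value and route labels are illustrative assumptions.

# Hypothetical sketch: each redistributing hub adds a known increment to the
# route metric, so a route heard over more hub hops carries a higher cost and
# the receiving appliance prefers the lowest-metric (likely shortest) path.
HOP_INCREMENT = 50   # the "known value" added per redistribution; value is illustrative

def advertised_metric(base_metric: int, hub_hops: int) -> int:
    return base_metric + hub_hops * HOP_INCREMENT

candidates = {
    "direct-from-spoke": advertised_metric(50, 0),   # 50
    "via-two-hubs":      advertised_metric(50, 2),   # 150
}
best = min(candidates, key=candidates.get)
print(best, candidates[best])   # the spoke-learned, lower-metric route wins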
If these subnets are not listed in the subnet table, they cannot be shared with peer appliances for auto-optimization. An exemplary subnet table at an appliance may comprise a number of fields. For example, a subnet table may have a field for “subnet/mask”, which specifies the actual subnet to be shared/advertised so it can be learned by a peer appliance. A subnet table may also have a “metric”, similar to the routing metric discussed herein. In exemplary embodiments, the metric may be a value between 0 and 100 and indicates the priority of a given subnet. A default priority may be 50. When an appliance finds that more than one peer appliance is advertising the longest matching subnet, it chooses the peer that advertises the subnet with the lowest metric value, as the lower metrics have priority. As would be understood by persons of ordinary skill in the art, the metric may be expressed in a different format, such as any alphanumerical value, with priority denoted in different ways. A subnet table may also have a field to denote whether the listed subnet is local to the site. When a subnet is not local, a manually added subnet in the table is unavailable for auto-optimization in exemplary embodiments. The subnet table may further have a field for denoting whether or not the subnet is to be advertised to peers. When selected, the subnet information is shared with peers. When deselected, a subnet in the table is not divulged to peers. The subnet table may further have a field for denoting the type of subnet. An auto subnet is automatically added to the subnet table by the system, and comprises subnets of interfaces on the appliance. A subnet may also be manually added or configured for the appliance by a user. Further, a subnet may be learned from a peer, and added to the subnet table as a result of exchanging information with peer appliances. If learned from a peer appliance, the subnet table identifies the peer appliance that advertised the subnet information. Subnet Sharing Versioning As discussed herein, a proprietary subnet messaging type is utilized to implement the MRSS scheme disclosed. Prior to the use of the new subnet messaging type, an appliance determined the lowest common subnet version between itself and its peers. Thus, even if the appliance could support the latest subnet sharing feature, if any of its peers was configured with a lower (older) subnet version, then the lower subnet version was shared with all of its peers. With MRSS, the subnet sharing version is determined on a peer to peer basis. This allows the appliance to share subnets using the newer (higher) version features even when some of its spokes are still using an older subnet version. However, any routes learned from a peer using an older subnet version are not redistributed. Peers can still advertise their subnet sharing version via the keepalive message. Beginning Subnet Sharing FIG.10depicts an exemplary message sequence chart for two spokes and a hub sharing subnets within a single region. In the exemplary embodiment ofFIG.10, a packet flow occurs after tunnels are established between each spoke and hub. In message1005, spoke1045shares its locally learned routes with hub1050. Hub1050records in its internal memory that spoke1045is configured as a spoke for the regional network topology. Hub1050then shares its locally learned routes with spoke1045in message1010, and spoke1045records in its internal memory that hub1050is configured as a hub for the network topology.
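The peer-selection behavior of the subnet table described above can be sketched with a longest-prefix match that breaks ties on the lowest metric; the table entries and peer names below are illustrative assumptions.

# Hypothetical sketch of peer selection from a subnet table: choose the peer
# advertising the longest matching subnet, breaking ties with the lowest metric.
import ipaddress

# (subnet, metric, advertising peer) -- entries are illustrative only.
subnet_table = [
    ("10.0.0.0/8",   50, "hub1130"),
    ("10.20.0.0/16", 50, "hub1135"),
    ("10.20.0.0/16", 40, "spoke1115"),
]

def choose_peer(dest_ip: str):
    dest = ipaddress.ip_address(dest_ip)
    matches = [(ipaddress.ip_network(s), m, p) for s, m, p in subnet_table
               if dest in ipaddress.ip_network(s)]
    if not matches:
        return None
    # Longest prefix first; among equal prefixes, the lowest metric has priority.
    matches.sort(key=lambda e: (-e[0].prefixlen, e[1]))
    return matches[0][2]

print(choose_peer("10.20.5.9"))   # spoke1115 (longest match, lower metric)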
A task cycle later, hub1050shares its locally learned routes with spoke1055in message1015, and also records in its internal memory that spoke1055is configured as a spoke for the network topology. Spoke1055shares its locally learned routes with hub1050in message1020. In message1025, hub1050shares the routes learned from spoke1055with spoke1045. In message1030, hub1050shares the routes learned from spoke1045with spoke1055. In message1035, spoke1045sends a new set of routes to hub1050. Hub1050then shares these routes from spoke1045with spoke1055in message1040. In this manner, routes are shared between the hub and spokes within a single region. Notably, in this embodiment, spoke1045and spoke1055only exchange routes through hub1050. That is, they are not directly sharing routes with one another. In exemplary embodiments, when an appliance receives a routing update, it transmits this update to all connected peer appliances in a round robin fashion. If there is no change in routing information, then the appliance does not need to send a control packet with routing information to the other connected peer appliances. A routing update comprising a change in a route may be caused by a user adding a new route, a new route being learned from a hub or another peer appliance, or a new route being learned from an external source, such as the BGP or OSPF protocol. Subnet Sharing Update FIG.11shows an exemplary embodiment of a subnet sharing update among appliances. In the example topology ofFIG.11, West Region1105is configured as having three spokes (spoke1115,1120, and1125) and two hubs (hub1130and1135). East Region1110is configured as having one hub (hub1140) and three spokes (spokes1145,1150, and1155). Further, spoke1115of West Region1105is depicted as being in communication with peer1160. In various embodiments, peer1160may be a router, switch, user device, or any computing device not part of the overlay network comprised of the peer appliances. Spoke1155of East Region1110is also depicted as being in communication with peer1165. In various embodiments, peer1165may be the same type of peer or a different type of peer than peer1160. Peer1165may be a router, switch, user device, or any computing device not part of the overlay network comprised of the peer appliances. As discussed herein, each hub of each region is configured to be in communication with each of the spokes within its own region. Further, the hubs of each region are configured to be in communication with one another. That is, hubs1130and1135of West Region1105may be in communication with one another, as well as in communication with hub1140of East Region1110. In an exemplary embodiment, spoke1115receives an update from its BGP peer, peer1160. Upon reception of this BGP update, a subnet table at spoke1115is updated and the new routes are marked as BGP learned routes. Since its local subnet table has been updated, spoke1115sends a subnet sharing update message to both hubs in its region, hub1130and hub1135. Upon reception of the subnet update message from spoke1115, each of hubs1130and1135updates its own local subnet table. The newly learned routes are marked as “spoke learned”, and each hub updates its connected spokes. That is, hub1130sends a subnet update message to spoke1120and spoke1125. Hub1135also sends a subnet update message to spokes1120and1125. Since the subnet update was received from spoke1115, there is no need for either hub to send the same subnet update back to that spoke.
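The fan-out inFIG.11can be sketched as a short simulation: the originating spoke updates every regional hub, and each hub forwards the update to the other spokes but never back to the originator. The appliance names follow the figure; the message list itself is an assumed illustration.

# Hypothetical sketch of the update fan-out described above.
region_hubs = ["hub1130", "hub1135"]
region_spokes = ["spoke1115", "spoke1120", "spoke1125"]

def propagate(origin_spoke: str):
    messages = []
    for hub in region_hubs:                       # spoke -> every regional hub
        messages.append((origin_spoke, hub))
        for spoke in region_spokes:
            if spoke != origin_spoke:             # no reflection to the originator
                messages.append((hub, spoke))     # hub -> other regional spokes
    return messages

for sender, receiver in propagate("spoke1115"):
    print(f"{sender} -> {receiver}")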
In this manner, the spokes of West Region1105are updated with the BGP routing update from spoke1115. Further, the subnets have their metric updated by a known amount from the hub. Upon receipt, these routes are marked by spokes1120and1125as “hub learned” and thus cannot be shared to any other appliance. It is also noted that subnet updates arrive from both hub1130and hub1135at spokes1120and1125as equal cost subnets. In exemplary embodiments, equal cost subnets have a same routing metric value. The routing metric value is influenced by a number of hops and/or user-configured rules, as discussed herein. Each spoke then uses peer priority to choose a preferred hub. A particular destination may be learned by an appliance from multiple peers in a network, providing the appliance with multiple paths to reach the destination. In order to select the best path to utilize to reach the destination, peer priority is used. In exemplary embodiments, peer priority is based on user configured rules. In an example embodiment, the USA may be one region within a SD-WAN. The region may have an East Coast hub and a West Coast hub. An appliance in California may choose the West Coast hub as a higher priority and the East Coast hub as a lower priority, due to geographical distance. In various embodiments, peer priority is defined by each appliance, and each appliance has a preferred hub priority. Each of hub1130and hub1135of West Region1105also send the subnet update to the East Region1110, via hub1140. When East Region1110hub1140receives the updates, the routes are marked as “hub learned”. Since the receiving peer appliance is a hub, it can redistribute these routes only to its spokes and to no other peer. It is also noted that hub1140receives the same routes from both hub1130and hub1135as equal cost. For routing purposes, hub1140may use peer priority to select the preferred West Region1105hub to receive traffic. Hub1140also sends the subnet update to each of its regional connected spokes, spokes1145,1150, and1155. Each spoke updates its own local subnet table. Further, since spoke1155is also in communication with peer1165, and the received subnet update contains BGP learned routes, spoke1155sends an update to its BGP peer, peer1165. Sharing with Non-Region Spokes In additional embodiments, a hub can share all of the routes it learned in its local region with a spoke in another region. The orchestrator can identify non-region hubs that a spoke can establish a tunnel with. This allows a method of providing a faster path to critical spoke appliances in another region. FIG.12depicts an exemplary embodiment of subnet sharing between hubs and non-region spokes. In the exemplary figure, spoke1115of West Region1105has a tunnel to hub1140in the East Region1110. Further, spoke1155of East Region1110has a tunnel to both hub1130and hub1135of West Region1105. In exemplary embodiments, the hubs only share regional routes with those spokes, and not any routes learned from another hub. For example, hub1140can share routes learned from spoke1145, spoke1150, and spoke1155with spoke1115of West Region1105. It is noted that spoke1115will also learn those same routes via its own hubs, hub1130and hub1135. But the same routes learned from its own hubs will have a larger metric (due to the extra hop), causing spoke1115to route directly to hub1140instead of going through its local region hubs. IV. 
Orchestrator Configuration As discussed herein, the Orchestrator is configured to create the virtual overlay network, create tunnels between appliances in the virtual overlay network, configure each appliance in the virtual overlay network, and many more functions. When creating regions within the SD-WAN, the orchestrator ensures that all hubs are connected to all spokes within a region. The orchestrator also informs each appliance which region it is a part of. In exemplary embodiments, the region contains a scalar. “Regions” as discussed herein may be delineated on any user-configured basis. That is, an enterprise may create a region for a geographical boundary, such as a continent, a country, a state, or any other desired boundary. In other embodiments, regions may be created based on business objectives for an enterprise. In further embodiments, regions may be created based on a desired division of appliances. As would be understood by persons of ordinary skill in the art, regions can be created based on any criteria. Further, virtual appliances may be configured to be in any region based on any criteria desired by the enterprise. Regional distribution of appliances may also be dynamic, such that an enterprise may reassign appliances from one region to another region, as desired. The orchestrator is further tasked with informing each appliance of its role. If there are any conflicts and an appliance is listed in one BIO as a hub but in another BIO as a spoke, its role is a hub. FIG.13depicts an exemplary user interface1300that a network administrator may view from an orchestrator, such as orchestrator310ofFIG.3. In the exemplaryFIG.13, six appliances are depicted as dots, with three appliances located in West Region1305and three appliances located in East Region1310. In an exemplary embodiment, both West Region1305and East Region1310are each configured with hub and spoke topology. Appliance1315of West Region1305is configured as a hub for the region, and appliances1320and1325are configured as spokes for the region. Appliance1320is not operational, so hub1315is only depicted as being in communication with spoke1325within its own region. Further, hub1315of West Region1305is depicted as being in communication with hub1330of East Region1310. Further, hub1330of East Region1310is in communication with each of spokes1340and1345within its region. In exemplary embodiments, a network administrator can designate a network topology to be applied to each region, and this can be changed dynamically. For example, a network administrator may decide to re-configure a region with hub and spoke topology to have a full mesh topology instead. Through a user interface on the orchestrator, the network administrator can direct to change East Region1310to be in a full mesh configuration. The orchestrator would then communicate with each appliance within East Region1310and remotely and dynamically reconfigure each appliance to operate according to a full mesh network. The orchestrator can also create tunnels between appliances substantially instantaneously to facilitate communications over the SD-WAN, if the tunnels do not already exist. FIG.14depicts another exemplary user interface1400that a network administrator may view from an orchestrator, with the same appliances described in connection withFIG.13. In the exemplaryFIG.14, West Region1305is configured with hub and spoke topology, and East Region1310is configured as a full mesh network. 
Appliance1315of West Region1305is configured as a hub for West Region1305, and appliances1320and1325are configured as spokes for the region. Further, hub1315of West Region1305is depicted as being in communication with hub1330of East Region1310. East region1310is configured as a full mesh, so each of hub1330and spokes1340and1345within East Region1310are all in communication with one another. FIG.15depicts another exemplary user interface1500that a network administrator may view from an orchestrator, with the same appliances described in connection withFIG.13. In the exemplaryFIG.15, West Region1305and East Region1310are configured in a full mesh topology, both within each region, and also across the regions. That is, all appliances within West Region1305are in full mesh with one another, all appliances within East Region1310are in a full mesh with one another, and all six appliances across both regions are in a full mesh with one another. In a full mesh topology, each appliance is configured by the orchestrator as having the role of a spoke. As discussed herein, an orchestrator for an enterprise SD-WAN can create and manage all of the regions that may be present in the enterprise SD-WAN.FIG.16depicts an exemplary user interface1600that a network administrator may view from an orchestrator, such as orchestrator310ofFIG.3. In the exemplaryFIG.16, an appliance identifier is depicted in field1605. Further, the user interface1600shows that there are “No regions found” as encompassing the appliance identified in field1605. An administrator can select a button on the interface to “Create Regions” to create one or more regions for the SD-WAN substantially instantaneously. FIG.17depicts an exemplary user interface1700that a network administrator may view from an orchestrator, such as orchestrator310ofFIG.3, to assign or re-assign an appliance to a region. In the exemplary interface, two regions are depicted—an East Region and a West Region. Six appliances are depicted under the “Hostname” column, and the region each appliance is currently assigned to is shown in the “Present” column. An administrator can select an appliance from the list and either “add” or “remove” it to the East Region or the West Region through the selectable objects. Further, when an appliance is reassigned to a different region, it may be shown in the “Changes” column of user interface1700. In this way, an administrator can seamlessly assign or reassign appliances to existing regions. The orchestrator automatically configures each appliance accordingly, and creates the corresponding communication tunnels. In one example, when an appliance is reassigned from a first region to a second region, the orchestrator sends an update message to the appliance informing it of its new region assignment, and also updates a role for the appliance within its new region, as applicable. That is, the appliance may change from a “hub” role to a “spoke” or vice versa. Thus, methods and systems for a multi-region Software-Defined Wide Area Network are disclosed. Because an SD-WAN of the present disclosure is comprised of multiple regions, there are different network administrators in each region who have administrative authority of their regional subnetwork. A local network administrator of one region in the SD-WAN may not have control, or even access, to appliances in other regions of the SD-WAN. 
In addition, a network administrator for the entire (global) enterprise SD-WAN can configure certain aspects of the network based on business goals, which cannot be changed by local regional administrators. The virtual overlay networks are created at a global level. At a local level, administrators may modify specific regional appliances, add rules, etc. Further, with embodiments of the present disclosure, applications can be processed differently in different regions. For example, one application may use MPLS links to send application traffic in one region, but a different link (such as LTE, public Internet, etc.) to send traffic from the same application in a different region, such as if MPLS is unavailable in that region. Each region may use a different cloud security service, based on availability and cost of the service. Security and networking policies may be configured on a regional basis. Although embodiments have been described with reference to specific examples, it will be evident that various modifications and changes can be made to these example embodiments without departing from the broader spirit and scope of the present application. Therefore, these and other variations upon the exemplary embodiments are intended to be covered by the present disclosure. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
67,570
11863444
DETAILED DESCRIPTION OF THE INVENTION To overcome the problems faced by the conventional network routing technologies, the present invention provides a decentralized system that continually measures latencies in the computer network and can dynamically determine better performing paths between nodes in the computer network based on the up-to-date measured latencies. In some embodiments, referring toFIGS.1A and1B, a distributed routing controller105(i.e. a protocol-instruction provider system) includes a server190, a computer memory192, and a processor195, in connection with a computer network100via the Internet. The computer network100includes a collective of interconnected computers or nodes110-150. The computer memory192stores computer codes that include instructions that define a distributed autonomous routing protocol (DARP), which enables nodes in the computer network100to form a group for continual measurements of data-transmission latencies and for determining better performing data routing pathways among a group of nodes such as nodes110-150. The nodes110-150can receive the above described computer codes that contain the distributed autonomous routing instructions via download from the distributed routing controller105. The node110can include a computer memory115that stores the computer codes and a processor117that executes the computer codes and implements instructions under the distributed autonomous routing protocol. Once the computer codes are installed on the respective computers, the nodes110-150are formed into a pulse group160according to the distributed autonomous routing protocol, which establishes secure communications among the nodes110-150in the pulse group160. The distributed routing controller105can be a private service provider that originally develops the distributed autonomous routing protocol. In some embodiments, the computer codes can be implemented as dockers that are installed at each of the node computers within a pulse group. The dockers enable compartmentalization of the node computers, which allows the instructions for the distributed autonomous routing protocol to be executed without interfering with other operations of the node computers. The distributed routing controller105can be responsible for managing and updating versions of the dockers. To ensure proper operations, all the nodes110-150will run the same version of the computer codes to execute the instructions for the distributed autonomous routing protocol. In some embodiments, the above described computer codes are distributed and updated using encrypted software. The distributed routing controller105can also be implemented within a file repository system that is private or open to the public. In one implementation, the file repository system can be a public file repository; the original computer codes are provided by an entity that develops or establishes the distributed autonomous routing protocol. A portion of the computer codes can be contributed by many users or agents in the form of open source. Publicly contributed codes can help the expansion and applications of pulse groups and help accomplish faster and more reliable network routing. The distributed routing controller105can further include a commercial service provider that facilitates formation and/or maintenance of the pulse groups, and identification of better performing routing paths between nodes.
Under the instructions of the distributed autonomous routing protocol in the installed computer codes, the nodes110-150continually send pulse messages comprising the nodes' state information to each other in peer-to-peer connections180. The state information includes a time stamp associated with the sending time of a pulse message sent by a particular node (e.g.110) in the pulse group160. In the present disclosure, the term “pulse message” refers to the messages regularly sent between peer nodes in a common pulse group. Optionally, the state information can also include reception time stamps of the pulse messages previously received by the particular node (e.g.110) from other nodes (i.e.120-150). One-way latencies are calculated by subtracting the sending time stamp from the reception time stamp of each pulse message sent in a single direction between a pair of nodes in the pulse group160. In a pulse group comprising an integer number n of nodes, n*(n−1) one-way latencies can be continually measured and calculated for the pulse group. The One-Way Latencies (OWL) can be calculated by receiver nodes and shared with all members of the pulse group160. Specifically, each node can be responsible for updating the OWL values of the one-way communications received by that node. For example, referring toFIG.2, the node D is responsible for updating the OWL values in the column “To D” in the OWL matrix200. InFIG.2, nodes A-E can respectively represent nodes110-150inFIGS.1A and1B. The pulse messages can be lightweight and add very little traffic overhead to the computer network. In some embodiments, each of the pulse messages can include a single data packet that contains the state information such as the first time stamp. The state information contained in pulse messages can be used for measurement purposes, that is, for recording time stamps and for calculating latencies. In some embodiments, as described below in conjunction withFIG.6, pulse messages can carry information for other operations of the pulse groups as well as for applications. All the measured one-way latencies within the pulse group160are reported by the nodes110-150to the pulse group160. The measured OWL values are combined and tabulated in OWL matrices (or OWL tables)115,125,135,145,155, which are stored in computer memories of the nodes110-150. The OWL matrices (or OWL tables)115,125,135,145,155are continually updated using the latest measured OWL values and shared among the nodes110-150in the pulse group160. Thus each node110-150has a full-mesh real-time one-way latency matrix within its pulse group160. The computer network100can include a public network, or a private network, or a combination of both. In a public network, once a host computer node sets up a pulse group, any node in the public network (e.g. the Internet) can connect to one of the nodes in the pulse group by receiving the computer codes containing the distributed autonomous routing protocol to join the pulse group. In a private network, one genesis node (such as node110inFIGS.1A and1B) starts a pulse group by inviting a few nodes to join a pulse group. As shown inFIG.1A, the genesis node such as node110includes the computer memory115that stores the computer codes and the processor117that executes the computer codes and implements instructions under the distributed autonomous routing protocol.
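A minimal sketch of the latency bookkeeping just described is given below, assuming millisecond timestamps and a dict-of-dicts layout for the OWL matrix; the names OwlMatrix and on_pulse_received are illustrative, not part of DARP itself. The receiving node computes the OWL from the embedded sending time stamp and its own local clock, and updates only the entries directed to itself, i.e. its own column of the group's matrix.

```python
import time

class OwlMatrix:
    def __init__(self, nodes):
        # matrix[src][dst] holds the latest OWL from src to dst (milliseconds)
        self.matrix = {a: {b: None for b in nodes if b != a} for a in nodes}

    def record(self, src, dst, owl_ms):
        self.matrix[src][dst] = owl_ms

def on_pulse_received(local_node, pulse, matrix, now_ms=None):
    """Handle a pulse message {'sender': ..., 'sent_ms': ...} at local_node.
    The receiving node maintains the 'to local_node' entries, i.e. its own
    column of the group's OWL matrix."""
    if now_ms is None:
        now_ms = time.time() * 1000.0    # local clock; no synchronization is assumed
    owl = now_ms - pulse["sent_ms"]      # may even be negative if clocks are skewed
    matrix.record(pulse["sender"], local_node, owl)
    return owl

# Example: node D records a pulse from node A (both timestamps are raw local clocks).
m = OwlMatrix(["A", "B", "C", "D", "E"])
on_pulse_received("D", {"sender": "A", "sent_ms": 1_000.0}, m, now_ms=1_051.0)
assert m.matrix["A"]["D"] == 51.0
```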
The genesis node is part of the pulse group and it manages the population in its pulse group such as additions of nodes to the pulse group and deletions of nodes from the pulse group. An important advantage of the presently disclosed system and method is that clock synchronization is not required among the nodes110-150in the pulse group160. The clocks of the nodes110-150can have significant skew or offsets from each other, which will not affect the determination and the selection of the better routing paths among the nodes110-150in the pulse group160. In some embodiments, referring toFIG.2, measured one-way latency values in a pulse group comprising nodes A-E are tabulated in a distributed full-mesh OWL matrix200. The one way latencies from each of the five nodes A-E to other nodes result in 20 latency values (n(n−1), wherein the exemplified n number of nodes in the pulse group is 5) in the OWL matrix200. For examples, the latencies from node A, node B, node C and node E to node D are respectively 51, 53, 100, and 25 (msec); the latencies from node C to node A, node B, node D and node E are respectively 50, 34, 100, and 91 (msec). As discussed above, in one implementation, the OWL values in column “To A” are calculated and updated by node A; the OWL values in column “To B” are calculated and updated by node B, and so on. Moreover, latencies between two nodes can be different in forward and reverse directions. For example, the latency from node C to node D is 100 msec. and the latency from node D to node C is 85 msec. It should be noted that the latency numbers, the number of nodes with a pulse group, the number of pulse groups, and specific exemplified configurations inFIGS.1A-6are used only for the purpose of illustrating the disclosed systems and methods, which should not limit the scope of the disclosed invention. It should be further noted that the OWL values in the OWL matrix200are raw latency values derived from measured timestamps of the different node computers that are generally not synchronized. These raw latency values can be positive or negative, and the values can be significantly different from the true latency values measured between nodes having synchronized clocks. In some embodiments, the OWL matrix200can be used as a routing table for determining a better performing path between two nodes within the pulse group. The distributed autonomous routing protocol contained in the computer codes downloaded from the distributed routing controller105enables autonomous calculations and determinations of better performing paths within the pulse group. In one aspect, the better performing data routing path is measured by the lower or the lowest total latency from the sending node, via one or more relay or intermediary nodes, to the destination node. The total latency is the sum of the latencies of all node-to-node transmission segments along the routing path. From the OWL matrix200, the direct routing path (i.e. the shortest path) from node C to node D, which is recommended by a conventional centralized Internet protocol-instruction provider, has a latency of 100 msec. In contrast, the presently disclosed systems and methods can improve the performance of the data routing from node C to node D by allowing additional intermediary or relay nodes between node C and node D. Using the OWL matrix200, the presently disclosed methods explore and evaluate total latencies along other possible routing paths. 
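The exploration of alternative paths described above can be sketched as a simple search over the OWL matrix. The following Python fragment uses the example latency values quoted in the preceding paragraphs (in msec) and is an illustration of the idea rather than the protocol's actual routine; it also shows that skewing one node's clock shifts a whole column and therefore does not change which path is selected.

```python
# Partial OWL matrix built from the example values quoted above (msec).
OWL = {
    "C": {"A": 50, "B": 34, "D": 100, "E": 91},
    "A": {"D": 51},
    "B": {"D": 53},
    "E": {"D": 25},
}

def best_path(src, dst, owl):
    """Compare the direct path with every single-relay path src -> relay -> dst."""
    best = ([src, dst], owl[src][dst])
    for relay, first_leg in owl[src].items():
        if relay == dst or dst not in owl.get(relay, {}):
            continue
        total = first_leg + owl[relay][dst]
        if total < best[1]:
            best = ([src, relay, dst], total)
    return best

assert best_path("C", "D", OWL) == (["C", "B", "D"], 87)   # beats the 100 msec direct path

# Skewing node D's clock by -50 msec shifts every latency "to D" by the same amount,
# so the selected relay path is unchanged.
skewed = {src: {dst: v - 50 if dst == "D" else v for dst, v in row.items()}
          for src, row in OWL.items()}
assert best_path("C", "D", skewed)[0] == ["C", "B", "D"]
```

The worked example that follows traces the same comparison by hand.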
For example, the path from node C to node A then from node A to node D has a total latency of 50+51=101 msec.; the path from node C to node E then from node E to node D has a total latency of 91+25=116 msec. The two above alternative paths would result in slower data transmissions, which are not good alternatives to the direct path. A better performing data routing path is found using node B as a relay node: the segments of node C to node B and then from node B to node D have a combined latency value of 34+53=87 msec., which is below the 100 msec. latency value of the direct path from node C to node D. Thus the path using node B in the pulse group as a relay node provides decreased latency compared to conventional methods. In some embodiments, a better performing path can also include two or more relay nodes between the sending node and the destination node. In the above example, the better performing routing path is independent of clock skews. For example, if the clock at node D is skewed by minus 50 msec., the latencies from node A, node B, node C and node E to node D would now be respectively 1, 3, 50, and −25 (msec); the latency values in the column to node D are all shifted down by 50 msec. The better performing routing path from node C to node D will still be from node C to node B, then from node B to node D because all alternative paths have their respective summed latency values all shifted down by the same amount (i.e. 50 msec of latency time). It should be noted that negative latency values are allowed in the OWL matrix, which do not affect the determination of the better performing routing paths as described above. In some embodiments, referring toFIGS.3and4, a plurality of pulse groups310-350can exchange their respective OWL matrices315-355to provide a global directory for all the nodes participating in the pulse groups310-350on the Internet across the globe. The form of the global directory is a partial mesh OWL matrix400as shown inFIG.4. The partial mesh OWL matrix400is a table of OWL matrices315-355, which can be used as the basis for algorithmic dynamic pulse group creation and for latency-based routing decisions. For example, when a node A in pulse group310is attempting to send data to node B in pulse group320, node A has the public key (or an IP address, a DNS name, or other identification information) of the destination node B and will inquire about node B at its genesis node in pulse group310. The genesis node in pulse group310communicates with the other genesis nodes of the other pulse groups320-350. Each of those genesis nodes searches for the public key in its respective group. The genesis node of pulse group320identifies node B using the public key and notifies the genesis node of pulse group310and node A. To establish latency measurements between node A and node B, the genesis node of pulse group320can invite node A to join pulse group320. The OWL matrix325is updated with latencies from and to node A, which allows calculations and determination of a better performing path from node A to node B. Alternatively, a new group can be formed that includes a hybrid of pulse group310and pulse group320. The new group includes node A and node B as well as some or all other nodes previously in the two groups. An OWL matrix is established and updated as described above. A better performing path can be determined from node A to node B.
It should be noted that the nodes in pulse group310and pulse group320can join the new group while still staying in their respective original pulse groups. In other words, each node can simultaneously join multiple pulse groups. More details about algorithmic pulse group formation in response to data transport needs are described below in relation toFIGS.13-18. In some embodiments, the formation and operations of pulse groups can include one or more of the following steps. Referring toFIG.5, a pulse group that includes a plurality of nodes in a computer network is formed (step510). As described above, the plurality of nodes can first receive computer codes from a distributed routing controller. Once executed by the nodes' respective computer processors (e.g.117inFIG.1), instructions from the codes establish secure communications among the nodes. Pulse messages are automatically sent between nodes in the pulse group (step520). The pulse message automatically sent from a first node to a second node in the pulse group (step520) includes a first time stamp associated with the sending time of the specific pulse message. The pulse message is received by the second node at a reception time associated with a second time stamp (step530). Next, a one-way latency from the first node to the second node is automatically calculated based on the first time stamp and the second time stamp (step540). In one implementation, the one-way latency from the first node to the second node is automatically calculated by the computer at the second node by subtracting the first time stamp from the second time stamp. In some embodiments, the pulse message sent by a first node can further include reception times of the pulse messages previously received by the first node from other nodes in the pulse group. In this way, each node in the pulse group will have the first time stamp and the second time stamp of the pulse messages in both directions between that node and other nodes in the pulse group. The availability of the first time stamp and the second time stamp to the sending and the receiving nodes of the pulse messages allows the nodes to independently calculate latencies in both sending and receiving directions. The redundant calculations of the one-way latencies can serve as validation of OWL in the pulse group and ensure reliability of the OWL data in the OWL matrix. The one-way latencies in both forward and reverse directions between each pair of nodes in the pulse group are automatically recorded in a one-way latency matrix (step550) by the nodes in the pulse group. These measured values are latencies for the direct paths between nodes in the pulse group. Specifically, the one-way latency from the first node to the second node in the one-way latency matrix can be updated by the second node after it calculates the updated one-way latency value as described above. The OWL matrix is continually updated and shared in real time among all the nodes in the pulse group. For example, pulse messages can be sent by each node in the pulse group at a regular 1 second interval for the continued OWL measurements. Using the one-way latencies updated in real time in the OWL matrix, a better performing data routing path with a lower latency from the first node to the second node can be automatically calculated (step560).
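The echoed reception times mentioned above can be illustrated with a small sketch. The message layout below (a dict with sender, sent_ms and an echo field) is an assumption made for this example; the point is only that, by piggybacking the reception times of previously received pulses, one received pulse lets a node compute the latency in both directions and cross-check the matrix entries.

```python
def build_pulse(sender, sent_ms, recent_receptions):
    """recent_receptions: {peer: (peer_sent_ms, local_recv_ms)} recorded by 'sender'."""
    return {"sender": sender, "sent_ms": sent_ms, "echo": dict(recent_receptions)}

def handle_pulse(local_node, pulse, local_recv_ms):
    # Forward direction: latency of this pulse, computed by the receiver.
    owl_from_sender = local_recv_ms - pulse["sent_ms"]
    # Reverse direction: the sender echoed when it received our earlier pulse,
    # so we can also compute the latency of our own pulses toward the sender.
    owl_to_sender = None
    if local_node in pulse["echo"]:
        our_sent_ms, their_recv_ms = pulse["echo"][local_node]
        owl_to_sender = their_recv_ms - our_sent_ms
    return owl_from_sender, owl_to_sender

# Node B receives a pulse from node A that echoes B's earlier pulse.
pulse = build_pulse("A", sent_ms=2_000.0, recent_receptions={"B": (1_900.0, 1_953.0)})
assert handle_pulse("B", pulse, local_recv_ms=2_034.0) == (34.0, 53.0)
```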
The better performing data routing path can include at least one relay node in the pulse group, a first transmission segment for the first node to the relay node, and a second transmission segment for the relay node to the second node. An example of such better performing data routing path is described above in the path from node C to node B and node B to node D in relation toFIG.2. In some embodiments, there could be more than one relay node in the better performing data routing path. The total sum of latencies in all the segments in the better performing data routing path is lower than the latency of the direct path from the first node to the second node. By managing the population of pulse groups, the disclosed systems and methods provide a buffer to the overall network load. Whenever or wherever a computer network is overburdened with traffic and experiencing high latencies, the disclosed systems and methods can autonomously identify alternative routing path and alleviate the traffic latency or congestion, which result in more consistency and reliability in the network's performance. Details of the operations of pulse groups (steps510-560inFIG.5) are now described. The computer codes downloaded from the distributed routing controller105and stored in the memory of each of the nodes in a pulse group (FIGS.1and2) includes the same instructions and configuration information (i.e. defined by the distributed autonomous routing protocol) to be run on the nodes in the pulse groups. Once installed, the nodes in the same pulse group are instantly connected to other nodes in the pulse group over a secure connection. As shown inFIGS.1A,1B, and6, a node (e.g.110) in a pulse group receives computer codes from a distributed routing controller105, which are installed on the node computer. The installed software enables the node computer to perform at least two logic functions: 1) Pulse Logic610for sending pulse messages to communicate state information to peer nodes in the same pulse group; and 2) Pulse Handle logic620for processing pulse messages (i.e. pulses) received from peer nodes within the pulse group. The node computer stores a pulse group Configuration Table630and a Latency measurement Table640among other information related to implementing the distributed autonomous routing protocol. The pulse group Configuration Table630includes information about the nodes in the pulse group: public keys, public IP addresses, and public ports for the nodes in the pulse group. The information in this table ensures connections and communications between peer nodes in the same pulse group. For each message received by a receiving node in the pulse group, the Latency measurement Table640lists the first time stamp (i.e. the sending time recorded by the sending node), the second time stamp (i.e. the reception time recorded by the receiving node), and the one-way latency (OWL) calculated from the two time stamps. Using the information in the pulse group Configuration Table630, the Pulse Logic610regularly sends out pulse messages to peer nodes in the same pulse group (using a public key assigned to a specific node at a specific public IP address via the specified public port). In each such pulse message, the Pulse Logic610records a time stamp according to the timer or the local clock of the node computer at node110and stores the time stamp in that pulse message. 
The time stamp serves as the first or the sending time stamp of the associated pulse message, which the node receiving the particular pulse message can use to calculate a one-way latency time from node110to the receiving node. As discussed above, pulse messages can generally include information for operations of the pulse groups as well as for applications. Information for operations can include state information that is used for measurement purposes, that is, for recording time stamps and for calculating latencies. In some embodiments, pulse messages can carry information for identifying and communicating with the nodes in the same pulse group. Pulse messages can also include information that ensures consistent operations of the pulse groups, such as latency measurements and routing path selections, for example the software version of the computer codes and/or the docker version shared between nodes for executing the distributed autonomous routing protocol. All nodes in a pulse group run the same version of software for proper operations within the pulse group. The Pulse Handle logic620can receive different types of messages. When a new node joins the pulse group, the Pulse Handle logic620receives information (i.e. the public key, the public IP address, and the public port for the new node) that instantiates the pulse group, and adds the information to the pulse group Configuration Table630(a new row for the node in630). Corresponding to the messages sent out, node110regularly receives pulse messages from peer nodes in the same pulse group. These received messages are also handled by the Pulse Handle logic620. For each received message, the Pulse Handle logic620records a second or a reception time stamp based on the timer or the clock of the local node computer. The Pulse Handle logic620extracts the first (sending) time stamp from the received pulse message and records both the first time stamp and the second time stamp in the Latency measurement Table640for that message (e.g. message1). The Pulse Handle logic620then calculates a one-way latency (OWL) based on the first time stamp and the second time stamp. In one implementation, the OWL is obtained by subtracting the first time stamp from the second time stamp. It should be noted, as described above, that the timer or the computer clock on node110may not be synchronized with the clocks on peer nodes120-150in the same pulse group. The clocks of the peer nodes can be skewed, or offset, such that the absolute OWL values may be different from the real latencies experienced in data transmission. This lack of synchronization, however, does not affect the determination of the better routing path. As the OWL values are calculated from the received pulse messages, the Pulse Handle logic620updates and records the current OWL values in the OWL matrix200. In the specific configuration shown in the OWL matrix200(FIG.2), the Pulse Handle logic620in node110is responsible for updating a column of the OWL values, which includes latency values for the one-way messages received from different peer nodes in the same group. As discussed above in relation toFIGS.2and5, better performing data routing paths can be determined using the most updated OWL matrix200. In some embodiments, referring toFIG.7, in a computer network700, a pulse group710is formed including node A, node B, node C, node D, and node E. The pulse group710is in communication with a distributed routing controller750.
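A compact sketch of the two node-local structures just described, the pulse group Configuration Table (one row of peer addressing information per node) and the Latency measurement Table (the two time stamps and the resulting OWL per received message), is given below; the dataclass layout and field names are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class PeerInfo:                  # one row of the Configuration Table
    public_key: str
    public_ip: str
    public_port: int

@dataclass
class NodeState:
    config_table: dict = field(default_factory=dict)    # peer name -> PeerInfo
    latency_table: list = field(default_factory=list)   # per-message measurement records
    owl_column: dict = field(default_factory=dict)       # sender -> latest OWL to this node

    def add_peer(self, name, info):
        self.config_table[name] = info                   # a new row when a node joins

    def handle_pulse(self, sender, first_ts, second_ts):
        owl = second_ts - first_ts                       # local clocks; no sync assumed
        self.latency_table.append(
            {"peer": sender, "first_ts": first_ts, "second_ts": second_ts, "owl": owl})
        self.owl_column[sender] = owl                    # this node's column of the matrix
        return owl

node = NodeState()
node.add_peer("B", PeerInfo("pk-B", "198.51.100.7", 4500))
assert node.handle_pulse("B", first_ts=10_000.0, second_ts=10_034.0) == 34.0
```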
In some embodiments, the distributed routing controller750can play a role in the initiation and ongoing management and performance of the pulse group710. Similar to the functions of the genesis node described above, the distributed routing controller750can also initiate and form new a pulse group by sending Ping messages to nodes over a computer network (e.g. the Internet) and receiving messages from some nodes. In some embodiments, based on the analyses of the received messages, the distributed routing controller105can invite qualified nodes to join a new pulse group and an existing pulse group. The distributed routing controller750can periodically communicate a node (such as the genesis node) in a pulse group to receive the group's collective network state. The distributed routing controller750can convert performance matrices of pulse groups into a database of available data routing paths and their recently measured network performance characteristics. As discussed above in connection withFIGS.1A-2,5, one-way latencies are continually measured between each pair of nodes A-E and recorded in an OWL matrix associated with the pulse group710. A better performing data routing path can be determined between nodes (e.g. from node C to node D as shown inFIG.2) in the pulse group710. A recordation and management system760is in communication with the distributed routing controller750and the pulse group710. After a data transmission has been conducted via one or more relay nodes along a better performing data routing path selected as described above, a payment is made by an entity (a digital wallet or an account) associated with the node that requested the data transfer to an entity (a digital wallet or an account) associated with the one or more relay nodes that provided the better performing data routing path. Such payment is defined in the distributed autonomous routing protocol that is installed on the nodes A-E and distributed by the distributed routing controller750. The node that requested the data transfer is normally the node that sends the data or the node that receives the data. The recordation and management system760can record these payment transactions, which provide economic incentives for the participation of the nodes in the pulse groups and for serving as relay nodes. The recordation and management system760can also keep track of the better performing data routing paths in different pulse groups, and the ratings of nodes in different pulse groups. In some embodiments, an exemplified implementation of the recordation and management system760is shown inFIG.8. The recordation and management system760includes a repository management system770and validator nodes771-774(i.e. V-nodes). The repository management system770stores and manages historic data for the pulse groups: the roster of nodes in each pulse group, the one-way latency matrices recorded by different pulse groups, the available and selected routing paths, the performance characteristics (e.g. the amount of latencies reduced), and transactions made between nodes. These data is stored in a database in the repository management system770. The validator nodes771-774provide a distributed ledger to record the above described historic data including transactions between nodes in pulse groups. In general, the nodes in the pulse groups that serve as relay nodes for better data routing paths can considered as suppliers of distributed data routing resources. 
Those nodes that are in need of transmitting data can be considered as consumers of the distributed data routing resources. Additionally, the payment transfer between nodes in a pulse group does not need to involve direct data exchange between the two nodes. The two nodes can each own a digital wallet over the computer network or a Cloud. Payments can be made by one node to another by transfers (e.g. utility tokens) between their corresponding digital wallets. Blockchain can be used to settle between the suppliers and the consumers of the collective resources of the distributed data routing in the pulse groups. The validator nodes771-774each includes a computer memory installed with blockchain codes and a processor executed the blockchain codes such that the validator nodes771-774can collectively validate and publish transactions between nodes in the pulse groups. Payments between nodes in pulse groups can be made in different forms, for example in utility tokens. Relay nodes of the better forming data routing paths can earn utility tokens from the nodes that will use or have used the better performing routing paths. Network data is continuously collected and stored by the distributed routing controller750in the form of a ‘ticket’ along with group statistics. The validator nodes771-774verify the network statistics that accompany the claim for reward, and add the transaction to the blockchain, which records the ledger of transfer of utility coins from the consumers to the suppliers for each use of alternative routing path selected as described above. The nodes A-E in the pulse group710can each have a digital wallet for storing utility tokens. Depending on their relative positions within data routing paths, each node can serve as a consumer or a supplier in a pulse group. Moreover, referring toFIGS.7and8, each node in the computer network700(FIG.7) can simultaneously participate in multiple pulse groups and play the role of a consumer or a supplier in different pulse groups. Furthermore, a validator node771-774can also be a node (e.g. node A-node E) in a pulse group. In other words, a node in the computer network700can serve as a consumer or a supplier of resources of the distributed data routing as well as providing validating services for recording the transactions between the consumers and suppliers. The validator nodes771-774can earn utility tokens for validating transactions under the rules of DARP as defined in the logics in the computer codes distributed among the validator nodes771-774. These payments are receivable in the respective digital wallets of the validator nodes771-774. For validating each transaction on the ledger, a validator node771-774can earn a small transaction fee, which is a small portion of the payment that a consumer pays for using a better-performing data route (most of payment goes to the supplier node(s) that provided the relay routing service). In addition, the validator nodes771-774can also earn dividend pool managed by the repository management system770. The transaction revenues and dividends can ensure the stability and liquidity of the utility tokens, which in turn enable the availability and healthy usage of the distributed data routing resources in the disclosed distributed system and method. 
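A deliberately simplified sketch of the recording step is shown below: validator nodes check the network-statistics “ticket” accompanying a claim, and only an approved transaction is appended to a hash-chained ledger, with a small fee shared among the validators. The wallet model, the majority-approval rule, the fee split, and the hashing scheme are all assumptions made for this illustration and stand in for the blockchain-based settlement described above.

```python
import hashlib, json

def verify_ticket(ticket):
    """Stand-in validation: the claimed relay path must really be faster than
    the direct path according to the reported OWL values."""
    return ticket["relay_total_ms"] < ticket["direct_ms"]

def record_payment(ledger, wallets, consumer, relay, amount, ticket, validators,
                   fee_rate=0.02):
    # Each validator independently verifies the ticket (identical logic in this sketch).
    approvals = sum(1 for v in validators if verify_ticket(ticket))
    if approvals <= len(validators) // 2:
        return False                                  # not validated: nothing is recorded
    fee = amount * fee_rate                           # small cut shared by the validators
    wallets[consumer] -= amount
    wallets[relay] += amount - fee
    for v in validators:
        wallets[v] += fee / len(validators)
    prev_hash = ledger[-1]["hash"] if ledger else "genesis"
    entry = {"consumer": consumer, "relay": relay, "amount": amount,
             "ticket": ticket, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)
    return True

ledger, wallets = [], {"C": 100.0, "B": 0.0, "V1": 0.0, "V2": 0.0, "V3": 0.0}
ticket = {"path": ["C", "B", "D"], "relay_total_ms": 87, "direct_ms": 100}
assert record_payment(ledger, wallets, "C", "B", 10.0, ticket, ["V1", "V2", "V3"])
```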
The process of forming a pulse group, one-way latency measurements, the determination and selection of a better performing data routing path, and the recording of payment transactions between nodes providing and using these routing paths can include one or more of the following steps. Referring toFIG.9andFIGS.7-8, with details described above (FIGS.1A,1B,5,6), a pulse group is formed by a plurality of nodes in a computer network (step910). One-way latencies between each pair of nodes in the pulse group are automatically measured (step920) continually (FIGS.1A,1B,5,6). One-way latencies are automatically recorded (FIGS.1A-2,5,6) between nodes in the pulse group in a one-way latency matrix (step930). A lower latency data routing path from a first node to a second node via a relay node is automatically determined (FIGS.1A,1B,5,6) based on the one-way latencies in the one-way latency matrix (step940). According to the lower latency data routing path, data is sent from the first node to the second node via the relay node (step950). A payment transfer from an entity (a digital wallet or an account) associated with the first node or the second node to an entity (a digital wallet or an account) associated with the relay node is automatically recorded (step960) (FIGS.7-8). As described in relation toFIG.8above, the payment transfer can be recorded on a ledger by a plurality of validator nodes using blockchain technologies. In some embodiments, a premium data routing service can be provided by high-speed data gateways. For example, certain high-speed data gateways have been constructed for stock and commodity trading. These high-speed data gateways are connected with secure sockets and are dedicated to trading activities during trading hours. During off-peak hours, these high-speed gateways have a lot of excess bandwidth capacity that can be utilized to earn revenue by providing data routing to nodes participating in the presently disclosed pulse groups under DARP. By including high-speed data gateways and associated nodes, pulse groups can provide premium high-speed data routing paths that can drastically improve network routing performance. Referring toFIG.10, the pulse group710is formed similarly to that shown inFIG.7except that node B and node E are connected by a high-speed data gateway1020. Node B and node E are pre-installed with the computer codes for DARP. They may be configured to participate and open up to join pulse groups only during certain hours. Referring toFIG.11, the one-way latency measurements therefore include OWL values (5 msec) from node B to node E and from node E to node B, which are both much lower than the OWL values between other pairs of nodes. The one-way latencies between nodes on the high-speed data gateways are less than half, or less than 25%, of the average one-way latency value in a pulse group. Still referring toFIG.11and similar to the discussion above in relation toFIG.2, better performing data routing paths can be determined using an OWL matrix1100. The conventional direct routing path from node C to node D has a latency of 100 msec. A better performing data routing path is found using node B as a relay node: the segments of node C to node B and then from node B to node D have a combined latency value of 34+53=87 msec.
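Extending the earlier single-relay search to paths with up to two relay nodes makes it possible to discover routes that traverse the low-latency gateway segment, such as the path discussed next. The matrix below lists only the example values quoted in the text (in msec), and the search routine is illustrative rather than the protocol's actual implementation.

```python
from itertools import permutations

OWL = {
    "C": {"A": 50, "B": 34, "D": 100, "E": 91},
    "A": {"D": 51},
    "B": {"D": 53, "E": 5},       # B <-> E is the high-speed gateway segment
    "E": {"B": 5, "D": 25},
}

def leg(a, b):
    return OWL.get(a, {}).get(b)

def best_path_two_relays(src, dst):
    candidates = [([src, dst], leg(src, dst))]
    nodes = set(OWL) | {n for row in OWL.values() for n in row}
    relays = [n for n in nodes if n not in (src, dst)]
    for r in relays:                                    # single-relay paths
        legs = (leg(src, r), leg(r, dst))
        if None not in legs:
            candidates.append(([src, r, dst], sum(legs)))
    for r1, r2 in permutations(relays, 2):              # two-relay paths; order matters
        legs = (leg(src, r1), leg(r1, r2), leg(r2, dst))
        if None not in legs:
            candidates.append(([src, r1, r2, dst], sum(legs)))
    return min((c for c in candidates if c[1] is not None), key=lambda c: c[1])

path, total = best_path_two_relays("C", "D")
assert (path, total) == (["C", "B", "E", "D"], 64)      # 34 + 5 + 25
```

Because the matrix is directional, the reversed gateway path C-E-B-D is also enumerated but loses, as the discussion below notes.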
In the present example, an even better data routing path is found using the high-speed data gateway from node B to node E: a first segment from node C to node B, a second segment from node B to node E along the high-speed data gateway in between, and a third segment from node E to node D, which results in a combined latency value of 34+5+25=64 msec. The faster data routing path C-B-E-D represents a premium service enabled by the high-speed data gateway and the associated nodes B and E. When such a premium service is used for data transfer, the node (e.g. C or D) that requested the data transmission from node C to node D will send a payment to an entity associated with the high-speed data gateway. These transactions are recorded by the recordation and management system760. One feature of the presently disclosed system and method is that they provide better-performing lower-latency data routing paths comprising multiple relay nodes (multiple hops). For example, a high-speed data gateway is built up between New York and Chicago for stock and commodity trading. During off-peak hours, nodes connected to the high-speed gateway (e.g. node B and node E inFIGS.10and11) participate in pulse groups. They can initially participate in different pulse groups, respectively in neighborhoods of Chicago and New York. Using the directory service described above in relation toFIGS.3and4, the initiation and destination nodes for a data transfer can form a new pulse group that includes the high-speed data gateway (e.g. including node B and node E inFIGS.10and11). It should be noted that because the OWL values are usually not identical in forward and reverse directions between a pair of nodes in a pulse group, the relayed better performing data routing paths are dependent on the specific sequence of the relay nodes along the data routing path. For example, although a hypothetical data routing path C-E-B-D also includes the high-speed data gateway with a low latency of 5 msec. from node E to node B, the total latency for the path is 91+5+53=149 msec., which does not result in a better performing data routing path. It should be noted that either the initiation node (e.g. node C inFIG.11) or the destination node (e.g. node D inFIG.11) can be connected to the high-speed data gateway. The better performing data routing path can include a segment on the high-speed data gateway connected to the initiation node or the destination node. For example, when a data transfer is requested from node C to node E, the direct routing path has a latency of 91 msec. A better performing routing path is found: node C to node B and node B to node E, with a total latency of 34+5=39 msec. The latter segment is on a high-speed data gateway (between node B and node E). In some embodiments, the pulse group710and the distributed routing controller750are configured to rank order possible data routing paths based on their respective performances such as the associated total latencies. The rank order function can be defined by the distributed autonomous routing protocol contained in the software distributed to nodes A-E. Between two nodes in a pulse group, more than one better performing data routing path can be found. These better performing data routing paths can each be associated with different transaction charges. For example, a premium service provided by a high-speed data gateway can charge a higher transactional fee for its faster data routing pathway than other better performing data routing pathways. The nodes that requested data transmission (e.g.
node C or D) can selected one of the better performing data routing pathways based on their relative performance (the total latency) and relative cost. The process of providing a premium service using a high-speed data gateway in a pulse group can include one or more the following steps. Referring toFIG.12andFIGS.10-11, a pulse group is formed by a plurality of nodes in a computer network (step1210), with details described above (FIGS.1A,1B,5,6,9). The pulse group includes at least two nodes connected by a high-speed gateway. One-way latencies between each pair of nodes in the pulse group are automatically measured continually (step1220) (FIGS.1A,1B,5,6,9), including those between the nodes connected by the high-speed gateway. One-way latencies are automatically recorded (FIGS.1A-2,5,6,9) between nodes in the pulse group in a one-way latency matrix (step1230). The OWL values include those conducted along the high-speed gateway. The nodes connected by the high-speed gateway usually have much lower latency in between. A lower latency data routing path from a first node to a second node via a relay node is automatically determined (FIGS.1A,1B,5,6,9) based on the one-way latencies in the one-way latency matrix (step1240). The lower latency data routing path can include and pass through one or more relay nodes. According to the lower latency data routing path, data can be sent from the first node to the second node via the high-speed gateway (step1250). Either the first node or the second node can request the data transfer and pay for the improved data routing service. A payment transfer from an entity (a digital wallet or an account) associated with the first node or the second node to an entity associated with the high-speed data gateway is automatically recorded (step1260) (FIGS.10-11). The entity associated with the high-speed data gateway can be node B or E or both, or an entity that manages or operates the high-speed data gateway. As described in relation toFIG.8above, the payment transfer can be recorded on a ledger by a plurality of validator nodes using blockchain technologies. In some embodiments, as discussed above in conjunction withFIGS.3and4, a global directory is provided to facilitate data transmissions between nodes participating in a plurality of pulse groups across the Internet. The global directory can help two nodes in different pulse groups310-350, which are in need for data exchanges, to connect to each other. The present method and system provide ways to autonomously determine a lower-latency data routing path between nodes that have not been connected in a same pulse group, which enables nodes participating pulse groups under DARP protocol across the global to route data to each other in low latency paths. Referring toFIGS.13and14, node A and node Z have a need to send data packets to each other for an extended period of time. For example, node Z may be a content provider and node A may be a consumer of the content to be provided by node Z. Node A and node Z reside in different pulse groups PG1and PG2. The first pulse group PG1includes a genesis node G1, node A, node B, node C, and optional other nodes. The genesis node G1initiated the first pulse group PG1by connecting to and inviting the plurality of nodes to join the first pulse group PG1. The second pulse group PG2includes a genesis node G2, node X, node Y, node Z, and optional other nodes. 
The genesis node G2initiated the second pulse group PG2by connecting to and inviting the plurality of nodes to join the second pulse group PG2. The two different pulse groups PG1and PG2are formed in the Internet according to the description above in conjunction withFIGS.1-12(step1310). As described above, one-way latencies were continually measured between nodes respectively in each of the PG1and PG2groups, and are recorded in respective one-way latency matrices for each of PG1and PG2. In the event that node A in PG1needs (or is requested) to send payload data packets to node Z in PG2(step1320), in response, node A automatically sends the identification and location information of node Z to the genesis node G1in PG1. The identification and location information can include an IP address, a public key, or a DNS (Domain Name System) name of node Z. The genesis node G1automatically sends a search request to genesis nodes of other pulse groups in a global directory1410to search for node Z using node Z's identification and location information (step1320). The global directory1410, as shown inFIG.14A(also described inFIGS.3and4), can include a list of top-level genesis nodes Ga, Gb . . . Gx on the global Internet. Under each genesis node in the list of top-level genesis nodes, there can optionally be one or more intermediate layers of genesis nodes Gi1, Gi2, Gi3. . . Gim, which cover all the local pulse groups on the Internet. Using node Z's identification and location information, the genesis node G1sends queries to the genesis nodes in the level above, such as the genesis node Gi1in the intermediate layer, and then the genesis node Ga in the top-level genesis nodes. The search request can be broadcasted to all top-level genesis nodes Ga, Gb . . . Gx. In the example shown inFIG.14A, the query reaches the top-level genesis node Ga, then a genesis node Gi3in the intermediate layer, and then genesis node G2via the global directory1410. Each of the genesis nodes at the top level and the intermediate layer, as well as the lower level genesis nodes G1, G2. . . Gn, is associated with a pulse group. G2finds node Z in its own pulse group PG2, and reports the finding of node Z back to G1in PG1. Genesis nodes G1and G2then help establish communications between node A and node Z (step1330). The steps of searching and establishing communications are conducted autonomously as defined by the above described distributed autonomous routing protocol (DARP) installed in the distributed software stored by computer memory (115inFIG.1) at all the participating nodes (e.g.110-150inFIG.1, node A, node B, node C, node X, node Y and node Z inFIG.14). In order to determine a low-latency data routing path from node A to node Z, node A and node Z need to join the same pulse group so that one-way latencies can be measured between nodes from which the lowest latency routing paths can be determined. A new pulse group PG3is automatically formed. PG3includes node A, node Z, and one or more additional nodes from the first pulse group PG1and the second pulse group PG2(step1340). The formation of pulse group PG3is based on the communications among node A, node Z, the genesis node G1, and the genesis node G2: the new pulse group PG3can be formed in different ways depending on the performance and geographic distributions of the pulse groups PG1and PG2, which are exchanged in the communication.
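The directory lookup described above can be sketched as a recursive search through the hierarchy of genesis nodes. The tree layout, the GenesisNode class, and the pk-* identifiers below are assumptions used only for this illustration; the actual directory is exchanged and maintained under DARP as described in the text.

```python
class GenesisNode:
    def __init__(self, name, members=None, children=None):
        self.name = name
        self.members = set(members or [])     # public keys of nodes in its own pulse group
        self.children = list(children or [])  # lower-level genesis nodes it can query

    def find(self, public_key):
        """Return the genesis node whose pulse group contains public_key, or None."""
        if public_key in self.members:
            return self
        for child in self.children:
            found = child.find(public_key)
            if found is not None:
                return found
        return None

# Two local groups under one intermediate genesis node, under a top-level node.
g1 = GenesisNode("G1", members={"pk-A", "pk-B", "pk-C"})
g2 = GenesisNode("G2", members={"pk-X", "pk-Y", "pk-Z"})
gi = GenesisNode("Gi3", children=[g1, g2])
ga = GenesisNode("Ga", children=[gi])

# Node A's genesis node cannot resolve pk-Z locally, so the query is escalated
# to the top-level directory and reaches G2, which reports the match back.
assert g1.find("pk-Z") is None
assert ga.find("pk-Z") is g2
```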
In some embodiments, referring toFIG.15, the new pulse group PG3can be formed, in parallel to PG1and PG2, by merging nodes A, B, C and other nodes in PG1and nodes X, Y, Z and other nodes in PG2while the genesis nodes G1and G2are discarded (step1340). Node Z can be the genesis node G3of the new pulse group PG3: node Z can invite nodes A, B, C, X, Y and other nodes in PG1and PG2to join PG3. It should be noted, as discussed above, that one node (such as node Z) can be simultaneously participating in two or more pulse groups (e.g. PG2and PG3). The exemplified formation of the new group PG3inFIG.15can be applicable when nodes in PG1and PG2are in geographically separate regions of the Internet. The inclusion of most of the nodes from both PG1and PG2can ensure that low-latency routing paths are discovered from a large number of possible routing paths. Another exemplified application of the new group PG3inFIG.15is when node Z is owned by a content service provider, which intends to act as a genesis node (G3) in the new pulse group PG3so that the content service provider can effectively manage the distribution and routing of content data to other nodes (e.g. node A). In some embodiments, referring toFIG.16, a new pulse group PG3′ is formed, in parallel to PG1and PG2, by merging node Z with the nodes of PG1that node A resides in (step1340). Node Z and nodes A, B, C in PG1can be invited by the genesis node G1to form PG3′. G1can also function as the genesis node of the new pulse group PG3′. The formation of the new group PG3′ exemplified inFIG.16can be suitable when node Z geographically overlaps with nodes in pulse group PG1on the Internet. In some embodiments, referring toFIG.17, a new pulse group PG3″ is formed, in parallel to PG1and PG2, by merging node A with the nodes in PG2that the destination node Z resides in (step1340). Node A and nodes X, Y, Z in PG2can be invited by the genesis node G2to form PG3″. G2can also function as the genesis node of the new pulse group PG3″. The formation of the new group PG3″ exemplified inFIG.17can be suitable when node A geographically overlaps with nodes in pulse group PG2on the Internet. It should be noted, as discussed above, that one node (such as node A) can simultaneously be in two or more pulse groups (e.g. PG1and PG3). In some embodiments, some nodes in PG1and PG2are connected by a high-speed data gateway, which is incorporated into the newly formed pulse group to enable a low-latency data routing path from the first node (i.e. node A) to the second node (i.e. node Z) via the high-speed data gateway. The low-latency data routing path from the first node (i.e. node A) to the second node (i.e. node Z) can include one, two, or more relay nodes, as discussed above and shown inFIGS.10-12. For example, as shown inFIG.18A, a new group PG4includes node C and node B (both from PG1) that are connected by a high-speed data gateway1810. As described, the one-way latency between node B and node C on the high-speed data gateway is less than half, or less than 25%, of the average one-way latency value in PG1. When the new group PG4including nodes A, B, C and nodes X, Y, Z and optional other nodes from PG1and PG2is formed, as shown inFIG.18A, node B and node C together with the high-speed data gateway1810are incorporated into PG4, which may provide a multi-relay low latency data routing path such as A>B>C>Z. In another example, as shown inFIG.18B, a new group PG5includes node Y and node Z (from PG2) that are connected by a high-speed data gateway1820.
As described, the one-way latency between node Y and node Z on the high-speed data gateway is less than half, or less than 25%, of the average one-way latency value in PG2. When the new group PG5including nodes A, B, C and nodes X, Y, Z and optionally other nodes from PG1and PG2is formed, as shown inFIG.18B, node Y and node Z, together with the high-speed data gateway1820, are incorporated into PG5, which may provide a low-latency data routing path such as A>Y>Z. In another example, as shown inFIG.18C, node B and node Z are connected by a high-speed data gateway1830across PG1and PG2. When a new group PG6is formed, as shown inFIG.18C, node B and node Z, together with the high-speed data gateway1830, are incorporated into PG6, which may provide a low-latency data routing path such as A>B>Z. In PG6, the one-way latency between node B and node Z on the high-speed data gateway is less than half, or less than 25%, of the average one-way latency value in PG6. This example is beneficial in the scenario in which PG1and PG2are in two separate geographic regions of the Internet, for example, when nodes in PG1are in Los Angeles and nodes in PG2are around New York. The high-speed data gateway1830acts as a data highway connecting the two pulse groups PG1and PG2such that low-latency data routing paths can be optimized within PG6across the two geographic regions of the Internet. Once the new pulse group PG3(or PG3′, PG3″, PG4, PG5, or PG6) is formed, one-way latencies are automatically measured between nodes in the new pulse group (step1350) that includes the node A, the node Z, and other nodes from the pulse groups PG1and PG2. The measured one-way latencies can be recorded, for each pair of nodes in the pulse group, in a one-way latency matrix for PG3. The measurements and recordings of one-way latencies between nodes in a pulse group are described in detail above in conjunction withFIGS.1-2,5-9. The one-way latency measurements are not affected by skews (or asynchronicity) of computer clocks at different nodes in the new pulse group. A first lower-latency data routing path can then be automatically determined from the node A to the node Z based on the one-way latencies in the newly formed pulse group (step1360). As described above (e.g.FIGS.2,5), the first lower-latency data routing path can pass through one or more relay nodes including a first relay node within the newly formed pulse group. Payload data packets can then be sent from the node A to the node Z via the first relay node along the lower-latency data routing path (step1370). Similarly, a second lower-latency data routing path can also be automatically determined from the node Z to the node A based on the one-way latencies in the newly formed pulse group (step1360). The second lower-latency data routing path passes through a second relay node, within the newly formed pulse group, that in general is not the same as the first relay node. Payload data packets can also be sent from the node Z to the node A along the second lower-latency data routing path (step1370). Once the lower-latency data routing paths are established, the communication channel can stay open in both directions for a sustained period. For example, a content provider can send content data from node Z to a consumer at node A on a continuous basis, and receive command and request data sent from node A to node Z.
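As an illustration of how a lowest-latency path could be selected once the one-way latency matrix of the newly formed pulse group is available, the following sketch enumerates the direct path and paths through one or two relay nodes and picks the path with the smallest total one-way latency. The latency values and the helper name best_path are assumptions for this example, not measurements or code from the disclosure.

```python
# A minimal sketch of selecting a lowest-latency path from node A to node Z out of
# a one-way latency matrix, considering the direct path and one- or two-relay paths.
from itertools import permutations
from typing import Dict, List, Tuple

OneWayLatencies = Dict[Tuple[str, str], float]   # (sender, receiver) -> milliseconds


def best_path(lat: OneWayLatencies, src: str, dst: str, nodes: List[str],
              max_relays: int = 2) -> Tuple[List[str], float]:
    candidates = [[src, dst]]                                # direct path
    others = [n for n in nodes if n not in (src, dst)]
    for k in range(1, max_relays + 1):
        for relays in permutations(others, k):               # one- or two-relay paths
            candidates.append([src, *relays, dst])

    def total(path: List[str]) -> float:
        return sum(lat[(a, b)] for a, b in zip(path, path[1:]))

    return min(((p, total(p)) for p in candidates), key=lambda x: x[1])


# Example: the B->C hop sits on a high-speed gateway, so A>B>C>Z can beat A>Z.
lat = {("A", "Z"): 80.0, ("A", "B"): 20.0, ("B", "C"): 5.0, ("C", "Z"): 25.0,
       ("A", "C"): 60.0, ("B", "Z"): 70.0, ("C", "B"): 5.0, ("B", "A"): 20.0,
       ("Z", "A"): 80.0, ("C", "A"): 60.0, ("Z", "B"): 70.0, ("Z", "C"): 25.0}
path, ms = best_path(lat, "A", "Z", ["A", "B", "C", "Z"])
assert path == ["A", "B", "C", "Z"] and ms == 50.0
```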
It should be noted that the above-described steps1310-1370inFIG.13are autonomously executed by software stored in computer memory (115inFIG.1) at all the participating nodes (e.g.110-150inFIG.1, node A, node B, node C, node X, node Y and node Z inFIG.14). The DARP software instructions can be distributed by a server (190inFIG.1) and stored in a memory (195inFIG.1) in the distributed routing controller (105inFIGS.1A-1B,750inFIGS.7and8). The above embodiments are only used to illustrate the technical solution of the present invention but not to limit it. Those skilled in the art can modify or equivalently replace the technical solution of the present invention without departing from the spirit and scope of the present invention. The scope of protection shall be subject to the claims.
52,020
11863445
DETAILED DESCRIPTION The following detailed description of example implementations refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Some techniques for routing data packets in a network may include mapping a network address prefix (e.g., an internet protocol (IP) prefix) to an identifier (e.g., a label, an ID, and/or the like). The identifier may define, for a plurality of network nodes, a forwarding procedure for the packet. This may remove a need for each of the network nodes to inspect the network layer header of the data packet and may reduce an amount of computation performed by each of the plurality of network nodes to determine the forwarding procedure. A multiprotocol label switching (MPLS) technique, for instance, involves mapping a network address prefix to an MPLS label to identify routing procedures within an MPLS network. When a network node (e.g., a label edge router (LER), a label switch router (LSR), and/or the like) discovers a device associated with a network address prefix, the network node may map the network address prefix to an MPLS label. The network node may advertise the mapping, which may indicate to other network nodes in the MPLS network that the device associated with the network address prefix is available via the network node. The advertised mapping also indicates, to the other network nodes in the MPLS network, that data packets intended for the device associated with the network address prefix may include the MPLS label, so that the network node will forward the data packets toward the device associated with the network address prefix. When the network node receives a data packet having the network address prefix, the network node may use the MPLS label to look up a routing procedure, which may indicate a next hop for routing the data packet and may indicate a new MPLS label for the data packet. The new MPLS label may be based on an advertised mapping of the network address prefix to the new MPLS label, as received from a neighbor node associated with the next hop. The data packet may be forwarded across the MPLS network on a label switched path, based on MPLS labels. A segment routing network may use a process of mapping network address prefixes to a segment routing global block (SRGB) of segment identifiers (SIDs) to identify routing procedures within the segment routing network. Some of the SIDs may relate to instructions for routing data packets through segments (e.g., sub-paths of a total path through the segment routing network) between network nodes of the segment routing network. Multiple segments may be combined to form the total path through the segment routing network. A prefix SID may identify a path or sub-path based on a network address prefix of a destination network node. A node SID may identify the destination network node based on the network address prefix of the destination network node. However, a conventional mapping of a network address prefix to an identifier may be selected dynamically and unpredictably. By having unpredictable identifier mappings, the network may risk mapping an identifier to multiple network address prefixes, which may result in data packets being routed incorrectly through the network.
This may result in unnecessary consumption of computing resources (e.g., processor resources, memory resources, communication resources, and/or the like) of network nodes and consumption of network resources as data packets are incorrectly forwarded by one or more network nodes of the network. Additionally, if a network is changed (e.g., by adding or removing network nodes or segments), a mapping process may need to be performed again for each network node. This may consume computing resources used to generate new identifier mappings and may consume network resources to advertise the new identifier mappings throughout the network. Additionally, the unpredictable mappings may result in an inefficient use of SRGBs because of unused identifiers within the SRGBs, since the network generates mappings from network address prefixes to nonsequential identifiers. According to some implementations described herein, a network node may be configured with a policy to map network address prefixes within a range of network address prefixes to identifiers within a range of identifiers. For example, the network node may receive an indication of a range of network address prefixes and a range of identifiers to which the range of network address prefixes are to be mapped when the network node discovers a device associated with a network address prefix within the range of network address prefixes. The network node may generate a policy for mapping network address prefixes in the range of network address prefixes to identifiers in the range of identifiers. For example, the policy may indicate that a particular network address prefix, having an ordered position within the range of network address prefixes, is to be mapped to a particular identifier, based on the particular identifier having an ordered position within the range of identifiers that corresponds to the ordered position within the range of network address prefixes. For example, a network address prefix that is in the sixth position of the range of network address prefixes may be mapped to an identifier that is in the sixth position of the range of identifiers. The policy may be applied to map a network address prefix associated with a device that is discovered by the network node. For example, the network node may receive an advertisement from a neighbor node, the advertisement including the network address prefix associated with the device and indicating that data can be forwarded to the device via the neighbor node. The network node may receive the advertisement and perform the mapping based on the policy. By using the policy to map network address prefixes to identifiers, computing resources and network resources may be conserved that might otherwise be used to generate new identifiers when a network changes, check for duplication of mappings, recover from incorrect routing, and/or the like. Additionally, by using the policy to map network address prefixes to identifiers only after a need arises (e.g., when the network node discovers a device having a network address prefix within the range of network address prefixes), the network node may conserve computing resources that would otherwise be used to generate and store mappings for network addresses that are not accessible via the network node. This may also improve latency by reducing entries in a data structure storing the mappings, where the data structure may be queried when the network node receives a data packet with an identifier.
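As a rough illustration of the identifier-keyed lookup mentioned above, the sketch below models the data structure as a small table indexed by identifier: an arriving packet's identifier is looked up once, swapped for the identifier advertised by the next-hop neighbor, and the packet is forwarded. The names ForwardingEntry and forward, and the numeric values, are assumptions introduced only for illustration and are not taken from any particular MPLS or segment routing implementation.

```python
# Hedged sketch of identifier-based forwarding: one lookup keyed by the identifier
# (e.g., a label or SID), a swap to the next hop's advertised identifier, then forward.
from dataclasses import dataclass
from typing import Dict, Optional, Tuple


@dataclass
class ForwardingEntry:
    next_hop: str                    # neighbor toward the advertised network address prefix
    out_identifier: Optional[int]    # identifier advertised by that neighbor (None = remove)


def forward(table: Dict[int, ForwardingEntry], in_identifier: int,
            payload: bytes) -> Tuple[str, Optional[int], bytes]:
    entry = table[in_identifier]     # single lookup keyed by identifier, not by prefix
    return entry.next_hop, entry.out_identifier, payload


# Example: identifier 3001 is swapped for 3002, the value advertised by neighbor "N2".
table = {3001: ForwardingEntry(next_hop="N2", out_identifier=3002)}
assert forward(table, 3001, b"data") == ("N2", 3002, b"data")
```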
FIGS.1A-1Care diagrams of one or more example implementations100described herein. As shown inFIGS.1A-1C, the example implementation(s)100may include a network node, a network device, one or more neighbor network nodes, a device associated with a network address, and/or the like. The network node, network device, and/or neighbor network nodes may comprise hardware, firmware, or a combination of hardware and software and may be, for example, switches, routers, security devices, devices implementing virtual machines, cloud computing resources, and/or the like. As shown inFIG.1A, and by reference number102, the network node may receive an indication of a range of network address prefixes and a corresponding range of identifiers for mapping. The network node may comprise a label edge router, a label switch router, and/or the like, which may be used to forward and/or route data packets through a network. The network node may be associated with a network address prefix and/or may be configured to discover devices associated with network address prefixes. In some implementations, the network node may receive the indication via a network device, such as a device associated with provisioning the network node. The network device may provide the indication as part of an initial provisioning process, an upgrading process, or based on receiving input from another device to provide the indication. In some implementations, the network device receives, from a network administrator, input that defines the range of network address prefixes and the corresponding range of identifiers. In some implementations, an identifier of the range of identifiers may comprise an MPLS label for use in an MPLS domain, an SID for use in a network implementing segment routing procedures, and/or the like. In some implementations, the identifier may be an integer that can be used to represent a network address prefix in any type of network for any purpose. The range of identifiers may be defined by a lowest-ordered identifier (e.g., a first identifier in a sequentially ordered range) and a highest-ordered identifier (e.g., a last identifier in the sequentially ordered range). In some implementations, the range of identifiers may be defined by the lowest-ordered identifier, with the range further defined to have a quantity of identifiers based on a quantity of network address prefixes in the range of network address prefixes. For example, the definition of the range of identifiers may be based on a lowest-ordered identifier, a highest-ordered identifier, and/or the like, and based on the range of network address prefixes. In some implementations, the range of identifiers may include a range of sequential identifiers, which may be integers. The range of network address prefixes may be in the form A.B.C.D/L0 and/or may relate to internet protocol version 4, internet protocol version 6, and/or the like. A range in the form A.B.C.D/L0 includes network address prefixes having a first quantity L0 of bits that match the first quantity L0 of bits of the A.B.C.D/L0 network address prefix (e.g., in binary form). The indication may further define the range of network address prefixes by a prefix length range from a lower end L1 to a higher end L2. The prefix length range may further define the range of network address prefixes as including those within the range A.B.C.D/L0 and having a prefix length L that is between L1 and L2.
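The range definition above can be illustrated with a short sketch. Under assumed names (MappingPolicy, covers), a discovered prefix is treated as covered by the policy only if it lies inside the configured A.B.C.D/L0 block and its length falls within [L1, L2]; the example values happen to mirror the worked example given later in this description, and the code is an illustration rather than the disclosed implementation.

```python
# A minimal sketch of the range check implied above, using Python's standard
# ipaddress module; MappingPolicy and covers are assumed names for illustration.
import ipaddress
from dataclasses import dataclass


@dataclass
class MappingPolicy:
    block: ipaddress.IPv4Network     # the A.B.C.D/L0 range, e.g. 1.1.0.0/16
    min_len: int                     # L1
    max_len: int                     # L2
    base_identifier: int             # lowest-ordered identifier X

    def covers(self, prefix: ipaddress.IPv4Network) -> bool:
        # Covered only if inside the block and L1 <= prefix length <= L2.
        return (prefix.subnet_of(self.block)
                and self.min_len <= prefix.prefixlen <= self.max_len)


policy = MappingPolicy(ipaddress.ip_network("1.1.0.0/16"), min_len=25, max_len=28,
                       base_identifier=1000)
assert policy.covers(ipaddress.ip_network("1.1.3.0/25"))       # in block, length in range
assert not policy.covers(ipaddress.ip_network("1.1.1.3/32"))   # length outside [L1, L2]
```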
The network address prefixes within the range of network address prefixes may correspond to identifiers within the range of identifiers with a one-to-one correlation. As shown by reference number104, the network node may generate a policy for mapping network address prefixes to identifiers based on the indication. For example, the policy may include instructions for mapping network address prefixes, including those within the range of network address prefixes, to identifiers, including those within the range of identifiers. The policy may be used to map a network address prefix to a corresponding identifier only when the network address, or a device associated with the network address, is discovered to be directly or indirectly accessible to the network node. In other words, the policy may not require that the network node determines a mapping for a network address prefix within the range of network address prefixes until a need arises (e.g., when the network node discovers that a device associated with the network address prefix is accessible). In this way, the network node may generate predictable mappings without determining the mappings for each network address prefix within the range of network address prefixes, storing the mappings in an unnecessarily large data structure, advertising the mappings to neighbor nodes, and/or the like. This may conserve computing resources of the network node and/or network resources of the network. Additionally, this may conserve computing resources and improve latency for the network because the network node can, when a data packet comprising an identifier arrives, search a relatively small set of entries within a data structure storing mappings, for only the network address prefixes for devices that are accessible to the network node. As shown inFIG.1B, and by reference number106, the network node may discover a device associated with a network address having a network address prefix within the range of network address prefixes. In some implementations, the network address has a prefix length within the prefix length range. In some implementations, the network address prefix may be associated with an ordered position within the range of network address prefixes. For example, the network address prefix may be associated with a sixth ordered position of the range of network address prefixes. As shown by reference number108, the network node may apply the policy to map the network address prefix to an identifier within the corresponding range of identifiers. The network node may apply the policy to map the network address prefix based on the network address prefix being within the range of network address prefixes and/or based on the network address prefix having a length within the prefix length range. The identifier may be associated with an ordered position within the range of sequential identifiers that corresponds to the ordered position of the network address prefix within the range of network address prefixes. To perform the mapping, the network node may determine an index of the network address prefix within the range of network address prefixes, where the index indicates, for the network address prefix, the ordered position within the range of network address prefixes. For example, if the ordered position of the network address prefix is sixth, the index may be five, to indicate that the network address is five positions away from a lowest-ordered network address prefix of the range of network address prefixes. 
In some implementations, the network node determines the identifier based on a sum of the index and a lowest-ordered identifier, which has a lowest-ordered position of the range of identifiers. The lowest-ordered identifier may be defined in a range [X, Y] as X. For example, if the lowest-ordered identifier is 1,000 and the index is five, the identifier is 1,005, which is the sixth ordered position within the range of identifiers. In an example mapping procedure, a range of network address prefixes is defined as 1.1.0.0/16. Here, L0=16 and a first part of a determination of whether a network address prefix is within the range of network address prefixes includes determining if a network address prefix is in the form 1.1.C.D/L, where L>16. For the example, the range of identifiers [X, Y] is defined as [1,000, Y], where 1,000 is the lowest-ordered identifier, and Y is to be based on a quantity of internet address prefixes within the range 1.1.0.0/16. For this example, the prefix length range is defined as L1=25 to L2=28. This means that a second part of a determination of whether a network address prefix is within the range of network address prefixes includes determining if a length L of the network address prefix is between 25 and 28. In the example, the network node discovers a first device associated with a first network address having a first network address prefix 1.1.1.3/32. The network node may determine that the first network address prefix is in the form 1.1.C.D/L, where L>16. This satisfies the first part of the determination. However, the network node may determine that the prefix length L is not within the prefix length range because L>L2. Because 1.1.1.3/32 is not within the prefix length range, the network node may map the first network address prefix to an identifier outside of the range of identifiers. In the example, the network node discovers a second device associated with a second network address having a second network address prefix 1.1.3.0/25. The network node may determine that the second network address prefix is in the form 1.1.C.D/L, where L>16. Additionally, the network node may determine that the prefix length L is within the prefix length range because L1≤L≤L2. This means that the network node has determined that the second network address prefix will be mapped to an identifier within the range of identifiers. The network node may map the second network address prefix to an identifier within the range of identifiers based on the ordered position of the second network address prefix within the range 1.1.0.0/16. In an example process for determining the ordered position of the second network address prefix, the network node may determine an index of the second network address prefix. In an example, the index may be determined as a sum of a base index B and an index offset S. The base index B may be defined as B=(P & (2^(32−L0)−1))»(32−L), where P is 1.1.3.0, L0 is 16, L is 25, & is a bitwise AND operator, and » is a shift of 32−L bits to the right in binary form. The index offset S may be defined as S=2^(L−L0)−2^(L1−L0). Evaluating the definition of B results in: B=(P & (2^(32−16)−1))»(32−25); B=(1.1.3.0 & 0xFFFF)»7; B=0.0.3.0»7; B=00000000.00000000.00000011.00000000»7; B=00000000.00000000.00000000.00000110=6. Evaluating the definition of S results in: S=2^(25−16)−2^(25−16)=0. The index is equal to the sum B+S=6. The identifier may be determined as a sum of the index and the lowest-ordered identifier.
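The computation of B, S, and the resulting identifier can be expressed compactly in code. The sketch below, using an assumed name (map_prefix_to_identifier) and Python's standard ipaddress module, reproduces the formulas just described and yields the same values as the example mappings discussed in this description; it is an illustration under those assumptions, not the disclosed implementation.

```python
# A sketch of the index computation index = B + S and identifier = X + index,
# assuming IPv4 prefixes; map_prefix_to_identifier is an illustrative name.
import ipaddress


def map_prefix_to_identifier(prefix: str, block: str, min_len: int, base_id: int) -> int:
    net = ipaddress.ip_network(prefix)
    blk = ipaddress.ip_network(block)
    p = int(net.network_address)                 # P as a 32-bit integer
    l0, l = blk.prefixlen, net.prefixlen         # L0 and L
    b = (p & (2 ** (32 - l0) - 1)) >> (32 - l)   # base index among length-L prefixes
    s = 2 ** (l - l0) - 2 ** (min_len - l0)      # offset for shorter lengths L1..L-1
    return base_id + b + s                       # identifier = X + index


# 1.1.3.0/25 -> 1,006 and 1.1.0.64/26 -> 1,513, consistent with the example mappings.
assert map_prefix_to_identifier("1.1.3.0/25", "1.1.0.0/16", 25, 1000) == 1006
assert map_prefix_to_identifier("1.1.0.64/26", "1.1.0.0/16", 25, 1000) == 1513
```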
Therefore, the identifier is 1,000+6=1,006, which is mapped to the second network address prefix 1.1.3.0/25. Other example mappings from the above example may, if the network node discovers devices associated with the listed network addresses, include the following pairs: (1.1.0.0/25, 1000), (1.1.0.128/25, 1001), (1.1.1.0/25, 1002), (1.1.1.128/25, 1003), (1.1.2.0/25, 1004), (1.1.2.128/25, 1005), (1.1.3.0/25, 1006), . . . (1.1.255.0/25, 1510), (1.1.255.128/25, 1511), (1.1.0.0/26, 1512), (1.1.0.64/26, 1513), (1.1.0.128/26, 1514), (1.1.0.192/26, 1515). As shown inFIG.1C, and by reference number112, the network node may receive, from a first neighbor node, a data packet comprising the identifier. In some implementations, the first neighbor node may provide the data packet comprising the identifier based on the first neighbor node receiving the advertisement from the network node indicating the mapping. As shown by reference number114, the network node may determine a forwarding action based on the identifier. For example, the network node may locate the identifier in a data structure to determine the forwarding action. The network node may determine the forwarding action to include removing the identifier, replacing the identifier with a new identifier, forwarding the data packet toward the network address identified by the network address prefix via an identified segment, forwarding the data packet toward the network address identified by the network address prefix via a receiving network node, forwarding the data packet to a network identified by the network address prefix, and/or the like. In some implementations, the network node may replace the identifier with the new identifier based on an advertised mapping of the network address prefix to the new identifier by the receiving network node. As shown by reference number116, the network node may forward the data packet to a second neighbor node. The second neighbor node may perform a similar process of determining a forwarding action based on the new identifier. The data packet may continue to be forwarded within the network until it reaches the device associated with the network address or a device associated with a network that is associated with the network address (e.g., a router, switch, and/or the like that is local to the device associated with the network address). In some implementations, each network node of the network (e.g., the network node, the one or more neighbor network nodes, and/or the like) may receive different indications of ranges of network address prefixes and corresponding ranges of identifiers for mapping. In some implementations, some or all of the network nodes of the network receive a same indication of a range of network address prefixes and a corresponding range of identifiers for mapping. In some implementations, a network device may perform the mapping for one or more of the network nodes of the network and/or may provide the mappings to the network nodes of the network. As indicated above,FIGS.1A-1Care provided merely as one or more examples. Other examples may differ from what is described with regard toFIGS.1A-1C. FIG.2is a diagram of an example environment200in which systems and/or methods described herein may be implemented. As shown inFIG.2, environment200may include a network node210, a first neighbor node220, a second neighbor node230, a network device240, a network250, and a device260.
Devices of environment200may interconnect via wired connections, wireless connections, or a combination of wired and wireless connections. Network node210includes one or more devices capable of receiving, storing, generating, processing, forwarding, and/or transferring information, such as data packets. For example, network node210may include a switch, a router, a security device, one or more devices implementing virtual machines, cloud computing resources, a gateway, a bridge, a network interface controller (NIC), and/or the like. In some implementations, network node210may be a physical device implemented within a housing, such as a chassis. In some implementations, network node210may be a virtual device implemented by one or more computer devices of a cloud computing environment or a data center. First neighbor node220includes one or more devices capable of receiving, storing, generating, processing, forwarding, and/or transferring information, such as data packets. For example, first neighbor node220may include a switch, a router, a security device, one or more devices implementing virtual machines, cloud computing resources, a gateway, a bridge, a NIC, and/or the like. In some implementations, first neighbor node220may be a physical device implemented within a housing, such as a chassis. In some implementations, first neighbor node220may be a virtual device implemented by one or more computer devices of a cloud computing environment or a data center. Second neighbor node230includes one or more devices capable of receiving, storing, generating, processing, forwarding, and/or transferring information, such as data packets. For example, second neighbor node230may include a switch, a router, a security device, one or more devices implementing virtual machines, cloud computing resources, a gateway, a bridge, a NIC, and/or the like. In some implementations, second neighbor node230may be a physical device implemented within a housing, such as a chassis. In some implementations, second neighbor node230may be a virtual device implemented by one or more computer devices of a cloud computing environment or a data center. Network device240includes one or more devices capable of receiving, storing, generating, processing, forwarding, and/or transferring information, such as control information for configuring mapping policies to one or more of network node210, first neighbor node220, second neighbor node230, and/or the like. For example, network device240may include a bootstrap device, such as a server device, a collection of server devices, one or more computing resources of a cloud computing environment, a device within a data center, and/or the like. In some implementations, network device240may include a communication and/or computing device, such as a mobile phone (e.g., a smart phone, a radiotelephone, and/or the like), a laptop computer, a tablet computer, a handheld computer, a desktop computer, a gaming device, a wearable communication device (e.g., a smart wristwatch, a pair of smart eyeglasses, and/or the like), or a similar type of device that can provide input (e.g., the indication of the range of network address prefixes and/or the range of identifiers) to network node210and/or other network nodes in the network250. In some implementations, network device240may be a physical device implemented within a housing, such as a chassis.
In some implementations, network device240may be a virtual device implemented by one or more computer devices of a cloud computing environment or a data center. Network250includes one or more wired and/or wireless networks. For example, network250may include a cellular network (e.g., a long-term evolution (LTE) network, a code division multiple access (CDMA) network, a 3G network, a 4G network, a 5G network, another type of next generation network, and/or the like), a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a telephone network (e.g., the Public Switched Telephone Network (PSTN)), a private network, an ad hoc network, an intranet, the Internet, a fiber optic-based network, a cloud computing network, and/or the like, and/or a combination of these or other types of networks. Device260may be a device on another network configured to provide and/or receive data packets via the network250. Device260is associated with a network address and a network address prefix, which identifies device260to the network250and/or other devices. Device260may include a server device, a router, a switch, one or more computing resources of a cloud computing environment, a device within a data center, and/or the like. In some implementations, device260may include a communication and/or computing device, such as a mobile phone (e.g., a smart phone, a radiotelephone, and/or the like), a laptop computer, a tablet computer, a handheld computer, a desktop computer, a gaming device, a wearable communication device (e.g., a smart wristwatch, a pair of smart eyeglasses, and/or the like), or a similar type of device that can communicate over the network250. The number and arrangement of devices and networks shown inFIG.2are provided as one or more examples. In practice, there may be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or differently arranged devices and/or networks than those shown inFIG.2. Furthermore, two or more devices shown inFIG.2may be implemented within a single device, or a single device shown inFIG.2may be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) of environment200may perform one or more functions described as being performed by another set of devices of environment200. FIG.3Ais a diagram of example components of a device300. Device300may correspond to network node210, first neighbor node220, second neighbor node230, network device240, and/or device260. In some implementations, network node210, first neighbor node220, second neighbor node230, network device240, and/or device260may include one or more devices300and/or one or more components of device300. As shown inFIG.3A, device300may include a bus305, a processor310, a memory315, a storage component320, an input component325, an output component330, and a communication interface335. Bus305includes a component that permits communication among the components of device300. Processor310is implemented in hardware, firmware, or a combination of hardware and software. Processor310takes the form of a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a microprocessor, a microcontroller, a digital signal processor (DSP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or another type of processing component. 
In some implementations, processor310includes one or more processors capable of being programmed to perform a function. Memory315includes a random access memory (RAM), a read only memory (ROM), and/or another type of dynamic or static storage device (e.g., a flash memory, a magnetic memory, and/or an optical memory) that stores information and/or instructions for use by processor310. Storage component320stores information and/or software related to the operation and use of device300. For example, storage component320may include a hard disk (e.g., a magnetic disk, an optical disk, a magneto-optic disk, and/or a solid state disk), a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a cartridge, a magnetic tape, and/or another type of non-transitory computer-readable medium, along with a corresponding drive. Input component325includes a component that permits device300to receive information, such as via user input (e.g., a touch screen display, a keyboard, a keypad, a mouse, a button, a switch, and/or a microphone). Additionally, or alternatively, input component325may include a sensor for sensing information (e.g., a global positioning system (GPS) component, an accelerometer, a gyroscope, and/or an actuator). Output component330includes a component that provides output information from device300(e.g., a display, a speaker, and/or one or more light-emitting diodes (LEDs)). Communication interface335includes a transceiver-like component (e.g., a transceiver and/or a separate receiver and transmitter) that enables device300to communicate with other devices, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections. Communication interface335may permit device300to receive information from another device and/or provide information to another device. For example, communication interface335may include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, a Wi-Fi interface, a cellular network interface, or the like. Device300may perform one or more processes described herein. Device300may perform these processes based on processor310executing software instructions stored by a non-transitory computer-readable medium, such as memory315and/or storage component320. A computer-readable medium is defined herein as a non-transitory memory device. A memory device includes memory space within a single physical storage device or memory space spread across multiple physical storage devices. Software instructions may be read into memory315and/or storage component320from another computer-readable medium or from another device via communication interface335. When executed, software instructions stored in memory315and/or storage component320may cause processor310to perform one or more processes described herein. Additionally, or alternatively, hardwired circuitry may be used in place of or in combination with software instructions to perform one or more processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software. The quantity and arrangement of components shown inFIG.3Aare provided as an example. In practice, device300may include additional components, fewer components, different components, or differently arranged components than those shown inFIG.3A. 
Additionally, or alternatively, a set of components (e.g., one or more components) of device300may perform one or more functions described as being performed by another set of components of device300. FIG.3Bis a diagram of example components of a device350. Device350may correspond to one or more of network node210, first neighbor node220, second neighbor node230, network device240, and/or device260. In some implementations, one or more of network node210, first neighbor node220, second neighbor node230, network device240, and/or device260may include one or more devices350and/or one or more components of device350. As shown inFIG.3B, device350may include one or more input components355-1through355-B (B≥1) (hereinafter referred to collectively as input components355, and individually as input component355), a switching component360, one or more output components365-1through365-C (C≥1) (hereinafter referred to collectively as output components365, and individually as output component365), and a controller370. Input components355may be points of attachment for physical links and may be points of entry for incoming traffic, such as packets. Input components355may process incoming traffic, such as by performing data link layer encapsulation or decapsulation. In some implementations, input components355may send and/or receive packets. In some implementations, input components355may include an input line card that includes one or more packet processing components (e.g., in the form of integrated circuits), such as one or more interface cards (IFCs), packet forwarding components, line card controller components, input ports, processors, memories, and/or input queues. In some implementations, device350may include one or more input components355. Switching component360may interconnect input components355with output components365. In some implementations, switching component360may be implemented via one or more crossbars, via busses, and/or with shared memories. The shared memories may act as temporary buffers to store packets from input components355before the packets are eventually scheduled for delivery to output components365. In some implementations, switching component360may enable input components355, output components365, and/or controller370to communicate. Output component365may store packets and may schedule packets for transmission on output physical links. Output component365may support data link layer encapsulation or decapsulation, and/or a variety of higher-level protocols. In some implementations, output component365may send packets and/or receive packets. In some implementations, output component365may include an output line card that includes one or more packet processing components (e.g., in the form of integrated circuits), such as one or more IFCs, packet forwarding components, line card controller components, output ports, processors, memories, and/or output queues. In some implementations, device350may include one or more output components365. In some implementations, input component355and output component365may be implemented by the same set of components (e.g., an input/output component may be a combination of input component355and output component365). Controller370includes a processor in the form of, for example, a CPU, a GPU, an APU, a microprocessor, a microcontroller, a DSP, an FPGA, an ASIC, and/or another type of processor. The processor is implemented in hardware, firmware, or a combination of hardware and software. 
In some implementations, controller370may include one or more processors that can be programmed to perform a function. In some implementations, controller370may include a RAM, a ROM, and/or another type of dynamic or static storage device (e.g., a flash memory, a magnetic memory, an optical memory, and/or the like) that stores information and/or instructions for use by controller370. In some implementations, controller370may communicate with other devices, networks, and/or systems connected to device300to exchange information regarding network topology. Controller370may create routing tables based on the network topology information, create forwarding tables based on the routing tables, and forward the forwarding tables to input components355and/or output components365. Input components355and/or output components365may use the forwarding tables to perform route lookups for incoming and/or outgoing packets. Controller370may perform one or more processes described herein. Controller370may perform these processes in response to executing software instructions stored by a non-transitory computer-readable medium. A computer-readable medium is defined herein as a non-transitory memory device. A memory device includes memory space within a single physical storage device or memory space spread across multiple physical storage devices. Software instructions may be read into a memory and/or storage component associated with controller370from another computer-readable medium or from another device via a communication interface. When executed, software instructions stored in a memory and/or storage component associated with controller370may cause controller370to perform one or more processes described herein. Additionally, or alternatively, hardwired circuitry may be used in place of or in combination with software instructions to perform one or more processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software. The quantity and arrangement of components shown inFIG.3Bare provided as an example. In practice, device350may include additional components, fewer components, different components, or differently arranged components than those shown inFIG.3B. Additionally, or alternatively, a set of components (e.g., one or more components) of device350may perform one or more functions described as being performed by another set of components of device350. FIG.4is a flow chart of an example process400for mapping a prefix range to an identifier range. In some implementations, one or more process blocks ofFIG.4may be performed by a network node (e.g., network node210). In some implementations, one or more process blocks ofFIG.4may be performed by another device or a group of devices separate from or including the network node, such as a first neighbor node (e.g., first neighbor node220), a second neighbor node (e.g., second neighbor node230), a network device (e.g., network device240), and/or the like. As shown inFIG.4, process400may include receiving an indication of a range of network address prefixes and a corresponding range of sequential identifiers (block410). For example, the network node (e.g., using processor310, memory315, storage component320, input component325, output component330, communication interface335, controller370, and/or the like) may receive an indication of a range of network address prefixes and a corresponding range of sequential identifiers, as described above. 
As further shown inFIG.4, process400may include generating a policy for mapping respective network address prefixes, having ordered positions within the range of network address prefixes, to respective identifiers having corresponding ordered positions within the corresponding range of sequential identifiers (block420). For example, the network node (e.g., using processor310, memory315, storage component320, input component325, output component330, communication interface335, controller370, and/or the like) may generate a policy for mapping respective network address prefixes, having ordered positions within the range of network address prefixes, to respective identifiers having corresponding ordered positions within the corresponding range of sequential identifiers, as described above. As further shown inFIG.4, process400may include discovering a device associated with a network address having a network address prefix at an ordered position within the range of network address prefixes (block430). For example, the network node (e.g., using processor310, memory315, storage component320, input component325, output component330, communication interface335, controller370, and/or the like) may discover a device associated with a network address having a network address prefix at an ordered position within the range of network address prefixes, as described above. As further shown inFIG.4, process400may include mapping, based on the policy, the network address prefix to an identifier having an ordered position within the corresponding range of sequential identifiers, wherein the ordered position within the corresponding range of sequential identifiers corresponds to the ordered position within the range of network address prefixes (block440). For example, the network node (e.g., using processor310, memory315, storage component320, input component325, output component330, communication interface335, controller370, and/or the like) may map, based on the policy, the network address prefix to an identifier having an ordered position within the corresponding range of sequential identifiers, as described above. In some implementations, the ordered position within the corresponding range of sequential identifiers corresponds to the ordered position within the range of network address prefixes. As further shown inFIG.4, process400may include advertising, to one or more neighbor nodes, the mapping of the network address prefix to the identifier (block450). For example, the network node (e.g., using processor310, memory315, storage component320, input component325, output component330, communication interface335, controller370, and/or the like) may advertise, to one or more neighbor nodes, the mapping of the network address prefix to the identifier, as described above. Process400may include additional implementations, such as any single implementation or any combination of implementations described below and/or in connection with one or more other processes described elsewhere herein. In a first implementation, process400further includes: receiving, from a neighbor node of the one or more neighbor nodes, a data packet comprising the identifier; and forwarding, based on the data packet comprising the identifier, the data packet toward the device associated with the network address. In a second implementation, alone or in combination with the first implementation, the identifier comprises a multiprotocol label switching (MPLS) label for use in an MPLS domain.
In a third implementation, alone or in combination with one or more of the first and second implementations, the identifier comprises a segment routing identifier for use in a network of nodes implementing segment routing procedures. In a fourth implementation, alone or in combination with one or more of the first through third implementations, the range of network address prefixes corresponds to the corresponding range of sequential identifiers with a one-to-one correlation. In a fifth implementation, alone or in combination with one or more of the first through fourth implementations, the network node is a label edge router or a label switch router. In a sixth implementation, alone or in combination with one or more of the first through fifth implementations, mapping the network address prefix to the identifier comprises: determining an index of the network address prefix within the range of network address prefixes, determining the identifier based on a sum of the index and a lowest-ordered identifier, and binding the network address prefix to the identifier. AlthoughFIG.4shows example blocks of process400, in some implementations, process400may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.4. Additionally, or alternatively, two or more of the blocks of process400may be performed in parallel. FIG.5is a flow chart of an example process500for mapping a prefix range to an identifier range. In some implementations, one or more process blocks ofFIG.5may be performed by a network node (e.g., network node210). In some implementations, one or more process blocks ofFIG.5may be performed by another device or a group of devices separate from or including the network node, such as a first neighbor node (e.g., first neighbor node220), a second neighbor node (e.g., second neighbor node230), a network device (e.g., network device240), and/or the like. As shown inFIG.5, process500may include receiving, from a network device, an indication of a range of network address prefixes, a prefix length range, and a corresponding range of sequential identifiers (block510). For example, the network node (e.g., using processor310, memory315, storage component320, input component325, output component330, communication interface335, controller370, and/or the like) may receive, from a network device, an indication of a range of network address prefixes, a prefix length range, and a corresponding range of sequential identifiers, as described above. As further shown inFIG.5, process500may include generating a policy for mapping respective network address prefixes, having ordered positions within the range of network address prefixes and having prefix lengths within the prefix length range, to respective identifiers having corresponding ordered positions within the corresponding range of sequential identifiers (block520). For example, the network node (e.g., using processor310, memory315, storage component320, input component325, output component330, communication interface335, controller370, and/or the like) may generate a policy for mapping respective network address prefixes, having ordered positions within the range of network address prefixes and having prefix lengths within the prefix length range, to respective identifiers having corresponding ordered positions within the corresponding range of sequential identifiers, as described above. 
As further shown inFIG.5, process500may include discovering a device associated with a network address having a network address prefix at an ordered position within the range of network address prefixes and having a prefix length within the prefix length range (block530). For example, the network node (e.g., using processor310, memory315, storage component320, input component325, output component330, communication interface335, controller370, and/or the like) may discover a device associated with a network address having a network address prefix at an ordered position within the range of network address prefixes and having a prefix length within the prefix length range, as described above. As further shown inFIG.5, process500may include mapping, based on the policy, the network address prefix to an identifier having an ordered position within the corresponding range of sequential identifiers, wherein the ordered position within the corresponding range of sequential identifiers corresponds to the ordered position within the range of network address prefixes (block540). For example, the network node (e.g., using processor310, memory315, storage component320, input component325, output component330, communication interface335, controller370, and/or the like) may map, based on the policy, the network address prefix to an identifier having an ordered position within the corresponding range of sequential identifiers, as described above. In some implementations, the ordered position within the corresponding range of sequential identifiers corresponds to the ordered position within the range of network address prefixes. As further shown inFIG.5, process500may include advertising, to one or more neighbor nodes, the mapping of the network address prefix to the identifier (block550). For example, the network node (e.g., using processor310, memory315, storage component320, input component325, output component330, communication interface335, controller370, and/or the like) may advertise, to one or more neighbor nodes, the mapping of the network address prefix to the identifier, as described above. Process500may include additional implementations, such as any single implementation or any combination of implementations described below and/or in connection with one or more other processes described elsewhere herein. In a first implementation, process500further includes receiving, from a neighbor node of the one or more neighbor nodes, a data packet comprising the identifier; and forwarding, based on the data packet comprising the identifier, the data packet toward the device associated with the network address. In a second implementation, alone or in combination with the first implementation, the identifier comprises an MPLS label for use in an MPLS domain. In a third implementation, alone or in combination with one or more of the first and second implementations, the identifier comprises a segment routing identifier for use in a network of nodes implementing segment routing procedures. In a fourth implementation, alone or in combination with one or more of the first through third implementations, the range of network address prefixes comprises a range of internet protocol prefixes using internet protocol version 4. In a fifth implementation, alone or in combination with one or more of the first through fourth implementations, the range of network address prefixes corresponds to the corresponding range of sequential identifiers with a one-to-one correlation. 
In a sixth implementation, alone or in combination with one or more of the first through fifth implementations, the network node is a label edge router or a label switch router. In a seventh implementation, alone or in combination with one or more of the first through sixth implementations, process500further includes: determining an index of the network address prefix within the range of network address prefixes, wherein the index indicates, for the network address prefix, the ordered position within the range of network address prefixes; determining the identifier based on a sum of the index and a lowest-ordered identifier, wherein the lowest-ordered identifier has the lowest-ordered position of the corresponding range of sequential identifiers; and mapping the network address prefix to the identifier. AlthoughFIG.5shows example blocks of process500, in some implementations, process500may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.5. Additionally, or alternatively, two or more of the blocks of process500may be performed in parallel. FIG.6is a flow chart of an example process600for mapping a prefix range to an identifier range. In some implementations, one or more process blocks ofFIG.6may be performed by a network node (e.g., network node210). In some implementations, one or more process blocks ofFIG.6may be performed by another device or a group of devices separate from or including the network node, such as a first neighbor node (e.g., first neighbor node220), a second neighbor node (e.g., second neighbor node230), a network device (e.g., network device240), and/or the like. As shown inFIG.6, process600may include generating a policy for mapping respective network address prefixes, having ordered positions within a range of network address prefixes, to respective identifiers having corresponding ordered positions within a corresponding range of sequential identifiers (block610). For example, the network node (e.g., using processor310, memory315, storage component320, input component325, output component330, communication interface335, controller370, and/or the like) may generate a policy for mapping respective network address prefixes, having ordered positions within a range of network address prefixes, to respective identifiers having corresponding ordered positions within a corresponding range of sequential identifiers, as described above. As further shown inFIG.6, process600may include discovering a device associated with a network address having a network address prefix at an ordered position within the range of network address prefixes (block620). For example, the network node (e.g., using processor310, memory315, storage component320, input component325, output component330, communication interface335, controller370, and/or the like) may discover a device associated with a network address having a network address prefix at an ordered position within the range of network address prefixes, as described above. As further shown inFIG.6, process600may include mapping, based on the policy, the network address prefix to an identifier having an ordered position within the corresponding range of sequential identifiers, wherein the ordered position within the corresponding range of sequential identifiers corresponds to the ordered position within the range of network address prefixes (block630).
For example, the network node (e.g., using processor310, memory315, storage component320, input component325, output component330, communication interface335, controller370, and/or the like) may map, based on the policy, the network address prefix to an identifier having an ordered position within the corresponding range of sequential identifiers, as described above. In some implementations, the ordered position within the corresponding range of sequential identifiers corresponds to the ordered position within the range of network address prefixes. As further shown inFIG.6, process600may include advertising, to one or more neighbor nodes, the mapping of the network address prefix to the identifier (block640). For example, the network node (e.g., using processor310, memory315, storage component320, input component325, output component330, communication interface335, controller370, and/or the like) may advertise, to one or more neighbor nodes, the mapping of the network address prefix to the identifier, as described above. As further shown inFIG.6, process600may include receiving, from a neighbor node of the one or more neighbor nodes, a data packet comprising the identifier (block650). For example, the network node (e.g., using processor310, memory315, storage component320, input component325, output component330, communication interface335, controller370, and/or the like) may receive, from a neighbor node of the one or more neighbor nodes, a data packet comprising the identifier, as described above. As further shown inFIG.6, process600may include forwarding, based on the data packet comprising the identifier, the data packet toward the device associated with the network address (block660). For example, the network node (e.g., using processor310, memory315, storage component320, input component325, output component330, communication interface335, controller370, and/or the like) may forward, based on the data packet comprising the identifier, the data packet toward the device associated with the network address, as described above. Process600may include additional implementations, such as any single implementation or any combination of implementations described below and/or in connection with one or more other processes described elsewhere herein. In a first implementation, the identifier comprises an MPLS label for use in an MPLS domain. In a second implementation, alone or in combination with the first implementation, the identifier comprises a segment routing identifier for use in a network of nodes implementing segment routing procedures. In a third implementation, alone or in combination with one or more of the first and second implementations, the range of network address prefixes comprises a range of internet protocol prefixes using internet protocol version 4. In a fourth implementation, alone or in combination with one or more of the first through third implementations, the range of network address prefixes corresponds to the corresponding range of sequential identifiers with a one-to-one correlation. AlthoughFIG.6shows example blocks of process600, in some implementations, process600may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.6. Additionally, or alternatively, two or more of the blocks of process600may be performed in parallel. The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the implementations to the precise forms disclosed. 
Modifications and variations may be made in light of the above disclosure or may be acquired from practice of the implementations. As used herein, the term “component” is intended to be broadly construed as hardware, firmware, and/or a combination of hardware and software. As used herein, the term traffic or content may include a set of packets. A packet may refer to a communication structure for communicating information, such as a protocol data unit (PDU), a network packet, a datagram, a segment, a message, a block, a cell, a frame, a subframe, a slot, a symbol, a portion of any of the above, and/or another type of formatted or unformatted unit of data capable of being transmitted via a network. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware, firmware, or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code—it being understood that software and hardware can be designed to implement the systems and/or methods based on the description herein. Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various implementations includes each dependent claim in combination with every other claim in the claim set. No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, etc.), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”).
11863446
DESCRIPTION OF EMBODIMENTS The following describes the technical solutions of this disclosure in detail by using specific embodiments. FIG.1is a schematic diagram of a structure of a communication network according to an embodiment of this disclosure. The communication network includes a plurality of network devices. The communication network may be, for example, an IP network. Specifically, the communication network may be an SRv6-based communication network, to be specific, the communication network may transmit and process an SRv6 packet. For example, as shown inFIG.1, the communication network includes a first network device and a second network device. The first network device communicates with the second network device through a communication link. In a possible implementation, the communication link between the first network device and the second network device is a physical communication link. The physical communication link may be a cable, an optical fiber, or a wireless link. A port connecting the first network device to the communication link may be a physical port, and a port connecting the second network device to the communication link may be a physical port. In another possible implementation, the communication link between the first network device and the second network device is a direct link. The direct link means that two devices (for example, the first network device and the second network device) are directly connected through a link, and the link between the two devices does not include another forwarding device or processing device but may include a transparent transmission device. The first network device may be a router or a layer-3 switch. The second network device may be a router or a layer-3 switch. In different types of communication networks, roles of the first network device and the second network device may be different. For example, in a campus network, the first network device and the second network device may be edge switches. For another example, in a core network, the first network device and the second network device may be provider edge (PE) devices. The first network device may be connected to at least one user equipment. As shown inFIG.1, the first network device is connected to first user equipment and third user equipment. Similarly, the second network device may be connected to at least one user equipment. As shown inFIG.1, the second network device is connected to second user equipment and fourth user equipment. A link relationship between the first network device and the first user equipment and a link relationship between the first network device and the third user equipment are used as an example for description. In a possible implementation, a communication link between the first network device and the first user equipment and a communication link between the first network device and the third user equipment are physical communication links. The physical communication link may be a cable, an optical fiber, or a wireless link. A port connecting the first network device to the communication link may be a physical port, and ports connecting the first user equipment and the third user equipment to the communication link may be physical ports. In another possible implementation, the communication link between the first network device and the first user equipment and the communication link between the first network device and the third user equipment are direct links. 
In addition, the communication link between the first network device and the first user equipment may include another network device, for example, a customer edge (CE) device. Similarly, the communication link between the first network device and the third user equipment may also include another network device. In an implementation of this disclosure, a specific form of the user equipment inFIG.1is not limited. For example, the user equipment inFIG.1may be a network device used in a home network or a public network, for example, terminal devices such as a mobile phone, a personal computer, and a PAD. For another example, the user equipment inFIG.1may be a computer or a server in an enterprise network. As shown inFIG.1, the communication network may be an SRv6-based communication network. The first network device may send an SRv6 packet to the second network device. Specifically, the first network device receives a service packet from the first user equipment or the third user equipment. In addition, the first network device encapsulates the service packet into an SRv6 packet. Then, the first network device sends the SRv6 packet to the second network device. In a possible implementation, an SRv6 tunnel is included between the first network device and the second network device. The first network device sends the SRv6 packet to the second network device through the SRv6 tunnel. The second network device receives the SRv6 packet, and decapsulates the SRv6 packet to obtain the service packet. Then, the second network device forwards the service packet to the second user equipment or the fourth user equipment. However, in an existing SRv6-based communication network, data forwarding based on a user group and a group policy is not supported for SRv6 traffic. Further, in the existing SRv6-based communication network, an implementation solution in which a transmitting end and a receiving end of SRv6 traffic jointly determine a forwarding policy for the user group cannot be implemented. In a related technology, a group policy identifier is carried in a virtual extensible local area network (VXLAN) packet. For example, refer to IETF drafts: Generic Protocol Extension for VXLAN (draft-ietf-nvo3-vxlan-gpe-10) and Group Policy Encoding with VXLAN-GPE and LISP-GPE (draft-lemon-vxlan-lisp-gpe-gbp-02). An implementation of the drafts may be referred to as generic protocol extension for virtual extensible local area network (VXLAN-GPE). A VXLAN-GPE packet carries a generic protocol extension (GPE) header, and the GPE header includes group policy identifier information. In addition, a reserved field in a VXLAN header of the VXLAN-GPE packet is set, to indicate that the VXLAN-GPE packet includes the GPE header. A transmitting end device and a receiving end device implement isolation between user equipment by transmitting the VXLAN-GPE packet. VXLAN-GPE is implemented in a way similar to that of an access control list (ACL). Compared with ACL, the implementation of VXLAN-GPE reduces workload of rule configuration. However, VXLAN-GPE can be implemented only in a VXLAN-based network scenario. In addition, the existing VXLAN protocol needs to be reconstructed and existing network devices need to be upgraded. However, according to VXLAN basic protocol IETF request for comments (RFC) 7348 (for example, referring to Chapter 5 in RFC 7348), remaining seven bits (specified as “R”) are a reserved field, need to be set to 0 during transmission, and are ignored during reception. 
Therefore, in a VXLAN network scenario, in order to comply with a stipulation of RFC 7348, a network device may drop a VXLAN-GPE packet because a reserved field in a VXLAN header of the VXLAN-GPE packet is set to a non-zero value, and a GPE header may not be identified. Further, the foregoing drafts merely disclose how to carry a group policy identifier in an existing VXLAN packet, to implement isolation of user equipment. However, the foregoing drafts do not disclose a specific implementation of a group policy, and this cannot implement an implementation solution in which a transmitting end and a receiving end of data traffic jointly determine a forwarding policy for a user group. To resolve the foregoing problem, this disclosure provides a corresponding solution. InFIG.1, the first network device receives the service packet from the first user equipment. The first network device determines, based on information that is about the source user equipment and that is carried in the service packet, a user group to which the first user equipment belongs. In addition, the first network device determines group information corresponding to the service packet. The group information indicates an interworking policy that is determined by the first network device based on the user group and that is for transmitting the service packet between the first user equipment and the second user equipment. The second user equipment is a destination of the service packet. The first network device encapsulates the service packet to obtain the SRv6 packet, and the SRv6 packet further includes the group information. The first network device sends the SRv6 packet to the second network device based on the determined group information. Therefore, in the foregoing implementation, the SRv6 packet carries the group information, so that the first network device serving as a transmitting end device may participate in control of determining the forwarding policy for the user group. For the receiving end device, after receiving the SRv6 packet, the second network device decapsulates the SRv6 packet. The second network device determines, based on information about the destination user equipment of the service packet, a user group to which the second user equipment belongs. Then, the second network device determines, based on the user group to which the second user equipment belongs, an interworking policy for transmitting the service packet between the first user equipment and the second user equipment. Correspondingly, the second network device may learn of, based on the group information carried in the service packet, the interworking policy that is determined by the first network device based on the user group and that is for transmitting the service packet between the first user equipment and the second user equipment. In this way, the second network device determines, according to the interworking policy determined by the first network device (the transmitting end) and the interworking policy determined by the second network device (the receiving end), a forwarding policy for forwarding the service packet to the second user equipment. Therefore, in the foregoing implementation, the SRv6 packet carries the group information, so that the second network device serving as the receiving end device can control the forwarding policy of the user group according to the interworking policy determined by the transmitting end device and the interworking policy determined by the receiving end device. 
FIG.2is a flowchart of a packet forwarding method according to an embodiment of this disclosure. The method shown inFIG.2may be applied to the structure of the network shown inFIG.1. In this implementation of this disclosure, interaction between the first network device and the second network device inFIG.1is described. It can be understood that another network device may be included on a communication link between the first network device and the second network device. Specifically, the method includes the following steps. S101: The first network device receives a first service packet sent by the first user equipment, where the first service packet includes information about the first user equipment. As shown inFIG.1, the first network device communicates with the first user equipment. In a possible implementation, another network device is included on a communication link between the first network device and the first user equipment. The first user equipment generates the first service packet. An encapsulation format of the first service packet is not limited in this implementation of this disclosure. For example, the first service packet may be a layer 2 Ethernet frame. For another example, the first service packet may be an IP packet. The first service packet includes the information about the first user equipment. The information about the first user equipment indicates the first user equipment. In a possible implementation, the information about the first user equipment is address information. Specifically, the information about the first user equipment includes a media access control (MAC) address or an IP address. The first user equipment is a transmitting end device of the first service packet. Therefore, the MAC address included in the information about the first user equipment is a source MAC address of the first service packet, and the IP address included in the information about the first user equipment is a source IP address of the first service packet. The first service packet may further include information about the second user equipment. As shown inFIG.1, the first service packet sent by the first user equipment is sent to the second user equipment. In this case, the information about the second user equipment indicates the second user equipment. In a possible implementation, the information about the second user equipment is address information. Specifically, the information about the second user equipment includes a MAC address or an IP address. The second user equipment is a receiving end device of the first service packet. Therefore, the MAC address included in the information about the second user equipment is a destination MAC address of the first service packet, and the IP address included in the information about the second user equipment is a destination IP address of the first service packet. The first service packet includes a packet header and a payload. The packet header in the first service packet carries the information about the first user equipment and the information about the second user equipment. The payload in the first service packet is service data that the first user equipment expects to send to the second user equipment. The first network device receives the first service packet sent by the first user equipment. In an actual service scenario, the first network device receives a service flow sent by the first user equipment. The service flow includes a plurality of service packets. 
The first service packet may be understood as any one or more of the service packets in the service flow. Therefore, this implementation of this disclosure may be understood as an implementation of forwarding on a data flow from user equipment based on a group policy and a user group. S102: The first network device determines whether the first network device includes a first user group corresponding to the information about the first user equipment, where the first user group is a user group to which the first user equipment belongs. S103: The first network device determines, based on a determining result of whether the first network device includes the first user group corresponding to the information about the first user equipment, a value of first group information, and generates a first SRv6 packet, where the first SRv6 packet includes the first group information and the first service packet, and the first group information indicates an interworking policy that is determined by the first network device based on the first user group and that is for transmitting the first service packet between the first user equipment and the second user equipment. The first network device receives the first service packet sent by the first user equipment. After receiving the first service packet, the first network device parses the first service packet, and obtains the information about the first user equipment in the first service packet. The first network device determines whether the first network device includes the first user group corresponding to the information about the first user equipment. The first user group is a user group to which the first user equipment belongs. In other words, the user group is an implementation of user equipment isolation. For example, the first network device is connected to user equipment1, user equipment2, user equipment3, and user equipment4. The user equipment1and the user equipment2belong to a user group1, the user equipment3belongs to a user group2, and the user equipment4belongs to a user group3. Therefore, one user group may include one or more user equipments. The first network device may determine, based on information in a service packet received from the user equipment, a specific user group to which the user equipment sending the service packet belongs. During specific implementation, the first network device may store at least one entry, and each of the at least one entry includes a correspondence between information about user equipment and a user group. The information about the user equipment is information about the user equipment that sends a service packet, for example, a source MAC address or a source IP address. For ease of description, information about user equipment in Table 1 is referred to as information about source user equipment, the user equipment in Table 1 is referred to as the source user equipment, and a user group in Table 1 is referred to as a source user group. As shown in Table 1, the information about the first user equipment corresponds to the first user group, indicating that the first user equipment belongs to the first user group; and information about the third user equipment corresponds to a third user group, indicating that the third user equipment belongs to the third user group. It is to be noted that a representation manner of Table 1 is to clearly show a user group to which the source user equipment belongs. 
During implementation, an entry stored in the first network device may not include the first column of information (source user equipment) in Table 1.

TABLE 1
Source user equipment    Information about the source user equipment    Source user group
First user equipment     Information about the first user equipment     First user group
Third user equipment     Information about the third user equipment     Third user group
User equipment 1         Information about the user equipment 1         First user group
User equipment 4         Information about the user equipment 4         Third user group
. . .                    . . .                                          . . .

After obtaining the information about the first user equipment in the first service packet, the first network device queries, based on the information about the first user equipment, the at least one entry (as shown in Table 1) stored in the first network device. The first network device determines, based on a correspondence between the information about the first user equipment and the first user group, that a user group corresponding to the information about the first user equipment is the first user group. Therefore, the first network device may determine that the first user equipment belongs to the first user group. In a possible implementation, the source user group in Table 1 may be represented in a form of a group identifier. For example, the first user group may be represented by a group identifier Group_ID_1, and the third user group may be represented by a group identifier Group_ID_3. In a possible implementation, the group identifier may be represented by 16-bit data. Correspondingly, when storing the entry shown in Table 1, the first network device may store the group identifier as the source user group. Therefore, that the group identifier indicates the user group may alternatively be understood as that the group identifier indicates a user group to which the user equipment belongs. The first network device may generate the first SRv6 packet based on the received first service packet. The first SRv6 packet is a packet obtained by encapsulating the first service packet. The first service packet may be encapsulated in the first SRv6 packet as the payload. The first SRv6 packet further includes the first group information. The first group information indicates the interworking policy that is determined by the first network device based on the first user group and that is for transmitting the first service packet between the first user equipment and the second user equipment. The first group information indicates that the first network device determines an interworking policy between the first network device and the second network device for the first service packet that belongs to the first user group. The interworking policy determined by the first network device is a forwarding policy expected by the first network device. In other words, the first network device expects that after receiving the first service packet, the second network device forwards the first service packet to the destination of the first service packet according to a rule of the interworking policy provided by the first network device. For example, the first network device determines a value of the first group information, to indicate that the first service packet can be forwarded by the second network device to the second user equipment after reaching the second network device.
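A minimal sketch of the Table 1 lookup in S102 is shown below, assuming the entries are keyed by the source MAC address carried in the first service packet; the table contents and the 16-bit group identifier values are illustrative only.

```python
# Minimal sketch of S102: match the information about the source user
# equipment against stored entries to find the source user group.
GROUP_ID_1 = 0x0001   # first user group
GROUP_ID_3 = 0x0003   # third user group

# Correspondence between information about the source user equipment and a user group.
source_group_table = {
    "00:11:22:33:44:01": GROUP_ID_1,   # first user equipment
    "00:11:22:33:44:03": GROUP_ID_3,   # third user equipment
}

def match_source_user_group(source_address: str):
    """Return the group identifier of the source user group, or None when the
    first network device does not include a user group for this equipment."""
    return source_group_table.get(source_address)

print(match_source_user_group("00:11:22:33:44:01"))  # 1 -> first user group
print(match_source_user_group("aa:bb:cc:dd:ee:ff"))  # None -> no user group matched
```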
For another example, the first network device determines a value of the first group information, to indicate that the first service packet can be dropped (not forwarded to the second user equipment) by the second network device after reaching the second network device. Therefore, the first group information is determined by the first network device, and affects whether the first service packet can be forwarded to the second user equipment. In a possible implementation, the first group information includes a first group identifier. For example, the first group identifier is used as the first group information. The first network device determines, based on the entry shown in Table 1, that the first user equipment sending the first service packet belongs to the first user group. The first user group is represented by the first group identifier. Therefore, the first group identifier indicates that the first user equipment belongs to the first user group. The first network device determines the value of the first group information as the first group identifier, and the first network device generates the first SRv6 packet that includes the first group information. It may be understood that, when the value of the first group information is the first group identifier, an indicated specific interworking policy is: The first network device forwards the first service packet and expects the second network device to forward the first service packet to the second user equipment. For example, the first network device may fail to find a corresponding user based on a received service packet. For example, the first network device cannot find, based on the first service packet, a corresponding user group from the entry shown in Table 1. This indicates that the first user equipment does not belong to any user group. In this case, the first network device determines a value of the first group identifier as an invalid value. For example, the value is set to all 0s. Correspondingly, the value of the first group information is indicated as an invalid value, that is, all 0s. Then, the first network device generates the first SRv6 packet that includes the first group information (invalid value). It may be understood that, when the value of the first group information is “invalid”, an indicated specific interworking policy is: The first network device forwards the first service packet and expects the second network device to drop the first service packet. In another possible implementation, the first group information includes the first group identifier and a first group policy identifier.FIG.3shows a format of group information. The group information inFIG.3includes a group identifier and a group policy identifier. For example, a total length of the group information is 16 bits, where three high-order bits represent the group policy identifier, and the remaining 13 bits represent the group identifier.FIG.3shows an implementation in which the group identifier and the group policy identifier are in a same field. It can be understood that the group identifier and the group policy identifier may be set to be in different fields in an implementation. With reference to the foregoing descriptions, the first group identifier in the first group information indicates a user group to which the first user equipment belongs, and the first group policy identifier in the first group information indicates a specific interworking policy. 
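The 16-bit group information layout of FIG. 3 can be sketched as follows, assuming the three high-order bits carry the group policy identifier and the remaining 13 bits carry the group identifier; treating an all-zero group identifier as the invalid value is consistent with the description above.

```python
# Minimal sketch of the 16-bit group information layout in FIG. 3:
# 3 high-order bits = group policy identifier, 13 low-order bits = group identifier.
def pack_group_information(policy_bits: int, group_id: int) -> int:
    assert 0 <= policy_bits < 8 and 0 <= group_id < (1 << 13)
    return (policy_bits << 13) | group_id

def unpack_group_information(group_info: int):
    return (group_info >> 13) & 0x7, group_info & 0x1FFF

info = pack_group_information(policy_bits=0b100, group_id=0x0001)
print(hex(info))                       # 0x8001
print(unpack_group_information(info))  # (4, 1)
```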
In other words, the first group policy identifier indicates a specific interworking policy that is determined by the first network device based on the first user group and that is for transmitting the first service packet between the first user equipment and the second user equipment. In an actual scenario, according to the foregoing implementation, the first network device may determine, based on the information about the first user equipment in the first service packet, that the first user equipment belongs to the first user group, to determine the value of the first group identifier. Then, the first network device determines a value of the first group policy identifier according to the interworking policy stored in the first network device and the first group identifier. Therefore, the first network device can determine the value of the first group information, so that the first network device can determine, based on the first user group, the interworking policy for transmitting the first service packet between the first user equipment and the second user equipment. For example, the first group information includes the first group identifier and the first group policy identifier, and the first group policy identifier includes a first identifier. The first identifier indicates that the first network device includes the first user group and the second network device does not include a second user group. The second user group is a user group to which the second user equipment belongs. The second user equipment is user equipment that receives the first service packet. With reference to the foregoing descriptions, the first network device determines, based on the first service packet, that the first user equipment matches the first user group. The first network device indicates the first group identifier in the first group information as the first user group, and the first network device determines a specific interworking policy based on a result indicating that the first user equipment can match the first user group. According to the foregoing descriptions, a total length of the group information is 16 bits, where three high-order bits represent a group policy identifier. Specifically, the 1stbit (for example, the highest bit in three high-order bits) represents the first identifier. A meaning of the first identifier is: “Source user equipment has a user group, and destination user equipment has no user group”. Because the first network device determines that the first user equipment can match the first user group, the first network device enables the first identifier (the 1stbit) to be valid. Further, the first network device may set a value of the first identifier according to a locally stored interworking policy. The value of the first identifier indicates a forwarding policy that the first network device expects the second network device to use on the first service packet. For example, if the value of the first identifier is 1, it indicates that the first network device expects the second network device to forward the first service packet to the second user equipment. For another example, if the value of the first identifier is 0, it indicates that the first network device expects the second network device to drop the first service packet. For example, the first group information includes the first group identifier and the first group policy identifier, and the first group policy identifier includes a second identifier and a third identifier. 
The second identifier indicates that the first network device does not include the first user group and the second network device includes a second user group. The third identifier indicates that the first network device does not include the first user group and the second network device does not include the second user group. With reference to the foregoing descriptions, the first network device determines, based on the first service packet, that no user group that can match the first user equipment exists in the entry stored in the first network device. The first network device indicates the first group identifier in the first group information as “invalid”, and the first network device determines a specific interworking policy based on a result indicating that the first user equipment does not match a user group. According to the foregoing descriptions, a total length of the group information is 16 bits, where three high-order bits represent a group policy identifier. Specifically, the 2ndbit (for example, the second highest bit in three high-order bits) represents the second identifier, and the 3rdbit represents the third identifier. A meaning of the second identifier is: “Source user equipment has no user group, and destination user equipment has a user group”. A meaning of the third identifier is: “Source user equipment has no user group, and destination user equipment has no user group”. Because the first network device determines that the first user equipment does not match a user group, and the first network device does not know whether a user group can be matched after the second network device receives the first service packet, the first network device enables the second identifier (the 2ndbit) and the third identifier (the 3rdbit) to be valid. Further, the first network device may set a value of the second identifier and a value of the third identifier according to a locally stored interworking policy. The value of the second identifier indicates a forwarding policy that the first network device expects the second network device to use on the first service packet. The value of the third identifier indicates a forwarding policy that the first network device expects the second network device to use on the first service packet. For example, if the value of the second identifier is 1 and the value of the third identifier is 0, it indicates that the first network device expects the second network device to forward the first service packet to the second user equipment when a user group is matched based on the first service packet, and the first network device expects the second network device to drop the first service packet when no user group is matched based on the first service packet. For another example, if the value of the second identifier is 0 and the value of the third identifier is 1, it indicates that the first network device expects the second network device to drop the first service packet when a user group is matched based on the first service packet, and the first network device expects the second network device to forward the first service packet to the second user equipment when no user group is matched based on the first service packet. In the foregoing implementation, the first group policy identifier indicates a situation in which the first network device matches a user group based on the first service packet, and further indicates a situation in which the second network device matches a user group based on the service packet. 
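One way S103 could populate the group policy identifier, following the bit meanings described above (1st bit: source has a user group and destination has none; 2nd bit: source has none and destination has one; 3rd bit: neither has one), is sketched below; the local interworking policy object and its field names are assumptions introduced for illustration.

```python
# Minimal sketch of setting the group policy identifier bits on the sender
# side, based on whether the source user group was matched and on the local
# interworking policy of the first network device.
from dataclasses import dataclass

@dataclass
class LocalInterworkingPolicy:
    forward_if_dst_unmatched: bool     # used when the source user group matched
    forward_if_only_dst_matched: bool  # used when the source user group did not match
    forward_if_none_matched: bool      # used when the source user group did not match

def build_group_policy_bits(source_group_id, policy: LocalInterworkingPolicy) -> int:
    bit1 = bit2 = bit3 = 0
    if source_group_id is not None:
        # Source user equipment matched a user group: only the 1st bit is relevant.
        bit1 = 1 if policy.forward_if_dst_unmatched else 0
    else:
        # No user group matched for the source: the 2nd and 3rd bits are relevant.
        bit2 = 1 if policy.forward_if_only_dst_matched else 0
        bit3 = 1 if policy.forward_if_none_matched else 0
    return (bit1 << 2) | (bit2 << 1) | bit3

bits = build_group_policy_bits(0x0001, LocalInterworkingPolicy(True, False, False))
print(bin(bits))  # 0b100 -> first identifier set to 1
```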
In this way, after obtaining the first service packet, the second network device may learn of, by parsing the first group policy identifier in the first group information, an interworking policy configured by the first network device. Therefore, the second network device does not need to parse the first group identifier in the first group information. In this way, a processing speed of the second network device for the first service packet is improved. In still another possible implementation, the first group information includes the first group identifier and a first group policy identifier. The first group policy identifier indicates a situation in which the second network device matches a user group based on the service packet, but does not indicate a situation in which the first network device matches a user group based on the first service packet. For example, the first group policy identifier includes a fourth identifier. For an implementation of the fourth identifier, refer to the foregoing implementation of the first identifier. Different from the first identifier, the fourth identifier indicates that the second network device does not include a second user group, and the fourth identifier does not indicate the situation in which the first network device matches a user group based on the first service packet. A meaning of the fourth identifier is: “Destination user equipment has no user group”. In this way, a meaning indicated by both the first group identifier and the fourth identifier is: “Source user equipment has a user group, and destination user equipment has no user group”. For example, the first group policy identifier includes a fifth identifier and a sixth identifier. For an implementation of the fifth identifier, refer to the foregoing implementation of the second identifier. Different from the second identifier, the fifth identifier indicates that the second network device includes a second user group, and the fifth identifier does not indicate the situation in which the first network device matches a user group based on the first service packet. A meaning of the fifth identifier is: “Destination user equipment has a user group”. In this way, a meaning indicated by both the first group identifier and the fifth identifier is: “Source user equipment has no user group, and destination user equipment has a user group”. Correspondingly, for an implementation of the sixth identifier, refer to the foregoing implementation of the third identifier. Different from the third identifier, the sixth identifier indicates that the second network device does not include a second user group, and the sixth identifier does not indicate the situation in which the first network device matches a user group based on the first service packet. A meaning of the sixth identifier is: “Destination user equipment has no user group”. In this way, a meaning indicated by both the first group identifier and the sixth identifier is: “Source user equipment has no user group, and destination user equipment has no user group”. In the foregoing implementation, after obtaining the first service packet, the second network device may learn of, by parsing the first group identifier and the first group policy identifier in the first group information, an interworking policy configured by the first network device. With reference to the foregoing descriptions, the first group information is carried in the first SRv6 packet. 
The first SRv6 packet is a packet obtained through encapsulating the first service packet by the first network device.FIG.4shows a header format of an SRv6 packet according to an embodiment of this disclosure. As shown inFIG.4, an SRv6 header includes an IPv6 header and a segment routing header. Optionally, the SRv6 header may further include a hop-by-hop options header and/or a destination options header. In this application, the segment routing header may be represented by an SRH, and the hop-by-hop options header may be represented by an HBH options header. The first group information may be carried in the IPv6 header; or the first group information may be carried in the HBH options header; or the first group information may be carried in the destination options header; or the first group information may be carried in the SRH. In a possible implementation, the first group information includes the first group identifier. Therefore, the first group identifier may be carried in the IPv6 header, the HBH options header, the destination options header, or the SRH. In another possible implementation, the first group information includes the first group identifier and the first group policy identifier. Therefore, the first group identifier and the first group policy identifier may be carried in the IPv6 header, the HBH options header, the destination options header, or the SRH. In addition, the first group identifier and the first group policy identifier may be carried in a same field, as shown inFIG.3. The first group identifier and the first group policy identifier may alternatively be carried in different fields of a same header, or may be carried in different fields of different headers. For example, the first group identifier is carried in the IPv6 header, and the first group policy identifier is carried in the SRH. The following provides specific descriptions by using the first group information as an example. For example, according to the definition of RFC 8200 (section 3 of RFC 8200), the IPv6 header includes next header information, which may also be referred to as a next header field. If a value of the next header information in the IPv6 header is 0, it indicates that a next header of the IPv6 header is the HBH options header. That the HBH options header is a next header of the IPv6 header is that the HBH options header immediately follows the IPv6 header. Specifically, the HBH header is encapsulated between the IPv6 header and the payload, and is adjacent to the IPv6 header. According to the explanation in section 4.3 of RFC 8200, the HBH options header is processed by a network device of each hop on a path for transmitting the SRv6 packet. Further, the HBH options header includes option information, and the option information is processed by the network device of each hop on the path for transmitting the SRv6 packet. The IPv6 header further includes version information, traffic class information, flow label information, payload length information, hop limit information, source address information, and destination address information. A length of the flow label information is 20 bits. In a possible implementation, the flow label information carries the first group information. Specifically, a part of the length (for example, 16 bits) of the flow label information is used as the first group information. A remaining length (4 bits) of the flow label information maintains an original flow label function. 
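A minimal sketch of carrying the group information in the flow label is shown below. It assumes the 16 high-order bits of the 20-bit flow label hold the group information, the 4 low-order bits keep their original role, and the lowest bit of the traffic class serves as the flag; the disclosure does not fix these exact bit positions, so they are illustrative.

```python
# Minimal sketch of embedding the 16-bit group information in the 20-bit
# IPv6 flow label and flagging its presence in the traffic class field.
GROUP_INFO_FLAG = 0x01  # assumed flag bit within the 8-bit traffic class

def embed_group_info(traffic_class: int, flow_label: int, group_info: int):
    assert 0 <= group_info < (1 << 16) and 0 <= flow_label < (1 << 20)
    new_flow_label = (group_info << 4) | (flow_label & 0xF)
    new_traffic_class = traffic_class | GROUP_INFO_FLAG
    return new_traffic_class, new_flow_label

def extract_group_info(traffic_class: int, flow_label: int):
    if traffic_class & GROUP_INFO_FLAG:
        return (flow_label >> 4) & 0xFFFF
    return None  # flag not set: the flow label carries no group information

tc, fl = embed_group_info(traffic_class=0x00, flow_label=0xA, group_info=0x8001)
print(hex(fl), extract_group_info(tc, fl))  # 0x8001a 32769
```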
In addition, one flag bit (with a length of 1 bit) may be further set in the traffic class information, and the flag bit indicates that the flow label information includes the first group information. For example, according to the explanation in section 4.3 of RFC 8200, the HBH options header includes next header information, header extension length (hdr ext len) information, and options. A first option is defined in the options. The first option carries the first group information. Specifically, the first option includes option type information, option data length (opt data len) information, and option data, where the option data carries the first group information. In an implementation in which the HBH options header carries the first group information, the second network device needs to enable a configuration of processing the option. Correspondingly, when there is another network device before the first network device and the second network device, the another network device may not enable the configuration of processing the option. In addition, the flag bit mentioned above may also indicate that the HBH options header includes the first group information. For example, according to the explanation in section 4.6 of RFC 8200, the destination options header includes next header information, header extension length (hdr ext len) information, and options. A second option is defined in the options. The second option carries the first group information. Specifically, the second option includes option type information, option data length (opt data len) information, and option data, where the option data carries the first group information. In addition, the flag bit mentioned above may also indicate that the destination options header includes the first group information. For example, according to the explanation in section 2 of RFC 8754, the SRH includes next header information, header extension length (hdr ext len) information, routing type information, segments left information, last entry information, flag, tag information, and segment list information. Optionally, the SRH may further include SRH TLV (type-length-value) information. In a possible implementation, the first group information may be carried in the tag. Further, a part of a length (for example, 16 bits) of the tag may be used as the first group information. A remaining length of the tag maintains an original tag function. In addition, one flag bit (with a length of 1 bit) may be set in the flag, and the flag bit indicates that the SRH includes the first group information. In another possible implementation, the first group information may be carried in the SRH TLV. In still another possible implementation, the first group information may be carried in the segment list information. S104: The first network device sends the first SRv6 packet to the second network device. S105: The second network device receives the first SRv6 packet sent by the first network device. The first network device generates the first SRv6 packet according to the implementations of S102and S103. The first SRv6 packet includes the first group information and the first service packet. The information about the first user equipment may be a source IP address included in the first service packet, or the information about the first user equipment may be a source MAC address included in the first service packet. In a possible implementation, an SRv6 tunnel is included between the first network device and the second network device. 
The first network device sends the first SRv6 packet to the second network device through the SRv6 tunnel. The second network device receives the first SRv6 packet. The SRv6 tunnel between the first network device and the second network device may include another network device. S106: The second network device determines whether the second network device includes the second user group corresponding to the information about the second user equipment, where the second user group is a user group to which the second user equipment belongs. S107: The second network device determines, based on the first group information and a determining result of whether the second network device includes the second user group corresponding to the information about the second user equipment, a forwarding policy for forwarding the first service packet to the second user equipment. After receiving the first SRv6 packet, the second network device decapsulates the first SRv6 packet to obtain the first group information and the first service packet. A destination of the first service packet is the second user equipment. As shown inFIG.1, the second user equipment communicates with the second network device. The first service packet includes the information about the second user equipment. The information about the second user equipment indicates the second user equipment. In a possible implementation, the information about the second user equipment is address information. Specifically, the information about the second user equipment includes a MAC address or an IP address. The second user equipment is a receiving end device of the first service packet. Therefore, the MAC address included in the information about the second user equipment is a destination MAC address of the first service packet, and the IP address included in the information about the second user equipment is a destination IP address of the first service packet. The second network device determines whether the second network device includes the second user group corresponding to the information about the second user equipment, where the second user group is a user group to which the second user equipment belongs. During specific implementation, as shown in Table 2, the second network device may store at least one entry, and each of the at least one entry includes a correspondence between information about user equipment and a user group. The information about the user equipment is information about the user equipment that receives a service packet, for example, a destination MAC address or a destination IP address. For ease of description, information about user equipment in Table 2 is referred to as information about destination user equipment, user equipment in Table 2 is referred to as destination user equipment, and a user group in Table 2 is referred to as a destination user group. As shown in Table 2, the information about the second user equipment corresponds to the second user group, indicating that the second user equipment belongs to the second user group; and information about fourth user equipment corresponds to a fourth user group, indicating that the fourth user equipment belongs to the fourth user group. It is to be noted that a representation manner of Table 2 is to clearly show a user group to which the destination user equipment belongs. During implementation, an entry stored in the second network device may not include the first column of information (destination user equipment) in Table 2. 
TABLE 2
Destination user equipment    Information about the destination user equipment    Destination user group
Second user equipment         Information about the second user equipment         Second user group
Fourth user equipment         Information about the fourth user equipment         Fourth user group
User equipment 2              Information about the user equipment 2               Second user group
User equipment 3              Information about the user equipment 3               Fourth user group
. . .                         . . .                                                . . .

After obtaining the information about the second user equipment in the first service packet, the second network device queries, based on the information about the second user equipment, the at least one entry (as shown in Table 2) stored in the second network device. The second network device determines, based on a correspondence between the information about the second user equipment and the second user group, that a user group corresponding to the information about the second user equipment is the second user group. Therefore, the second network device may determine that the second user equipment belongs to the second user group. In a possible implementation, the destination user group in Table 2 may be represented in a form of a group identifier. For example, the second user group may be represented by a group identifier Group_ID_2, and the fourth user group may be represented by a group identifier Group_ID_4. In a possible implementation, the group identifier may be represented by 16-bit data. Correspondingly, when storing the entry shown in Table 2, the second network device may store the group identifier as the destination user group. Therefore, that the group identifier indicates the user group may alternatively be understood as that the group identifier indicates a user group to which the user equipment belongs. According to the foregoing descriptions, the first service packet includes the first group information. The first group information indicates the interworking policy that is determined by the first network device based on the first user group and that is for transmitting the first service packet between the first user equipment and the second user equipment. In other words, by parsing the first service packet, the second network device can learn of the interworking policy that is determined by the first network device based on the first user group and that is for transmitting the first service packet between the first user equipment and the second user equipment. Correspondingly, based on the determining result of whether the second network device includes the second user group corresponding to the information about the second user equipment, the second network device determines, based on the second user group, an interworking policy that is for transmitting the first service packet between the first user equipment and the second user equipment. Then, the second network device determines, according to the interworking policy determined by the first network device and the interworking policy determined by the second network device, the forwarding policy for forwarding the first service packet to the second user equipment. In the foregoing implementation, in a process of determining the forwarding policy for forwarding the first service packet to the second user equipment, the second network device considers both the interworking policy determined by the first network device and the interworking policy determined by the second network device.
The interworking policy determined by the first network device indicates that the first network device expects the second network device to forward the first service packet according to the interworking policy determined by the first network device. The interworking policy determined by the second network device indicates an interworking policy determined by the second network device according to a local policy and based on a status of matching between a destination address in the first service packet and a user group. In a possible implementation, the second network device stores at least one entry, and the at least one entry indicates a correspondence between a “source end interworking policy”, a “destination end interworking policy”, and a “forwarding policy”, as shown in Table 3.

TABLE 3
Source end interworking policy        Destination end interworking policy    Forwarding policy
Can communicate with each other       Can communicate with each other        Forwarding
Can communicate with each other       Cannot communicate with each other     Random dropping
Cannot communicate with each other    Can communicate with each other        Rate-limited forwarding
Cannot communicate with each other    Cannot communicate with each other     Dropping

As shown in Table 3, the “source end interworking policy” indicates the interworking policy that is determined by the first network device based on the first user group and that is for transmitting the first service packet between the first user equipment and the second user equipment. For a specific implementation, refer to the foregoing implementation. The interworking policy determined by the first network device can reflect whether the first network device expects the first service packet to be forwarded by the second network device. For example, in an implementation in which the first group information includes the first group identifier, when the value of the first group identifier indicates the first user group, and the first network device expects the first service packet to be forwarded by the second network device, the source end interworking policy is “can communicate with each other”. Correspondingly, when the value of the first group identifier indicates an invalid value, and the first network device does not expect the first service packet to be forwarded by the second network device, the source end interworking policy is “cannot communicate with each other”. For another example, in an implementation in which the first group information includes the first group identifier and the first group policy identifier, when a value of an identifier included in the first group policy identifier is 1, and the first network device expects the first service packet to be forwarded by the second network device, the source end interworking policy is “can communicate with each other”. Correspondingly, when a value of an identifier included in the first group policy identifier is 0, and the first network device does not expect the first service packet to be forwarded by the second network device, the source end interworking policy is “cannot communicate with each other”. As shown in Table 3, the “destination end interworking policy” indicates the interworking policy that is determined by the second network device based on the second user group and that is for transmitting the first service packet between the first user equipment and the second user equipment.
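The correspondence in Table 3 can be sketched as a simple lookup, shown below; the Boolean encoding of “can communicate with each other” and the function name are illustrative.

```python
# Minimal sketch of the Table 3 lookup: the forwarding policy follows from the
# source end interworking policy (derived from the first group information)
# and the destination end interworking policy (determined locally by the
# second network device). The string values mirror Table 3.
FORWARDING_TABLE = {
    (True, True): "forwarding",
    (True, False): "random dropping",
    (False, True): "rate-limited forwarding",
    (False, False): "dropping",
}

def resolve_forwarding_policy(source_can_communicate: bool,
                              destination_can_communicate: bool) -> str:
    return FORWARDING_TABLE[(source_can_communicate, destination_can_communicate)]

# Example: the first group identifier indicates a valid user group (source end
# "can communicate"), but the destination address matches no user group.
print(resolve_forwarding_policy(True, False))  # random dropping
```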
The interworking policy determined by the second network device can reflect whether the second network device expects the first service packet to be forwarded by the second network device. In a possible implementation, the second network device determines the interworking policy according to a local policy and based on the second user group. In another possible implementation, the second network device determines the interworking policy according to a local policy and based on the first user group and the second user group. It can be understood that a specific forwarding policy shown in Table 3 is an example. The following separately provides descriptions based on different implementations of the first group information. For example, the first group information includes the first group identifier. After receiving the first SRv6 packet, the second network device obtains the first group information in the first SRv6 packet. The second network device determines the source end interworking policy based on the first group identifier included in the first group information. For example, the value of the first group identifier indicates the first user group, and the second network device can determine that the source end interworking policy is “can communicate with each other”. For another example, the value of the first group identifier indicates an invalid value, and the second network device can determine that the source end interworking policy is “cannot communicate with each other”. The second network device determines, based on the first service packet, whether the second network device includes the second user group corresponding to the information about the second user equipment. If the second network device determines that the second network device includes the second user group corresponding to the information about the second user equipment, the second network device can determine that the destination end interworking policy is “can communicate with each other”. If the second network device determines that the second network device does not include the second user group corresponding to the information about the second user equipment, the second network device can determine that the destination end interworking policy is “cannot communicate with each other”. After the second network device determines the source end interworking policy and the destination end interworking policy, the second network device can determine, according to the implementation of Table 3, the forwarding policy for forwarding the first service packet to the second user equipment. For example, if the source end interworking policy is “can communicate with each other”, and the destination end interworking policy is “can communicate with each other”, the forwarding policy determined by the second network device is “forwarding”. In other words, the second network device forwards the first service packet to the second user equipment. For another example, if the source end interworking policy is “can communicate with each other”, and the destination end interworking policy is “cannot communicate with each other”, the forwarding policy determined by the second network device is “random dropping”. In other words, the second network device forwards the first service packet to the second user equipment in a random drop manner. The “in a random drop manner” is that the second network device determines, based on a preset random parameter, whether the first service packet is sent to the second user equipment. 
Therefore, there is a probability that the first service packet is sent to the second user equipment. Similarly, there is a probability that the first service packet is dropped by the second network device. For example, the first group information includes the first group identifier and the first group policy identifier, and the first group policy identifier includes the first identifier. The first identifier indicates that the first network device includes the first user group and the second network device does not include the second user group. The second network device determines that the second network device does not include the second user group corresponding to the information about the second user equipment. Therefore, the second network device may learn that the first identifier meets a result determined by the second network device. If the value of the first identifier is 1, the second network device determines, based on the value of the first identifier, that the source end interworking policy is “can communicate with each other”. If the value of the first identifier is 0, the second network device determines, based on the value of the first identifier, that the source end interworking policy is “cannot communicate with each other”. Correspondingly, the second network device may determine the destination end interworking policy according to a local policy and based on a matching status of the second user group. For example, the second network device determines that the information about the second user equipment can match the second user group, and the second network device determines that the destination end interworking policy is “can communicate with each other”. For another example, the second network device determines that the information about the second user equipment does not match a user group, and the second network device determines that the destination end interworking policy is “cannot communicate with each other”. The second network device may determine the destination end interworking policy according to a local policy and based on a matching status of the second user group and a matching status of the first user group. For example, the first user group can be matched and the second user group cannot be matched, and the second network device determines that the destination end interworking policy is “cannot communicate with each other”. For another example, the first user group can be matched and the second user group can be matched, and the second network device determines that the destination end interworking policy is “can communicate with each other”. After the second network device determines the source end interworking policy and the destination end interworking policy, the second network device can determine, according to the implementation of Table 3, the forwarding policy for forwarding the first service packet to the second user equipment. For example, if the source end interworking policy is “cannot communicate with each other”, and the destination end interworking policy is “cannot communicate with each other”, the forwarding policy determined by the second network device is “dropping”. In other words, the second network device drops the first service packet. For another example, if the source end interworking policy is “cannot communicate with each other”, and the destination end interworking policy is “can communicate with each other”, the forwarding policy determined by the second network device is “rate-limited forwarding”. 
In other words, the second network device forwards the first service packet to the second user equipment in a rate-limited forwarding manner. The “in a rate-limited forwarding manner” is that the second network device forwards the first service packet to the second user equipment, and limits a forwarding rate to be not greater than a specified rate. In the foregoing implementation, an indication identifier for the case in which the first network device includes the first user group and the second network device includes the second user group is not described. The reason is that in this case, the first network device and the second network device may determine a final forwarding policy based on a group identifier. It can be understood that, in a specific implementation scenario, the foregoing identifier may alternatively be configured, to indicate that the first network device includes the first user group and that the second network device includes the second user group. For determining a specific interworking policy and forwarding policy, refer to the foregoing implementation. Details are not described herein. For example, the first group information includes the first group identifier and the first group policy identifier, and the first group policy identifier includes the second identifier and the third identifier. The second identifier indicates that the first network device does not include the first user group and the second network device includes the second user group. The third identifier indicates that the first network device does not include the first user group and the second network device does not include the second user group. The second network device determines, based on the first service packet, whether the second network device includes the second user group corresponding to the information about the second user equipment. If the second network device determines that the second network device includes the second user group corresponding to the information about the second user equipment, the second network device determines the source end interworking policy based on the second identifier. If the second network device determines that the second network device does not include the second user group corresponding to the information about the second user equipment, the second network device determines the source end interworking policy based on the third identifier. Further, if the value of the second identifier or the third identifier is 1, the second network device determines, based on the value of the second identifier or the third identifier, that the source end interworking policy is “can communicate with each other”. If the value of the second identifier or the third identifier is 0, the second network device determines, based on the value of the second identifier or the third identifier, that the source end interworking policy is “cannot communicate with each other”. Correspondingly, the second network device may determine the destination end interworking policy according to a local policy and based on a matching status of the second user group. For a specific implementation, refer to the foregoing implementation. Details are not described herein. After the second network device determines the source end interworking policy and the destination end interworking policy, the second network device can determine, according to the implementation of Table 3, the forwarding policy for forwarding the first service packet to the second user equipment.
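As a concrete illustration of the case in which the first group information includes only the first group identifier, the following sketch derives the source end and destination end interworking policies and then consults the Table 3 correspondence, including the random drop handling. The encoding of the invalid value, the user-group lookup, and the drop probability are assumptions introduced for the sketch, not details of the embodiment.

```python
import random

CAN = "can communicate with each other"
CANNOT = "cannot communicate with each other"
INVALID_GROUP_ID = 0  # assumed encoding of the "invalid" value of the first group identifier

# Same Table 3 correspondence as in the earlier sketch.
FORWARDING_POLICY_TABLE = {
    (CAN, CAN): "forwarding",
    (CAN, CANNOT): "random dropping",
    (CANNOT, CAN): "rate-limited forwarding",
    (CANNOT, CANNOT): "dropping",
}

def decide_forwarding(first_group_id: int,
                      local_user_groups: dict,
                      destination_address: str,
                      drop_probability: float = 0.5) -> str:
    """Sketch of the second network device's decision when the first group
    information includes only the first group identifier."""
    # Source end interworking policy from the first group identifier.
    source_policy = CANNOT if first_group_id == INVALID_GROUP_ID else CAN

    # Destination end interworking policy from whether a second user group
    # corresponding to the information about the second user equipment exists locally.
    destination_policy = CAN if destination_address in local_user_groups else CANNOT

    policy = FORWARDING_POLICY_TABLE[(source_policy, destination_policy)]
    if policy == "forwarding":
        return "forward"
    if policy == "dropping":
        return "drop"
    if policy == "random dropping":
        # "Random drop manner": a preset random parameter decides whether the
        # first service packet is actually sent to the second user equipment.
        return "forward" if random.random() >= drop_probability else "drop"
    return "forward with rate limit"

# Illustrative use: 198.51.100.7 belongs to a locally known user group.
print(decide_forwarding(100, {"198.51.100.7": 200}, "198.51.100.7"))  # forward
print(decide_forwarding(0, {"198.51.100.7": 200}, "203.0.113.9"))     # drop
```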
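The embodiment only requires that rate-limited forwarding keep the forwarding rate not greater than a specified rate; a token bucket is one common way to realize such a limit. The sketch below is an assumed realization, not part of the embodiment; a leaky bucket or a hardware traffic shaper could equally satisfy the requirement.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: a packet is forwarded only when a token is
    available, so the average forwarding rate stays at or below `rate`."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate              # tokens (packets) replenished per second
        self.capacity = burst         # maximum number of stored tokens
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

def forward_rate_limited(packet: bytes, bucket: TokenBucket, send) -> bool:
    """Send the packet only when the configured rate limit permits it."""
    if bucket.allow():
        send(packet)
        return True
    return False
```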
For example, the first group information includes the first group identifier and the first group policy identifier, the first group policy identifier includes the fourth identifier, and the fourth identifier indicates that the second network device does not include the second user group. For an implementation of determining the forwarding policy by the second network device, refer to the foregoing implementation related to the first identifier. Details are not described herein. For example, the first group information includes the first group identifier and the first group policy identifier, the first group policy identifier includes the fifth identifier and the sixth identifier, the fifth identifier indicates that the second network device includes the second user group, and the sixth identifier indicates that the second network device does not include the second user group. For an implementation of determining the forwarding policy by the second network device, refer to the foregoing implementations related to the second identifier and the third identifier. Details are not described herein. In the foregoing implementation, the source end interworking policy is a first group policy, and may be identified by the first group policy identifier. Correspondingly, the destination end interworking policy is a second group policy, and may be identified by a second group policy identifier. A specific group policy included in the second group policy may be a subpolicy. For example, the second group policy includes a first subpolicy. The first subpolicy indicates an interworking policy that is determined by the second user group when the first network device includes the first user group and the second network device does not include the second user group. For example, the second group policy includes a second subpolicy. The second subpolicy indicates an interworking policy that is determined by the second user group when the first network device does not include the first user group and the second network device includes the second user group. According to the foregoing implementation, an SRv6 packet transmitted between the first network device and the second network device carries group information, so that the second network device serving as a receiving end device may control a forwarding policy of a user group according to an interworking policy determined by a transmitting end device and an interworking policy determined by the receiving end device. FIG.5is a schematic diagram of a structure of a first network device1000according to an embodiment of this disclosure. The first network device1000shown inFIG.5may perform the corresponding steps performed by the first network device in the method in the foregoing embodiment. The first network device1000is deployed in a communication network, and the communication network further includes a second network device. As shown inFIG.5, the first network device1000includes a receiving unit1002, a processing unit1004, and a sending unit1006. The receiving unit1002is configured to receive a first service packet sent by first user equipment, where the first service packet includes information about the first user equipment, and a destination of the first service packet is second user equipment. The processing unit1004is configured to determine whether the first network device includes a first user group corresponding to the information about the first user equipment, where the first user group is a user group to which the first user equipment belongs. 
The processing unit1004is further configured to: determine, based on a determining result of whether the first network device includes the first user group corresponding to the information about the first user equipment, a value of first group information, and generate a first SRv6 packet, where the first SRv6 packet includes the first group information and the first service packet, and the first group information indicates an interworking policy that is determined by the first network device based on the first user group and that is for transmitting the first service packet between the first user equipment and the second user equipment. The sending unit1006is configured to send the first SRv6 packet to the second network device, where the second network device communicates with the second user equipment. Optionally, the first group information includes a first group identifier, the first group identifier indicates a user group to which the first user equipment belongs, and that the processing unit1004determines, based on the determining result of whether the first network device includes the first user group corresponding to the information about the first user equipment, the value of the first group information includes: in response to that the processing unit1004determines that the first network device includes the first user group corresponding to the information about the first user equipment, the processing unit1004is configured to determine that a value of the first group identifier indicates the first user group. Optionally, the first group information includes a first group identifier, the first group identifier indicates a user group to which the first user equipment belongs, and that the processing unit1004determines, based on the determining result of whether the first network device includes the first user group corresponding to the information about the first user equipment, the value of the first group information includes: in response to that the processing unit1004determines that the first network device does not include the first user group corresponding to the information about the first user equipment, the processing unit1004is configured to determine that a value of the first group identifier indicates “invalid”. Optionally, the first group information includes a first group identifier and a first group policy identifier, the first group identifier indicates a user group to which the first user equipment belongs, and the first group policy identifier indicates a specific interworking policy. Optionally, the first group policy identifier includes a first identifier, the first identifier indicates that the first network device includes the first user group and the second network device does not include a second user group, the second user group is a user group to which the second user equipment belongs, and that the processing unit1004determines, based on the determining result of whether the first network device includes the first user group corresponding to the information about the first user equipment, the value of the first group information includes: in response to that the processing unit1004determines that the first network device includes the first user group corresponding to the information about the first user equipment, the processing unit1004is configured to: determine that a value of the first group identifier indicates the first user group, and determine a value of the first identifier. 
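By way of illustration, the determination of the value of the first group information by the processing unit1004can be sketched as follows. The lookup from source address to user group, the dataclass, and the encoding of the invalid value as 0 are assumptions made for the example, not part of the embodiment.

```python
from dataclasses import dataclass

INVALID_GROUP_ID = 0  # assumed encoding of the "invalid" value

@dataclass
class FirstGroupInformation:
    group_identifier: int             # first group identifier
    group_policy_identifier: int = 0  # optional first group policy identifier (bit flags)

def build_first_group_information(source_address: str,
                                  local_user_groups: dict) -> FirstGroupInformation:
    """If the first network device has a first user group corresponding to the
    information about the first user equipment, the first group identifier
    indicates that group; otherwise it indicates the invalid value."""
    group_id = local_user_groups.get(source_address, INVALID_GROUP_ID)
    return FirstGroupInformation(group_identifier=group_id)

# Illustrative use: 192.0.2.5 belongs to user group 100, 192.0.2.99 is unknown.
groups = {"192.0.2.5": 100}
print(build_first_group_information("192.0.2.5", groups))    # group_identifier=100
print(build_first_group_information("192.0.2.99", groups))   # group_identifier=0 (invalid)
```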
Optionally, the first group policy identifier includes a second identifier and a third identifier, the second identifier indicates that the first network device does not include the first user group and the second network device includes a second user group, the third identifier indicates that the first network device does not include the first user group and the second network device does not include the second user group, the second user group is a user group to which the second user equipment belongs, and that the processing unit1004determines, based on the determining result of whether the first network device includes the first user group corresponding to the information about the first user equipment, the value of the first group information includes: in response to that the processing unit1004determines that the first network device includes the first user group corresponding to the information about the first user equipment, the processing unit1004is configured to: determine that a value of the first group identifier indicates “invalid”, and determine a value of the second identifier and a value of the third identifier. Optionally, the first group policy identifier includes a fourth identifier, the fourth identifier indicates that the second network device does not include a second user group, the second user group is a user group to which the second user equipment belongs, and that the processing unit1004determines, based on the determining result of whether the first network device includes the first user group corresponding to the information about the first user equipment, the value of the first group information includes: in response to that the processing unit1004determines that the first network device includes the first user group corresponding to the information about the first user equipment, the processing unit is configured to: determine that a value of the first group identifier indicates the first user group, and determine a value of the fourth identifier. Optionally, the first group policy identifier includes a fifth identifier and a sixth identifier, the fifth identifier indicates that the second network device includes a second user group, the sixth identifier indicates that the second network device does not include the second user group, the second user group is a user group to which the second user equipment belongs, and that the processing unit1004determines, based on the determining result of whether the first network device includes the first user group corresponding to the information about the first user equipment, the value of the first group information includes: in response to that the processing unit1004determines that the first network device includes the first user group corresponding to the information about the first user equipment, the processing unit1004is configured to: determine that a value of the first group identifier indicates “invalid”, and determine a value of the fifth identifier and a value of the sixth identifier. Optionally, the first group identifier is carried in any one of the following headers included in the first SRv6 packet: an IPv6 header, a hop-by-hop options header, a destination options header, and a segment routing header. Optionally, the first group policy identifier is carried in any one of the following headers included in the first SRv6 packet: an IPv6 header, a hop-by-hop options header, a destination options header, and a segment routing header. 
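The embodiment does not fix a byte layout for carrying the first group identifier and the first group policy identifier in the listed headers. The following sketch packs them into a hypothetical option TLV, such as could be placed in a hop-by-hop options header; the option type, field widths, and bit-flag meaning are arbitrary choices made only for illustration.

```python
import struct
from typing import Tuple

# Hypothetical option layout (not defined by the embodiment):
#   1 byte  option type   (arbitrary value chosen here)
#   1 byte  option length (length of the data that follows)
#   4 bytes first group identifier
#   1 byte  first group policy identifier (bit flags, e.g. bit 0 = first identifier)
HYPOTHETICAL_OPTION_TYPE = 0x1E

def encode_group_option(group_identifier: int, group_policy_identifier: int = 0) -> bytes:
    data = struct.pack("!IB", group_identifier, group_policy_identifier)
    return struct.pack("!BB", HYPOTHETICAL_OPTION_TYPE, len(data)) + data

def decode_group_option(option: bytes) -> Tuple[int, int]:
    _opt_type, opt_len = struct.unpack("!BB", option[:2])
    return struct.unpack("!IB", option[2:2 + opt_len])

encoded = encode_group_option(100, 0b0001)
assert decode_group_option(encoded) == (100, 0b0001)
```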
Optionally, the first SRv6 packet is transmitted through an SRv6 tunnel between the first network device and the second network device. Optionally, the information about the first user equipment is a source IP address included in the first service packet, or the information about the first user equipment is a source MAC address included in the first service packet. The first network device1000shown inFIG.5may perform the corresponding steps performed by the first network device in the method in the foregoing embodiment. An SRv6 packet sent by the first network device to the second network device carries group information, so that the first network device serving as a transmitting end device may participate in control of determining a forwarding policy for a user group. FIG.6is a schematic diagram of a hardware structure of a first network device1100according to an embodiment of this disclosure. The first network device1100shown inFIG.6may perform the corresponding steps performed by the first network device in the method in the foregoing embodiment. As shown inFIG.6, the first network device1100includes a processor1101, a memory1102, an interface1103, and a bus1104. The interface1103may be implemented in a wireless or wired manner. The processor1101, the memory1102, and the interface1103are connected through the bus1104. The interface1103may include a transmitter and a receiver, is configured to receive and send information between the first network device and the second network device in the foregoing embodiment, and is configured to receive and send information between the first network device and the first user equipment in the foregoing embodiment. For example, the interface1103is configured to support receiving a first service packet sent by the first user equipment. In addition, the interface1103is configured to support sending a first SRv6 packet to the second network device. For example, the interface1103is configured to support the processes S101and S104inFIG.2. The processor1101is configured to perform the processing performed by the first network device in the foregoing embodiment. For example, the processor1101is configured to perform an action of determining a user group to which the first user equipment belongs, an action of determining an interworking policy based on a determining result, and an action of generating the first SRv6 packet, and/or another process of the technology described in this specification. For example, the processor1101is configured to support the processes S102and S103inFIG.2. The memory1102is configured to store a program, code, or instructions, for example, store an operating system11021and an application program11022. When executing the program, the code, or the instructions, the processor or a hardware device can complete the processing process related to the first network device in the method embodiment. Optionally, the memory1102may include a read-only memory (ROM) and a random access memory (RAM). The ROM includes a basic input/output system (BIOS) or an embedded system, and the RAM includes an application program and an operating system. When the first network device1100needs to run, a bootloader in the BIOS or the embedded system that is built into the ROM is used to boot the system, and to bring the first network device1100into a normal running state.
After entering the normal running state, the first network device1100runs the application program and the operating system in the RAM, to complete the processing process related to the first network device in the method embodiment. It may be understood thatFIG.6shows merely a simplified design of the first network device1100. The first network device may include any quantity of interfaces, processors, or memories during actual application. FIG.7is a schematic diagram of a hardware structure of another first network device1200according to an embodiment of this disclosure. The first network device1200shown inFIG.7may perform the corresponding steps performed by the first network device in the method in the foregoing embodiment. As shown inFIG.7, the first network device1200includes a main control board1210, an interface board1230, a switching board1220, and an interface board1240. The main control board1210, the interface boards1230and1240, and the switching board1220are connected to a system backboard through a system bus for communication. The main control board1210is configured to complete functions such as system management, device maintenance, and protocol processing. The switching board1220is configured to exchange data between interface boards (where the interface board is also referred to as a line card or a service board). The interface boards1230and1240are configured to: provide various service interfaces (such as a POS interface, a GE interface, and an ATM interface), and forward a data packet. The interface board1230may include a central processing unit1231, a forwarding entry memory1234, a physical interface card1233, and a network processor1232. The central processing unit1231is configured to: control and manage the interface board, and communicate with a central processing unit on the main control board. The forwarding entry memory1234is configured to store a forwarding entry. The physical interface card1233is configured to receive and send traffic. The network processor1232is configured to control, based on the forwarding entry, the physical interface card1233to receive and send the traffic. Specifically, the physical interface card1233is configured to receive a first service packet sent by first user equipment. The physical interface card1233is further configured to send a first SRv6 packet to a second network device. After receiving the first service packet, the physical interface card1233sends the first service packet to the central processing unit1231. The central processing unit1231determines, based on information in a packet header of the first service packet, that the first service packet needs to be processed by the central processing unit1231. Correspondingly, the central processing unit1231processes the first service packet. Optionally, after receiving the first service packet, the physical interface card1233sends the first service packet to the central processing unit1231. The central processing unit1231determines, based on information in a packet header of the first service packet, that the first service packet needs to be processed by a central processing unit1211. The central processing unit1231sends the first service packet to the central processing unit1211, and the central processing unit1211processes the first service packet.
The central processing unit1231is further configured to control the network processor1232to obtain the forwarding entry in the forwarding entry memory1234, and the central processing unit1231is further configured to control the network processor1232to send the first SRv6 packet to the second network device via the physical interface card1233. It can be understood that actions on the interface board1240are consistent with actions on the interface board1230in this embodiment of the present invention. For brevity, details are not described again. It can be understood that the first network device1200in this embodiment may correspond to the functions and/or the various implemented steps in the foregoing method embodiment. Details are not described herein again. In addition, it is to be noted that there may be one or more main control boards. When there are a plurality of main control boards, the main control boards may include an active main control board and a standby main control board. There may be one or more interface boards. A first network device having a stronger data processing capability provides more interface boards. There may also be one or more physical interface cards on the interface board. There may be no switching board or one or more switching boards. When there are a plurality of switching boards, load balancing and redundancy backup may be implemented together. In a centralized forwarding architecture, the first network device may not include a switching board, and the interface board undertakes a service data processing function of an entire system. In a distributed forwarding architecture, the first network device may have at least one switching board, and exchange data between a plurality of interface boards through the switching board, to provide a large-capacity data exchange and processing capability. Therefore, a data access and processing capability of the first network device in the distributed architecture is better than that of the device in the centralized architecture. A specific architecture to be used depends on a specific networking deployment scenario, and is not limited herein. In addition, an embodiment of this disclosure provides a computer storage medium, configured to store computer software instructions used by the foregoing first network device. The computer storage medium includes a program designed for performing the foregoing method embodiment. FIG.8is a schematic diagram of a structure of a second network device2000according to an embodiment of this disclosure. The second network device2000shown inFIG.8may perform the corresponding steps performed by the second network device in the method in the foregoing embodiment. The second network device is deployed in a communication network, and the communication network further includes a first network device. As shown inFIG.8, the second network device2000includes a receiving unit2002and a processing unit2004. 
The receiving unit2002is configured to receive a first SRv6 packet sent by the first network device, where the first SRv6 packet includes first group information and a first service packet, the first group information indicates an interworking policy that is determined by the first network device based on a first user group and that is for transmitting the first service packet between first user equipment and second user equipment, the first service packet is from the first user equipment, a destination of the first service packet is the second user equipment, the first user group is a user group to which the first user equipment belongs, and the first service packet includes information about the second user equipment. The processing unit2004is configured to determine whether the second network device includes a second user group corresponding to the information about the second user equipment, where the second user group is a user group to which the second user equipment belongs. The processing unit2004is further configured to determine, based on the first group information and a determining result of whether the second network device includes the second user group corresponding to the information about the second user equipment, a forwarding policy for forwarding the first service packet to the second user equipment. Optionally, the second network device further includes a sending unit2006, the first group information includes a first group identifier, the first group identifier indicates a user group to which the first user equipment belongs, and that the processing unit2004determines, based on the first group information and the determining result of whether the second network device includes the second user group corresponding to the information about the second user equipment, the forwarding policy for forwarding the first service packet to the second user equipment includes: in response to that the processing unit2004determines that the second network device includes the second user group corresponding to the information about the second user equipment and that a value of the first group identifier indicates the first user group, the sending unit2006is configured to send the first service packet to the second user equipment. Optionally, the second network device further includes a sending unit2006, the first group information includes a first group identifier, the first group identifier indicates a user group to which the first user equipment belongs, and that the processing unit2004determines, based on the first group information and the determining result of whether the second network device includes the second user group corresponding to the information about the second user equipment, the forwarding policy for forwarding the first service packet to the second user equipment includes: in response to that the processing unit2004determines that the second network device does not include the second user group corresponding to the information about the second user equipment and that a value of the first group identifier indicates the first user group, the sending unit2006is configured to send the first service packet to the second user equipment in a random drop manner or in a rate-limited forwarding manner. 
Optionally, the second network device further includes a sending unit2006, the first group information includes a first group identifier, the first group identifier indicates a user group to which the first user equipment belongs, and that the processing unit2004determines, based on the first group information and the determining result of whether the second network device includes the second user group corresponding to the information about the second user equipment, the forwarding policy for forwarding the first service packet to the second user equipment includes: in response to that the processing unit2004determines that the second network device includes the second user group corresponding to the information about the second user equipment and that a value of the first group identifier indicates “invalid”, the sending unit2006is configured to send the first service packet to the second user equipment in a random drop manner or in a rate-limited forwarding manner. Optionally, the first group information includes a first group identifier and a first group policy identifier, the first group identifier indicates a user group to which the first user equipment belongs, and the first group policy identifier indicates a specific interworking policy. Optionally, that the processing unit2004determines, based on the first group information and the determining result of whether the second network device includes the second user group corresponding to the information about the second user equipment, the forwarding policy for forwarding the first service packet to the second user equipment includes: the processing unit2004is configured to determine a second group policy based on the determining result, where the second group policy indicates an interworking policy that is determined by the second network device based on the second user group and that is for transmitting the first service packet between the first user equipment and the second user equipment; and the processing unit2004is further configured to determine, according to the second group policy and the interworking policy that is indicated by the first group policy identifier, the forwarding policy for forwarding the first service packet to the second user equipment. Optionally, that the processing unit2004determines, according to the second group policy and the interworking policy that is indicated by the first group policy identifier, the forwarding policy for forwarding the first service packet to the second user equipment includes: the processing unit2004is configured to determine that a value of a first identifier in the first group policy identifier is valid, where the first identifier indicates that the first network device includes the first user group and the second network device does not include the second user group; the processing unit2004is further configured to determine a first subpolicy in the second group policy based on the first identifier, where the first subpolicy indicates an interworking policy that is determined by the second user group when the first network device includes the first user group and the second network device does not include the second user group; and the processing unit2004is further configured to determine, according to the first subpolicy and an interworking policy that is indicated by a value of the first identifier, the forwarding policy for forwarding the first service packet to the second user equipment. 
Optionally, that the processing unit2004determines, according to the second group policy and the interworking policy that is indicated by the first group policy identifier, the forwarding policy for forwarding the first service packet to the second user equipment includes: the processing unit2004is configured to determine that values of a second identifier and a third identifier in the first group policy identifier are valid, where the second identifier indicates that the first network device does not include the first user group and the second network device includes the second user group, and the third identifier indicates that the first network device does not include the first user group and the second network device does not include the second user group; the processing unit2004is further configured to determine a second subpolicy in the second group policy based on the second identifier and the third identifier, where the second subpolicy indicates an interworking policy that is determined by the second user group when the first network device does not include the first user group and the second network device includes the second user group; and the processing unit2004is further configured to determine, according to the second subpolicy and an interworking policy that is indicated by a value of the second identifier, the forwarding policy for forwarding the first service packet to the second user equipment. Optionally, that the processing unit2004determines, based on the first group information and the determining result of whether the second network device includes the second user group corresponding to the information about the second user equipment, the forwarding policy for forwarding the first service packet to the second user equipment includes: the processing unit2004is configured to determine a second group policy based on the determining result, where the second group policy indicates an interworking policy that is determined by the second network device based on the second user group and that is for transmitting the first service packet between the first user equipment and the second user equipment; and the processing unit2004is further configured to determine, based on the first group identifier and according to the second group policy and the interworking policy that is indicated by the first group policy identifier, the forwarding policy for forwarding the first service packet to the second user equipment. Optionally, the forwarding policy is any one of the following forwarding policies: forwarding, dropping, forwarding in a random drop manner, and forwarding in a rate-limited forwarding manner. Optionally, the first group information is carried in any one of the following headers included in the first SRv6 packet: an IPv6 header, a hop-by-hop options header, a destination options header, and a segment routing header. Optionally, the first SRv6 packet is transmitted through an SRv6 tunnel between the first network device and the second network device. Optionally, the information about the second user equipment is a destination IP address included in the first service packet, or the information about the second user equipment is a destination MAC address included in the first service packet. The second network device2000shown inFIG.8may perform the corresponding steps performed by the second network device in the method in the foregoing embodiment. 
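By way of illustration of the subpolicy selection described above, the sketch below assumes that the second group policy is stored as a small mapping of subpolicies and that an identifier value of 1 is read as “can communicate with each other”; none of the names are taken from the embodiment.

```python
CAN = "can communicate with each other"
CANNOT = "cannot communicate with each other"

# Same Table 3 correspondence as in the earlier sketches.
FORWARDING_POLICY_TABLE = {
    (CAN, CAN): "forwarding",
    (CAN, CANNOT): "random dropping",
    (CANNOT, CAN): "rate-limited forwarding",
    (CANNOT, CANNOT): "dropping",
}

def decide_with_group_policy_identifier(first_identifier_valid: bool,
                                        first_identifier_value: int,
                                        second_identifier_value: int,
                                        second_group_policy: dict) -> str:
    """Sketch of selecting the first or second subpolicy of the second group
    policy and combining it with the interworking policy indicated by the
    first group policy identifier."""
    if first_identifier_valid:
        # First identifier: first device has the first user group, second device
        # has no second user group -> use the first subpolicy.
        source_policy = CAN if first_identifier_value == 1 else CANNOT
        destination_policy = second_group_policy["first_subpolicy"]
    else:
        # Second/third identifiers: first device has no first user group, second
        # device has the second user group -> use the second subpolicy.
        source_policy = CAN if second_identifier_value == 1 else CANNOT
        destination_policy = second_group_policy["second_subpolicy"]
    return FORWARDING_POLICY_TABLE[(source_policy, destination_policy)]

# Illustrative local configuration of the second network device.
second_group_policy = {"first_subpolicy": CANNOT, "second_subpolicy": CAN}
print(decide_with_group_policy_identifier(True, 1, 0, second_group_policy))   # random dropping
print(decide_with_group_policy_identifier(False, 0, 0, second_group_policy))  # rate-limited forwarding
```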
The second network device receives the first SRv6 packet sent by the first network device, and the second network device serving as a receiving end device may control a forwarding policy of a user group according to an interworking policy determined by a transmitting end device and an interworking policy determined by the receiving end device. FIG.9is a schematic diagram of a hardware structure of a second network device2100according to an embodiment of this disclosure. The second network device2100shown inFIG.9may perform the corresponding steps performed by the second network device in the method in the foregoing embodiment. As shown inFIG.9, the second network device2100includes a processor2101, a memory2102, an interface2103, and a bus2104. The interface2103may be implemented in a wireless or wired manner. The processor2101, the memory2102, and the interface2103are connected through the bus2104. The interface2103may include a transmitter and a receiver, and is configured to receive and send information or data between the second network device and the first network device in the foregoing embodiment. For example, the interface2103is configured to support receiving a first SRv6 packet sent by the first network device. For example, the interface2103is configured to support the process S105inFIG.2. The processor2101is configured to perform the processing performed by the second network device in the foregoing embodiment. For example, the processor2101is configured to: receive the first SRv6 packet sent by the first network device, determine a second user group, and determine, according to an interworking policy determined by the first network device and an interworking policy determined by the second network device, a forwarding policy for forwarding a first service packet, and/or perform another process of the technology described in this specification. For example, the processor2101is configured to support the processes S106and S107inFIG.2. The memory2102includes an operating system21021and an application program21022, and is configured to store a program, code, or instructions. When executing the program, the code, or the instructions, the processor or a hardware device can complete the processing process related to the second network device in the method embodiment. Optionally, the memory2102may include a read-only memory (ROM) and a random access memory (RAM). The ROM includes a basic input/output system (BIOS) or an embedded system, and the RAM includes an application program and an operating system. When the second network device2100needs to run, a bootloader in the BIOS or the embedded system that is built into the ROM is used to boot the system, and to bring the second network device2100into a normal running state. After entering the normal running state, the second network device2100runs the application program and the operating system in the RAM, to complete the processing process related to the second network device in the method embodiment. It may be understood thatFIG.9shows merely a simplified design of the second network device2100. The second network device may include any quantity of interfaces, processors, or memories during actual application. FIG.10is a schematic diagram of a hardware structure of another second network device2200according to an embodiment of this disclosure. The second network device2200shown inFIG.10may perform the corresponding steps performed by the second network device in the method in the foregoing embodiment.
As shown inFIG.10, the second network device2200includes a main control board2210, an interface board2230, a switching board2220, and an interface board2240. The main control board2210, the interface boards2230and2240, and the switching board2220are connected to a system backboard through a system bus for communication. The main control board2210is configured to complete functions such as system management, device maintenance, and protocol processing. The switching board2220is configured to exchange data between interface boards (where the interface board is also referred to as a line card or a service board). The interface boards2230and2240are configured to: provide various service interfaces (such as a POS interface, a GE interface, and an ATM interface), and forward a data packet. In a possible implementation, the second network device2200is a blade server. The interface board2230may include a central processing unit2231, a forwarding entry memory2234, a physical interface card2233, and a network processor2232. The central processing unit2231is configured to: control and manage the interface board, and communicate with a central processing unit2211on the main control board2210. The forwarding entry memory2234is configured to store a forwarding entry. The physical interface card2233is configured to receive and send traffic. The network processor2232is configured to control, based on the forwarding entry, the physical interface card2233to receive and send the traffic. Specifically, the physical interface card2233is configured to receive a first SRv6 packet sent by a first network device. The physical interface card2233is further configured to forward a first service packet. After receiving the first SRv6 packet, the physical interface card2233sends the first SRv6 packet to the central processing unit2231. The central processing unit2231determines, based on information in a packet header of the first SRv6 packet, that the first SRv6 packet needs to be processed by the central processing unit2231. Correspondingly, the central processing unit2231processes the first SRv6 packet. Optionally, after receiving the first SRv6 packet, the physical interface card2233sends the first SRv6 packet to the central processing unit2231. The central processing unit2231determines, based on information in a packet header of the first SRv6 packet, that the first SRv6 packet needs to be processed by the central processing unit2211. The central processing unit2231sends the first SRv6 packet to the central processing unit2211, and the central processing unit2211processes the first SRv6 packet. The central processing unit2231is further configured to control the network processor2232to obtain the forwarding entry in the forwarding entry memory2234, and the central processing unit2231is further configured to control the network processor2232to receive and send the traffic via the physical interface card2233. It can be understood that actions on the interface board2240are consistent with actions on the interface board2230in this embodiment of the present invention. For brevity, details are not described again. It can be understood that the second network device2200in this embodiment may correspond to the functions and/or the various implemented steps in the foregoing method embodiment. Details are not described herein again. In addition, it is to be noted that there may be one or more main control boards. 
When there are a plurality of main control boards, the main control boards may include an active main control board and a standby main control board. There may be one or more interface boards. A second network device having a stronger data processing capability provides more interface boards. There may also be one or more physical interface cards on the interface board. There may be no switching board or one or more switching boards. When there are a plurality of switching boards, load balancing and redundancy backup may be implemented together. In a centralized forwarding architecture, the second network device may not include a switching board, and the interface board undertakes a service data processing function of an entire system. In a distributed forwarding architecture, the second network device may have at least one switching board, and exchange data between a plurality of interface boards through the switching board, to provide a large-capacity data exchange and processing capability. Therefore, a data access and processing capability of the second network device in the distributed architecture is better than that of the device in the centralized architecture. A specific architecture to be used depends on a specific networking deployment scenario, and is not limited herein. In addition, an embodiment of this disclosure provides a computer storage medium, configured to store computer software instructions used by the foregoing second network device. The computer storage medium includes a program designed for performing the foregoing method embodiment. An embodiment of this disclosure further includes a network system. The network system includes a first network device and a second network device. The first network device is the first network device inFIG.5,FIG.6, orFIG.7, and the second network device is the second network device inFIG.8,FIG.9, orFIG.10. Method or algorithm steps described in combination with the content disclosed in this disclosure may be implemented by hardware, or may be implemented by a processor by executing software instructions. The software instructions may include a corresponding software module. The software module may be stored in a RAM memory, a flash memory, a ROM memory, an EPROM memory, an EEPROM memory, a register, a hard disk, a removable hard disk, a CD-ROM memory, or a storage medium in any other form well-known in the art. For example, a storage medium is coupled to a processor, so that the processor can read information from the storage medium and write information into the storage medium. Certainly, the storage medium may be a component of the processor. The processor and the storage medium may be disposed in an ASIC. In addition, the ASIC may be located in user equipment. Certainly, the processor and the storage medium may exist in the user equipment as discrete components. A person skilled in the art should be aware that in the foregoing one or more examples, functions described in this disclosure may be implemented by hardware or a combination of hardware and software. When the functions are implemented by the combination of hardware and software, the software may be stored in a computer-readable medium or transmitted as one or more instructions or code in the computer-readable medium. The computer-readable medium includes a computer storage medium and a communication medium, where the communication medium includes any medium that enables a computer program to be transmitted from one place to another. 
The storage medium may be any available medium accessible to a general-purpose or dedicated computer. The objectives, technical solutions, and beneficial effects of this disclosure are further described in detail in the foregoing specific implementations. It should be understood that the foregoing descriptions are merely specific implementations of this disclosure.
103,894
11863447
DESCRIPTION OF EMBODIMENTS FIG.1is a schematic diagram of an application scenario according to an embodiment of the present disclosure.FIG.1includes an autonomous system AS1, an AS2, an AS3, an AS4, and an AS5. The AS1includes border nodes R11and R12in the AS1. The AS2includes border nodes R21and R22in the AS2. The AS3includes border nodes R31and R33in the AS3, and also includes an internal node R32in the AS3. The AS4includes border nodes R41and R42in the AS4. The AS5includes border nodes R51and R52in the AS5. Each border node may be a network device having a route function. For example, the border node is a router or a switch having a route function. The following describes a case of an abnormal network flow direction in the current technology. The AS1generates a route prefix 10.1.0.0/16, and sends 10.1.0.0/16 to the AS4through the AS2and the AS3. In addition, the AS1also sends 10.1.0.0/16 to the AS4through another AS path, namely, the AS5. The R41separately receives route information including the route prefix 10.1.0.0/16 from the R33and the R52. Route information received from the R33includes 10.1.0.0/16 and an AS path corresponding to 10.1.0.0/16 (AS3, AS2, AS1). The notation “(AS3, AS2, AS1)” indicates that a source AS corresponding to 10.1.0.0/16 is the AS1, and 10.1.0.0/16 passes through the AS1, the AS2, and the AS3in sequence. Route information received from the R52includes 10.1.0.0/16 and an AS path corresponding to 10.1.0.0/16 (AS5, AS1). The notation “(AS5, AS1)” indicates that a source AS corresponding to 10.1.0.0/16 is the AS1, and 10.1.0.0/16 passes through the AS1and the AS5in sequence. When calculating a route to 10.1.0.0/16, the R41selects the R52as a next hop according to a rule of selecting a shorter AS path. In other words, the R41sends a data packet destined for 10.1.0.0/16 to the R52instead of the R33. It is assumed that the foregoing description is based on a normal flow direction in network planning. However, an administrator for the AS3may configure an outbound route policy on the R33. The outbound route policy affects AS path information in route information sent by the R33to the outside. For example, if the outbound route policy on the R33is incorrectly configured, a normal AS path (AS3, AS2, AS1) for sending 10.1.0.0/16 by the R33to the outside is changed to (AS3). As a result, the R41sends the data packet destined for 10.1.0.0/16 to the R33, and an abnormal network flow direction is caused. Embodiments of the present disclosure provide a route processing method and a corresponding network device. The method and the network device are based on a same inventive concept, and the method and the network device have similar principles for resolving a problem. Therefore, mutual reference may be made between a network device embodiment and a method embodiment, and no repeated description is provided. In the embodiments of the present disclosure, in the application scenario shown inFIG.1, a route origin information base server, for example, an RPM server, is disposed. The R33may obtain route origin verification data from the RPM server to form a route origin information base. The route origin verification data may be ROA data, or autonomous system path authorization (ASPA) data. The ROA data describes a correspondence between a route prefix and a source autonomous system, and is used to verify whether a source AS corresponding to a route prefix is correct.
The ASPA data describes a correspondence between a route prefix and an autonomous system pair, and is used to verify whether AS path information corresponding to a route prefix is correct. Certainly, an ROA database and an ASPA database may be implemented as one database, to provide source AS verification information and AS path verification information. In this way, before the R33sends route information carrying 10.1.0.0/16 to the R41in the AS4, the R33may first verify a route origin, for example, verify whether the source AS is correct, verify whether the AS path is correct, or verify whether the source AS and the AS path are correct. If the verification succeeds, the R33sends the route information carrying 10.1.0.0/16. This reduces a possibility of causing an abnormal network flow direction. The foregoing describes a case in which the R33receives, from the AS2, 10.1.0.0/16 originated from the AS1. In another case, if 10.1.0.0/16 is generated by the AS3, for example, generated by the R31, the R32, or the R33, when a route device in the AS3sends 10.1.0.0/16, there is no solution in the current technology for verifying whether a route origin corresponding to 10.1.0.0/16 is correct, because IETF RFC 6810 and IETF RFC 6811 specify that a receiver of route information verifies only a route origin corresponding to received route information. In the embodiments of the present disclosure, the R33may verify a route prefix generated by the AS3and then send the route prefix. This improves accuracy of AS origin information corresponding to the route prefix sent by the R33to the outside, and reduces a possibility of causing an abnormal network flow direction. The network device described in the embodiments of the present disclosure may be a device having a route function. For example, the network device is a router, or a switch having a route function. FIG.2is a schematic flowchart of a method according to an embodiment of the present disclosure. With reference toFIG.1, an R33inFIG.1is used as an execution body, and a route prefix 10.1.0.0/16 is used as an example. S201: The R33downloads ROA data from an RPM server, so that the R33locally stores the ROA data, where the ROA data is also referred to as an ROA base. An entry in the ROA base records a correspondence between a route prefix and a source AS, and is used to verify whether a source AS corresponding to a route prefix is correct. For example, in a network planning phase, if an administrator for an AS1registers the AS1as a source AS corresponding to 10.1.0.0/16 with an international organization corresponding to the RPM server, the ROA data downloaded by the R33includes a correspondence between 10.1.0.0/16 and the source AS1, in other words, the ROA data downloaded by the R33includes 10.1.0.0/16, and the source AS corresponding to 10.1.0.0/16 is the AS1. It may be understood that the ROA data may not need to be downloaded each time the method shown in the embodiment inFIG.2is performed. S202: The R33receives, from an AS2, a piece of route information originated from the AS1, where the piece of route information carries the route prefix 10.1.0.0/16, and corresponding AS path information is (AS2, AS1), indicating that 10.1.0.0/16 originates from the AS1and passes through the AS2. For example, the R33receives the route information from an R32. S203: Before the R33forwards 10.1.0.0/16 to an AS4, the R33inserts, in the AS path information corresponding to 10.1.0.0/16, information about an AS in which the R33is located. 
The R33inserts information about an AS3in the AS path information to form a new AS path (AS3, AS2, AS1), indicating that 10.1.0.0/16 originates from the AS1and passes through the AS2and the AS3in sequence. S204: Before the R33forwards 10.1.0.0/16 to the AS4, the R33modifies AS information according to an outbound route policy configured on the R33. AS information after modification includes to-be-verified source AS information. In the embodiment inFIG.2, the AS information used before verification is performed is referred to as autonomous system information associated with the route prefix, and the autonomous system information associated with the route prefix includes the to-be-verified source AS information. For example, assuming that an AS path after modification in step S204is (AS3), the AS path (AS3) is referred to as AS information associated with 10.1.0.0/16. Steps S203and S204are optional steps. When step S203is performed but step S204is not performed, the AS path (AS3, AS2, AS1) in step S203is referred to as the AS information associated with 10.1.0.0/16. When neither step S203nor step S204is performed, the AS path (AS2, AS1) in step S202is referred to as the AS information associated with 10.1.0.0/16. S205: The R33verifies whether there is a match item in the ROA data, where the match item includes 10.1.0.0/16 and the to-be-verified source AS information in the AS information after modification in step S204. The following describes three cases. Case 1 It is assumed that there is an entry matching 10.1.0.0/16 in the ROA base and a source AS in the entry is the AS1. If step S204is not performed, or the AS path after modification in step S204is changed to (AS9, AS2, AS1), the to-be-verified source AS information is the AS1. For 10.1.0.0/16, a to-be-verified source AS is the same as the source AS in the ROA base, and both are the AS1. The R33determines that the ROA data includes the match item, and performs step S207. Case 2 It is assumed that the ROA base includes an entry matching 10.1.0.0/16, a source AS in the entry is the AS1, and the AS path after modification in step S204is (AS3, AS2, AS9). In this case, the to-be-verified source AS information is the AS9. The AS1is different from the AS9. Therefore, the R33determines that there is no match item in the ROA data, and performs step S206. Case 3 If there is no entry matching 10.1.0.0/16 in the ROA base, the R33determines that there is no match item in the ROA data, and performs step S208. Case 3 may also be understood as that the R33does not know whether a to-be-verified source AS corresponding to 10.1.0.0/16 is correct. In other words, a verification result is “unknown”. It should be understood that the three cases are merely used as examples to demonstrate whether there is the match item in the ROA data. However, it should not be considered that there are only the three cases. S206: Determine not to send 10.1.0.0/16. After step S206, step S209may be further performed. S207: Determine to send 10.1.0.0/16 to an R41in the AS4. Then, step S210is performed. S208: Determine, according to a preconfigured policy, whether to send 10.1.0.0/16. The preconfigured policy may be sending 10.1.0.0/16 or not sending 10.1.0.0/16. S209: Send alarm information. For example, the alarm information may be sent to a network management workstation. The alarm information may carry the route prefix for which a verification result is mismatching, and carry information about the route prefix, for example, the AS path information. 
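Before the flow continues with step S210, the decision logic of steps S205 to S209 can be summarized as a small verification routine. The layout of the local ROA base and the return values below are assumptions made for the sketch, not part of the embodiment.

```python
# Assumed layout of the local ROA base: route prefix -> registered source AS.
ROA_BASE = {"10.1.0.0/16": "AS1"}

def verify_source_as(prefix: str, as_path: tuple) -> str:
    """Return 'match', 'mismatch', or 'unknown'; the to-be-verified source AS
    is the last (rightmost) element of the AS path."""
    if prefix not in ROA_BASE:
        return "unknown"                          # Case 3: no entry for the prefix
    if ROA_BASE[prefix] == as_path[-1]:
        return "match"                            # Case 1
    return "mismatch"                             # Case 2

def decide_advertisement(prefix: str, as_path: tuple, send_on_unknown: bool) -> str:
    """Map the verification result to the actions of steps S206 to S209."""
    result = verify_source_as(prefix, as_path)
    if result == "match":
        return "send"                             # S207
    if result == "mismatch":
        return "do not send and raise an alarm"   # S206 and S209
    # "unknown": follow the preconfigured policy of step S208.
    return "send" if send_on_unknown else "do not send"

print(decide_advertisement("10.1.0.0/16", ("AS9", "AS2", "AS1"), True))  # send (Case 1)
print(decide_advertisement("10.1.0.0/16", ("AS3", "AS2", "AS9"), True))  # do not send and raise an alarm
```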
S210: Send route information carrying 10.1.0.0/16 and the AS path after modification to the R41in the AS4. For example, if the AS path is modified to (AS9, AS2, AS1) in step S204, the route prefix in the sent route information is 10.1.0.0/16, and the AS path in the sent route information is (AS9, AS2, AS1). It should be understood that, in Case 1, if step S204is not performed, the AS path corresponding to 10.1.0.0/16 sent in step S210is (AS3, AS2, AS1). In the embodiment inFIG.2, steps S201, S203,204, and S209are optional. In addition, a sequence of the steps is merely an example, and the steps may also be performed in another appropriate sequence. For example, step S201may be performed before step S205. In the embodiment shown inFIG.2, before sending the route prefix, the R33first verifies the source AS information corresponding to the route prefix, and determines, based on the verification result, whether to send the route prefix. This improves accuracy of the source AS information in the sent route information, and reduces a possibility of causing an abnormal network flow direction. Further, the R33verifies the source AS information after modification according to the outbound route policy. This reduces a source AS information error caused by incorrect manual configuration. In addition, if there is no matched route prefix in the ROA base, processing is flexibly performed according to the preconfigured policy. In addition, if a matching result is mismatching in the ROA data, an alarm is sent to notify a network administrator that mismatching occurs on the source AS information in the R33. FIG.3is a schematic flowchart of a method according to an embodiment of the present disclosure. With reference toFIG.1, an R33inFIG.1is used as an execution body, and a route prefix 10.1.0.0/16 is used as an example. The embodiment inFIG.3is used to verify information about an AS path corresponding to the route prefix. S301: The R33downloads ASPA data from an RPM server, so that the R33locally stores the ASPA data, where the ASPA data is also referred to as an ASPA base. An entry in the ASPA base records a correspondence between a route prefix and an autonomous system pair, and is used to verify whether an AS path corresponding to a route prefix is correct. For example, in a network planning phase, an administrator for an AS1registers, with an international organization corresponding to the RPM server, that the AS1plans to send 10.1.0.0/16 to an AS2. An administrator for the AS2registers with the international organization corresponding to the RPM server that the AS2plans to send 10.1.0.0/16 to an AS3. An administrator for the AS3registers with the international organization corresponding to the RPM server that the AS3plans to send 10.1.0.0/16 to an AS4. Therefore, the ASPA data downloaded by the R33includes 10.1.0.0/16, and AS pairs corresponding to 10.1.0.0/16 are [AS1, AS2], [AS2, AS3] and [AS3, AS4]. A mark “[AS1, AS2]” indicates that the AS1sends 10.1.0.0/16 to the AS2. It may be understood that the ASPA data does not need to be downloaded each time when the method shown in the embodiment inFIG.3is performed. For steps S302and S303, refer to steps S202and S203respectively. S304: The R33modifies AS information according to an outbound route policy configured on the R33. AS information after modification includes to-be-verified AS path information. 
In the embodiment inFIG.3, the AS information before verification is referred to as autonomous system information associated with the route prefix, and the associated AS information includes the to-be-verified AS path information. For example, assuming that an AS path after modification in step S304is (AS3, AS5, AS1), the AS path (AS3, AS5, AS1) is referred to as information about the AS associated with 10.1.0.0/16, and the to-be-verified AS path information is (AS3, AS5, AS1). S305: The R33verifies that there is no match item in the ASPA base. The R33finds an entry matching 10.1.0.0/16 in the ASPA base, but the AS pairs in the entry and the to-be-verified AS path information (AS3, AS5, AS1) do not match (that is, they are not the same). This is because the to-be-verified AS path information indicates that a source AS associated with 10.1.0.0/16 is the AS1, the AS1sends 10.1.0.0/16 to an AS5, and the AS5sends 10.1.0.0/16 to the AS3. Therefore, the R33verifies that there is no match item in the ASPA base. Then, step S306is performed. S306: The R33determines not to send 10.1.0.0/16. Then, step S307is performed. S307: The R33sends alarm information. Step S307is an optional step. The following describes several other possible implementations by using examples based on the embodiment inFIG.3. It should be understood that the implementations are not limited to these possible implementations. In a possible case, the ASPA base includes 10.1.0.0/16, and the AS pairs corresponding to 10.1.0.0/16 are [AS1, AS2], [AS2, AS3], and [AS3, AS4]. Step S304is not performed. A to-be-verified AS path is (AS3, AS2, AS1). The R33determines that there is a match item in the ASPA base, determines to send 10.1.0.0/16, and sends route information carrying 10.1.0.0/16 and the AS path (AS3, AS2, AS1). In a possible case, if there is no entry that matches 10.1.0.0/16 in the ASPA base, the R33determines that there is no match item in the ASPA base, and determines, according to a preconfigured policy, whether to send 10.1.0.0/16. The preconfigured policy may be sending 10.1.0.0/16 or not sending 10.1.0.0/16. This case may also be understood to mean that the R33does not know whether the to-be-verified AS path information corresponding to 10.1.0.0/16 is correct. In other words, a verification result is "unknown". In a possible case, the ASPA base includes 10.1.0.0/16, and the AS pairs corresponding to 10.1.0.0/16 are [AS1, AS2], [AS2, AS3], and [AS3, AS4]. Neither step S303nor step S304is performed. A to-be-verified AS path is (AS2, AS1). Because the to-be-verified AS path matches the AS pair [AS1, AS2], the R33determines that there is a match item in the ASPA base, determines to send 10.1.0.0/16, and sends route information carrying 10.1.0.0/16 and the AS path (AS2, AS1). In a possible case, the ASPA base includes 10.1.0.0/16, and the AS pairs corresponding to 10.1.0.0/16 are [AS1, AS2], [AS2, AS3], and [AS3, AS4]. After step S303is performed, an AS path is (AS3, AS2, AS1). An AS path after modification in step S304is (AS2, AS1). The R33determines that there is a match item in the ASPA base, determines to send 10.1.0.0/16, and sends route information carrying 10.1.0.0/16 and the AS path (AS2, AS1). 
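The match check against the ASPA base in step S305 and the possible cases above can be read as verifying every adjacent hop of the to-be-verified AS path against the registered AS pairs. The sketch below is one hedged interpretation of that check, using the same nearest-AS-first path notation as the text; the dictionary encoding of the ASPA base is an assumption for illustration only.

```python
# Illustrative ASPA base (step S301): the prefix 10.1.0.0/16 is associated with
# the AS pairs [AS1, AS2], [AS2, AS3] and [AS3, AS4]; a pair (x, y) records
# that ASx plans to send the prefix to ASy.
aspa_base = {"10.1.0.0/16": {(1, 2), (2, 3), (3, 4)}}


def verify_as_path(prefix, as_path):
    """AS paths are written nearest-AS-first, e.g. (AS3, AS2, AS1) means the
    prefix originates from the AS1 and passes through the AS2 and the AS3 in
    sequence; every adjacent hop must appear as a registered AS pair."""
    pairs = aspa_base.get(prefix)
    if pairs is None:
        return "unknown"                        # no entry matching the prefix
    hops = list(reversed(as_path))              # walk from the source AS outwards
    for sender, receiver in zip(hops, hops[1:]):
        if (sender, receiver) not in pairs:
            return "mismatch"                   # e.g. (AS3, AS5, AS1) in step S305
    return "match"


print(verify_as_path("10.1.0.0/16", (3, 5, 1)))   # 'mismatch' -> not sent (S306)
print(verify_as_path("10.1.0.0/16", (3, 2, 1)))   # 'match'    -> sent
print(verify_as_path("10.1.0.0/16", (2, 1)))      # 'match'    -> sent
```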
In the embodiment shown inFIG.3, before sending the route prefix, the R33first verifies the AS path information associated with the route prefix, and determines, based on a verification result, whether to send the route prefix, to improve accuracy of the AS path information of the sent route information, and reduce a possibility of causing an abnormal network flow direction. Further, the R33verifies the AS path information after modification according to the outbound route policy, to reduce an AS path information error caused by incorrect manual configuration. In addition, if there is no matched route prefix in the ASPA base, processing is flexibly performed according to a preconfigured policy. In addition, if the matching result in the ASPA data is a mismatch, an alarm is sent to notify a network administrator that a mismatch occurs on the AS path information in the R33. FIG.4is a schematic flowchart of a method according to an embodiment of the present disclosure. With reference toFIG.1, the R33inFIG.1is used as an execution body, and the route prefix 10.1.0.0/16 is used as an example. InFIG.4, 10.1.0.0/16 is generated by an AS3, for example, by the R33. S401: The R33downloads ROA data from an RPM server, so that the R33locally stores the ROA data, which is also referred to as an ROA base. An entry in the ROA base records a correspondence between a route prefix and a source AS, and is used to verify whether the source AS associated with the route prefix is correct. For example, in a network planning phase, if an administrator for the AS3registers, with an international organization corresponding to the RPM server, the AS3as a source AS associated with 10.1.0.0/16, the ROA data downloaded by the R33includes a correspondence between 10.1.0.0/16 and the source AS3. In other words, the ROA data downloaded by the R33includes 10.1.0.0/16, and the source AS corresponding to 10.1.0.0/16 is the AS3. It may be understood that the ROA data may not need to be downloaded each time the method shown in the embodiment inFIG.4is performed. S402: The R33generates the route prefix 10.1.0.0/16. S403: The R33associates 10.1.0.0/16 with the source AS3. When the R33sends route information carrying 10.1.0.0/16 to an AS4, the R33inserts, into the AS path corresponding to 10.1.0.0/16, information about an AS in which the R33is located (namely, the AS3). In this case, the AS3is the source AS associated with 10.1.0.0/16. S404: Before the R33sends 10.1.0.0/16 to the AS4, the R33modifies the AS information according to an outbound route policy configured on the R33. In the embodiment inFIG.4, the AS information before verification is referred to as AS information associated with the route prefix, and the associated AS information includes the to-be-verified AS information. It is assumed that an AS path after modification is (AS1). In this case, the AS information associated with 10.1.0.0/16 is (AS1), and the to-be-verified AS information is the source AS information, namely, the AS1. S405: The R33verifies that there is no match item in the ROA data. The to-be-verified source AS information is the AS1, but the source AS corresponding to 10.1.0.0/16 in the ROA base is the AS3. The AS1is different from the AS3. Therefore, the R33determines that there is no match item in the ROA data. Step S406is performed. S406: The R33determines not to send 10.1.0.0/16. Then, step S407may be performed. The following describes several other possible implementations by using examples based on the embodiment inFIG.4. 
It should be understood that the implementations are not limited to these possible implementations. In a possible case, if step S404is not performed, or step S404is performed but the source AS corresponding to the AS path after modification in step S404is still the AS3(for example, the AS path after modification in step S404is (AS2, AS3)), the R33determines that there is a match item in the ROA data. Refer to Case 1 in the embodiment inFIG.2. In a possible case, if there is no entry matching 10.1.0.0/16 in the ROA database, the R33determines that there is no match item in the ROA data. Refer to Case 3 in the embodiment inFIG.2. In the embodiment inFIG.4, steps S401, S404, and S407are optional. In addition, a sequence of the steps is merely an example, and the steps may also be performed in another appropriate sequence. For example, step S401may be performed before step S405. In the foregoing example, the R33generates 10.1.0.0/16. Therefore, the source AS associated with 10.1.0.0/16 is the AS3. It should be understood that 10.1.0.0/16 may also be generated by an R31or an R32and sent to the R33. In this case, the source AS associated with 10.1.0.0/16 is still the AS3. For an execution method, refer to the embodiment inFIG.4. Details are not described again. In the embodiment shown inFIG.4, before sending the route prefix whose source AS is the AS3, a network device in the AS3first verifies the source AS information associated with the route prefix, and determines, based on the verification result, whether to send the route prefix, to improve accuracy of the source AS information in the sent route information, and reduce a possibility of causing an abnormal network flow direction. For other beneficial effects, refer to the description of beneficial effects in the embodiment inFIG.2. The embodiments inFIG.2toFIG.4describe a case in which the R33is the execution body. In another implementation, an R31or an R32in the AS3may also establish a connection with the RPM server, download the ROA data or the ASPA data, and verify the AS information corresponding to 10.1.0.0/16 before sending 10.1.0.0/16. For a specific method, refer to the steps performed by the R33in the embodiments inFIG.2toFIG.4. Details are not described herein again. In the embodiments inFIG.2toFIG.4, the R33downloads the ROA data or the ASPA data from the RPM server. In another possible design, the RPM server actively pushes the ROA data or the ASPA data to the R33. FIG.5is a schematic diagram of a possible structure of a network device in the method embodiment. The network device500may be the R31, the R32, or the R33inFIG.1. If the network device500is the R33inFIG.1, the network device500implements a function of the R33in the embodiment shown inFIG.2,FIG.3, orFIG.4. Referring toFIG.5, the network device500includes an obtaining module501, a verification module502, and a determining module503. These modules may perform the corresponding functions of the network device in the method embodiment. The obtaining module may include a route prefix obtaining sub-module and an autonomous system information obtaining sub-module, configured to support the network device500in performing steps S202to S205inFIG.2, steps S302to S305inFIG.3, or steps S402to S404inFIG.4. The verification module502is configured to support the network device500in performing step S205inFIG.2, step S305inFIG.3, or step S405inFIG.4. 
The determining module503is configured to support the network device500in performing steps S206to S208inFIG.2, step S306inFIG.3, or step S406inFIG.4. The network device500further includes a route origin information base obtaining module504and a sending module506. The route origin information base obtaining module504is configured to support the network device500in performing step S201inFIG.2, step S301inFIG.3, or step S401inFIG.4. The sending module506is configured to support the network device500in performing step S210inFIG.2. Optionally, the network device500further includes an alarm module505, configured to support the network device500in performing step S209inFIG.2, step S307inFIG.3, or step S407inFIG.4. For a specific execution process, refer to the detailed descriptions of the corresponding steps in the embodiment shown inFIG.2,FIG.3, orFIG.4. Details are not described herein again. When an integrated module is used, modules in the embodiment inFIG.5may be integrated. For example, the verification module502and the determining module503in the embodiment inFIG.5may be combined into one module. FIG.6is a schematic diagram of a possible structure of a network device in the method embodiment. The network device600may be the R31, the R32, or the R33inFIG.1. If the network device600is the R33inFIG.1, the network device600implements a function of the R33in the embodiment shown inFIG.2,FIG.3, orFIG.4. The network device600includes a processor601, a memory602, a communications interface603, and a bus604. The processor601, the communications interface603, and the memory602are connected to each other through the bus604. The bus604may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus may be classified into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is used to represent the bus inFIG.6, but this does not mean that there is only one bus or only one type of bus. The memory602is configured to store a computer program. The computer program includes a program instruction. The processor601is configured to invoke the program instruction to perform the steps inFIG.2,FIG.3, orFIG.4. In a possible design, the communications interface603performs steps S201, S202, S209, and S210inFIG.2, steps S301, S302, and S307inFIG.3, or steps S401and S407inFIG.4. The processor601is configured to invoke the program instruction to perform steps S203to S208inFIG.2, steps S303to S306inFIG.3, or steps S402to S406inFIG.4. An embodiment of the present disclosure further provides a computer-readable storage medium. The computer-readable storage medium stores a program. When the program is run, a computer is enabled to implement the method in the method embodiment. An embodiment of the present disclosure further provides a route processing apparatus, including hardware related to a program instruction, where the hardware is used to perform the method in the method embodiment. "First" mentioned in the embodiments of the present disclosure is merely used as a name identifier, and does not represent the first in sequence. This rule is also applicable to "second". The method steps described in the content disclosed in the embodiments of the present disclosure may be implemented by hardware, or by a processor executing a software instruction. 
The software instruction may be formed by a corresponding software module, and the software module may be stored in a random access memory (random access memory, RAM), a flash memory, a read only memory (read only memory, ROM), an erasable programmable read only memory (erasable programmable ROM, EPROM), an electrically erasable programmable read-only memory (electrically EPROM, EEPROM), a hard disk, a removable hard disk, an optical disc, or any other form of storage medium well-known in the art. For example, a storage medium is coupled to a processor, so that the processor can read information from the storage medium and write information into the storage medium. Certainly, the storage medium may alternatively be a component of the processor. The processor and the storage medium may be located in an ASIC. A person skilled in the art should be aware that in the foregoing one or more examples, functions described in the present disclosure may be implemented by hardware, software, firmware, or any combination thereof. When the present disclosure is implemented by software, the foregoing functions may be stored in a computer-readable medium or transmitted as one or more instructions or code in the computer-readable medium. The computer-readable medium includes a computer storage medium and a communications medium. The communications medium includes any medium that enables a computer program to be transmitted from one place to another. The storage medium may be any available medium accessible to a general-purpose or dedicated computer. The objectives, technical solutions, and beneficial effects of the present disclosure are further described in detail in the foregoing specific embodiments. It should be understood that the foregoing descriptions are merely specific implementations of the present disclosure, but are not intended to limit the protection scope of the present disclosure. Based on the technical solutions of the present disclosure, any modification, equivalent replacement, and improvement made shall fall within the protection scope of the present disclosure.
DESCRIPTION OF EMBODIMENTS In the following description, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation. References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Throughout the following description similar reference numerals have been used to denote similar elements such as components, features of a system and/or operations performed in a system or element of the system, when applicable. In the following description and claims, the terms “coupled” and “connected,” along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. “Coupled” is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other. “Connected” is used to indicate the establishment of communication between two or more elements that are coupled with each other. In one embodiment, a client device establishes a first VPN connection with a first server based on first VPN credentials. One or more flows of traffic are transmitted and received through the first VPN connection to and from the first server. The service server determines an identification of a second server. The second server is identified based on one more traffic optimization criteria that need to be satisfied by the VPN connection. The service server transmits the identification of the second server to the client device. The client device receives an identification of the second server to be used as a destination of a second VPN connection. The second VPN connection satisfies a set of traffic optimization goals for at least one flow from the flows forwarded through the first VPN connection. Based on the identification of the second server, the client device establishes a second VPN connection for the at least one flow from the flows between the client device and the second server. FIG.1illustrates a block diagram of an exemplary architecture for enabling traffic optimization in virtual private networks, in accordance with some embodiments. The architecture100includes a client device110, two or more servers120A-N, one or more origin servers130A-B, a domain owner145and a service server125. The client device110is a computing device (e.g., laptop, workstation, smartphone, palm top, mobile phone, tablets, gaming system, set-top box, etc.) 
that is capable of accessing network resources (e.g., it includes software such as client network applications (e.g., web browsers, mobile applications, etc.) that are capable of accessing network resources). In some embodiments, the client network applications are implemented based on web application program interfaces (APIs) enabling the client device to request access to resources served by a server. The client device110includes a VPN client122that is associated with a first VPN address. The VPN client122is operative to perform operations of a virtual private network protocol. Several VPN protocols can be used without departing from the scope of the present invention. The client device110is operative to establish one or more VPN connections with one or more servers. The client device110is operative to transmit and receive traffic to and from a server through a VPN connection based on VPN credentials associated with the client device110. The VPN credentials identify a VPN address of the client device110and cryptographic credentials to allow for secure communication through the VPN connection. The client device110is operative to transmit a request for a network resource that is served by the origin server130A. In some embodiments, the client device110is operative to transmit the request for the network resource through the VPN connection(s). The VPN connection can be referred to as a VPN tunnel. While a single client device is illustrated, any number of client devices can be in communication with each one of the servers120A-N. Each one of the servers120A-N is a computing device coupled with one or more client devices through a network (not illustrated). Each one of the servers120A-N includes a respective VPN server123A-N. Each one of the VPN servers123A-N is operative to perform operations of a virtual private network protocol. Several VPN protocols can be used without departing from the scope of the present invention. Each one of the servers120A-N is operative to establish one or more VPN connections with one or more client devices. Each one of the servers120A-N is operative to establish a VPN connection with the client device110. Each one of the servers120A-N is operative to transmit and receive traffic to and from a client device (e.g., client device110) through a VPN connection based on VPN credentials. The VPN credentials include a VPN address of the client device as well as cryptographic credentials of the client device. The VPN credentials further include a VPN address associated with the server and cryptographic credentials associated with the server. The cryptographic credentials of the server and the client device allow for secure communication through the VPN connection. The cryptographic credentials can include authentication credentials that allow for authentication of the server and the client device. The cryptographic credentials may further include encryption keys for encrypting traffic within the VPN tunnel between the client device and the server. Each one of the servers120A-N enables client devices to access network resources hosted on origin servers (e.g.,130A-B) through a VPN connection. The VPN connections established between the client device110and the server120A enable the client device to obtain anonymity and secure communication when accessing network resources hosted or served by the origin server130A. Each one of the servers120A-N is not typically part of the local network of the origin servers130A-B. 
For example, the first server120A is outside of the local area network of the origin server130A and is typically not physically accessible by the owner/administrator of the origin server130A. In some embodiments, each one of the servers120A-N is a proxy server that is part of a cloud-based proxy service. The cloud-based proxy server provides different services for customers (e.g., the domain owner145). For example, the first server120A can be a first proxy server situated between client devices (e.g., client device110) and the origin servers130A-B. In one embodiment, each one of the proxy servers120A-N is a reverse proxy server. Certain network traffic is received and processed through the proxy servers. For example, web traffic (e.g., HTTP requests/responses, HTTPS requests/responses, SPDY requests/responses, etc.) for domains of the origin server130A may be received and processed at the first server120A. In one embodiment, the domain owner145is a customer of the cloud-based proxy service. The owner of the servers120A-N is typically different than the owner of the origin servers130A-B. By way of example, the cloud-based proxy service may provide services including protecting against Internet-based threats (e.g., proactively stopping botnets, cleaning viruses, trojans, and worms, etc.), providing performance services for customers (e.g., acting as a node in a content delivery network (CDN) and dynamically caching customer's files closer to visitors, page acceleration, content optimization services, etc.), TCP stack optimizations, and/or other services. In one embodiment, the cloud-based proxy service provides a mechanism for establishing VPN connections between client devices and one or more proxy servers of the service when the client devices attempt to access resources served by the origin servers. Generally speaking, each one of the servers120A-N receives network traffic from the client device110requesting Internet resources. The request for Internet resources is performed through a VPN connection based on the VPN credentials of each one of the client device and the respective server. For example, the first server120A may receive requests for an action to be performed on an identified resource (e.g., an HTTP GET request, an HTTP POST request, other HTTP request methods, or other requests to be applied to an identified resource on an origin server) from the client device110through a first VPN connection. The request received from the client device110is destined for an origin server (e.g., origin server130A). Each one of the servers120A-N analyzes incoming traffic and takes one or more actions on the incoming traffic. For example, the servers120A-N may cause the incoming traffic to be fulfilled. In some embodiments, each one of the servers120A-N may transmit the outgoing traffic to the appropriate origin server130A-B. For example, the first server120A may receive a first request for a network resource through a first VPN connection. The first request for the network resource may be encrypted as per the VPN protocol requirements. The first server120A is operative to decrypt the traffic received through the VPN protocol and obtain the first request for the network resource. The server120A is operative to fulfil the request. For example, the server120A may transmit a request (e.g., an HTTP GET request) for the network resource to the origin server130A. The origin server130A may transmit a response (e.g., an HTTP response) with the requested resource to the first server120A. 
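The VPN credentials described above combine addressing with cryptographic material for both tunnel endpoints. The data structure below is a minimal illustrative bundle only; the field names, the example addresses, and the use of raw key bytes are assumptions and do not correspond to any particular VPN protocol or to the embodiments' actual credential format.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class VpnCredentials:
    """Illustrative bundle of what the description calls VPN credentials."""
    client_vpn_address: str       # VPN address of the client device (e.g., 110)
    server_vpn_address: str       # VPN destination address of the server (e.g., 120A)
    client_auth_key: bytes        # authentication credentials of the client device
    server_auth_key: bytes        # authentication credentials of the server
    tunnel_encryption_key: bytes  # key material for encrypting traffic in the tunnel


# Hypothetical first VPN credentials for the tunnel between the client
# device 110 and the first server 120A (all values are placeholders).
first_vpn_credentials = VpnCredentials(
    client_vpn_address="10.8.0.2",
    server_vpn_address="10.8.0.1",
    client_auth_key=b"client-demo-key",
    server_auth_key=b"server-demo-key",
    tunnel_encryption_key=b"tunnel-demo-key",
)
```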
The first server120A may analyze the incoming traffic including the response and take one or more actions, including, for example, transmitting the response to the requesting client device110. The response may be transmitted through the first VPN connection established with the client device110. In some embodiments, the packets transporting the response are encrypted based on the VPN credentials associated with the first VPN connection. In some embodiments, the first server120A may also cache resources for the domains and respond to requests from client devices locally if the requested resource is in cache. In some embodiments, incoming traffic is received at a particular server120A as a result of a DNS request for a domain of one of the domain owners145resolving to an IP address of the server120A. By way of example, DNS record(s) for the domain “example.com” may resolve to an IP address of a server120A. In some embodiments, multiple domains that may be owned by different domain owners may resolve to the same server120A (e.g., the same IP address or a different IP address of the server120A). For example, the domain owner145owns one or more domains (e.g., example.com) for which the first server120A may receive requests. The first server120A may receive requests for the resources at a given location of the domain (e.g., example.com/login). Each one of the origin servers130A-B is an electronic device that serves network resources (e.g., web pages, images, word processing documents, PDF files movie files, music files, or other computer files). For example, each one of the origin server130A-B may host one or more domains of domain owners and is operative to respond to requests for resources at that domain. For example, the origin server130A may host a domain (example.com) owned by domain owner145. Each one of the origin servers130A-B may generate the network resources or alternatively may be coupled with another server that generates the network resources. Although not illustrated inFIG.1andFIGS.2A-B, it should be understood that the network resources of the origin servers may be stored separately from the device that responds to the requests. In some embodiments, the domain owner145is a customer of a cloud-based service and registers their respective domain for the service. For example, the authoritative name servers for each domain of the domain owner145may be changed to the authoritative name server of the service. It should be understood that the backup authoritative name server serving the domain may also be changed to an authoritative name server of the service. The zone file record for the domain is also changed such that DNS resolution requests for the domain owned by the domain owner145, which corresponds with the origin server130A, resolve to the first server120A. In one embodiment, a customer (e.g., the domain owners145or other entity (e.g., web administrators) on behalf of the domain owner145) may use the service server125to change their authoritative name server to the authoritative name server and change their zone file to have their domain point to the first server120A. In some embodiments, the domain owner145or an administrator of the domain may perform these changes through a graphical interface. The service server125is an electronic device operated by the cloud-based proxy service. The service server125includes a VPN traffic optimizer135. 
The VPN traffic optimizer135is operative to analyze the network formed by the different servers120A-N and determine optimized routes within the network formed by the servers120A-N for VPN connection between a client device and a server. In some embodiments, the service server125may also provide a set of tools and interfaces for the domain owner145that are accessible over the Internet. For example, the service server125, among other things, allows the domain owner145to register for the cloud-based proxy service, view statistics/logs of events, and report suspicious events. The service server125includes tools to assist the domain owner145in changing their authoritative name servers and zone file record. It should be understood, however, that the domain owner145may change their authoritative name server and zone file without use of the service server125(i.e., they may directly change the authoritative name server and zone file). The service server125includes tools to assist the domain owner145to select a set of services offered by the cloud-based proxy service. The architecture100may further include a DNS system that is not illustrated. The DNS system may include multiple DNS servers to resolve DNS requests. The DNS system includes an authoritative name server, which is an authoritative name server for the service. Thus, the authoritative name server is the authoritative name server for the domains corresponding to the origin servers130A-B. Accordingly, when the DNS system resolves a request for a domain corresponding to the origin server130A or the origin server130B, the authoritative name server provides the authoritative answer. It should be understood that the DNS system may include several DNS servers (e.g., preferred domain servers, top-level domain name servers, other domain servers). It should also be understood that there may be multiple authoritative web servers for the service and they may be geographically distributed. When the domain owner145is a customer of the cloud-based proxy service, DNS resolution requests for the domain owned by the domain owner145, which corresponds with the origin server130A, resolve to an IP address of a proxy server that is part of the service (e.g., the first server120A). When the domain owner145is not a customer of the cloud-based proxy service, or alternatively the servers120A-N are not part of a cloud-based proxy service, DNS resolution requests for the domain owned by the domain owner145resolve to an IP address of the origin server130A. In some embodiments the cloud-proxy service has multiple proxy servers that are geographically distributed. For example, in some embodiments, the service uses multiple point of presences (POPs). A POP is a collection of networking equipments (e.g., authoritative name servers and proxy servers) that are geographically distributed to decrease the distance between requesting client devices and content. The authoritative name servers have the same anycast IP address and the proxy servers have the same anycast IP address. As a result, when a DNS request is made, the network transmits the DNS request to the closest authoritative name server. That authoritative name server then responds with a proxy server within that POP. Accordingly, a visitor will be bound to that proxy server until the next DNS resolution for the requested domain (according to the TTL (time to live) value as provided by the authoritative name server). 
In some embodiments, instead of using an anycast mechanism, a geographical load balancer is used to route traffic to the nearest POP. WhileFIG.1illustrates two origin servers130A-B and a single client device110respectively coupled with the first server120A, in some embodiments each of the servers120A-N is coupled with multiple origin servers and/or with multiple client devices. Moreover, in some embodiments, there are multiple proxy servers providing service for a particular domain. In operation, the service server125transmits initial VPN route configurations to the client device110at operation1. In some embodiments, the client device110establishes, at operation2, a first VPN connection with the first server120A based on first VPN credentials. The first VPN credentials include cryptographic credentials associated with the client device110and the first server120A to enable a secure communication through a VPN protocol between the client device110and the first server120A. The first VPN credentials further include a first VPN destination address. When the first VPN connection is established, the first VPN destination address identifies the first server120A. At operation3, one or more flows of traffic are transmitted and received through the first VPN connection to and from the first server120A. In some embodiments, based on the initial VPN route configuration, all traffic originating from the client device110is transmitted through the first VPN connection towards the first server120A, which acts as a VPN server and the VPN destination of the first VPN connection. The first server120A de-encapsulates the traffic received through the first VPN connection and transmits the traffic to the original destination (e.g., the origin server130A) at operation4. At operation5, the service server125determines an identification of a second server120B. The second server120B is identified based on one or more traffic optimization criteria that need to be satisfied by the VPN connection. At operation6, the service server125transmits the identification of the second server120B to the client device110to update the VPN route for traffic to the first origin server130A. The client device110receives an identification of the second server120B to be used as a destination of a second VPN connection. The second VPN connection satisfies a set of traffic optimization goals for at least one flow from the flows forwarded through the first VPN connection. Based on the identification of the second server120B, the client device establishes, at operation7, a second VPN connection for at least one flow from the flows between the client device and the second server based on the second VPN credentials. The second VPN credentials include cryptographic credentials that enable the second server120B and the client device110to communicate securely through the VPN protocol. The second VPN credentials further include the VPN address of the client device110and the VPN destination address of the second server120B. Upon establishment of the second VPN connection, the client device110forwards, at operation8, at least one flow through the second VPN connection to the second server120B. In some embodiments, all traffic that was previously forwarded through the first VPN connection is routed through the second VPN connection. 
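Taken together, operations 1 through 8 describe a client-side control flow: apply the initial VPN route configuration, tunnel flows to the first server, and add or switch to a tunnel towards the second server once the service server identifies it. The sketch below only illustrates that ordering with assumed stand-in objects; none of the names correspond to a real VPN library, and the split between rerouted and remaining flows, which is described next, is simplified to a fixed example.

```python
# Minimal stand-ins (assumed for the sketch) so the control flow can run.
class StubTunnel:
    def __init__(self, server_ip):
        self.server_ip = server_ip


class StubClient:
    def establish_vpn(self, server_ip, credentials):
        print(f"establish VPN connection to {server_ip}")
        return StubTunnel(server_ip)

    def forward_flows(self, tunnel, flows):
        print(f"forward {flows} through the tunnel to {tunnel.server_ip}")


def run_client(client, first_server_ip, second_server_ip, credentials):
    # Operations 1-2: apply the initial VPN route configuration and establish
    # the first VPN connection with the first server based on first credentials.
    first_tunnel = client.establish_vpn(first_server_ip, credentials)
    # Operation 3: forward one or more flows through the first VPN connection.
    client.forward_flows(first_tunnel, ["flow to origin 130A", "flow to origin 130B"])
    # Operations 5-7: the service server identifies the second server that
    # satisfies the traffic optimization criteria, and the client establishes
    # the second VPN connection towards it.
    second_tunnel = client.establish_vpn(second_server_ip, credentials)
    # Operation 8: at least one flow is moved onto the second VPN connection;
    # the other flow may keep using the first VPN connection.
    client.forward_flows(second_tunnel, ["flow to origin 130A"])
    client.forward_flows(first_tunnel, ["flow to origin 130B"])


run_client(StubClient(), "198.51.100.1", "198.51.100.2", object())
```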
In other embodiments, only a portion of the traffic is routed through the second VPN connection towards the second server120B while another portion of the traffic continues to be routed through the first VPN connection towards the first server120A. For example, the first server120A can be coupled with the two origin servers130A and130B, and traffic between the client device110and each one of these origin servers is forwarded through the first VPN connection in the initial phase. In this example, the second VPN connection can be established for traffic that is destined to a first domain which is served by the first origin server130A, while traffic destined to a second domain served by the second origin server130B can be forwarded through the first VPN connection. The two VPN connections can be used successively such that a first connection is first established to forward all traffic destined to one or more origin servers and then the second connection is established to forward all of this same traffic without the first connection being used. Alternatively, the two VPN connections can be used concurrently such that part of the traffic is forwarded through the first VPN connection and another part of the traffic is forwarded through the second VPN connection. Upon receipt of the traffic from the client device110, the second server120B forwards traffic to the origin servers. For example, the second server120B transmits traffic to the origin server130A. In some embodiments, the traffic can be forwarded towards the origin server130A through the first server120A, operation9a. A VPN connection can be established between the first server120A and the second server120B such that traffic received through the second VPN connection is transmitted via this VPN connection between the servers120A and120B. The server120A then forwards, at operation10a, the traffic to the origin server130A. This allows for continuous forwarding of traffic to/from the origin server130A and the client device110without interruption even when the VPN connection originating from the client device110was rerouted towards the second server120B. Alternatively, the traffic can be forwarded towards the origin server130A without going through the first server120A, at operation9b. For example, for new traffic originating from the client device110which was not previously routed through the first VPN connection, this traffic is transmitted through the second VPN connection and towards the first origin server130A (without going through the first server120A). FIG.2Aillustrates a block diagram of detailed operations for initial configuration of the VPN, in accordance with some embodiments. Each one of the devices (e.g., client device110, first server120A, second server120B, origin server130A, and origin server130B) is a network device that has an associated IP address (e.g., IP address111, IP address121A, IP address121B, IP address131A, and IP address131B). The IP address of each device (110,120A-B,130A-B) allows the device to communicate through the IP protocol with the other devices (110,120A-B,130A-B) in the network. The IP address is an Internet-addressable address. Each one of the devices (110,120A-B,130A-B) may be coupled to another one of the devices (110,120A-B,130A-B) via one or more network devices that are not illustrated. Further, to enable the VPN communication, the client device110includes a VPN routing table112. The VPN routing table includes VPN entries that define VPN routes in the network. 
Each VPN route has a destination IP address and a corresponding VPN destination address (VPN Dest. Add.). The client device110further includes an encapsulator113that includes for each VPN route a respective IP encapsulation destination address. The IP encapsulation destination address identifies the IP address that is to be used as a destination for encapsulating the VPN traffic addressed to a particular VPN destination address. In an initial set up the client device110is configured to include a first entry (1a) and a second entry (1b) in the VPN routing table112and for VPN destination125A an associated encapsulation IP address121A. In this initial set up the routing table112includes a first entry1asuch that traffic destined to IP address131A of the origin server130A is routed through a VPN route with VPN destination125A. The routing table112further includes a second entry1bsuch that traffic destined to IP address131B of the origin server130B is also routed through the same VPN route with VPN destination125A. The client device110is configured such that a first VPN connection can be established between the client device110with a source VPN address126(VPN Src. Add.) and the first VPN destination address125A. According to1c, the first VPN destination address125A is associated with the first destination IP address121A of the first server120A. Thus, when the client device110A establishes, at operation3, a first VPN connection with the first server120A that is done based on the entry (1c) with the VPN destination address125A and the IP address121A of the first server120A. The traffic transmitted through the VPN connection is encapsulated within IP packets with destination IP address131A or131B. The first VPN connection is established based on the cryptographic credentials associated with the first client device110and the first VPN server120A. The cryptographic credentials associated with the client device110A and the first server120A enable a secure communication through a VPN protocol between the client device110and the first server120A. In some embodiments, the cryptographic credentials include cryptographic keys of the client device110and the first server120A that are exchanged during the establishment of the first VPN connection. In some embodiments, the first VPN connection is established for forwarding a set one or more flows. A flow may include IP packets of a request for network resources at a first domain. The packets received at the client device110through the first VPN connection may include the requested network resources. The first domain is served by the origin server130A. In some embodiments, traffic including requests and responses of more than one domain can be forwarded through the first VPN connection. For example, the client device can transmit requests for network resources at two or more domains. The domains can be served by different origin servers (e.g., first domain served by origin server130A and second domain served by origin server130B). Alternatively, the domains can be served by the same origin server. In some embodiments, the origin server130A is a customer of a cloud-based proxy server and a DNS request for the domain served by the origin server130A resolves to an IP address of the proxy server120A instead of an IP address of the origin server130A. In these embodiments, the routing table112may include, instead of the IP address131A of the origin server130A, an IP address121A of the proxy server120A as the destination IP address. 
In these embodiments, the destination IP address is associated with a VPN route (with destination VPN125A). The destination VPN address125A is then associated with an encapsulation destination IP address121A, which is the IP address of the first server120A. The requests for and the responses of the network resources are transmitted to the origin servers through the proxy server, and the VPN connection is established between the client device (e.g., client device110) and the proxy server (e.g., first server120A) that is coupled with the origin server serving the network resources. Thus, in these embodiments, the first server120A acts as a VPN server as well as a proxy server of the cloud-based proxy service. In these embodiments, when the VPN traffic is received at the first server120A, a higher-level protocol (e.g., HTTP, HTTPS) can be used to identify the destination of the packets. For example, packets received at the first server120A through the VPN connection are processed at the first server120A to determine an HTTP request for network resources at a first domain served by the origin server130A. The request can be fulfilled by the first server120A by either transmitting the request to the origin server130A (e.g., processing the request and transmitting IP packets including the request towards the origin server130A) or by retrieving network resources previously stored in a cache for that domain. FIG.2Billustrates a block diagram of detailed operations for configuration of the VPN to optimize traffic in the VPN, in accordance with some embodiments. At operation5, the service server125determines an identification of a second server120B. The VPN traffic optimizer135determines a second server120B from a set of servers that is to be used as a destination of a second VPN connection for the client device110to obtain network resources of one or more domains. The second server120B is identified based on one or more traffic optimization criteria that need to be satisfied by a VPN connection between the client device and a VPN server. The VPN traffic optimizer135collects network intelligence metrics from requests that are fulfilled by different servers. In some embodiments, when the servers120A-N are proxy servers of a cloud-based proxy service, the network intelligence metrics relate to requests fulfilled by the proxy servers on behalf of the origin servers. The VPN traffic optimizer135determines, based on the collected network intelligence metrics, an optimized route for a VPN connection originating from the client device110. The optimized route identifies a new server that is to be used as a destination of the VPN connection instead of the first server120A, where the optimized route satisfies the traffic optimization criteria. The collected metrics can be obtained based on active or passive monitoring of remote destinations (e.g., proxy servers and/or origin servers) to measure latency, packet loss, congestion, or other network metrics. In some embodiments, the determination of the server is further performed based on properties of the network such as cost, reliability, and current or predicted utilization. 
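One hedged reading of the VPN traffic optimizer135is a scoring step over the collected network intelligence metrics. The sketch below selects the candidate with the lowest measured latency among servers meeting assumed packet-loss and utilization thresholds; the metric names, threshold values, and selection rule are illustrative only, since the description leaves the exact optimization criteria open (they are discussed next).

```python
def select_vpn_server(candidates, max_loss=0.01, max_utilization=0.8):
    """Return the identifier of the candidate server with the lowest measured
    latency among those satisfying the (assumed) loss and utilization limits.

    `candidates` maps a server identifier to collected metrics, for example
    {"120B": {"latency_ms": 12.0, "loss": 0.001, "utilization": 0.4}, ...}.
    """
    eligible = {
        server: m
        for server, m in candidates.items()
        if m["loss"] <= max_loss and m["utilization"] <= max_utilization
    }
    if not eligible:
        return None  # no candidate satisfies the traffic optimization criteria
    return min(eligible, key=lambda s: eligible[s]["latency_ms"])


metrics = {
    "120A": {"latency_ms": 48.0, "loss": 0.002, "utilization": 0.70},
    "120B": {"latency_ms": 12.0, "loss": 0.001, "utilization": 0.40},
    "120N": {"latency_ms": 9.0, "loss": 0.050, "utilization": 0.30},  # too lossy
}
print(select_vpn_server(metrics))  # '120B' -> its identification is sent at operation 6
```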
The determination of the optimized route can be performed to satisfy one or multiple optimization criteria, including ensuring low latency responses between the client device and the origin server hosting the requested network resource, high reliability of the traffic between the client device and the origin server and/or the proxy server, low cost of the VPN connection established for the client device, and/or quality of the VPN service. In some embodiments, the traffic optimization criteria can be determined based on the characteristics of the VPN protocol that is used to establish the VPN connections. For example, the optimization criteria can be set based on characteristics of the protocol such as whether a protocol is latency-sensitive or insensitive, has high or low bandwidth requirements, or how tolerant the specific protocol is to packet loss. The optimized route is determined based at least in part on the characteristics of the protocol. In some embodiments, the traffic optimization is performed for traffic of a given domain such that the second server is to be used as a destination of a second VPN connection that is to be used to forward traffic of the given domain. In other embodiments, the traffic optimization is performed for traffic of multiple domains (e.g., a subset of all domains of network resources that are requested at the client device110, or all domains of network resources requested at the client device110) and the second VPN connection is to be used for forwarding traffic of the multiple domains. At operation6a, the service server125transmits an update to the VPN routes to the client device110. For example, the service server125transmits the identification of the second server120B to the client device110. The client device110receives an identification of the second server120B to be used as a destination of the second VPN connection. The identification of the second server120B includes an IP address of the second server120B. The request to update the VPN route at6amay further indicate which flows are to be forwarded through the new VPN route. The service server125may identify a flow based on its destination IP address and/or source IP address, based on a domain name, or other types of flow identification. For example, the service server125may transmit the source IP address of the client device with the new VPN route identification to indicate that all flows originating from the client device are to be transmitted through the new VPN connection. Alternatively, the service server125may transmit a set of one or more destination IP addresses with the identification of the new VPN route indicating that all flows destined to these destinations are to be routed through the new VPN route. In another example, the service server125may transmit a domain name (e.g., first domain) that is served by the origin server130A. The flows identified may be flows that were previously transmitted/received by the client device or alternatively new flows that are to be transmitted/received by the client device. The second VPN connection satisfies a set of traffic optimization goals for at least one flow from the flows forwarded through the first VPN connection. The VPN traffic optimizer135causes configuration of the client device's VPN routing table112to route traffic to certain destinations via the selected second server that is determined by the VPN traffic optimizer135. 
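Before walking through the concrete entries, the update of operation 6a can be pictured as a targeted edit of the two lookups behind the VPN routing table112and the encapsulator113: only the identified flows get a new VPN destination address, and a new encapsulation mapping is added for it. The snippet below is an illustrative model with placeholder addresses standing in for the reference numerals; it is not the embodiments' actual update mechanism.

```python
# Placeholder addresses standing in for the reference numerals of FIG. 2A/2B.
IP_131A, IP_131B = "203.0.113.10", "203.0.113.20"  # origin servers 130A / 130B
IP_121A, IP_121B = "198.51.100.1", "198.51.100.2"  # first / second servers 120A / 120B
VPN_125A, VPN_125B = "10.8.0.1", "10.9.0.1"        # first / second VPN destination addresses

# Initial configuration of FIG. 2A: entries 1a and 1b in the VPN routing table 112
# and entry 1c in the encapsulator 113.
vpn_routes = {IP_131A: VPN_125A, IP_131B: VPN_125A}  # destination IP -> VPN destination
encapsulation = {VPN_125A: IP_121A}                  # VPN destination -> encapsulation IP


def apply_route_update(flow_destinations, new_vpn_dest, new_encap_ip):
    """Operation 6a: move only the identified flows onto the new VPN route;
    entries for other destinations are left untouched."""
    for dest_ip in flow_destinations:
        vpn_routes[dest_ip] = new_vpn_dest       # entry 6c
    encapsulation[new_vpn_dest] = new_encap_ip   # entry 6d


apply_route_update([IP_131A], VPN_125B, IP_121B)
print(vpn_routes)      # traffic to 131A now uses VPN 125B; 131B stays on VPN 125A (entry 1b)
print(encapsulation)   # 125A -> 121A and 125B -> 121B
```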
For example, at operation6a, the routing table112is updated to include entry6csuch that traffic destined to IP address131A of the origin server130A is routed through a VPN route with VPN destination125B. The routing table112further includes a second entry1bsuch that traffic destined to IP address131B of the origin server130B is routed through another VPN route with VPN destination125A. The entry1bis not updated as no indication is received from the service server125to update the VPN route for this traffic. The traffic optimizer135identifies (in6d) the IP address121B of the second server120B as the encapsulation destination IP address for the VPN destination address125B. In the embodiments where the origin server130A is a customer of a cloud-based proxy server, the routing table112is updated to include a VPN route (VPN destination125B) for the IP address of the proxy server120A as the destination IP address (instead of the IP address131A of the origin server130A as illustrated inFIG.2B). The destination VPN address125B is then associated with an encapsulation destination IP address121B, which is the IP address of the second server120B. In these embodiments, while the second server120B is updated to be the VPN server (destination of the second VPN connection) for the flows, the first server120A may remain the proxy server of the cloud-based service that is the actual destination of the flows as opposed to the origin server130A. While the illustrated example shows that only one entry in the VPN routing table is updated, in other examples, multiple flows or all flows that are tunneled through the first VPN connection are updated to be routed through the second VPN connection. In these alternative examples, multiple or all entries of the routing table112are updated with the VPN destination address125B instead of the VPN destination address125A. In some embodiments, the VPN routing table112may also be configured on a per-protocol level, either by routing based on both port and VPN IP address, by packet inspection, or by any other mechanism available to the VPN client122to detect and determine a protocol. Based on the identification of the second server120B, the client device establishes a second VPN connection for at least one flow from the flows between the client device and the second server based on second VPN credentials. The second VPN credentials include cryptographic credentials that enable the second server120B and the client device110to communicate securely through the VPN protocol. The second VPN credentials further include the VPN source address of the client device110and the second VPN destination address of the second server120B. Upon establishment of the second VPN connection, the client device110forwards at least one flow through the second VPN connection to the second server120B. In some embodiments, all traffic that was previously forwarded through the first VPN connection is routed through the second VPN connection. In other embodiments, only a portion of the traffic is routed through the second server120B. For example, the first server120A can be coupled with the two origin servers130A and130B, and traffic between the client device110and each one of these origin servers is forwarded through the first VPN connection. 
In this example, the second VPN connection can be established for traffic that is destined to a first domain which is served by the first origin server130A while traffic destined to a second domain served by the second origin server130B continues to be forwarded through the first VPN connection. These two VPN connections can be used successively such that a first connection is first established to forward all traffic destined to one or more origin servers and then the second connection is established to forward all of the traffic without the first connection being used. Alternatively, the two VPN connections can be used concurrently such that some traffic is routed through the first VPN connection and other traffic is routed through the second VPN connection. The VPN traffic optimizer135is further operative to configure the second server120B, at operation6b. At operation6b, the VPN traffic optimizer135transmits an identification of the first server120A to be used as a destination VPN address for forwarding traffic destined to the origin server130A. The traffic optimizer135causes the VPN routing table114B to be updated to include an entry6e. The entry6eincludes the destination IP address131A of the origin server130A and a route for the traffic destined to the IP address131A. In one embodiment, the route is a VPN route towards VPN address125A. The entry6eis used for forwarding traffic that is received from the source VPN address126(the VPN address of the client device) and that is destined to the origin server130A towards the first server120A via a third VPN connection. While this embodiment describes a VPN connection between the first server120A and the second server120B, in other embodiments, the connection between these servers is not a VPN connection. In some embodiments, the operation6bmay be performed to forward traffic towards the first server120A when the first server120A is a proxy server from the cloud-based service that is to receive traffic on behalf of the origin server130A. In other embodiments, the second server120B is updated such that it is identified as the proxy server that is to receive traffic on behalf of the first origin server130A instead of the first server120A. This may be done by updating DNS records (not shown) such that a DNS request for the first domain resolves to an IP address of the second server120B instead of the IP address of the first server120A or the IP address of the origin server130A. In some embodiments, the VPN traffic optimizer135is further operative to configure the first server120A, at operation6g. In some embodiments, the VPN traffic optimizer135may perform operation6g, while in other embodiments, this operation can be skipped as the first server120A is already configured to forward traffic towards the origin server130A. At operation6g, the VPN traffic optimizer135configures the first server120A to forward traffic received from the second server120B and destined to the first origin server130A towards the origin server130A. For example, when the connection between the first server120A and the second server120B is a VPN connection, the traffic optimizer135causes the VPN traffic received through the third VPN connection to be de-encapsulated and forwarded towards the origin server130A. In one embodiment, the flows received from the client device110through the second VPN connection are transmitted from the second server120B to the first server120A, at operation9a. 
Once received at the first server120A, the flows are forwarded towards the first origin server (operation10a). Alternatively, the flows can be forwarded directly towards the origin server130A without going through the first server120A. For example, this may occur when the traffic is new traffic which was not previously forwarded through the first VPN connection. This may also occur when the second server120B is identified as a proxy server of the cloud-based service that is to receive traffic on behalf of the origin server130A. In the embodiments where the first server120A remains the proxy server and the second server120B acts as the VPN server, the requests for and the responses of the network resources are transmitted to the origin servers through the proxy server120A and via the second VPN connection (7). In these embodiments, a third connection (e.g., a VPN connection) is established between the second server120B and the first server120A. In these embodiments, when the VPN traffic is received at the second server120B, it is de-encapsulated and forwarded towards the first server120A (through the third VPN connection or other connection). When the traffic is received at the first server120A, a higher-level protocol (e.g., HTTP, HTTPS) can be used to identify the destination of the packets. For example, an HTTP request for network resources at a first domain served by the origin server130A can be determined from the packets received. The request can be fulfilled by the first server120A by either transmitting the request to the origin server130A or by retrieving network resources from a cache. In the embodiments where the second server120B is updated to act as the proxy server instead of the first server120A, when the VPN traffic is received at the second server120B, it is de-encapsulated and a higher-level protocol (e.g., HTTP, HTTPS) can be used to identify the destination of the packets. For example, an HTTP request for network resources at a first domain served by the origin server130A can be determined from the received packets. The request can be fulfilled by the second server120B by either transmitting the request to the origin server130A or by retrieving network resources from a cache. The configuration of each one of the client device110and the second server120B can be performed by the service server125transmitting the route information via a direct communication link (e.g., through an IP protocol, or other communication protocols that can be used for configuration of the network devices) or alternatively through messages tunneled through the first VPN connection. The embodiments of the present invention enable the dynamic routing of VPN traffic from a client device towards one or more servers through one or more VPN connections. The multiple VPN connections can be established simultaneously or successively to transmit a portion or all of the traffic that is originating from the client device. In some embodiments, the VPN connections are established between the client device and one or more proxy servers of a cloud-based proxy service when the client device is requesting network resources served by origin servers that are coupled with the proxy servers. The routing is dynamically updated to optimize the VPN routes that are established based on one or more optimization criteria.
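The server-side handling described above can be summarized with a small, hypothetical Python sketch: the VPN server de-encapsulates the tunneled packet and then either relays it to the proxy server over the third connection or, when it acts as the proxy itself, uses the higher-level protocol (here the HTTP Host header) to decide whether to serve from a cache or contact the origin server. The function names, the packet shape, and the cache/origin lookups are assumptions made for illustration, not the reference implementation.

```python
# Hypothetical sketch of the two server roles described above.

CACHE = {}                                            # (host, path) -> cached response body
ORIGIN_FOR_HOST = {"example.com": "198.51.100.1"}     # assumed host-to-origin mapping

def deencapsulate(vpn_packet):
    """Strip the (assumed) VPN encapsulation and return the inner payload."""
    return vpn_packet["inner"]

def fetch_from_origin(origin_ip, request):
    # Placeholder for contacting the origin server.
    return {"status": 200, "body": f"fetched {request['path']} from {origin_ip}"}

def handle_at_vpn_server(vpn_packet, acts_as_proxy, relay_to_proxy):
    inner = deencapsulate(vpn_packet)
    if not acts_as_proxy:
        # Second server is only the VPN endpoint: relay the de-encapsulated traffic
        # to the first (proxy) server over the third connection.
        return relay_to_proxy(inner)
    # Second server also acts as the proxy: use HTTP-level information to route.
    host, path = inner["host"], inner["path"]
    cached = CACHE.get((host, path))
    if cached is not None:
        return cached
    return fetch_from_origin(ORIGIN_FOR_HOST[host], inner)
```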
The operations in the flow diagrams below will be described with reference to the exemplary embodiments ofFIGS.1-2B. However, it should be understood that the operations of the flow diagrams can be performed by embodiments of the invention other than those discussed with reference to the other figures, and the embodiments of the invention discussed with reference toFIGS.1-2Bcan perform operations different than those discussed with reference to the flow diagrams. FIG.3illustrates a flow diagram of exemplary operations for traffic optimization in virtual private networks, in accordance with some embodiments. At operation302, the client device110establishes a first VPN connection with the first server120A based on first VPN credentials. The client device is associated with a source VPN address and the first server is associated with a destination VPN address. The first VPN credentials include cryptographic credentials associated with the client device110and the first server120A to enable a secure communication through a VPN protocol between the client device110and the first server120A. The first VPN credentials may further include the first VPN destination address. When the first VPN connection is established, the first VPN destination address identifies the first server120A as the destination of the first VPN connection. The first VPN destination can be associated with a first encapsulation IP address for encapsulating the VPN traffic. The first encapsulation IP address can be the IP address of the first server120A or alternatively the IP address of the origin server130A. The IP address of the first server120A can be used as the first encapsulation IP address when the first server120A is a proxy server of a cloud-based proxy service and DNS requests for a domain at the origin server resolve to the IP address of the proxy server instead of the origin server. At operation304, one or more flows of traffic are forwarded (transmitted and received) through the first VPN connection to and from the first server120A. In some embodiments, the first VPN connection is established for forwarding a set of one or more flows. A flow may include packets of requests for network resources at a first domain or packets of responses including the network resources at the first domain. The first domain is served by the origin server130A. In some embodiments, traffic including requests and responses of more than one domain can be forwarded through the first VPN connection. For example, the client device can transmit requests for network resources at two or more domains. The domains can be served by different origin servers (e.g., first domain served by origin server130A and second domain served by origin server130B). Alternatively, the domains can be served by the same origin server. In some embodiments, the requests for and the responses of the network resources are transmitted to the origin servers through a proxy server and the VPN connection is established between the client device (e.g., client device110) and the proxy server (e.g., first server120A) that is coupled with the origin server serving the network resources. In some embodiments, forwarding the one or more flows of traffic through the first VPN connection includes operations305and307. At operation305, a first request for a network resource at a domain served by a first origin server that is coupled with the first proxy server is transmitted. At operation307, a response including the network resource is received from the origin server and through the first proxy server and the first VPN connection.
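The sequence of operations302-307can be pictured with the following hypothetical Python sketch, in which a client object holds the VPN credentials (cryptographic material, source and destination VPN addresses) and an encapsulation IP address, establishes the tunnel, and then sends a request and receives a response through it. The classes, method names, and addresses are illustrative assumptions, not an actual VPN library API.

```python
from dataclasses import dataclass

@dataclass
class VpnCredentials:
    key: bytes                    # cryptographic credentials (assumed symmetric key for the sketch)
    source_vpn_address: str       # VPN address of the client device
    destination_vpn_address: str  # VPN address of the VPN server

class VpnConnection:
    def __init__(self, credentials: VpnCredentials, encapsulation_ip: str):
        # Establish the tunnel toward the encapsulation IP (the proxy server's IP,
        # or the origin server's IP, depending on the deployment).
        self.credentials = credentials
        self.encapsulation_ip = encapsulation_ip

    def send_request(self, domain: str, path: str) -> dict:
        # Forward a request for a network resource through the tunnel and
        # return the response received through the same tunnel.
        packet = {"src": self.credentials.source_vpn_address,
                  "dst": self.credentials.destination_vpn_address,
                  "host": domain, "path": path}
        return {"status": 200, "for": packet}   # placeholder response

creds = VpnCredentials(key=b"k" * 32,
                       source_vpn_address="10.0.0.10",
                       destination_vpn_address="10.0.0.1")
first_connection = VpnConnection(creds, encapsulation_ip="203.0.113.1")
response = first_connection.send_request("example.com", "/index.html")
```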
The client device110receives, at operation306, an identification of the second server120B to be used as a destination of a second VPN connection. The identification of the second server120B is received from a service server (e.g., service server125). The second server120B is identified based on one or more traffic optimization criteria that need to be satisfied by the VPN connection. In some embodiments, the traffic optimization criteria include at least one of obtaining low latency for requests for the at least one flow, obtaining high reliability of traffic forwarded through the second VPN connection, ensuring a low cost of the second VPN connection, and ensuring a good quality of service for the second VPN connection. In some embodiments, the traffic optimization criteria can be determined based on the characteristics of the protocol of the packets/traffic sent through the VPN connections. For example, the optimization criteria can be set based on characteristics of the protocol such as whether a protocol is latency-sensitive or insensitive, has high or low bandwidth requirements, or how tolerant the specific protocol is to packet loss. The optimized route is based at least in part on the characteristics of the protocol. The VPN traffic optimizer135causes configuration of the client device's VPN routing table112to route traffic to certain destinations via the selected second server that is determined by the VPN traffic optimizer135. At operation308, the VPN routing table112is updated to define the second VPN connection from the client device110to the second server120B for forwarding the at least one flow from the flows. The second VPN connection is to be established based on second VPN credentials and the second server is associated with a second VPN address. For example, the service server125transmits an update of the VPN routes to the client device110. The client device110receives an identification of the second server120B to be used as a destination of the second VPN connection. The identification of the second server120B includes an IP address of the second server120B. The update of the VPN route may further indicate which flows are to be forwarded through the new VPN route. The service server125may identify a flow based on its destination IP address and/or source IP address, based on a domain name, or other types of flow identification. For example, the service server125may transmit the source IP address of the client device110with the new VPN route identification to indicate that all flows originating from the client device110are to be transmitted through the new VPN connection. Alternatively, the client device110may receive a set of one or more destination IP addresses with the identification of the new VPN route indicating that all flows destined to these destinations are to be routed through the new VPN route. In another example, the client device110may receive one or more domain names (e.g., first domain) that are served by origin servers (e.g., origin server130A). The flows identified may be flows that were previously transmitted/received by the client device110or alternatively new flows that are to be transmitted/received by the client device110. The VPN traffic optimizer135causes configuration of the client device's VPN routing table112to route traffic to certain destinations via the selected second server.
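A minimal sketch of how a client might apply the route update described for operations306and308is shown below. The update message format (fields such as flow_destinations, domains, and vpn_destination) and the resolver hook are assumptions made for illustration; the document does not specify a wire format.

```python
# Hypothetical route-update message from the service server and how a client applies it.
update = {
    "vpn_destination": "10.0.0.2",           # second VPN destination address
    "encapsulation_ip": "203.0.113.2",       # IP address of the second server
    "flow_destinations": ["198.51.100.1"],   # flows identified by destination IP
    "domains": ["example.com"],              # or identified by domain name
}

def apply_route_update(routing_table: dict, dns_resolve, update: dict) -> None:
    """Rewrite the matching entries of the client's VPN routing table."""
    destinations = set(update.get("flow_destinations", []))
    # Domains are translated to destination IPs with an assumed resolver.
    for domain in update.get("domains", []):
        destinations.add(dns_resolve(domain))
    for ip in destinations:
        routing_table[ip] = {
            "vpn_destination": update["vpn_destination"],
            "encapsulation_ip": update["encapsulation_ip"],
        }

routing_table = {"198.51.100.1": {"vpn_destination": "10.0.0.1", "encapsulation_ip": "203.0.113.1"}}
apply_route_update(routing_table, lambda d: "198.51.100.1", update)
```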
For example, the routing table112of the client device110is updated to include an entry (e.g., entry6c) such that traffic destined to IP address131A of the origin server130A hosting a first domain is routed through a VPN route with a second VPN destination (e.g., VPN destination125B). The traffic optimizer135identifies the IP address121B of the second server120B as the encapsulation destination IP address for the VPN destination address125B and the client device is updated to include the encapsulation destination IP address121B for the second VPN destination. In the embodiments where the origin server130A is a customer of a cloud-based proxy server, the routing table112of the client device110is updated to include a VPN route (VPN destination125B) for the IP address of the proxy server120A as the destination IP address (instead of the IP address131A of the origin server130A as illustrated inFIG.2B). The destination VPN address125B is then associated with an encapsulation destination IP address121B, which is the IP address of the second server120B. In these embodiments, while the second server120B is updated to be the VPN server (destination of the second VPN connection) for the flows, the first server120A may remain the proxy server of the cloud-based service that is the actual destination of the flows as opposed to the origin server130A. While the illustrated example ofFIG.2Bshows that only one entry in the VPN routing table is updated, in other examples, multiple flows or all flows that are tunneled through the first VPN connection are updated to be routed through the second VPN connection. In these alternative examples, multiple or all entries of the routing table112are updated with the VPN destination address125B instead of VPN destination address125A. In some embodiments, the VPN routing table112may also be configured on a per-protocol level, either by routing based on both port and VPN IP address, by packet inspection, or any other mechanism available to VPN client122to detect and determine a protocol. In some embodiments, when not all traffic from the client device110is to be routed through the second VPN connection, one or more additional entries can be present in the VPN routing table112for one or more flows. These additional entries can, for example, define the first VPN connection as a tunnel for forwarding traffic, and potentially additional VPN connections for one or more other flows of traffic (not illustrated). Based on the identification of the second server120B, the client device establishes, at operation310, a second VPN connection for the at least one flow from the flows between the client device and the second server based on the second VPN credentials. The second VPN credentials include cryptographic credentials that enable the second server120B and the client device110to communicate securely through the VPN protocol. The VPN credentials further include the VPN source address of the client device110and the VPN destination address. Upon establishment of the second VPN connection, the client device110forwards, at operation312, at least one flow through the second VPN connection to the second server120B. In some embodiments, all traffic that was previously forwarded through the first VPN connection is routed through the second VPN connection. In other embodiments, only a portion of the traffic is routed through the second server120B.
For example, the first server120A can be coupled with the two origin servers130A and130B, and traffic between the client device110and each one of these origin servers is first forwarded through the first VPN connection. The second VPN connection can be established for traffic that is destined to a first domain which is served by the first origin server130A, while traffic destined to a second domain served by the second origin server130B can be forwarded through the first VPN connection. Alternatively, the second VPN connection can be established for traffic that is destined to the first domain and for traffic that is destined to the second domain. In some embodiments, forwarding the at least one flow through the second VPN connection to the second server based on the first VPN credentials associated with the VPN client includes operations309and311. At operation309, a second request for the network resource at the domain is transmitted. At operation311, a second response including the network resource is received from the origin server and the second proxy server. In some embodiments, when the first server is a proxy server of the cloud-based proxy service that is operative to receive traffic on behalf of the origin server, the second response including the network resource can be received via the first server and the second server. FIG.4illustrates a flow diagram of exemplary operations for determining a second server to be used as a VPN destination, in accordance with some embodiments. The operations ofFIG.4are typically performed by a service server125. The VPN traffic optimizer135determines a second server120B from a set of servers that is to be used as a destination of the VPN connection for the client device110to obtain network resources of one or more domains. The second server120B is identified based on one or more traffic optimization criteria that need to be satisfied by the VPN connection. At operation402, the VPN traffic optimizer135collects network intelligence metrics from requests that are fulfilled by multiple servers from a set of servers that can be used as VPN servers. For example, the set of servers can be proxy servers of a cloud-based proxy service. At operation404, the VPN traffic optimizer135determines, based on the collected network intelligence metrics, an optimized route for the VPN connection that identifies the server that is to be used as a destination of the VPN connection. The optimized route satisfies the traffic optimization criteria. The collection of metrics can be performed based on (operation403) active or passive monitoring of remote destinations (e.g., proxy servers, and/or origin servers) to measure latency, packet loss, congestion, or other network metrics. In some embodiments, the determination of the server is (operation405) further performed based on properties of the network such as cost, reliability, and current or predicted utilization. The determination of the optimized route can be performed to satisfy one or multiple optimization criteria, including ensuring low latency responses between the client device and the origin server hosting the requested network resource, high reliability of the traffic between the client device and the origin server and/or the proxy server, low cost of the VPN connection established for the client device, and/or quality of service.
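As an illustration of operations402-405, the following hypothetical Python sketch scores candidate VPN servers from collected network intelligence metrics (latency, packet loss, cost, utilization) and picks the one that best satisfies a weighted set of optimization criteria. The metric names, weights, and the linear scoring function are assumptions chosen for the example; the document does not prescribe a particular scoring method.

```python
# Assumed per-server metrics collected by active/passive monitoring (operation 403).
metrics = {
    "server_a": {"latency_ms": 40, "loss_pct": 0.1, "cost": 1.0, "utilization": 0.6},
    "server_b": {"latency_ms": 25, "loss_pct": 0.3, "cost": 1.2, "utilization": 0.4},
}

# Assumed optimization criteria expressed as weights (lower weighted score is better).
weights = {"latency_ms": 1.0, "loss_pct": 50.0, "cost": 5.0, "utilization": 10.0}

def score(server_metrics: dict) -> float:
    return sum(weights[name] * value for name, value in server_metrics.items())

def select_vpn_server(metrics: dict) -> str:
    """Operation 404: choose the server whose route best satisfies the criteria."""
    return min(metrics, key=lambda name: score(metrics[name]))

best = select_vpn_server(metrics)
print(best)   # identification of the selected server, sent to the client at operation 406
```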
In some embodiments, the traffic optimization criteria can be determined based on the characteristics of the protocol of the packets/traffic sent through the VPN connections. For example, the optimization criteria can be set based on characteristics of the protocol such as whether a protocol is latency-sensitive or insensitive, has high or low bandwidth requirements, or how tolerant the specific protocol is to packet loss. The optimized route is based at least in part on the characteristics of the protocol. In some embodiments, the traffic optimization is performed for traffic of a given domain such that the second server is to be used as a destination of a second VPN connection that is to be used to forward traffic of the given domain. In other embodiments, the traffic optimization is performed for traffic of multiple domains (e.g., a subset of all domains of network resources that are requested at the client device110, or all domains of network resources requested at the client device110) and the second VPN connection is to be used for forwarding traffic of the multiple domains. The service server125transmits, at operation406, the identification of the second server120B to the client device110, causing the update of the VPN routing table to include a second route for a second VPN connection for at least one flow from the flows between the client device and the second server based on the first VPN credentials. The client device110receives an identification of the second server120B to be used as a destination of the second VPN connection. The second VPN connection satisfies a set of traffic optimization goals for at least one flow from the flows forwarded through the first VPN connection. The VPN traffic optimizer135causes configuration of the client device's VPN routing table112to route traffic to certain destinations via the selected second server. For example, at operation3a, the traffic optimizer135identifies the IP address121B of the second server120B and the VPN destination address125B. The traffic to a given destination is identified based on a flow identifier. The flow identifier can be a destination IP address, a source address, or a domain name. In some embodiments, all flows that are tunneled through the first VPN connection are updated to be routed through the second VPN connection and there is no need to specify the flow for which the second server is intended to be used as a VPN destination. The embodiments of the present invention enable the dynamic routing of VPN traffic from a client device towards one or more servers through one or more VPN connections. The multiple VPN connections can be established simultaneously or successively to transmit a portion or all of the traffic that is originating from the client device. In some embodiments, the VPN connections are established between the client device and one or more proxy servers of a cloud-based proxy service when the client device is requesting network resources served by origin servers that are coupled with the proxy servers. The routing is dynamically updated to optimize the VPN routes that are established based on one or more optimization criteria. FIG.5illustrates a block diagram of an exemplary computer system that can be used for traffic optimization in virtual private networks (VPNs), in accordance with some embodiments. The computer system500, which is an electronic device, includes the bus(es)550which is coupled with the processing system520, power supply525, memory530, and the nonvolatile memory540(e.g., a hard drive, flash memory, Phase-Change Memory (PCM), etc.). The bus(es)550may be connected to each other through various bridges, controllers, and/or adapters as is well known in the art.
The processing system520may retrieve instruction(s) from the memory530and/or the nonvolatile memory540and execute the instructions to perform operations described herein. The bus(es)550interconnects the above components together and also interconnects those components to the display controller & display device570, Input/Output devices580(e.g., NIC (Network Interface Card), a cursor control (e.g., mouse, touchscreen, touchpad, etc.), a keyboard, etc.), and the optional wireless transceiver(s)590(e.g., Bluetooth, Wi-Fi, Infrared, etc.). In one embodiment, the client device110, the first server120A, the second server120B, the service server125, and/or the origin servers130A-B can take the form of the computer system500and perform the operations described with reference toFIGS.1-4. The techniques shown in the figures can be implemented using code and data stored and executed on one or more electronic devices (e.g., a client device, a proxy server, an origin server, a service server). Such electronic devices store and communicate (internally and/or with other electronic devices over a network) code and data using computer-readable media, such as non-transitory computer-readable storage media (e.g., magnetic disks; optical disks; random access memory; read only memory; flash memory devices; phase-change memory) and transitory computer-readable communication media (e.g., electrical, optical, acoustical or other form of propagated signals—such as carrier waves, infrared signals, digital signals). In addition, such electronic devices typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (non-transitory machine-readable storage media), user input/output devices (e.g., a keyboard, a touchscreen, and/or a display), and network connections. The coupling of the set of processors and other components is typically through one or more busses and bridges (also termed as bus controllers). Thus, the storage device of a given electronic device typically stores code and/or data for execution on the set of one or more processors of that electronic device. Of course, one or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware. While the flow diagrams in the figures show a particular order of operations performed by certain embodiments of the invention, it should be understood that such order is exemplary (e.g., alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, etc.). While the invention has been described in terms of several embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described, and can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting.
64,878
11863449
DETAILED DESCRIPTION The technical terms "first", "second" and the similar terms are used to describe elements for distinguishing the same or similar elements or operations and are not intended to limit the technical elements and the order of the operations in the present disclosure. Furthermore, the element symbols/alphabets can be used repeatedly in each embodiment of the present disclosure. The same and similar technical terms can be represented by the same or similar symbols/alphabets in each embodiment. The repeated symbols/alphabets are provided for simplicity and clarity and they should not be interpreted to limit the relation of the technical terms among the embodiments. Reference will now be made in detail to the present embodiments of the disclosure, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the description to refer to the same or like parts. Reference is made toFIG.1.FIG.1depicts a diagram illustrating a configuration of communication devices according to some embodiments of the present disclosure. The communication device PE1stores a destination-port forwarding table TB. As shown inFIG.1, a client device c1plans to send a data flow DF to a client device c2. In a normal situation, a communication device PE1forwards the data flow through a port P1. The forwarding path should be the path of the communication devices CE1-PE1-PE3-PE4-CE2. However, the communication device PE1may drop a packet of the data flow because it is configured incorrectly or for other reasons. For example, if the VLAN member portlist is not configured properly, the packet is dropped because of a misconfiguration in which the packet is not permitted to pass the VLAN filtering of the egress port P1. For monitoring the reasons why the packet is dropped, the communication device PE1executes an event monitoring process. In some embodiments, as shown inFIG.1, a monitor port P2is set on the communication device PE1. For example, when the packet is dropped by the VLAN filter of the egress port, the packet will be redirected to the monitor port P2. At this time, after a header of the packet is modified to add some information, the modified packet will be sent to a monitoring center M1, such that a network administrator can coordinate information of the packets which are dropped to monitor a dropping event. In some embodiments, a monitor port is set on the communication device PE1(e.g., the CPU port of the destination-port forwarding table TB inFIG.1). When a determination that the packet should be dropped is made, the monitor port receives the dropped packet to execute the following diagnostic process. Reference is made toFIG.2.FIG.2depicts a block diagram illustrating the communication device200according to some embodiments of the present disclosure. As shown inFIG.2, the communication device200includes a packet processor210, a data port220, a monitor port230, and a memory240. The packet processor210is coupled to the data port220, the monitor port230, and the memory240. The data port220is configured to execute data packet receiving/forwarding. The communication device200includes a plurality of data ports220. For the sake of brevity, one data port220is shown in the present disclosure as an embodiment. For example, the communication device200receives a packet of a data flow through the data port220.
If the communication device200determines that the packet should be dropped, the packet will be redirected to the monitor port230. The monitor port230is the monitor port (P2) or the monitor port (CPU port) inFIG.1. For further description of the communication device200of the disclosure, reference is made in conjunction withFIG.3.FIG.3is a flow chart illustrating a network management method according to some embodiments of the present disclosure. In step S305, receiving a packet of a data flow through the data port220is performed. In some embodiments, because the packets of the same data flow have the same attributes, the communication device200can identify the data flow of the packets according to the packet attributes. In step S310, determining whether the packet satisfies a dropping event is performed. The dropping event can be, but is not limited to, the event that the packet carries mismatched or wrong information such that the correct destination port cannot be found, the packet is an unauthorized packet or an intrusion packet, the packet cannot be stored and forwarded because the communication device200does not have enough memory space, a lookup table miss occurs, and so on. Accordingly, the packet is determined to be dropped. In some embodiments, if a determination that the packet does not satisfy a dropping event is made, it represents that the packet is normal and in step S355the data forwarding process is performed to send the packet to the destination. If a determination that the packet satisfies the dropping event is made, in step S315the packet is redirected to the monitor port230. In some embodiments, the packet is stored in the memory240through the monitor port230. The memory240includes a plurality of queues, and each queue has its corresponding priority. Based on the packet attributes and the packet priorities, when the packets are redirected, the packets will be distributed to the queues which have different priorities. The packet processor210accesses the packets in the queues according to the priority of the queues to execute the following diagnostic process (e.g., the packet in the queue having high priority will be diagnosed first). In some embodiments, if a burst of monitoring packets is redirected to the monitor port230, i.e., a large number of packets of the same type of data flows will be stored in the memory240, the communication device200has to diagnose a large number of packets which belong to the same type of data flows, such that the problem of wasting computing resources occurs. For preventing the problem, in some embodiments, the communication device200executes a suppression filter process. In the suppression filter process, only the first packet of the same data flow which is received by the communication device200is redirected to the monitor port230. If one packet which belongs to a data flow has been diagnosed and the dropping event occurs to the following packets which belong to the same data flow, the following packets will not be redirected to the monitor port230. Therefore, the packet processor210has to process only one packet of the same data flow without wasting time and storage space due to the dropping event of the same data flow. The suppression filter process is shown below.
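Before the step-by-step description that follows, the overall idea of the suppression filter can be pictured with the hypothetical Python sketch below: only the first dropped packet of a given data flow is redirected to the monitor port for diagnosis, and subsequent dropped packets of the same flow are discarded without further processing. The flow key and helper names here are illustrative assumptions; the device itself identifies flows with the digest and lookup-table mechanism described in steps S320-S350.

```python
# Hypothetical suppression filter: diagnose only the first dropped packet per flow.
seen_flows = set()

def flow_key(packet: dict) -> tuple:
    # Assumed flow identification by packet attributes; the disclosure uses a
    # digest value computed from the attributes and the dropping event instead.
    return (packet["src_ip"], packet["dst_ip"], packet["vlan"], packet["drop_event"])

def on_packet_dropped(packet: dict, redirect_to_monitor_port) -> None:
    key = flow_key(packet)
    if key in seen_flows:
        return                        # same flow already diagnosed: discard silently
    seen_flows.add(key)
    redirect_to_monitor_port(packet)  # first packet of the flow: diagnose and record

on_packet_dropped({"src_ip": "10.1.1.1", "dst_ip": "10.2.2.2", "vlan": 10,
                   "drop_event": "vlan_filter"}, redirect_to_monitor_port=print)
```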
First, the communication device200classifies the data flow. In step S320, computing a digest value of the packet according to a packet attribute and the dropping event by the packet processor210is performed. The packet attribute can be, but is not limited to, the destination/source media access control (MAC) address, the destination/source IP address, the IEEE 802.3 Ethertype, the VLAN identifier, the L4 destination/source port, the tunnel header, or the IPv6 flow label. In some embodiments, the digest value is computed by using a hash algorithm, the packet attribute, and the dropping event. For example, the packet content is a binary value of 32-bit length.FIGS.4A to4Dare diagrams for hash algorithms according to some embodiments of the present disclosure. In some embodiments, as shown inFIG.4A, the hash algorithm reverses the content of an original packet400such that the most significant bit of the original packet400is reversed to the least significant bit of the original packet400to obtain a packet401. And then, the exclusive-or computation is performed on the original packet400and the packet401which are the inputs, and the result of the exclusive-or computation is a first digest value of the packet400. In some embodiments, as shown inFIG.4B, the hash algorithm reverses the original packet400to obtain two reversed packets: packet403and packet404. And then, the exclusive-or computation is performed on the packet403and the packet404which are the inputs, and the result of the exclusive-or computation is a second digest value of the packet400. In some embodiments, as shown inFIG.4C, the hash algorithm splits the original packet400into two segments and reverses the order of the two segments to obtain the packet405. And then, the exclusive-or computation is performed on the packet405and the original packet400which are the inputs, and the result of the exclusive-or computation is a third digest value of the packet400. In some embodiments, as shown inFIG.4D, the hash algorithm performs the exclusive-or computation on the two packets405which are the inputs, and the result of the exclusive-or computation is a fourth digest value of the packet400. It should be noted that the illustration ofFIG.4AtoFIG.4Dis provided as some embodiments of the disclosure and the hash algorithms are not limited herein. In step S325, computing an identification code of the packet according to the digest value of the packet is performed. For example, the length of the digest value is 32 bits. The communication device200allocates a lookup table whose size is 2^32 for storing whether the digest value is recorded. However, for saving the memory space, in step S325the length of the digest value is compressed to reduce the size of the lookup table. The lookup table stores the status value associated with the digest value. In some embodiments, the length of the identification code is smaller than the length of the digest value. For example, the 32-bit length of the digest value is compressed into 14 bits. The digest value whose length is compressed is called the identification code. For example, the 32-bit digest value is split into three 14-bit segments (the last segment has only 4 bits of data and the other 10 bits are zero). A computation, e.g., the exclusive-or, is executed on the three segments to obtain the identification code whose length is 14 bits. The identification code corresponds to an address of the lookup table (e.g., address 0 to address 2^14-1). The content which the address of the lookup table points to is the status value, and the status value represents whether the packet corresponding to the digest value is received (described below).
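The digest and identification-code computation of steps S320-S325 can be sketched in Python as shown below. The bit-reversal-plus-XOR hash ofFIG.4Aand the folding of a 32-bit digest into a 14-bit identification code follow the description above; the exact way the dropping event is mixed into the input is an assumption made for illustration.

```python
def reverse_bits32(value: int) -> int:
    """Reverse the bit order of a 32-bit value (the reversal of FIG. 4A)."""
    result = 0
    for _ in range(32):
        result = (result << 1) | (value & 1)
        value >>= 1
    return result

def digest_fig4a(packet_attr32: int, drop_event: int) -> int:
    # Assumed mixing of the dropping event into the 32-bit packet attribute.
    original = (packet_attr32 ^ drop_event) & 0xFFFFFFFF
    return original ^ reverse_bits32(original)   # XOR of the original and its bit-reversed copy

def identification_code(digest32: int) -> int:
    """Fold a 32-bit digest into a 14-bit identification code (step S325)."""
    part1 = digest32 & 0x3FFF              # lowest 14 bits
    part2 = (digest32 >> 14) & 0x3FFF      # next 14 bits
    part3 = (digest32 >> 28) & 0x3FFF      # remaining 4 bits; the upper 10 bits are zero
    return part1 ^ part2 ^ part3           # address into a table of size 2^14

digest = digest_fig4a(0x12345678, drop_event=3)
addr = identification_code(digest)          # lookup-table address, 0 .. 2**14 - 1
```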
Therefore, when the lookup table is allocated with the size 2^14, the lookup table can store the status value of the digest value whose length is 32 bits. In some embodiments, the length of the identification code is equal to the length of the digest value. The length to which the digest value is compressed is not limited herein. In step S330, searching in the lookup table for the status value associated with the identification code is performed. In some embodiments, because the hash algorithms are applied for computing the hash values, it may happen that different data flows have the same digest value (hash value), such that a collision will occur. Therefore, the method designs a solution in which the same packet has more than one hash value to prevent the collision problem. The embodiment in the disclosure provides four hash algorithms and four lookup tables to respectively store the status value of the digest value corresponding to each data flow. It should be noted that the number of lookup tables is not limited and the scope of the disclosure includes two or more lookup tables to store the status values. Reference is made toFIGS.5A-5C.FIGS.5A to5Care diagrams of lookup tables according to some embodiments of the present disclosure. The digest value is compressed to generate the identification code. The identification code is computed by applying a first hash algorithm501, a second hash algorithm502, a third hash algorithm503, and a fourth hash algorithm504to generate a first hash value, a second hash value, a third hash value, and a fourth hash value. On the other hand, the first lookup table511corresponds to the first hash algorithm501and the first status value is recorded into where the address of the first hash value points. Similarly, the second lookup table512corresponds to the second hash algorithm502and the second status value is recorded into where the address of the second hash value points. The third lookup table513corresponds to the third hash algorithm503and the third status value is recorded into where the address of the third hash value points. The fourth lookup table514corresponds to the fourth hash algorithm504and the fourth status value is recorded into where the address of the fourth hash value points. The packet processor210determines whether the data flow has been received according to the four status values. For example, when the status value is 0, it means that the communication device200has not processed the dropping event of the data flow. When the status value is 1, it means that the dropping event of the data flow has been processed. In step S335, determining whether the status value is equal to or satisfies a control value is performed. In some embodiments, the status value is applied for determining whether the packet of the data flow is analyzed. It should be noted that the status value is one or more bits and the number of bits of the status value is not limited herein. A person of ordinary skill in the art can design the bit number based on the practical situation; for example, the status value is a 2-bit value to express four different statuses. In one embodiment, the status value is 1 bit. When the status value is 1, it means that the communication device200has analyzed the packet of the data flow. Otherwise, when the status value is 0, it means that the communication device200has not analyzed the packet of the data flow.
For the sake of brevity, the control value which is 1 is taken as an embodiment for showing whether the packet is analyzed, and a person of ordinary skill in the art can design the status and the value based on the practical situation. As shown inFIG.5A, the contents which the four addresses point to are the status values, which are 0, 0, 0, and 0 respectively. In other words, the communication device200has not analyzed the packet of the data flow. Therefore, in step S340the monitoring event records the packet. For example, the packet will be diagnosed and the diagnosed result is recorded to the monitoring event for the network administrator's reference. In some embodiments, the recorded packet content includes the timestamp at which the packet is received, the dropping event associated with the packet, and the ingress port and the egress port of the packet. In step S345, modifying the status value to be the control value (e.g., 1) is performed. As shown inFIG.5B, the status value corresponding to the identification code in each of the first lookup table to the fourth lookup table is modified to be 1, which represents that the situation of dropping the packet of the data flow is recorded. In some embodiments, steps S305to S330are performed again. The monitor port230receives a packet of the data flow and the status values corresponding to the packet are computed accordingly. Then in step S335, if a determination is made that all the status values of the first lookup table to the fourth lookup table are 1, as shown inFIG.5B, it represents that the situation of dropping the packet of the data flow is already recorded (the control value is 1, for example). In step S350, the packet will be dropped accordingly. Therefore, the packet will not be processed in the diagnostic process (e.g., forwarding the packet to the monitoring center or recording the packet in the monitoring event). It should be noted that if, in step S335, only some of the lookup tables have status value(s) of 1 (e.g., the status value of the first lookup table is 1 and the status values of the second to the fourth lookup tables are 0), as shown inFIG.5C, it means that a collision occurs and the communication device200will still process the dropping event of the data flow. For example, the hash value of the first packet of the first data flow which is computed by using the first hash algorithm is 3173, and the content pointed to by address 3173 was modified to be 1 in the previous dropping event. However, the hash value of the second packet of the second data flow which is computed by using the first hash algorithm is also 3173. Because the hash values of the second data flow computed by the second to fourth hash algorithms correspond to status values that are all 0, a collision is detected between the hash values of the first packet and the second packet. Actually, in this embodiment, the communication device200has not yet processed the second data flow. Therefore, the communication device200will still process the dropping event of the second data flow. In this way, multiple hash algorithms are applied for decreasing the situation of determining by mistake that the data flow has been processed.
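Steps S330-S350 can be sketched as the Bloom-filter-like check below: the packet's digest is hashed by several algorithms, each result indexes its own lookup table, and the packet is only diagnosed when at least one of the addressed status values is still 0. The hash functions used here are placeholders for illustration; the bit-reversal/XOR algorithms described above would be substituted in practice.

```python
TABLE_SIZE = 2 ** 14
NUM_TABLES = 4
lookup_tables = [[0] * TABLE_SIZE for _ in range(NUM_TABLES)]

# Placeholder hash functions; the disclosure uses four bit-reversal/XOR variants.
hash_functions = [lambda d, i=i: (d * (2 * i + 3) + i) % TABLE_SIZE for i in range(NUM_TABLES)]

def should_diagnose(digest: int) -> bool:
    """Steps S330/S335: diagnose only if the flow is not yet recorded in all tables."""
    addresses = [h(digest) for h in hash_functions]
    if all(lookup_tables[i][addr] == 1 for i, addr in enumerate(addresses)):
        return False                  # step S350: already recorded, drop without diagnosing
    for i, addr in enumerate(addresses):
        lookup_tables[i][addr] = 1    # step S345: mark the flow as recorded
    return True                       # step S340: record the packet in the monitoring event

print(should_diagnose(0x1234))   # True  - first packet of the flow is diagnosed
print(should_diagnose(0x1234))   # False - subsequent packets of the same flow are suppressed
```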
In some embodiments, the communication device200sets an aging value to the digest value of the data flow. For example, if the aging value of the digest value of the data flow is larger than a threshold value (e.g., the aging value counts increasingly to the threshold value 255) or is smaller than a threshold value (e.g., the aging value counts decreasingly to the threshold value 0), the status value of the lookup table will be erased. Therefore, the memory space can be freed to store the new status value of the data flow. In some embodiments, the communication device200will periodically (e.g., every day) erase all the status values of the lookup table. Therefore, the lookup table records status values according to the period of the data flow to increase efficiency. As described above, with the communication device and the network management method in the disclosure, no additional hardware is needed for embedding the management information into the packet (i.e., the chip cost is reduced). The network administrator only has to set configurations of the communication device to forward the packet to the processor or a specific port, such that the dropped packet can be monitored. Additionally, when the dropping event occurs to the packet, the communication device records the management information base and gathers the statistics of the counter, and the communication device also records the packets having problems, their causes, and the timestamps. The communication device only has to record once (to reduce the processor loading) to generate the report for the network administrator and further inspections to reduce the time for troubleshooting. It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present disclosure without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the present disclosure cover modifications and variations of this disclosure provided they fall within the scope of the following claims.
19,049
11863450
DETAILED DESCRIPTION Overview In an embodiment, a method comprises: at a network device configured to be connected to a network and having control and data planes, and interfaces configured for network operations in the network: upon receiving, from a controller, instructions to form a local twin of the network device that is a virtual replica of the network device to be used for test purposes, creating the local twin and configuring the local twin to include virtual control and data planes, and virtual interfaces, which are virtual replicas of, and operate independently from, the control and data planes, and the interfaces, of the network device, respectively; and hosting the local twin on physical resources of the network device such that the local twin is configured for virtual network operations on the network device that replicate, but are independent from, the network operations. Example Embodiments Networks today are complex and dispersed given the explosion of co-location facilities, and cloud and 5G environments, for example. At the same time, the networks provide critical support to and maintain a wide variety of applications operated by enterprises. Network architecture development and operational support teams seek new and innovative methods for developing and testing the networks to assure they provide high availability for all functions and/or locations of the networks. Furthermore, a test network used as a proxy to test a network is preferably an “exact-as-possible” replica of the network to assure that testing and compliance are closely aligned in any test simulation. Accordingly, embodiments presented herein accurately replicate a network (also referred to as a “production” network) to produce a test or “replica” network, and inject failures into the replica network in a secure manner to determine how the network will react to the failures. The replica network provides the same capabilities as the network. The failures are injected into the replica network, which responds to the injected failures, in a way that does not interfere with operation of the network. Reference is now made toFIG.1, which shows a system100configured to deploy a virtual network environment that “virtually” replicates a network120. System100includes a network controller110that configures and controls a plurality of network devices122(e.g., routers) of network120. More generally, any controller or management entity may be used to control network devices122according to the embodiments presented herein. In the example shown inFIG.1, network devices122include routers R1, R2, R3, R4, and R5 connected to one another over physical links or connections to form a network topology. More or fewer network devices may be employed in other examples. Each network device Ri respectively includes a management plane, a control plane, a data plane, and interfaces connected to the physical links, as is known. In the ensuing description, the management plane and the control plane are collectively, and more generally, referred to simply as the “control plane.” The control and data planes of network devices122are configured to forward traffic (e.g., data packets) over the interfaces to and from user devices (not shown) connected to network120. Network120may be referred to as a “production” network120, and network devices122may be referred to as “production” network devices122. 
The term “production” means that the network/network devices are deployed or fielded and perform normal/regular network operations such as routing and traffic forwarding to and from customers. In the example ofFIG.1, a multi-role network injection failure probe (NIFP)130(referred to individually as NIFP130and collectively as NIFPs130) is embedded in each network device Ri in the network120. NIFPs130may be embodied as software agents (each referred to as a probe process) and are configured to communicate with the network controller110. In some embodiments, network controller110sends commands to NIFPs130to control the NIFPs to create a virtual replica network136(also referred to as a virtual “test network”) for network test purposes, which replicates the configuration, topology, and operation of part or all of network120. As described below, virtual replica network136includes or instantiates one or more virtual replica network devices VR1, VR2, and so on, that replicate (virtually) corresponding ones of network devices R1, R2, and so on, and their interconnections. In some embodiments, network controller110may control NIFPs130to introduce virtual or synthetic network failure tasks or scenarios into virtual replica network136. The network failure tasks or scenarios may include different types of network failures such as link failure, link load, etc. After completion of a failure scenario, the NIFPs130may report back to network controller110, which may communicate with a network operations team (not shown) to understand and comprehend the potential impact of the failures. In an example, the network controller110and the NIFPs130may be configured to use in-Situ Operations Administration and Maintenance (iOAM) techniques to propagate relevant network failure information as these induced failures/errors occur. Embodiments presented herein include an out-of-band option and an in-band option for virtual replica network136. In the out-of-band option, virtual replica network136is physically separate from network devices122. That is, virtual replica network136is hosted in a network test environment that does not reside on network devices122. On the other hand, in the in-band option, virtual replica network136resides on one or more of network devices122. In both cases, virtual replica network136virtually replicates the control planes (including the management planes), data planes, and interfaces of one or more of network devices122, and thereby accurately replicates at least a portion of the topology of network120. Thus, virtual replica network136may include virtual control planes, virtual data planes, and virtual interfaces that replicate the control planes, data planes, and interfaces of network devices122. FIG.2is an illustration of an example out-of-band method200of creating a virtual replica network202(representative of virtual replica network136inFIG.1) in an external virtual environment, such as on a protocol simulator204, that is physically separate from (i.e., external to) network devices122and thus physically separate from network120. In the example ofFIG.2, virtual replica network202includes virtual replica network devices VR1, VR2, VR3, and VR4 hosted/instantiated on protocol simulator204, and which virtually replicate network devices R1, R2, R3, and R4, respectively. Out-of-band method200employs NIFPs130to replicate the “content” of one or more of network devices122(e.g., network devices R1-R4) to protocol simulator204. 
Particularly, NIFPs130freeze the configurations and memory states of the one or more of network devices and take an exact “snapshot in time” (referred to simply as a “snapshot”) of the (frozen) configurations and memory states of the one or more network devices at the time the snapshot is taken. The snapshot includes management plane, control plane, and data plane state entries, counter and registry information, and interface details, of the one or more network devices, which may be used to fully replicate, or only partially replicate, the network. For example, the snapshot may include routing and forwarding tables, interface register states and counters, and so on. Multiple snapshots may be used to instantiate virtual replica network202. Then, operators may introduce virtual failure scenarios into virtual replica network202safely, outside of network120. In the example ofFIG.2, each network device Ri respectively includes a kernel206(e.g., a Linux kernel), NIFP130hosted on the kernel, and network components208that are also hosted on the kernel. Network components208collectively implement or represent a control plane, a data plane, and interface configuration (Intf) of network device Ri in order to route/forward traffic according to the topology of network120. Network components208may be implemented in hardware and/or as applications/processes. Network components208include routing protocols208athat execute on kernel206, and configuration information208b(including data plane information) for/associated with the routing protocols and interfaces. Routing protocols208amay include an open shortest path first (OSPF) protocol, a border gateway protocol (BGP), and a Cisco express forwarding (CEF) switching process, for example. Other protocols may be used. Configuration information208bmay include a routing information base (RIB), a forwarding information base (FIB), and configuration information and counters (and registers) for the interfaces, for example. Out-of-band method200starts at220, when an operator sends a query to network controller110to initialize a test failure scenario to be run. The operator may do this via application program interfaces (APIs) to network controller110. Network controller110analyzes the query from the operator to determine or identify a list of network devices122(referred to as the “identified network devices”) that will be part of the test failure scenario. In the example ofFIG.2, the identified network devices include network devices R1-R4. Network controller110communicates only with the NIFPs130of the identified network devices. Specifically, network controller110queries NIFPs130of the identified network devices to freeze, and then take snapshots of, full device configurations and memory states of the identified network devices. Responsive to the queries, NIFPs130on the identified network devices capture the full device configurations and memory states of the identified network devices to produce “memory snapshot” files (also referred to as device “core” files) for the identified network devices. NIFPs130send the memory snapshot files to network controller110. At222, network controller110transfers the memory snapshot files to protocol simulator204to instantiate on the protocol simulator one or more virtual replica network devices (e.g., virtual replica network devices VR1-VR4) that virtually replicate the identified network devices (e.g., routers R1-R4) and collectively comprise or form virtual replica network202. 
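The snapshot step can be pictured with the hypothetical Python sketch below, in which a probe process freezes the device, gathers the control-plane, data-plane, and interface state into a snapshot dictionary, and returns it to the controller. The field names and the device/controller helpers (freeze, get_rib, upload, and so on) are illustrative assumptions; an actual NIFP would use the device operating system's own facilities to produce its memory snapshot ("core") file.

```python
import json
import time

def take_snapshot(device) -> dict:
    """Hypothetical NIFP routine: freeze state, capture it, then resume the device."""
    device.freeze()                      # assumed: suspend configuration/state changes
    try:
        snapshot = {
            "device_id": device.name,
            "taken_at": time.time(),
            "running_config": device.get_running_config(),
            "routing_table": device.get_rib(),            # RIB entries
            "forwarding_table": device.get_fib(),         # FIB entries
            "interfaces": device.get_interface_counters_and_registers(),
        }
    finally:
        device.unfreeze()                # assumed: resume normal operation
    return snapshot

def send_to_controller(snapshot: dict, controller_session) -> None:
    # The snapshot file is serialized and returned to the controller, which
    # forwards it to the protocol simulator (operation 222).
    controller_session.upload("snapshots/" + snapshot["device_id"], json.dumps(snapshot))
```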
To do this, for example, protocol simulator204re-instantiates the full device configuration and memory states from the memory snapshot files into virtual replica network devices VR1-VR4 of virtual replica network202. Once virtual replica network202has been instantiated, at224, network controller110performs virtual failure injection into the virtual replica network202on protocol simulator204. FIG.3shows operations300employed to instantiate virtual replica network device VRi that virtually replicates network device Ri using the memory snapshot file taken from the network device. At302, network controller110receives the memory snapshot file from/for the network device Ri. The memory snapshot file may be formatted in accordance with an internetwork operating system (IOS) that is running on network device Ri. At304, network controller110extracts information for network device Ri from the memory snapshot file, including configuration, protocol, and any other relevant information to be used to virtually replicate the network device. Network controller110sends the information to protocol simulator204. At306, protocol simulator204simulates control plane protocols, a data plane, and local interface specific configuration of network device Ri based on the information from network controller110, to instantiate virtual replica network device VRi. At308, protocol simulator204uses neighbor interface and protocol information for network devices that are neighbors to network device Ri as provided in the memory snapshot file to simulate the neighbors as virtual neighbors of virtual replica network device VRi. At310, network controller110compares network routes and other network-related information present in the memory snapshot file with the virtual replica network device VRi to ensure that the virtual replica network device accurately replicates, i.e., matches, the network device Ri. When there is a mismatch, at312, adjustments are made to attain closer alignment. When there is a match, at314, no action is taken. Next, network controller110injects virtual/simulated faults into the virtual replica network device VRi in accordance with a test scenario. As described above, the out-of-band option uses the memory snapshots to simulate one or more network devices and their surrounding neighbors, which brings the virtual replica network and its virtual replica network devices as close as possible to the network and its network devices. An advantage of this approach is that, for each network device, the network device state is "snapshotted" at a given moment in time and contains all of the information configured on the network device at that time and that can be used to virtualize the network device. The out-of-band option includes security measures to secure or protect communication between network controller110and network devices122, and to secure content on and operations performed by the network devices. For example, multiple levels of transport authentication and authorization may be applied to securing the request for and retrieval of snapshots with respect to NIFPs130. While the transport security is highly secure when strong encryption is used, API keys, and other forms of security, may be added. For example, hardware identity and software posture attestation and appraisal may also be applied, and any transfer of a snapshot item may require a full cross-check of attestation for hardware, software, software images, and tampering. The in-band option is described below in connection withFIGS.4-6.
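The verification at operations310-314can be sketched as a simple route comparison between the snapshot and the instantiated virtual replica, as in the hypothetical Python below. The route representation (prefix to next hop) and the adjustment hook are assumptions for illustration, not the controller's actual interface.

```python
def verify_replica(snapshot_routes: dict, replica_routes: dict, adjust) -> bool:
    """Operation 310: compare routes from the snapshot with the virtual replica.

    Returns True when the replica matches (operation 314); otherwise requests
    adjustments for each mismatching prefix (operation 312) and returns False.
    """
    mismatches = {
        prefix: (next_hop, replica_routes.get(prefix))
        for prefix, next_hop in snapshot_routes.items()
        if replica_routes.get(prefix) != next_hop
    }
    for prefix, (expected, actual) in mismatches.items():
        adjust(prefix, expected, actual)     # assumed hook into the protocol simulator
    return not mismatches

snapshot_routes = {"10.0.0.0/24": "192.0.2.1", "10.0.1.0/24": "192.0.2.2"}
replica_routes = {"10.0.0.0/24": "192.0.2.1", "10.0.1.0/24": "192.0.2.9"}
ok = verify_replica(snapshot_routes, replica_routes,
                    adjust=lambda p, e, a: print(f"adjust {p}: expected {e}, got {a}"))
```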
At a high level, the in-band option creates a virtual replica network that virtually replicates, and is physically implemented on, network120. The virtual replica network comprises virtual replica network devices referred to as "local twins" implemented on, and that virtually replicate, corresponding ones of network devices122. Although the local twins are implemented on corresponding ones of the network devices122, the local twins operate as a virtual test environment separately or independently from their hosting network devices with respect to "production" network operation, such as production routing and traffic forwarding, and so on. The virtual replica network comprising the local twins represents a virtual replica network overlay hosted on, but that operates independently of, the underlying network (i.e., network120). FIG.4is an illustration of an example in-band option400that includes local twins that are implemented physically on network devices R1 and R2 under control of network controller110. Each network device Ri includes kernel206and NIFP130hosted on the kernel. Kernel206may include a virtualized operating system, such as the Cisco vIOS-XR, or the like, depending on a type of the network device Ri. The virtualized operating system may support Docker containers and/or virtual machines, for example. On network device R1, kernel206hosts a local instance404(i.e., a production instance) of a control plane and a data plane (i.e., control and data planes) of network device R1. Local instance404is considered "local" because it resides/is hosted on network device R1. By way of example, the control and data planes include instances of the BGP and the OSPF protocol, and instances of an RIB and possibly an FIB (not shown) associated with the protocols. The data plane also includes interface configuration information for interfaces of the network device R1. To implement the in-band option on network device R1, kernel206also hosts a local twin410that includes a virtual control plane, a virtual data plane (i.e., virtual control and data planes), and virtual interfaces. The virtual interfaces are shown inFIG.5, described below. The local twin410may be implemented in/as a virtual machine or a container hosted directly on kernel206separately from processes that execute in local instance404, for example. The local twin410virtually replicates local instance404, but is operationally decoupled from the local instance. The virtual control and data planes of local twin410include virtual instances of the BGP and the OSPF protocol, and virtual instances of an RIB and possibly an FIB associated with the virtual protocols, which replicate the corresponding components or peers on local instance404. The virtual control and data planes also include virtual interface configuration information for the virtual interfaces of local twin410. More generally, local twin410represents a virtual replica network device (e.g., VRi) hosted on network device R1 that virtually replicates the network device with respect to production network functions performed by the network device (e.g., by local instance404), such as preparing to forward production traffic and then forwarding that traffic.
While hosting of/instantiating local twin410on network device R1 relies on or uses physical resources of the network device, such as physical compute (e.g., CPU), storage (e.g., memory), and network (e.g., interface/port) resources, the local twin performs virtualized network operations separately and independently from the production network operations performed by local instance404. To this end, under control of network controller110, NIFP130on network device R1 can be commanded to inject virtual failures and/or virtual traffic into local twin410. The injected failures and/or virtual traffic propagate through the local twin independently of, and on a non-interfering basis with respect to, the production network operations performed by network device R1, including routing, traffic forwarding, and so on. The NIFP130reports, to network controller110, results of injecting the failures and/or forwarding the virtual traffic. To further implement the in-band option, network device R2 is configured similarly to network device R1 to host, on network device R2, a local twin420that serves as a virtual replica network device that virtually replicates network device R2. More specifically, kernel206of network device R2 hosts a local instance414having control and data planes similar to those of local instance404on network device R1. Also, kernel206of network device R2 hosts local twin420to include virtual control and data planes, similar to those of local twin410hosted on network device R1, that virtually replicate local instance414. The virtual data plane of local twin420also includes virtual interface configuration information for virtual interfaces of the local twin that replicate interfaces of network device R2. Network controller110employs NIFP130on network device R2 to inject failures and/or virtual traffic into local twin420, which propagate through the local twin independently of network-related operations performed by network device R2. More specifically, network controller110sends commands to NIFP130, which trigger the NIFP to inject the failures. FIG.5is an illustration of in-band option400that shows virtual connections between local twin410and local twin420to form a virtual replica network that replicates a portion of the topology of network120. That is, the virtual replica network shown inFIG.4has a virtual topology that replicates the topology of network120(at least between network devices R1 and R2). The example ofFIG.5assumes that, in network120, network device R1 includes 3 physical interfaces connected to 3 physical interfaces of network device R2 over 3 physical links L1, L2, L3. The in-band option replicates this topology virtually under control of network controller110in the following manner. Local twin410includes 3 virtual interfaces VI1connected to 3 virtual interfaces VI2of local twin420(which is referred to as the “remote local twin”) over 3 virtual links VL1, VL2, and VL3that virtually replicate physical links L1, L2, and L3, respectively. The virtual links may be implemented as logical links on the physical links. The virtual replica network shown inFIG.5represents a virtual replica network overlay that resides on and virtually replicates the topology of the underlying network (e.g., network120). 
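The controller-to-probe interaction described above might be modeled as in the following sketch, in which an NIFP-like probe applies injection commands only to the twin's test state and records results for reporting back to the controller. The command fields and class names are assumptions made for illustration, not an actual NIFP interface.

from dataclasses import dataclass, field

@dataclass
class TwinState:                 # hypothetical stand-in for a local twin's test state
    virtual_failures: list = field(default_factory=list)
    virtual_traffic: list = field(default_factory=list)

@dataclass
class InjectCommand:
    target_twin: str             # e.g., "local-twin-R1"
    kind: str                    # "failure" or "virtual_traffic"
    detail: dict                 # e.g., {"virtual_link": "VL2", "action": "down"}

class Probe:
    """Embedded probe process; acts only on the local twin, never on production state."""
    def __init__(self, twin: TwinState):
        self.twin = twin
        self.results = []

    def handle(self, cmd: InjectCommand) -> dict:
        # Injection touches only the twin's state, so production routing and
        # forwarding on the hosting network device are unaffected.
        if cmd.kind == "failure":
            self.twin.virtual_failures.append(cmd.detail)
        else:
            self.twin.virtual_traffic.append(cmd.detail)
        result = {"target": cmd.target_twin, "applied": cmd.detail}
        self.results.append(result)   # later reported back to the controller
        return result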
Under control of network controller110and NIFPs130of network devices R1 and R2, virtual traffic that is forwarded by local twin410to (remote) local twin420over virtual links VL1-VL3may replicate traffic forwarded by network device R1 to network device R2 over physical links L1-L3; however, the virtual traffic is forwarded over the virtual links independently of and without interfering with the traffic forwarded over the physical links. Thus, NIFPs130may inject virtual traffic into the virtual network for test purposes without interfering with the underlying network. The in-band option may use different mechanisms to create the local twin(s) and the above-described virtual replica network/overlay of network120. In a first method, network controller110installs NIFPs130and the virtualized operating system on each of network devices122. Based on a test scenario to be run, network controller110identifies a portion of network120(i.e., the network topology) to be replicated, including relevant network devices in that portion of the network. Network controller110instructs the relevant network devices to replicate themselves. In response, each relevant network device instantiates a local twin (on that network device), and connects its local twin to any remote local twins to replicate the topology, as described above. Each relevant network device may implement its local twin in a container or as a virtual machine, which may implement a virtual control plane, a virtual data plane, and virtual interfaces. In addition, the local NIFP130may capture configuration information from a local control plane, a local data plane, and from local interfaces, and transfer the configuration information to the virtual control plane, virtual data plane, and virtual interfaces. The in-band option includes security measures to secure or protect communication between network controller110and network devices122, and to secure content on and operations performed by the network devices. Network controller110may limit the resources (e.g., compute, storage, and/or network resources) used by a local twin (e.g., local twin410) on the network device, so as to minimize the impact of the local twin on the local instance (e.g., local instance404). Network controller110may apply safety bounds to the local twin, with the ability to apply extremely tight boundaries. These boundaries restrict the impact (or a percentage of resources, as per selection by an operator) of the local twin on the local instance. For example, the operator may create a test scenario in which the local twin forwards virtual traffic (also referred to as synthetic or test traffic). The operator applies a lower priority to the virtual traffic compared to the (operational) traffic so that the traffic takes precedence over the virtual traffic. FIG.6is a flowchart of an example method600of an in-band option that includes creating, on a network device among network devices of a network that are controlled by a network controller, a local twin of the network device that replicates the network device and that is used for test purposes. The network device includes/has a control plane, a data plane (i.e., control and data planes), and interfaces configured for network operations in the network. The network operations may include routing and traffic forwarding, for example. The network device may be considered a production network device deployed in a production network, which includes many production network devices. 
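The resource bounds and traffic-priority behavior described above could be expressed along the lines of the following sketch, assuming a simple higher-is-better priority scale and hypothetical constraint fields; a real deployment might enforce the bounds through container or virtual machine limits rather than the checks shown here.

from dataclasses import dataclass

@dataclass
class TwinResourceBounds:
    max_cpu_share: float                 # fraction of device CPU the twin may use, e.g., 0.05
    max_memory_mb: int                   # memory ceiling for the twin
    virtual_traffic_priority: int = 1    # lower than the production traffic priority

PRODUCTION_TRAFFIC_PRIORITY = 7          # assumed higher-is-better priority scale

def enforce_bounds(usage_cpu_share: float, usage_memory_mb: int,
                   bounds: TwinResourceBounds) -> list:
    """Return corrective actions a device might take when the twin exceeds its bounds."""
    actions = []
    if usage_cpu_share > bounds.max_cpu_share:
        actions.append("throttle twin CPU")
    if usage_memory_mb > bounds.max_memory_mb:
        actions.append("cap twin memory / drop oldest virtual traffic")
    return actions

def classify_priority(is_virtual: bool, bounds: TwinResourceBounds) -> int:
    # Production traffic always takes precedence over the twin's synthetic traffic.
    return bounds.virtual_traffic_priority if is_virtual else PRODUCTION_TRAFFIC_PRIORITY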
The network operations may be considered production network operations that are not test network operations. At602, the network controller sends, to the network device over the network, instructions to instantiate/create a local twin of the network device. Upon receiving the instructions, the network device creates the local twin as a virtual replica of the network device (e.g., of a local production instance of the control plane, the data plane, and the interfaces) that is configured for test purposes. To do this, the network device configures the local twin to include a virtual control plane, a virtual data plane (i.e., virtual control and data planes), and virtual interfaces, which are virtual replicas of, and operate independently from, the control and data planes, and the interfaces of the network device, respectively. The virtual control and data planes may be configured to replicate routing protocols and one or more of a routing information base and a forwarding information base used by the control and data planes of the network device. The virtual interfaces may be configured to replicate the interfaces of the network device. At604, the network device hosts the local twin on physical resources (e.g., using compute, storage, and network resources) of the network device, such that the local twin is configured for virtual network operations that replicate, but are independent from, the (production) network operations. For example, processes used to instantiate the local twin may be executed as a container or as a virtual machine on the network device separately from processes used by the network device to implement its production control and data planes, and the interfaces of the network device. In addition, the network controller may perform placing constraints on the physical resources of the network device that are to be used to host the local twin, so that the network device hosts the local twin within the constraints to minimize an impact of the local twin on performance of the network operations. To do this, the network controller may send, to the network device, a command that includes the constraints (e.g., CPU and/or storage limits). Once the network device instantiates the local twin, the network device monitors usage of the physical resources by the local twin, and limits such usage to the constraints. At606, the network device employs a probe process (e.g., NIFP) configured to inject, into the local twin, virtual network failures or virtual traffic that propagate through the local twin without interfering with control plane, data plane, and interface operations (such as traffic forwarding) performed by the network device. In an example in which the network includes physical links/connections to connect the network device to a remote network device, at608, the network device creates virtual links that are virtual replicas of, and reside on, the physical links, to connect the local twin to a remote local twin that is a virtual replica of, and hosted on, the remote network device. The local twin, the remote local twin, and the virtual links collectively form a virtual replica network overlay that resides on, replicates a topology of, and operates independently from the network. That is, the virtual replica network overlay performs virtual network operations that are independent of network operations performed by the “underlying” network. 
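A minimal device-side sketch of steps602-608is shown below, assuming hypothetical DeviceState and LocalTwin structures and an instructions dictionary from the controller; it copies the production state into the twin, records any constraints, and creates virtual links that ride on the physical links to a remote twin.

from copy import deepcopy
from dataclasses import dataclass, field

@dataclass
class DeviceState:
    control_plane: dict
    data_plane: dict
    interfaces: dict

@dataclass
class LocalTwin:
    virtual_control_plane: dict
    virtual_data_plane: dict
    virtual_interfaces: dict
    virtual_links: list = field(default_factory=list)
    resource_bounds: dict = field(default_factory=dict)

def handle_create_twin(instructions: dict, device: DeviceState) -> LocalTwin:
    # 602: instructions arrive from the controller; 604: host the twin within
    # any resource constraints the controller supplied.
    twin = LocalTwin(
        virtual_control_plane=deepcopy(device.control_plane),
        virtual_data_plane=deepcopy(device.data_plane),
        virtual_interfaces=deepcopy(device.interfaces),
        resource_bounds=instructions.get("constraints", {}),
    )
    # 608: create virtual links that ride on the physical links to the remote
    # local twin, forming the virtual replica network overlay.
    for link in instructions.get("physical_links", []):
        twin.virtual_links.append({"on_physical_link": link,
                                   "peer": instructions.get("remote_twin")})
    return twin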
The network device may employ the probe process to implement a test scenario that includes forwarding virtual traffic from the virtual interfaces (over the virtual links), and may also assign a higher priority to traffic forwarding from the interfaces than to virtual traffic forwarding from the virtual interfaces. In another embodiment, a network controller of production network devices configured to implement a production network topology of a production network for routing and forwarding production traffic performs a method. The method comprises: providing first instructions to the production network devices to cause the production network devices collectively to form a virtual replica network overlay that replicates the production network topology, resides on the production network, and operates independently of the production network, wherein the first instructions cause the production network devices to perform, respectively: creating virtual network devices (i.e., local twins) that virtually replicate the production network devices; hosting the local twins on the network devices; and providing second instructions to the production network devices (e.g., to embedded probe processes) to cause the production network devices to inject, into the virtual replica network overlay, virtual failures or virtual traffic that propagate through the virtual replica network overlay independently of forwarding the production traffic. Referring toFIG.7,FIG.7illustrates a hardware block diagram of a computing/computer device700or in general any apparatus that may perform functions of the network controller110, each of network devices122, and protocol simulator204described herein in connection withFIGS.1-6. Moreover, the hardware block diagram inFIG.7may also be generally representative of an apparatus, such as a network device that is controlled, according to the techniques presented herein, to create a local twin on the network device, and then to induce a failure into the local twin, and into a virtual replica network overlay to which the local twin is connected. In at least one embodiment, the computing device700may be any apparatus that may include one or more processor(s)702, one or more memory element(s)704, storage706, a bus708, one or more network processor unit(s)710interconnected with one or more network input/output (I/O) interface(s)712, one or more I/O interface(s)714, and control logic720. In various embodiments, instructions associated with logic for computing device700can overlap in any manner and are not limited to the specific allocation of instructions and/or operations described herein. In at least one embodiment, processor(s)702is/are at least one hardware processor configured to execute various tasks, operations and/or functions for computing device700as described herein according to software and/or instructions configured for computing device700. Processor(s)702(e.g., a hardware processor) can execute any type of instructions associated with data to achieve the operations detailed herein. In one example, processor(s)702can transform an element or an article (e.g., data, information) from one state or thing to another state or thing. Any of potential processing elements, microprocessors, digital signal processor, baseband signal processor, modem, PHY, controllers, systems, managers, logic, and/or machines described herein can be construed as being encompassed within the broad term ‘processor’. 
In at least one embodiment, memory element(s)704and/or storage706is/are configured to store data, information, software, and/or instructions associated with computing device700, and/or logic configured for memory element(s)704and/or storage706. For example, any logic described herein (e.g., control logic720) can, in various embodiments, be stored for computing device700using any combination of memory element(s)704and/or storage706. Note that in some embodiments, storage706can be consolidated with memory element(s)704(or vice versa), or can overlap/exist in any other suitable manner. In at least one embodiment, bus708can be configured as an interface that enables one or more elements of computing device700to communicate in order to exchange information and/or data. Bus708can be implemented with any architecture designed for passing control, data and/or information between processors, memory elements/storage, peripheral devices, and/or any other hardware and/or software components that may be configured for computing device700. In at least one embodiment, bus708may be implemented as a fast kernel-hosted interconnect, potentially using shared memory between processes (e.g., logic), which can enable efficient communication paths between the processes. In various embodiments, network processor unit(s)710may enable communication between computing device700and other systems, entities, etc., via network I/O interface(s)712(wired and/or wireless) to facilitate operations discussed for various embodiments described herein. In various embodiments, network processor unit(s)710can be configured as a combination of hardware and/or software, such as one or more Ethernet driver(s) and/or controller(s) or interface cards, Fibre Channel (e.g., optical) driver(s) and/or controller(s), wireless receivers/transmitters/transceivers, baseband processor(s)/modem(s), and/or other similar network interface driver(s) and/or controller(s) now known or hereafter developed to enable communications between computing device700and other systems, entities, etc. to facilitate operations for various embodiments described herein. In various embodiments, network I/O interface(s)712can be configured as one or more Ethernet port(s), Fibre Channel ports, any other I/O port(s), and/or antenna(s)/antenna array(s) now known or hereafter developed. Thus, the network processor unit(s)710and/or network I/O interface(s)712may include suitable interfaces for receiving, transmitting, and/or otherwise communicating data and/or information in a network environment. I/O interface(s)714allow for input and output of data and/or information with other entities that may be connected to computing device700. For example, I/O interface(s)714may provide a connection to external devices such as a keyboard, keypad, a touch screen, and/or any other suitable input and/or output device now known or hereafter developed. In some instances, external devices can also include portable computer readable (non-transitory) storage media such as database systems, thumb drives, portable optical or magnetic disks, and memory cards. In still some instances, external devices can be a mechanism to display data to a user, such as, for example, a computer monitor, a display screen, or the like. In various embodiments, control logic720can include instructions that, when executed, cause processor(s)702to perform operations, which can include, but not be limited to, providing overall control operations of computing device; interacting with other entities, systems, etc. 
described herein; maintaining and/or interacting with stored data, information, parameters, etc. (e.g., memory element(s), storage, data structures, databases, tables, etc.); combinations thereof; and/or the like to facilitate various operations for embodiments described herein. The programs described herein (e.g., control logic720) may be identified based upon application(s) for which they are implemented in a specific embodiment. However, it should be appreciated that any particular program nomenclature herein is used merely for convenience; thus, embodiments herein should not be limited to use(s) solely described in any specific application(s) identified and/or implied by such nomenclature. In various embodiments, any entity or apparatus as described herein may store data/information in any suitable volatile and/or non-volatile memory item (e.g., magnetic hard disk drive, solid state hard drive, semiconductor storage device, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM), application specific integrated circuit (ASIC), etc.), software, logic (fixed logic, hardware logic, programmable logic, analog logic, digital logic), hardware, and/or in any other suitable component, device, element, and/or object as may be appropriate. Any of the memory items discussed herein should be construed as being encompassed within the broad term ‘memory element’. Data/information being tracked and/or sent to one or more entities as discussed herein could be provided in any database, table, register, list, cache, storage, and/or storage structure: all of which can be referenced at any suitable timeframe. Any such storage options may also be included within the broad term ‘memory element’ as used herein. Note that in certain example implementations, operations as set forth herein may be implemented by logic encoded in one or more tangible media that is capable of storing instructions and/or digital information and may be inclusive of non-transitory tangible media and/or non-transitory computer readable storage media (e.g., embedded logic provided in: an ASIC, digital signal processing (DSP) instructions, software [potentially inclusive of object code and source code], etc.) for execution by one or more processor(s), and/or other similar machine, etc. Generally, memory element(s)704and/or storage706can store data, software, code, instructions (e.g., processor instructions), logic, parameters, combinations thereof, and/or the like used for operations described herein. This includes memory element(s)704and/or storage706being able to store data, software, code, instructions (e.g., processor instructions), logic, parameters, combinations thereof, or the like that are executed to carry out operations in accordance with teachings of the present disclosure. In some instances, software of the present embodiments may be available via a non-transitory computer useable medium (e.g., magnetic or optical mediums, magneto-optic mediums, CD-ROM, DVD, memory devices, etc.) of a stationary or portable program product apparatus, downloadable file(s), file wrapper(s), object(s), package(s), container(s), and/or the like. In some instances, non-transitory computer readable storage media may also be removable. For example, a removable hard drive may be used for memory/storage in some implementations. 
Other examples may include optical and magnetic disks, thumb drives, and smart cards that can be inserted and/or otherwise connected to a computing device for transfer onto another computer readable storage medium. Variations and Implementations Embodiments described herein may include one or more networks, which can represent a series of points and/or network elements of interconnected communication paths for receiving and/or transmitting messages (e.g., packets of information) that propagate through the one or more networks. These network elements offer communicative interfaces that facilitate communications between the network elements. A network can include any number of hardware and/or software elements coupled to (and in communication with) each other through a communication medium. Such networks can include, but are not limited to, any local area network (LAN), virtual LAN (VLAN), wide area network (WAN) (e.g., the Internet), software defined WAN (SD-WAN), wireless local area (WLA) access network, wireless wide area (WWA) access network, metropolitan area network (MAN), Intranet, Extranet, virtual private network (VPN), Low Power Network (LPN), Low Power Wide Area Network (LPWAN), Machine to Machine (M2M) network, Internet of Things (IoT) network, Ethernet network/switching system, any other appropriate architecture and/or system that facilitates communications in a network environment, and/or any suitable combination thereof. Networks through which communications propagate can use any suitable technologies for communications including wireless communications (e.g., 4G/5G/nG, IEEE 802.11 (e.g., Wi-Fi®/Wi-Fi®), IEEE 802.16 (e.g., Worldwide Interoperability for Microwave Access (WiMAX)), Radio-Frequency Identification (RFID), Near Field Communication (NFC), Bluetooth™ mm.wave, Ultra-Wideband (UWB), etc.), and/or wired communications (e.g., T1 lines, T3 lines, digital subscriber lines (DSL), Ethernet, Fibre Channel, etc.). Generally, any suitable means of communications may be used such as electric, sound, light, infrared, and/or radio to facilitate communications through one or more networks in accordance with embodiments herein. Communications, interactions, operations, etc. as discussed for various embodiments described herein may be performed among entities that may directly or indirectly connected utilizing any algorithms, communication protocols, interfaces, etc. (proprietary and/or non-proprietary) that allow for the exchange of data and/or information. In various example implementations, any entity or apparatus for various embodiments described herein can encompass network elements (which can include virtualized network elements, functions, etc.) such as, for example, network appliances, forwarders, routers, servers, switches, gateways, bridges, loadbalancers, firewalls, processors, modules, radio receivers/transmitters, or any other suitable device, component, element, or object operable to exchange information that facilitates or otherwise helps to facilitate various operations in a network environment as described for various embodiments herein. Note that with the examples provided herein, interaction may be described in terms of one, two, three, or four entities. However, this has been done for purposes of clarity, simplicity and example only. The examples provided should not limit the scope or inhibit the broad teachings of systems, networks, etc. described herein as potentially applied to a myriad of other architectures. 
Communications in a network environment can be referred to herein as ‘messages’, ‘messaging’, ‘signaling’, ‘data’, ‘content’, ‘objects’, ‘requests’, ‘queries’, ‘responses’, ‘replies’, etc. which may be inclusive of packets. As referred to herein and in the claims, the term ‘packet’ may be used in a generic sense to include packets, frames, segments, datagrams, and/or any other generic units that may be used to transmit communications in a network environment. Generally, a packet is a formatted unit of data that can contain control or routing information (e.g., source and destination address, source and destination port, etc.) and data, which is also sometimes referred to as a ‘payload’, ‘data payload’, and variations thereof. In some embodiments, control or routing information, management information, or the like can be included in packet fields, such as within header(s) and/or trailer(s) of packets. Internet Protocol (IP) addresses discussed herein and in the claims can include any IP version 4 (IPv4) and/or IP version 6 (IPv6) addresses. To the extent that embodiments presented herein relate to the storage of data, the embodiments may employ any number of any conventional or other databases, data stores or storage structures (e.g., files, databases, data structures, data or other repositories, etc.) to store information. Note that in this Specification, references to various features (e.g., elements, structures, nodes, modules, components, engines, logic, steps, operations, functions, characteristics, etc.) included in ‘one embodiment’, ‘example embodiment’, ‘an embodiment’, ‘another embodiment’, ‘certain embodiments’, ‘some embodiments’, ‘various embodiments’, ‘other embodiments’, ‘alternative embodiment’, and the like are intended to mean that any such features are included in one or more embodiments of the present disclosure, but may or may not necessarily be combined in the same embodiments. Note also that a module, engine, client, controller, function, logic or the like as used herein in this Specification, can be inclusive of an executable file comprising instructions that can be understood and processed on a server, computer, processor, machine, compute node, combinations thereof, or the like and may further include library modules loaded during execution, object files, system files, hardware logic, software logic, or any other executable modules. It is also noted that the operations and steps described with reference to the preceding figures illustrate only some of the possible scenarios that may be executed by one or more entities discussed herein. Some of these operations may be deleted or removed where appropriate, or these steps may be modified or changed considerably without departing from the scope of the presented concepts. In addition, the timing and sequence of these operations may be altered considerably and still achieve the results taught in this disclosure. The preceding operational flows have been offered for purposes of example and discussion. Substantial flexibility is provided by the embodiments in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the discussed concepts. As used herein, unless expressly stated to the contrary, use of the phrase ‘at least one of’, ‘one or more of’, ‘and/or’, variations thereof, or the like are open-ended expressions that are both conjunctive and disjunctive in operation for any and all possible combination of the associated listed items. 
For example, each of the expressions ‘at least one of X, Y and Z’, ‘at least one of X, Y or Z’, ‘one or more of X, Y and Z’, ‘one or more of X, Y or Z’ and ‘X, Y and/or Z’ can mean any of the following: 1) X, but not Y and not Z; 2) Y, but not X and not Z; 3) Z, but not X and not Y; 4) X and Y, but not Z; 5) X and Z, but not Y; 6) Y and Z, but not X; or 7) X, Y, and Z. Each example embodiment disclosed herein has been included to present one or more different features. However, all disclosed example embodiments are designed to work together as part of a single larger system or method. This disclosure explicitly envisions compound embodiments that combine multiple previously-discussed features in different example embodiments into a single system or method. Additionally, unless expressly stated to the contrary, the terms ‘first’, ‘second’, ‘third’, etc., are intended to distinguish the particular nouns they modify (e.g., element, condition, node, module, activity, operation, etc.). Unless expressly stated to the contrary, the use of these terms is not intended to indicate any type of order, rank, importance, temporal sequence, or hierarchy of the modified noun. For example, ‘first X’ and ‘second X’ are intended to designate two ‘X’ elements that are not necessarily limited by any order, rank, importance, temporal sequence, or hierarchy of the two elements. Further as referred to herein, ‘at least one of’ and ‘one or more of’ can be represented using the ‘(s)’ nomenclature (e.g., one or more element(s)). In summary, in some aspects, the techniques described herein relate to a method including: at a network device configured to be connected to a network and having control and data planes, and interfaces configured for network operations in the network: upon receiving, from a controller, instructions to form a local twin of the network device that is a virtual replica of the network device to be used for test purposes, creating the local twin and configuring the local twin to include virtual control and data planes, and virtual interfaces, which are virtual replicas of, and operate independently from, the control and data planes, and the interfaces, of the network device, respectively; and hosting the local twin on physical resources of the network device such that the local twin is configured for virtual network operations on the network device that replicate, but are independent from, the network operations. In some aspects, the techniques described herein relate to a method, further including, at the network device: injecting, into the local twin, virtual network failures or virtual traffic that propagate through the local twin without interfering with traffic forwarding by the network device. In some aspects, the techniques described herein relate to a method, further including: hosting, on the network device, a probe process configured to, under control of the controller, trigger injecting the virtual network failures or the virtual traffic into the local twin. In some aspects, the techniques described herein relate to a method, wherein the network includes physical links to connect the network device to a remote network device in the network, and the method further includes: creating virtual links that are virtual replicas of, and reside on, the physical links, to connect the local twin to a remote local twin that is a virtual replica of, and hosted on, the remote network device. 
In some aspects, the techniques described herein relate to a method, wherein the local twin, the remote local twin, and the virtual links collectively form a virtual replica network overlay that resides on, replicates a topology of, and operates independently from, the network. In some aspects, the techniques described herein relate to a method, further including: hosting a virtualized operating system on the network device, wherein hosting includes hosting the local twin on the virtualized operating system. In some aspects, the techniques described herein relate to a method, wherein: configuring includes configuring the virtual control and data planes to replicate routing protocols and one or more of a routing information base (RIB) and a forwarding information base (FIB) of the control and data planes of the network device. In some aspects, the techniques described herein relate to a method, further including: implementing a test scenario that includes forwarding virtual traffic from the virtual interfaces; and assigning a higher priority to forwarding traffic from the interfaces than to forwarding the virtual traffic from the virtual interfaces. In some aspects, the techniques described herein relate to a method, further including: placing constraints on the physical resources of the network device that are to be used to host the local twin, wherein hosting includes hosting the local twin within the constraints to minimize an impact of the local twin on performance of the network operations. In some aspects, the techniques described herein relate to a method, wherein the network operations include routing and forwarding traffic, and the virtual network operations include virtual routing and forwarding virtual traffic. In some aspects, the techniques described herein relate to an apparatus including: a network input/output interface to communicate with a network; and a processor of a network device having control and data planes, and interfaces configured for network operations in the network, the processor coupled to the network input/output interface and configured to perform: upon receiving, from a controller, instructions to form a local twin of the network device that is a virtual replica of the network device to be used for test purposes, creating the local twin and configuring the local twin to include virtual control and data planes, and virtual interfaces, which are virtual replicas of, and operate independently from, the control and data planes, and the interfaces, of the network device, respectively; and hosting the local twin on physical resources of the network device such that the local twin is configured for virtual network operations on the network device that replicate, but are independent from, the network operations. In some aspects, the techniques described herein relate to an apparatus, wherein the processor is further configured to perform: injecting, into the local twin, virtual network failures or virtual traffic that propagate through the local twin without interfering with traffic forwarding by the network device. In some aspects, the techniques described herein relate to an apparatus, wherein the processor is further configured perform: hosting, on the network device, a probe process configured to, under control of the controller, trigger injecting the virtual network failures or the virtual traffic into the local twin. 
In some aspects, the techniques described herein relate to an apparatus, wherein the network includes physical links to connect the network device to a remote network device in the network, and the processor is further configured to perform: creating virtual links that are virtual replicas of, and reside on, the physical links, to connect the local twin to a remote local twin that is a virtual replica of, and hosted on, the remote network device. In some aspects, the techniques described herein relate to an apparatus, wherein the local twin, the remote local twin, and the virtual links collectively form a virtual replica network overlay that resides on, replicates a topology of, and operates independently from, the network. In some aspects, the techniques described herein relate to an apparatus, wherein the processor is further configured to perform: hosting a virtualized operating system on the network device by hosting the local twin on the virtualized operating system. In some aspects, the techniques described herein relate to an apparatus, wherein: the processor is configured to perform configuring by configuring the virtual control and data planes to replicate routing protocols and one or more of a routing information base (RIB) and a forwarding information base (FIB) of the control and data planes of the network device. In some aspects, the techniques described herein relate to a non-transitory computer medium encoded with instructions that, when executed by a processor of a network device configured to be connected to a network and having control and data planes, and interfaces configured for network operations in the network, cause the processor to perform: upon receiving, from a controller, instructions to form a local twin of the network device that is a virtual replica of the network device to be used for test purposes, creating the local twin and configuring the local twin to include virtual control and data planes, and virtual interfaces, which are virtual replicas of, and operate independently from, the control and data planes, and the interfaces, of the network device, respectively; and hosting the local twin on physical resources of the network device such that the local twin is configured for virtual network operations on the network device that replicate, but are independent from, the network operations. In some aspects, the techniques described herein relate to a non-transitory computer medium, further including instructions to cause the processor to perform: injecting, into the local twin, virtual network failures or virtual traffic that propagate through the local twin without interfering with traffic forwarding by the network device. In some aspects, the techniques described herein relate to a non-transitory computer medium, further including instructions to cause the processor to perform: hosting, on the network device, a probe process configured to, under control of the controller, trigger injecting the virtual network failures or the virtual traffic into the local twin. One or more advantages described herein are not meant to suggest that any one of the embodiments described herein necessarily provides all of the described advantages or that all the embodiments of the present disclosure necessarily provide any one of the described advantages. 
Numerous other changes, substitutions, variations, alterations, and/or modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and/or modifications as falling within the scope of the appended claims.
55,357
11863451
It will be noted that throughout the appended drawings, like features are identified by like reference numerals. DETAILED DESCRIPTION Congestion control (CC) algorithms may provide feedback on the congestion of a network. However, this feedback may be coarse-grained and provide late and/or inaccurate indications of network congestion. Instead, temporal congestion signals such as line idleness and line busyness may be used to provide early information about network congestion and bandwidth availability. Temporal congestion signals may transmit data related to the temporal or time domain, such as the amount of time during which a device was idle or busy. These temporal congestion signals may be transmitted by switches, such as when they receive a request from a host. Switches may transmit these signals prior to reaching maximum link utilization, which may enable hosts to adjust their traffic rates prior to congestion delays and/or dropped packets. Rather than using scout packets which may increase network overhead, switches may be configured to calculate temporal congestion signals. These switches may calculate these values directly and include them in a packet, such as using an in-band network telemetry (INT) header in a packet. Alternatively, a switch may include information sufficient to calculate the temporal congestion signals and include this information in the packet, allowing a host to calculate line busyness and line idleness. These values may then be returned to the host, such as in an acknowledgement (ACK) packet received from a destination node of the packet. A host may be configured to request temporal congestion signals from the switch and may be configured to react to the signals it receives. A host may periodically request temporal congestion signals from a switch, such as by marking packets to include this request. For example, a host may include an INT header in a packet to indicate a request for temporal congestion signals. The host may use the line busyness and line idleness signals to adjust its flow rate, such as its congestion window size, based on the received temporal congestion signals. These temporal congestion signals may allow a host to react more rapidly to network congestion than prior CC algorithms, while causing less overhead than prior approaches which used separate scout packets. FIG.1is an illustration100of the line busyness and line idleness signals of a network device, according to one aspect of the present disclosure. The network device may be a switch, such as a programmable switch. The illustration graphs the queue length102of a network device, such as a switch, on the spatial domain or y-axis104and the time on the time domain or x-axis106. The queue length102of the device may vary in time, based on the traffic in the network. The device may have busy periods110,112, during which it has a queue of packets waiting to be transmitted, and the device may also have idle periods114, during which there is no queue. The device may be configured to generate temporal congestion signals, such as when it receives a request for these signals. The device receives a packet120during an idle period114of the device when there are no packets in its queue. The device may be configured to generate a line idleness (δ)122signal, which reflects the amount of time the device was idle for prior to receiving the packet120. In this example, the device was idle from the end of its previous busy period110until the receipt of the packet120. 
The device may also be configured to generate a line busyness value, which may be the length of the busy period110which was immediately prior to its current idle period122. Subsequently, the device receives another packet130while the queue was busy, during busy period112. The device may be configured to generate a line busyness (β)132signal, which reflects the amount of time the device has been busy for prior to receiving the packet130. In this example, the device was busy from the end of its previous idle period114until the receipt of the packet130. The device may also be configured to generate a line idleness signal, which may be the length of the idle period114which was immediately prior to its current busy period132. Generally, the queue length102of a device, or its queueing delay, is a spatial congestion signal which provides information about the queue state of the device in the spatial domain or the y-axis in illustration100. These spatial congestion signals may provide a snapshot in time, showing information for the present moment. In contrast, line busyness132and line idleness122are temporal congestion signals which provide information about the queue state of the device in the time domain or the x-axis in illustration100. Temporal congestion signals may be tightly related to link utilization, and may provide early indications of congestion, such as when a device gets close to maximum link utilization, even before it accumulates packets in its queue. Temporal congestion signals may also have other useful properties, such as providing amplified congestion signals when compared to spatial congestion signals such as queue length or queueing delay, which is limited to the hardware buffer size. Temporal congestion signals may also not be limited by the buffer size of a device while spatial signals are. Temporal congestion signals may also be advantageous over other signals as they can provide instantaneous information about the bandwidth available at a switch, compared with other indicators such as link utilization which must be aggregated over a period of time. FIG.2is a method200used by a device to calculate a line busyness and line idleness signal, according to one aspect of the present disclosure. This method200may be used by a switch, including programmable switches and other switches. This method200allows a switch to calculate both line idleness and line busyness, and to include those values in the packet if the packet includes an INT header. The line idleness and line busyness values may be calculated in different ways depending on whether the switch is currently idle or busy. When a switch is idle, the line idleness value may be the length of the current idle period while the line busyness value is the length of the prior busy period. When a switch is busy, the line busyness value may be the length of the current busy period, while line idleness is the length of a prior idle period. The method200may be configured to update certain values when new packets are received, so that it can accurately calculate idleness and busyness. For example, the switch may update packet arrival time when idle (PcktArrivalTimeWhenIdle), packet arrival time when busy (PcktArrivalTimeWhenbusy), idle start time (IdleStartTime), and busy start time (BusyStartTime) for each packet received, whether or not those packets include a request for temporal congestion signals. 
Each of these measurements may be based on time stamps which are local to the device, and therefore may not require synchronization between clocks of different devices. At block202, the switch receives a packet and at block204, the switch checks whether its queue is empty. This check may be used to inform the switch of how to calculate its line busyness and line idleness values. If the queue is empty, the switch will proceed to block206, while if the queue is not empty, the switch will proceed to block214. If the queue is empty, the switch proceeds to block206, where it sets a packet arrival time when idle (PcktArrivalTimeWhenIdle) value to be equal to an egress time stamp (egressTimeStamp) on the packet. For example, the packet may include a time stamp which indicates when it was transmitted by a source node to the device. The egressTimeStamp may be intrinsic metadata which is created by the switch for each packet it receives, recording the time when the packet was received by the switch. Thus, the packet arrival time when idle value may be set to the time at which a source node transmitted the packet. The packet arrival time when idle value may be used by the device in a later instance of this method200, at block216, to show the start of a busy period. At block208, the switch sets a start time of an idle period of the device (IdleStartTime) value to be equal to a packet arrival time when busy (PcktArrivalTimeWhenBusy) value. The packet arrival time when busy value may be set each time the switch receives a packet while it is busy, such as at block214. Accordingly, the idle start time value may reflect the most recent time when the switch received a packet while it was in a busy period, or the end of the most recent busy period. At block210, the switch generates a line idleness signal based on a duration of an idle period of the device, with the idleness value equal to the packet arrival time when idle value set in block206minus the IdleStartTime value set in block208. That is, the idleness value, or line idleness, may be a duration of time between when the current packet arrived and the end of the most recent busy period. At block212, the switch generates a line busyness signal based on a duration of a busy period of the device, with the busyness value equal to the IdleStartTime value minus a BusyStartTime value. That is, the line busyness value reflects a duration of time between the end of the most recent busy period and the start of the most recent busy period, as set in block216. Each of these four blocks206,208,210,212occur when the switch has an empty queue when the packet is received. Alternatively, if the queue is not empty, the switch proceeds to block214. At block214, the switch determines a packet arrival time when busy (PcktArrivalTimeWhenBusy) value to be equal to an egress time stamp (egressTimeStamp) of the packet. This value may be used in a later iteration of this method200at block208. At block216, the switch sets a start time of a busy period of the device (BusyStartTime) value to be equal to a packet arrival time when idle (PcktArrivalTimeWhenIdle) value. That is, the busy start time value may be set to the time at which a packet last arrived when the switch was idle, which may have been set in a previous iteration of this method200at block206. 
At block218, the switch generates a line idleness signal based on a duration of an idle period of the device, with the idleness value equal to the busy start time value set in block216minus the start time of an idle period of the device (IdleStartTime) value. The idle start time value may have been set in a previous iteration of this method200at block208. The line idleness may be a duration of time between the time when the most recent idle period ended (the busy start time) and the time when the most recent idle period began (the idle start time). At block220, the switch generates a line busyness signal based on a duration of a busy period of the device, with the busyness value equal to the packet arrival time when busy value set in block214minus a BusyStartTime value. The line busyness value may be a duration of time between the time that the current packet arrived and the time when the current busy period began. Next, the switch proceeds to block222, whether or not the queue was empty in block204. At block222, the switch checks whether the packet contains an in-band network telemetry (INT) header. A host, or source node, may transmit a packet to a destination node. The host may include an INT header in the packet to request that it receive network congestion signals, such as the temporal congestion signals of this method200. Other indications may also be used to request the inclusion of temporal congestion signals. At block224and block226, if the packet contains an INT header, the switch adds the line idleness and line busyness values to the packet header. These temporal congestion signals may then be passed along to the host by a destination node, such as in an INT header in an ACK message. At block228, the switch proceeds with normal packet processing before the process ends230. As described in method200, line busyness and line idleness may be calculated based on somewhat different formulas, depending on whether a packet is received when the switch is idle or busy. If the queue is empty, line busyness is a duration calculated in block212based on β=idleStartTime−busyStartTime, and line idleness is a duration calculated in block210based on δ=egressTimeStamp−idleStartTime. If the queue is not empty, such that there are one or more items in its queue, line busyness is a duration calculated in block220based on β=egressTimeStamp−busyStartTime, and line idleness is a duration calculated in block218based on δ=busyStartTime−idleStartTime. In either case, these formulas may calculate busyness to be the length of the most recent or current busy period, and idleness to be the length of the most recent or current idle period. In method200, the switch includes both idleness and busyness in the packet. Alternatively, the switch may include only one of these values in the packet or may selectively include these values in the packet based on their value. For example, a switch may receive a packet which already includes busyness and idleness signals from another node, and the switch may selectively overwrite one or both of those values if, e.g., its line busyness value is higher than the value in the packet or its line idleness value is smaller than the value in the packet. This may allow a host to receive the maximum line busyness value between multiple nodes on a link, and to receive the minimum line idleness between multiple nodes on the link. This may allow a host to adjust its congestion window sizing based on the most overloaded node on a data path. 
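The per-packet bookkeeping of method200can be summarized in the following Python sketch, which follows the variable names used above (PcktArrivalTimeWhenIdle, IdleStartTime, and so on) but is otherwise hypothetical: the Packet and SwitchTelemetry classes and the INT-header dictionary are stand-ins introduced for illustration.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Packet:
    egress_time_stamp: float              # intrinsic metadata recorded for the packet
    int_header: Optional[dict] = None     # presence signals a request for temporal signals

@dataclass
class SwitchTelemetry:
    pckt_arrival_time_when_idle: float = 0.0
    pckt_arrival_time_when_busy: float = 0.0
    idle_start_time: float = 0.0
    busy_start_time: float = 0.0

    def process(self, pkt: Packet, queue_len: int) -> Tuple[float, float]:
        ts = pkt.egress_time_stamp
        if queue_len == 0:
            # Blocks 206-212: the packet arrives while the switch is idle.
            self.pckt_arrival_time_when_idle = ts
            self.idle_start_time = self.pckt_arrival_time_when_busy
            idleness = self.pckt_arrival_time_when_idle - self.idle_start_time
            busyness = self.idle_start_time - self.busy_start_time
        else:
            # Blocks 214-220: the packet arrives while the switch is busy.
            self.pckt_arrival_time_when_busy = ts
            self.busy_start_time = self.pckt_arrival_time_when_idle
            idleness = self.busy_start_time - self.idle_start_time
            busyness = self.pckt_arrival_time_when_busy - self.busy_start_time
        # Blocks 222-226: add the signals only when an INT header requests them.
        if pkt.int_header is not None:
            pkt.int_header["line_idleness"] = idleness
            pkt.int_header["line_busyness"] = busyness
        return idleness, busyness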
FIG.3illustrates300the calculation of line busyness and line idleness by a switch, according to one aspect of the present disclosure. This calculation may be done using the formulas which were used in method200. A packet may arrive at the switch at packet arrival time when idle306. As above, this value of packet arrival time when idle306may be set based on an egress time stamp on the arriving packet. The packet may arrive at a time when the switch is idle, as there is nothing in its queue. Because the queue is empty, the line busyness may be calculated as the length of the previous busy period, by taking the difference between idle start time304and busy start time302. When the packet arrives, the switch may also calculate a line idleness (δ)312based on the difference between packet arrival time when idle306and idle start time304. The line idleness312value may be calculated as the length of the current idle period, between the time it received the packet and the time it last had a queue. Another packet may arrive at the switch at packet arrival time when busy310. As above, this value of packet arrival time when busy310may be set based on an egress time stamp on the arriving packet. The packet may arrive at a time when the switch is busy, while there are one or more packets in its queue. Because the queue is not empty, the line busyness (β)314may be calculated as packet arrival time when busy310minus busy start time308. Line busyness314may reflect the length of time for which the switch has been busy, between the time it received the packet and the time at which it was last idle. When the packet arrives, the switch may also calculate line idleness, based on the difference between busy start time308and idle start time304. Line idleness may reflect the length of time during which the switch was idle during its most recent idle period. The switch may be configured to calculate some or all of these values for each packet it receives. As described in method200, the switch may update its BusyStartTime and/or IdleStartTime values for each received packet, regardless of whether the packet requests temporal congestion signals. If the switch receives a request for temporal congestion signals, it may then include the line idleness and line busyness values in a packet, such as including these values in an INT header of the packet. The request for temporal congestion signals may be inferred from the presence of an INT header in the packet. This method200generally requires a switch itself to calculate the line idleness and line busyness values, and to pass these values along to the host or source node. These temporal congestion signals may be used as an alternative to or in addition to spatial congestion signals, such as queue length, queueing delay, and link utilization. The switch may be configured to calculate these values at line rate, so as to not slow down the transmission of packets. In another aspect, the switch may delegate some of this processing to the requesting device, such as a host or source node. FIG.4is a flowchart of a method400to delegate part of calculating line busyness and line idleness to hosts, according to one aspect of the present disclosure. In this method400, the switch may provide a host with enough information to calculate line idleness and line busyness rather than directly calculating these values. This may allow a switch to operate more quickly and may use fewer resources on the switch. 
This method400may effectively offload some processing from the switch to the host or source node, allowing more efficient operation of the switch and requiring fewer resources on the switch. At block402, the switch receives a packet and at block404, the switch checks whether its queue is empty. If the switch has an empty queue, the switch proceeds to block406, where it sets a packet arrival time when idle (PcktArrivalTimeWhenIdle) to be equal to the egress time stamp (egressTimeStamp) on the received packet. The packet arrival time when idle value may be used in future iterations of the method400at block412. Next, at block408, the switch sets a start time of an idle period of the device (IdleStartTime) value to be equal to a packet arrival time when busy (PcktArrivalTimeWhenBusy) value. The packet arrival time when busy value may have been set in a prior iteration of the method400at block410. If the switch does not have an empty queue, at block410, it sets a packet arrival time when busy (PcktArrivalTimeWhenBusy) to be equal to the egress time stamp (egressTimeStamp) on the received packet. The packet arrival time when busy value may be used in a future iteration of the method400at block408. Next, at block412, the switch sets a start time of a busy period of the device (BusyStartTime) to be equal to a packet arrival time when idle value (PcktArrivalTimeWhenIdle). The packet arrival time when idle value may have been set in a prior iteration of the method400at block406. Next, at block414, the switch checks whether there is an INT header on the received packet. The inclusion of an INT header on the packet may constitute a request from the sending device for temporal congestion signals, such as information sufficient to calculate line busyness and line idleness. If the packet does not have an INT header, the switch proceeds to block424and the packet is processed normally. If the packet has an INT header, the switch may be configured to add information to the INT header of the packet. This information may collectively be enough to calculate the line idleness and line busyness values described above. At blocks416,418,420, and422, the switch includes information sufficient to calculate line idleness and line busyness values, including the queue length (Qlength), an egress time stamp (egressTimeStamp), and the idle start time value and busy start time values. Each of the idle start time and the busy start time may be transmitted as a time stamp. The host may be configured to use these time stamps to determine one or more of line idleness and line busyness. After adding this information to the packet, the switch then proceeds to block424and the packet is processed normally before the method400ends426. The method400allows another device, such as the device which transmitted the packet, to calculate the line idleness and line busyness values. These values may be calculated in the same manner as in the prior method200, selecting which formulas to use for line idleness and line busyness based on the queue length. FIG.5illustrates500a packet passing between devices in a network, according to one aspect of the present disclosure. Here, a packet is being transmitted from source node502to destination node506passing through switch504on the way. Prior to transmitting the packet, the source node502may mark the packet522as a scout packet. The packet may be generated520by an application510on the source node502, and then may be marked522by a packet marking module514which may be part of the source node502. 
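For illustration, the host-side computation delegated by method400might look like the following sketch, which applies the same case split as method200to the raw values echoed back by the switch; the INT-header field names used here are assumptions.

def host_compute_temporal_signals(int_header: dict) -> tuple:
    """Return (line_idleness, line_busyness) from switch-provided raw values."""
    q_len = int_header["q_length"]
    egress_ts = int_header["egress_time_stamp"]
    idle_start = int_header["idle_start_time"]
    busy_start = int_header["busy_start_time"]
    if q_len == 0:
        line_idleness = egress_ts - idle_start      # current idle period so far
        line_busyness = idle_start - busy_start     # most recent busy period
    else:
        line_idleness = busy_start - idle_start     # most recent idle period
        line_busyness = egress_ts - busy_start      # current busy period so far
    return line_idleness, line_busyness

# Example: a switch that was busy from t=10 to t=14 and idle until the packet at t=15.
signals = host_compute_temporal_signals(
    {"q_length": 0, "egress_time_stamp": 15.0, "idle_start_time": 14.0, "busy_start_time": 10.0})
# signals == (1.0, 4.0)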
The packet may be marked by including an INT header on the packet. This marking may inform other devices that the source node502is requesting temporal congestion signals related to the packet. The packet may then be transmitted524to the switch504, which may then add information526to the packet. For example, the switch504may add temporal congestion signals to the packet, such as adding these signals to an INT header in the packet. The temporal congestion signals may be line idleness and line busyness signals, or may be information sufficient to calculate line idleness and line busyness signals, as described. The switch504may then transmit528the packet to the destination node506. The destination node may include an acknowledgement (ACK) generator module516, which is configured to transmit an ACK message532back to the source node502. The destination node506may be configured to echo information530back to the source node502in the ACK message532. For example, this information may include temporal congestion signals which are contained in an INT header of the packet, and they may be transmitted back to the source node502. As illustrated, hosts, such as source node502, may trigger the generation of a line busyness or line idleness signal by marking a packet which is transmitted to the switch504. The host may be configured to periodically request such information, such as requesting information on a certain schedule. The host may also be configured to use these temporal congestion signals to improve link efficiency, such as achieving close to maximum link capacity while minimizing queue length at the switch. FIG.6illustrates600a relationship between line idleness, line busyness, and link utilization, according to one aspect of the present disclosure. Generally, link utilization (u)614may be calculated based on line idleness (δ)610and line busyness (β)612. For example, link utilization614may represent a percentage of the time that the switch is busy and may be calculated as u=β/(β+δ). Similarly, a host may also be able to extract the available bandwidth using line idleness (δ). As shown in this formula, line busyness612may be proportional to link utilization614. This relationship may be used by a host to limit the link capacity to a certain percentage by limiting the value of line busyness612to a maximum threshold, to reduce the chances of dropped packets. Similarly, line idleness610may be limited to a maximum threshold, to assist in efficient link utilization. For example, line busyness612may be limited by a host to be a maximum of 0.95×round-trip time (RTT), so that the host may seek to limit the busy periods of the switch to be 95% of an RTT between a given source node and a given destination node. Similarly, line idleness610may be limited by a host to be a maximum of 0.05×RTT. A host may be configured to control flow rates and/or adjust the size of a congestion window (w) based on the line idleness610and line busyness612signals to achieve close to maximum link capacity while minimizing queue length at the switch. FIG.7illustrates a host reaction mechanism700, according to one aspect of the present disclosure. The host reaction mechanism700may be used by a host to adjust a flow control parameter, such as a congestion window size, based on received line busyness and line idleness signals. These adjustments may reduce the likelihood of buffer overflows and dropped packets, while also seeking to ensure high link utilization.
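Before reacting as in FIG.7, a host might first derive link utilization from the two temporal signals per the u=β/(β+δ) relationship of FIG.6and check them against the example 0.95×RTT and 0.05×RTT caps. The helper names and thresholds in the sketch below are illustrative, not required values.

```python
# Sketch of how a host might derive link utilization from the temporal signals,
# per u = beta / (beta + delta). The 0.95/0.05 RTT caps are the example
# thresholds from the text, used here only as defaults.

def link_utilization(busyness, idleness):
    total = busyness + idleness
    return busyness / total if total > 0 else 0.0

def within_targets(busyness, idleness, rtt,
                   max_busy_fraction=0.95, max_idle_fraction=0.05):
    """Check the example policy of capping busy periods at 0.95*RTT
    and idle periods at 0.05*RTT."""
    return busyness <= max_busy_fraction * rtt and idleness <= max_idle_fraction * rtt
```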
The host reaction mechanism700may be used by a host such as source node502and may be used in a micro-control module such as micro-control module512. The host reaction mechanism700may be triggered when the host receives line busyness (β) and line idleness (δ) values at block702. For example, the host may receive a packet which includes line busyness and line idleness or may receive a packet which includes information sufficient to calculate line busyness and line idleness. This process may initially be triggered by the host marking a packet, causing a switch to add line busyness and/or line idleness signals. The host may trigger this process by sampling using a random distribution and/or based on a necessity to measure the network state, such as when transitioning to a congestion avoidance state in TCP. The sampling frequency used by the host may be based, at least in part, on a value of line busyness and/or line idleness. For example, the host may be configured to request temporal congestion signals more often when line busyness is high, to monitor the network state and adjust the congestion window size to prevent packet loss. Generally, the host reaction mechanism700may seek to limit line idleness to be very close to zero. At the same time, the mechanism700may seek to guide line busyness to converge to a stable point, where β>βmin>>0. To do this, the mechanism700may seek to adjust the congestion window (w) as shown in Equation 1: w←w(1−df(β)) if β>βmin, and w←w+δ·L otherwise (1), where df is a decrease factor used to reduce the size of the congestion window when β>βmin, and where L is a packet size, such as an average packet size transmitted on the link, used to increase the size of the congestion window when β≤βmin. Generally, βmin and ε may be selected to trade off between maximizing link capacity and minimizing packet loss. For example, βmin may be set to be RTT, while trying to minimize ε. These values may lead to link utilization converging at a value of βmin/(βmin+ε), or slightly less than 100% of link capacity. At block704, the mechanism700includes comparing the line busyness signal to a minimum busyness signal (βmin). This comparison may be used to determine whether to increase the size of the congestion window, which would also increase line busyness, or to decrease the size of the congestion window, which would also decrease line busyness. If line busyness is not larger than the minimum busyness signal, at block714, the mechanism700increases the size of the congestion window by δ·L, based on the line idleness (δ) and the packet size (L). Increasing the size of the congestion window may increase link utilization and decrease line idleness. The mechanism700then ends at block716. If line busyness is larger than the minimum busyness signal, the mechanism700proceeds to block706, where it calculates a decrease factor (df). The decrease factor may be used to reduce the size of the congestion window to decrease link utilization and reduce the chances of packet loss. The decrease factor may be set to reduce the size of the congestion window based on a difference between the line busyness and the minimum busyness signal, using a larger value when these values are far apart. For example, in one aspect, the decrease factor may be calculated as df=0.5×((β−βmin)/βmin)^3, where 0.5 is a decrease factor scale, and βmin is the minimum busyness signal, which may be an RTT. This calculation of the decrease factor may reach 0.5 when the line busyness is 2βmin, or two RTTs.
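A compact sketch of the host reaction mechanism of FIG.7and Equation 1 follows, using the example decrease-factor formula above; the function signature, the cap on the per-iteration reduction (discussed further below), and the parameter names are illustrative assumptions rather than the claimed implementation.

```python
# Hedged sketch of the host reaction mechanism (Equation 1). The decrease-factor
# formula, its 0.5 cap, and beta_min ~ RTT follow the example values in the text;
# the function and argument names are illustrative.

def adjust_congestion_window(w, busyness, idleness, beta_min, packet_size):
    if busyness > beta_min:
        # Multiplicative decrease, scaled by how far busyness exceeds beta_min.
        df = 0.5 * ((busyness - beta_min) / beta_min) ** 3
        df = min(df, 0.5)              # cap the per-iteration reduction at 50%
        return w * (1.0 - df)
    # Otherwise grow the window in proportion to the observed idleness.
    return w + idleness * packet_size
```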
The decrease factor may also be calculated in other ways, but may generally seek to decrease the congestion window by a larger amount when line busyness is much higher than the minimum busyness value and by a smaller amount when line busyness is close to the minimum busyness value. At block708, the mechanism700determines whether df is larger than 0.5. If df is larger than 0.5, the mechanism700sets df to be equal to 0.5 at block710. The decrease factor's size may be limited to a maximum value, such as 0.5, to limit the maximum reduction of the congestion window in a single iteration of the mechanism700. At block712, the mechanism700sets the congestion window size to be equal to w=w(1−df). In this formula, the decrease factor may be a positive value between 0 and a maximum value, such as 0.5. Accordingly, this may decrease the size of the congestion window by between 0% and 50%, depending on the value of the decrease factor. The mechanism700then ends at block716. Generally, having the switch generate temporal congestion signals and include these signals in an INT header of packets may be much more efficient than using scout packets. Using scout packets may introduce significant overhead, whereas this overhead can be significantly reduced by allowing temporal congestion signals to piggyback in the header of an existing packet. Overhead may be reduced even further when all nodes included in a data path use the same header. For example, the packet may travel through multiple nodes such as switches, and each of those nodes may include temporal congestion signals in the packet, such as line busyness and line idleness values in the packet. Alternatively, each of these nodes may selectively overwrite the line busyness and/or line idleness values in the packet. For example, each node in the data path may be configured to overwrite the line busyness and/or line idleness values in a packet if its line busyness is higher than the line busyness in the packet or if its line idleness is lower than the line idleness in the packet. This may result in a host device receiving the maximum line busyness value along a data path and receiving a minimum line idleness value in the data path. This may allow a host to set a congestion window size to avoid packet drops at the busiest node or nodes in the data path. FIG.8is a schematic diagram of an electronic device800that may perform any or all of the operations of the above methods and features explicitly or implicitly described herein, according to different embodiments of the present disclosure. For example, a computer equipped with network function may be configured as electronic device800. In some embodiments, the electronic device800may be a host, a switch, user equipment (UE), an AP, a STA or the like as appreciated by a person skilled in the art. As shown, the electronic device800may include a processor810, such as a central processing unit (CPU) or specialized processors such as a graphics processing unit (GPU) or other such processor unit, memory820, non-transitory mass storage830, input-output interface840, network interface850, and a transmitter/receiver860, all of which are communicatively coupled via bi-directional bus870. According to certain embodiments, any or all of the depicted elements may be utilized, or only a subset of the elements. Further, electronic device800may contain multiple instances of certain elements, such as multiple processors, memories, or transceivers.
Also, elements of the hardware device may be directly coupled to other elements without the bi-directional bus. Additionally, or alternatively to a processor and memory, other electronics, such as integrated circuits, may be employed for performing the required logical operations. The memory820may include any type of non-transitory memory such as static random-access memory (SRAM), dynamic random-access memory (DRAM), synchronous DRAM (SDRAM), read-only memory (ROM), any combination of such, or the like. The mass storage830may include any type of non-transitory storage device, such as a solid-state drive, hard disk drive, a magnetic disk drive, an optical disk drive, USB drive, or any computer program product configured to store data and machine executable program code. According to certain embodiments, the memory820or mass storage830, which may each be referred to as a machine-readable (or computer-readable) medium (or storage), may have recorded thereon statements and instructions executable by the processor810for performing any of the method operations described above. Embodiments of the present disclosure can be implemented using electronics hardware, software, or a combination thereof. In some embodiments, the disclosure is implemented by one or multiple computer processors executing program instructions stored in memory. In some embodiments, the disclosure is implemented partially or fully in hardware, for example using one or more field programmable gate arrays (FPGAs) or application specific integrated circuits (ASICs) to rapidly perform processing operations. It will be appreciated that, although specific embodiments of the technology have been described herein for purposes of illustration, various modifications may be made without departing from the scope of the technology. In particular, it is within the scope of the technology to provide a computer program product or program element, or a program storage or memory device such as a magnetic or optical wire, tape or disc, or the like, for storing signals readable by a machine, for controlling the operation of a computer according to the method of the technology and/or to structure some or all of its components in accordance with the system of the technology. Acts associated with the method described herein can be implemented as coded instructions in a computer program product. In other words, the computer program product is a computer-readable medium upon which software code is recorded to execute the method when the computer program product is loaded into memory and executed on the microprocessor of the wireless communication device. Further, each operation of the method may be executed on any computing device, such as a personal computer, server, personal digital assistant (PDA), or the like and pursuant to one or more, or a part of one or more, program elements, modules or objects generated from any programming language, such as P4 language, C++, Java, or the like. In addition, each operation, or a file or object or the like implementing each said operation, may be executed by special purpose hardware or a circuit module designed for that purpose. Through the descriptions of the preceding embodiments, the present disclosure may be implemented by using hardware only or by using software and a necessary universal hardware platform. Based on such understandings, the technical solution of the present disclosure may be embodied in the form of a software product. 
The software product may be stored in a non-volatile or non-transitory storage medium, which can be a compact disc read-only memory (CD-ROM), USB flash disk, or a removable hard disk. The software product includes a number of instructions that enable a computer device (personal computer, server, or network device) to execute the methods provided in the embodiments of the present disclosure. For example, such an execution may correspond to a simulation of the logical operations as described herein. The software product may additionally or alternatively include a number of instructions that enable a computer device to execute operations for configuring or programming a digital logic apparatus in accordance with embodiments of the present disclosure. Although the present invention has been described with reference to specific features and embodiments thereof, it is evident that various modifications and combinations can be made thereto without departing from the invention. The specification and drawings are, accordingly, to be regarded simply as an illustration of the invention as defined by the appended claims, and are contemplated to cover any and all modifications, variations, combinations, or equivalents that fall within the scope of the present invention.
36,095
11863452
DETAILED DESCRIPTION For the purposes of promoting an understanding of the principles of the present disclosure, reference will now be made to the embodiments illustrated in the drawings, and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of this disclosure is thereby intended. In the present disclosure, the term “about” can allow for a degree of variability in a value or range, for example, within 10%, within 5%, or within 1% of a stated value or of a stated limit of a range. In the present disclosure, the term “substantially” can allow for a degree of variability in a value or range, for example, within 90%, within 95%, or within 99% of a stated value or of a stated limit of a range. A novel system of approaches is disclosed that can efficiently assign a new incoming task to a task processor without requiring significant overhead. The current solutions available in the prior art represent technical solutions using computer technologies. The present disclosure provides an improvement to this technical field in order to meet the aforementioned gap between the two primary approaches of the prior art, created by overhead on one extreme and complete randomness on the other. To this end, the present disclosure provides three different approaches to address the aforementioned goal. The first approach is based on d-parallel non-backtracking random walks (NBRWs) on a k-regular random graph. The second approach is based on a reversible non-uniform random-walk based load-balancing algorithm family with node weights given by wq(Qi(t))=exp(−αQi(t)) for different values of α, where Qi(t) represents the state of the node i, and α is a user defined constant. The state of the node refers to either the number of jobs waiting for service, or the total amount of time required to service all the jobs waiting, or some other measure of the value/cost of processing jobs. We will refer to Qi as the queue-length. The third approach is based on a non-reversible non-uniform random-walk based load-balancing algorithm, including the case where the random walker is only allowed to choose between next-hop nodes that have the minimum state value. To understand these separate aspects, some definitions are initially offered. The parameter d indicates the number of walkers on the graph with n nodes, where each node represents a server. The graphs are assumed to be k-regular graphs. A regular graph is one where all nodes have equal degrees, where each degree refers to a branch connecting one node to another. This means that from any one node, there are k possible paths from that node to other nodes. Such a graph is referred to as a k-regular graph. If one such node has a degree less than k, then that is not a regular graph. For all the three distinct algorithms provided herein that use a random-walk, a non-backtracking random walk (NBRW) feature is enforced. This means that at any point on the k-regular graph a random walker may not go back to the node from which it immediately travelled. Furthermore, while not posing a limitation, for all these situations, the present disclosure assumes d and k to be 5 (as discussed above, d represents the number of random walkers and k represents the degree of the regular graph, i.e., the number of available branches connected to each node). A few important points stand out.
It is apparent that sampling the servers uniformly using an NBRW yields a performance that is extremely close to SQ(D); however, the approach discussed herein uses less randomness to yield roughly the same performance as SQ(D). These new algorithms of the present disclosure outperform SQ(D) with the performance being dramatically better in terms of achieving a (stochastically) smaller queue-length distribution, even at heavy load factors such as ρ=0.99. Here, ρ represents a measure of the throughput (or the number of jobs that can be processed per unit time while maintaining a finite number of waiting jobs on average) of a server system. The closer ρ is to 1, the more jobs that can be processed per unit time, but the larger the number of waiting jobs (on average). A k-regular random graph between the servers (with k≥3, e.g., 5) is used to propose the novel load balancing algorithms of the present disclosure that use a distributed memory/information structure. In this base scheme we will sample the servers to check for job assignment by using d independent NBRWs. As the graph is held fixed, for more general schemes we increase the information available for load-balancing by making each server communicate its queue-length to the neighbors in the random graph where a walker currently resides. Given the graph is a k-regular graph, each server is only going to receive k queue-length values. This communication can further be limited by communicating the information only when the queue-length changes. Thus, the queue-lengths of the next-hop servers are known prior to making the hop. For the second and third algorithms of the present disclosure, this information is thus used for determining the next hop of the random-walker as part of a weighting function wq (that is positive and non-increasing in the queue length), and to dynamically bias the random-walker's next-hop choice by making the walker choose the next hop with a probability proportional to either the weight of the next-hop server or a combination of weights of the current server and the next-hop server. We do this for all the d random-walkers, then compare the queue-lengths of the next-hop servers to determine the identity of the least loaded sampled server to assign the job to. In the k-regular graph, for each node there are k edges (i.e., branches between that node and the other nodes). Each is identified as an edge (i, j), also referred to herein as a branch, between nodes i and j. The nodes i, j in the set (1 . . . n) have weights wq(i) and wq(j), respectively, which are used to determine the edge weight wq(i)wq(j) (for edge (i,j)) for the weighted random walk. We can generalize by setting the edge weights based on a symmetric function ƒe of the queue lengths of the two nodes, and use that for the random walk. An example is p-means, where ƒ(q1, q2)=((wq(q1)^p+wq(q2)^p)/2)^(1/p) for p>0, with wq(qi) being the appropriate weight for queue-length qi for i∈{1, 2}; note that p=0 corresponds to the geometric mean, i.e., the weight being √(wq(q1)wq(q2)). We can generalize the family of load-balancing algorithms to include non-reversible random walks too, so that edge-weights depend on the queue-lengths of the endpoint nodes and the node where the walker currently resides. In this case, we can generalize by setting the edge weights based on an asymmetric function ƒe of the queue lengths of the two nodes, and use that for the random walk.
An example is weighted p-means, where ƒ(q1, q2)=((Z1·wq(q1)^p+Z2·wq(q2)^p)/2)^(1/p) for p>0, with wq(qi) being the appropriate weight for queue-length qi for i∈{1, 2}, and Z1+Z2=1, Z1>0, and Z2>0; note that p=0 corresponds to the geometric mean, i.e., the weight being wq(q1)^Z1·wq(q2)^Z2. To better demonstrate the algorithms of the present disclosure, an example is provided. Suppose m balls are to be placed into n bins, according to some dispatching policy, with each bin already containing between 0 and l balls. As each of the m balls becomes available, with the goal being to uniformly populate the bins, different methods can be applied. One approach, seen in the prior art, is to assign each ball into a bin uniformly at random. The choice of bin for each ball is independent of the choice of placing another ball in the same or another bin. Under the Power of d choices (discussed above), the scheduler, for each ball, samples d bins randomly, uniformly, and independently. The scheduler then places the next ball into the bin that is least loaded. Any ties between the bins are addressed according to some predetermined policy. While the Power of d choices provides a much higher efficacy of generating uniformly loaded bins, the efficacy is still low. The next approach represents the methods of the present disclosure. Each time a ball becomes available, a k-regular graph is used in which each bin is represented by a node (1 . . . n), and each node is connected to other nodes by k branches. d walkers are dispatched to d randomly chosen bins, where no two bins are the same. W1[j], W2[j], . . . , Wd[j] are candidate bins for the jth ball. The jth ball is assigned to the least loaded bin among W1[j], W2[j], . . . , Wd[j]. Here the k and d are fixed. The walkers are bound to non-backtracking random walks. An example of such a graph is shown inFIG.1A, where n=10, d=2 walkers, and k=3. InFIG.1A, as a new ball becomes available, a 3-regular graph is used with two walkers randomly assigned to two bins (i.e., bins 1 and 9). Each bin is connected to other bins via three connections. For example, bin 6 is connected to bins 9, 8, and 1. This allocation is shown inFIG.1B, where bins 1, 2, and 7 each have two balls, bin 3 has three balls, bins 6, 8, and 10 each have only one ball, and bins 4, 5, and 9 are empty. Further, suppose the two walkers are randomly assigned bins 1 and 9 at the time the new ball becomes available. InFIG.1B, the randomly assigned positions of the walkers (i.e., bins 1 and 9) are highlighted with thickened lines. According to one embodiment of the present disclosure, a comparison is made between these two bins and the new ball is placed in the bin with the least number of balls. This bin is 9 since bin 9 has no balls and bin 1 has 2 balls. This placement is shown inFIG.2, where the new ball is placed in bin 9. Note that the new ball is shaded differently than existing balls in various bins. Next, as a new ball arrives, and referring toFIG.3A, walkers are assigned new random bins. The random selection for bin 1 is based on three choices: 2, 5, and 6 (i.e., the bins to which bin 1 is connected). In this case, the random choice was bin 2. Similarly, the random choice for bin 9 is out of three choices: 4, 7, and 6. In this case, bin 4 was randomly chosen out of those choices. Thus, one walker walks from bin 1 to bin 2 and the other walker walks from bin 9 to bin 4, as indicated by the arrows.
Since the walkers are not allowed to traverse back to where they immediately came from, bins 1 and 9 are highlighted. The new bins 2 and 4 are highlighted as well, as are the immediately walked-from bins 1 and 9, respectively. Referring toFIG.3B, the new bins (i.e., bins 2 and 4) are highlighted by thickened lines. According to one embodiment of the present disclosure, a comparison is made between these two bins and the new ball is placed in the bin with the least number of balls. This bin is 4 since bin 4 has no balls and bin 2 has 2 balls. This placement is shown inFIG.4, where the new ball is placed in bin 4. Note that the new ball is shaded differently than existing balls in various bins. Next, as a new ball arrives and referring toFIG.5A, walkers are assigned new random bins. The random selection for bin 2 is based on three choices: 7, 1, and 3 (i.e., the bins to which bin 2 is connected). However, bin 1 represents a bin from which the walker immediately walked in order to reach bin 2. Thus, bin 1 is not an available option, and only bins 7 and 3 are available, from which bin 7 is chosen randomly for the walker to walk to from bin 2. Similarly, the random choice for bin 4 is out of three choices: 9, 5, and 3. However, bin 9 represents a bin from which the walker immediately walked in order to reach bin 4. Thus, bin 9 is not an available option, and only bins 5 and 3 are available, from which bin 3 is chosen randomly for the walker to walk to from bin 4. Thus, one walker walks from bin 2 to bin 7 and the other walker walks from bin 4 to bin 3, as indicated by the arrows. The new bins 7 and 3 are highlighted, as are the immediately walked-from bins 2 and 4, respectively. Bins 1 and 9 are again unhighlighted, reflecting that the walkers are only prohibited from returning to the nodes from which they immediately walked; however, other approaches may define non-backtracking as prohibiting a return not just to the immediate nodes but to a predetermined number of previously walked nodes. Referring toFIG.5B, the new bins (i.e., bins 7 and 3) are highlighted by thickened lines. According to one embodiment of the present disclosure, a comparison is made between these two bins and the new ball is placed in the bin with the least number of balls. This bin is 7 since bin 7 has 2 balls and bin 3 has 3 balls. This placement is shown inFIG.6, where the new ball is placed in bin 7. Note that the new ball is shaded differently than existing balls in various bins. This process is repeated each time a new ball becomes available. Now suppose, upon arrival of a new ball, the walker at node 3 randomly chooses node 2. And suppose that going back to a recently visited node (e.g., node 2) results in a reset. In cases where the graphs have extremely large girths, a reset scheme may not be required; however, for smaller graphs, a reset scheme allows a similar statistical randomness as a large graph. Upon a reset, the walkers would be allowed to start at new random nodes and begin walking again.
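The balls-into-bins procedure illustrated inFIGS.1A through6can be summarized in a short sketch. The following Python fragment is illustrative only: the adjacency mapping, the uniform tie-breaking among equally loaded bins, and the choice to place each ball before advancing the walkers are assumptions, and the reset behavior discussed next is omitted for brevity.

```python
import random

# Illustrative sketch of the base scheme: d non-backtracking random walkers on a
# k-regular graph, with each new ball placed in the least loaded sampled bin.

def assign_jobs(adjacency, num_jobs, d=2, seed=0):
    rng = random.Random(seed)
    nodes = list(adjacency)
    loads = {v: 0 for v in nodes}          # balls currently in each bin
    walkers = rng.sample(nodes, d)         # distinct random starting bins
    previous = [None] * d                  # bin each walker just came from

    for _ in range(num_jobs):
        # Place the new ball in the least loaded bin currently sampled by the walkers.
        target = min(walkers, key=lambda v: loads[v])
        loads[target] += 1

        # Advance every walker one non-backtracking hop before the next ball arrives.
        for i, here in enumerate(walkers):
            choices = [v for v in adjacency[here] if v != previous[i]]
            previous[i], walkers[i] = here, rng.choice(choices)
    return loads
```

Here, adjacency would be a mapping from each bin to its k neighbors, mirroring the 3-regular graph of FIG.1A.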
Resets can be triggered as a result of a plurality of events, including: 1) After some fixed number of arrival requests (i.e., new balls coming in or new tasks coming into a plurality of task-processing servers); 2) After some predetermined fixed time; and 3) After the first intersection event after the immediate past reset, where the intersection event is defined as any of the random walkers revisiting past locations (after the immediate past reset) visited by any of the random walkers (including itself). In the latter, the past locations can be limited to an immediate past node, a node visited n nodes ago, or any of the past visited nodes. After a reset, each random walker picks a random location to restart the walk from. It is possible that the restart happens at a node that was visited recently in the past by one or more walkers, or even that multiple walkers end up on the same node. As discussed above, two other approaches are covered by the present disclosure, including 1) an approach based on a reversible non-uniform random-walk based load-balancing algorithm family with node weights; and 2) an approach based on a non-reversible non-uniform random-walk based load-balancing algorithm with the random walker only allowed to choose between next-hop nodes that have the minimum state value. Assume that a walker is currently on node i and arrived there from node j. By the non-backtracking principle, the walker cannot immediately go back to node j but has a choice among the remaining k−1 nodes to which node i is connected (k-regular graph). These nodes are j1, j2, . . . , j{k-1}. The second approach sets a weight y{j1}:=f(wi, w{j1}) for node j1, y{j2}:=f(wi, w{j2}) for node j2, . . . , and y{j{k-1}}:=f(wi, w{j{k-1}}) for node j{k-1}. Then the next destination node is chosen by choosing one of j1, j2, . . . , j{k-1} by using the probability distribution (y{j1}/Y, y{j2}/Y, . . . , y{j{k-1}}/Y), where Y=y{j1}+y{j2}+ . . . +y{j{k-1}}. As f(·) is a symmetric function of its arguments, we can use a reversible version of the walk. In the third approach, we follow the same procedure but with weights y{j1}:=w{j1}, y{j2}:=w{j2}, . . . , and y{j{k-1}}:=w{j{k-1}}; here the weights do not depend on the weight of node i, so they are not a symmetric function of the two endpoints. This is an example of a non-reversible version of the walk. An extreme version of this is to pick one of the least loaded (smallest queue-length) nodes among j1, j2, . . . , and j{k-1} and to send the walker there. Note that the first approach, discussed above, is a sub-case of both the second and the third approaches obtained by setting y{j1}=y{j2}= . . . =y{j{k-1}}=1. Finally, as we are considering the choice of the walker to go from node i to node j1, node i to node j2, . . . , and node i to node j{k-1}, we can also interpret y{j1} as the weight of the edge (i,j1), y{j2} as the weight of the edge (i,j2), . . . , and y{j{k-1}} as the weight of the edge (i,j{k-1}). The non-reversible walkers are useful since they can better load-balance by being able to find empty or less loaded nodes faster than reversible walkers. However, it may not always be feasible to implement a non-reversible walker, and therefore it should be left to the specific use case to decide what type of walker to use in practice. Additional information is provided in the Appendixes of the present disclosure, filed herewith, and incorporated by reference in their entirety into the present disclosure.
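For illustration, the biased next-hop choice used by the second and third approaches might be sketched as follows, using the exp(−αQi) node weight and a p-mean edge weight from the formulas above. The function names, default parameters, and random-number handling are assumptions made for the sketch rather than the claimed algorithm.

```python
import math
import random

# Sketch of the biased next-hop choice for the second (reversible) and third
# (non-reversible) approaches. Names and defaults are illustrative assumptions.

def node_weight(queue_length, alpha=1.0):
    return math.exp(-alpha * queue_length)

def p_mean(w_current, w_next, p=1.0):
    # Symmetric combination of the two endpoint weights (reversible walk);
    # p -> 0 would correspond to the geometric mean sqrt(w_current * w_next).
    return ((w_current ** p + w_next ** p) / 2.0) ** (1.0 / p)

def next_hop(current, came_from, adjacency, queue_lengths,
             alpha=1.0, p=1.0, reversible=True, rng=None):
    rng = rng or random.Random(0)
    candidates = [v for v in adjacency[current] if v != came_from]   # non-backtracking
    if reversible:
        w_cur = node_weight(queue_lengths[current], alpha)
        weights = [p_mean(w_cur, node_weight(queue_lengths[v], alpha), p)
                   for v in candidates]
    else:
        # Third approach: weights depend only on the next-hop nodes. An extreme
        # variant would instead pick min(candidates, key=lambda v: queue_lengths[v]).
        weights = [node_weight(queue_lengths[v], alpha) for v in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]
```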
Those having ordinary skill in the art will recognize that numerous modifications can be made to the specific implementations described above. The implementations should not be limited to the particular limitations described. Other implementations may be possible.
17,162
11863453
DESCRIPTION OF EXAMPLE EMBODIMENTS Overview This disclosure describes systems and methods that, among other things, improve technologies related to dynamically load balancing traffic based on predicted and actual load capacities of backend server nodes. By way of example, and not limitation, a method according to the various techniques described in this disclosure may include determining, by a first data node of a network, a predicted capacity of the first data node during a period of time. The method may also include sending, to a load balancer of the network, an indication of the predicted capacity to prompt the load balancer to send a first number of data flows to the first data node during the period of time. The method may further include determining, by the first data node and during the period of time, a difference between the predicted capacity of the first data node and an actual capacity of the first data node. Based at least in part on the difference, the method may include prompting the load balancer to send a second number of the data flows to the first data node during the period of time. In additional or alternative examples, the first data node may be associated with a first traffic class, and the method may include determining, by a controller of the network, the predicted capacity of the first data node during the period of time. Additionally, the controller may receive, during the period of time, telemetry data indicating the actual capacity of the first data node during the period of time. The method may also include determining, by the controller, that the difference between the actual capacity of the first data node and the predicted capacity of the first data node is greater than a threshold difference. Based at least in part on the difference being greater than the threshold difference, the controller may send, to the load balancer, a request to redirect a data flow associated with a second traffic class to the first data node during the period of time such that the data flow is handled according to the first traffic class. Additionally, the techniques described herein may be performed as a method and/or by a system having non-transitory computer-readable media storing computer-executable instructions that, when executed by one or more processors, performs the techniques described herein. Example Embodiments As discussed above, in an environment where load balancers direct traffic to a pool of server nodes, the load balancing criteria may not be sufficient to ensure that the server nodes will remain fully utilized throughout their lifetime, especially when traffic levels are inconsistent. As load balancers aim at minimizing the delay introduced in the traffic they handle, load balancing algorithms often trade some level of accuracy for performance. These server nodes, however, may have information about their nominal capacity (e.g., number of hardware/software interruptions, I/O, etc.), current utilization (e.g., memory, CPU, I/O, etc.), as well as their utilization history. This makes these nodes able to more accurately determine their real load state and available capacity, and even accommodate some level of overcommitment based on usage fluctuations, trends, and observed traffic patterns (e.g., by time-of-day, frequency, or other criteria). 
Additionally providing automatic upgrading of backend processes is a difficult task, and using Equal Cost Multipath (ECMP) routing to spread VPN traffic from a data center edge router to a pool of backend nodes does not allow for any sort of “pinning” behavior, nor does it allow for automatically adjusting the pinning values. Accordingly, one aspect of this disclosure is directed to techniques for these backend server nodes to complement load balancer decisions by claiming more traffic or warning about imminent congestion, thus emulating a “feedback control loop” to allow for dynamic load balancing by using more metrics than the load balancing algorithm is capable of handling. Take, for example, a load balancing algorithm that defines allocations based on harmonized number of tunnels allocated to each backend server node. As the traffic pattern or trend changes, the backend node may either let the load balancers know of an imminent congestion based on changes in the traffic pattern, trends, and/or usage history, as well as let the load balancers know that the server node's deeper analysis concludes that it can handle more traffic than the load balancer is currently sending to it. For instance, the backend server nodes may send an indication (e.g., an Explicit Congestion Notification (ECN) or the like) to complement the load balancers. In some examples, the feedback control loop may be defined with the desired Set Point (SP) as the estimated capacity, the current load as the Process Variable (PV), and the error as the difference between both, and based on the magnitude of the error, an appropriate control algorithm can be picked to gradually apply corrections (increase or decrease load), based on Proportional (P), Proportional Integral (PI), or Proportional Integral Derivative (PID) terms. Additionally, another aspect of this disclosure is directed to measuring and determining historical usage of a certain data flow and upgrading it to a better traffic class if resources are available (e.g., throughput). For example, a data flow (e.g., encrypted tunnel) may suddenly experience a rush of incoming traffic at a sustained rate. In order to control the CPU usage of a backend server node, the techniques described herein may dynamically detect this usage and place the data flow on a specific backend server node, while at the same time preventing additional data flows from using that node. For instance, a moving average technique may be used to adjust the pinning of a data flow to a specific backend server node, reserving the backend node for a high-throughput customer, and moving lower-throughput data flows to the remainder of the other backend nodes. In some instances, bandwidth and traffic usage may be determined and/or used to adjust the mappings on a load balancer. Additionally, net stats per-5-tuple may be used as well, allowing for the backend server nodes and/or a network controller to, among other things: guess how much load a client may consume based on historic usage data; auto-upgrade a data flow if a backend server node has spare resources; in the case of IPsec, detect whether decrypted child_sa traffic is sensitive to jitter/delay (e.g., multimedia) and handle that child_sa separately on a more powerful backend server node; allow a data flow to temporarily exceed its contractual flow rate to absorb spikes, and the like. Thus, according to the various techniques described in this disclosure, improvements in computer-related technology may be realized. 
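As one hedged illustration of the feedback control loop described above, with the estimated capacity as the Set Point (SP) and the current load as the Process Variable (PV), a backend server node might compute a correction term using PID components as sketched below. The gains and the class shape are illustrative assumptions, not values prescribed by this disclosure.

```python
# Sketch of the feedback control loop: SP = estimated capacity, PV = current load,
# error = SP - PV. Gains are placeholder assumptions; a P or PI controller could
# be obtained by zeroing the unused terms.

class LoadController:
    def __init__(self, kp=0.5, ki=0.1, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def correction(self, estimated_capacity, current_load):
        """Positive output -> ask the load balancer for more flows;
        negative output -> ask it to divert flows away."""
        error = estimated_capacity - current_load      # SP - PV
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```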
As discussed above, the techniques of this disclosure provide functionality for a backend server node to either let load balancers know of an imminent congestion based on changes in the traffic pattern, trends, and/or usage history, as well as let the load balancers know that the backend server node's deeper analysis concludes that it can handle more traffic than the load balancer is currently sending to it. This improves the functioning of load balancers and/or backend server nodes by more efficiently allocating data flows to specific backend server nodes that have available resources to handle the data flows. Additionally, in some instances a specific data flow that is associated with a first traffic class may be upgraded to a second, higher traffic class if a backend server node has available resources, thus providing a better experience for users. These are just some examples of the multiple improvements that may be realized according to the techniques described in this disclosure. These and other improvements will be easily understood and appreciated by those having ordinary skill in the art. By way of example, and not limitation, a method according to the various techniques described by this disclosure may include determining, by a data node (e.g., backend server node, worker node, etc.) of a network, a predicted (e.g., estimated) capacity of the data node during a period of time. In various examples, the data node may be one of multiple data nodes of the network that are configured to process data plane traffic (e.g., encapsulating security payload (ESP) packets associated with an IPsec connection, packets associated with a Wireguard connection, packets associated with a TLS/DTLS connection, etc.) or any form of encrypted payload. As such, in some examples the network may also include, in addition to the multiple data nodes, multiple control nodes that are configured to process control plane traffic (e.g., internet key exchange (IKE) packets associated with the IPsec connection, packets of an SSL VPN control protocol, packets of a Wireguard control protocol, etc.) or, similarly, any traffic related to protocols for establishing a secure authenticated session between a number of VPN peers, through which peers can exchange session lifecycle events. In some examples, the predicted capacity may be indicative of a number of available or unavailable computing resources of the data node. For instance, the computing resources may include, among other things, memory, processing units (e.g., CPU, GPU, etc.), throughput, number of hardware or software interruptions, I/O, and/or the like. In some examples, the predicted capacity of the data node may be determined based at least in part on utilization history associated with the data node. Additionally, or alternatively, the predicted capacity of the data node may be determined based at least in part on present behavior of the data node. For instance, if the data node determines that it has capacity to receive additional data flows or, conversely, that it is over capacity and needs to reduce the number of data flows being sent to it, then the data node may send an indication to a load balancer to either increase or decrease the number of data flows being sent to it. As such, the data node may determine its predicted capacity during the period of time based on sending the indication to either increase or decrease the number of flows. 
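For illustration only, a data node might estimate its predicted capacity for an upcoming period from its utilization history with a simple moving average, as sketched below. The window size, the history format, and the moving-average model itself are assumptions; any other predictive model based on usage fluctuations, trends, or traffic patterns could be substituted.

```python
# Illustrative estimate of a node's predicted capacity for an upcoming period,
# based on utilization history for comparable periods (e.g., same time-of-day).

def predicted_capacity(nominal_capacity, utilization_history, window=7):
    """utilization_history: recent utilization samples in [0.0, 1.0] for
    comparable historical periods."""
    recent = utilization_history[-window:] or [0.0]
    expected_utilization = sum(recent) / len(recent)
    return nominal_capacity * (1.0 - expected_utilization)   # headroom left for new flows
```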
In some examples, usage statistics and/or utilization history associated with a data node may be stored in a remote database. In this way, if a data node failure occurs, a new data node may recover previous usage statistics and/or utilization data for the flows of the failed data node. In some examples, the period of time during which the predicted capacity is determined may be a present period of time, a future period of time, a future instance of time, etc. By way of example, and not limitation, the period of time may be an interval of time from, for instance, 4:00 AM to 6:00 AM, 6:00 PM to 8:00 PM, or the like. Additionally, or alternatively, the period of time may be an instance of time occurring at 4:00 PM, 5:00 PM, 6:00 PM, or the like. In even further examples, the period of time may be associated with particular days of the week and/or days of the year (e.g., weekday (Monday, Tuesday, Friday, etc.), weekend (e.g., Saturday or Sunday), Easter, Independence Day, Thanksgiving, Christmas, etc.). As an example, a period of time during which a predicted capacity may be determined may be from 5:00 PM on a Friday to 8:00 AM on a Monday, or the like. In some examples, the method may further include sending, to a load balancer of the network, an indication of the predicted capacity to prompt the load balancer to send a first number of data flows to the data node during the period of time. The first number of data flows may be a predicted number of data flows that, if all sent to the data node during the period of time, would cause the data node to operate at or near full capacity. In some examples, the load balancer may send data flows to the multiple data nodes according to an equal cost multipath (ECMP) routing strategy. In various examples, the method also may include determining, by the data node and during the period of time, a difference between the predicted capacity of the data node and an actual capacity of the data node. Accordingly, based at least in part on the difference, the data node may prompt the load balancer to send a second number of the data flows to the data node during the period of time. In some examples, the second number of the data flows may be greater than the first number of the data flows. Alternatively, the second number of the data flows may be less than the first number of the data flows. In some examples, prompting the load balancer to send the second number of the data flows may be based at least in part on determining that the difference is greater than a threshold difference. In some examples, the actual capacity may be indicative of a current number of available or unavailable computing resources of the data node during the present period of time. The computing resources may include, among other things, memory, processing units (e.g., CPU, GPU, etc.), throughput, number of hardware or software interruptions, I/O, and/or the like. In at least one example, the data node may determine, during a second period of time that is subsequent to the first period of time, a second difference between the actual capacity of the data node and the second number of the data flows. Based at least in part on the second difference, the data node may prompt the load balancer to send a third number of the data flows to the data node during the second period of time. In some instances, the second number of the data flows may be either one of greater than the first number or less than the first number. 
Additionally, the third number of the data flows may be either one of greater than the second number or less than the second number. In other words, the third number of the data flows may be determined in order to push the data node closer to its ideal operating capacity, and that may include either one of increasing or decreasing the total number of data flows being sent to the data node, based on the current capacity. The above described method may, in at least some examples, additionally or alternatively include operations for dynamically upgrading a data flow from a first traffic class to a second traffic class. For instance, the data node of the above example may comprise a first data node of the network that is associated with a first traffic class. Additionally, the predicted capacity of the first data node may be determined by a controller of the network. In some examples, the traffic class may be associated with a specific quality of service (QoS) metric or a specific traffic profile (e.g., audio traffic, video traffic, web traffic, streaming, etc.). In some examples, the method may also include receiving, at the controller and during the period of time, telemetry data indicating the actual capacity of the first data node during the period of time. That is, the telemetry data may be indicative of a number of available or unavailable computing resources of the first data node. In some examples, the controller may determine that a difference between the actual capacity of the first data node and the predicted capacity of the first data node is greater than a threshold difference (e.g., that the first data node has more than a threshold amount of available computing resources). Based at least in part on the difference being greater than the threshold difference, in some examples the controller may send, to the load balancer, a request to redirect one or more specific data flow(s) associated with a second traffic class to the first data node during the period of time so that the data flow(s) can be handled according to the first traffic class. For instance, the one or more specific data flow(s) may be hosted by one or more second data node(s) prior to being redirected, and the one or more second data node(s) may be associated with the second traffic class. In some examples, the second traffic class may be lower than the first traffic class. In at least one examples, the controller may determine to redirect the one or more specific data flow(s) based at least in part on a current capacity of the one or more second data node(s) during the period of time being greater than an estimated capacity. In other words, the controller may determine to redirect the data flow(s) based on the second data node(s) operating above their optimal capacity. In some examples, during a second period of time subsequent to the period of time in which the one or more specific data flow(s) were redirected, the controller may send a second request to the load balancer to redirect some or all of the one or more specific data flow(s) to at least one of the second data node(s) or a third data node that is associated with the second traffic class. For instance, data flows that are associated with the first traffic class that are to be sent to the first data node may need additional computing resources, and the first data node may no longer have additional computing resources available to allocate to the one or more specific data flow(s) associated with the lower traffic class. 
As such, the one or more specific data flow(s) may need to be sent back to data nodes that are associated with the second traffic class. Certain implementations and embodiments of the disclosure will now be described more fully below with reference to the accompanying figures, in which various aspects are shown. However, the various aspects may be implemented in many different forms and should not be construed as limited to the implementations set forth herein. The disclosure encompasses variations of the embodiments, as described herein. Like numbers refer to like elements throughout. FIG.1illustrates a schematic view of an example system-architecture100of a networked environment102including a tunneled communication session comprising split control-plane and data-plane traffic flows. Generally, the networked environment102may include devices that are housed or located in one or more data centers104that may be located at different physical locations. For instance, the networked environment102may be supported by networks of devices in a public cloud computing platform, a private/enterprise computing platform, and/or any combination thereof. The one or more data centers104may be physical facilities or buildings located across geographic areas that are designated to store networked devices that are part of the networked environment102. The data centers104may include various networking devices, as well as redundant or backup components and infrastructure for power supply, data communications connections, environmental controls, and various security devices. In some examples, the data centers104may include one or more virtual data centers which are a pool or collection of cloud infrastructure resources specifically designed for enterprise needs, and/or for cloud-based service provider needs. Generally, the data centers104(physical and/or virtual) may provide basic resources such as processor (CPU), memory (RAM), storage (disk), and networking (bandwidth). However, in some examples the devices in the networked environment102may not be located in explicitly defined data centers104and, rather, may be located in other locations or buildings. The networked environment102may be accessible to client devices106over one or more networks108. The networked environment102, and the networks108, may each respectively include one or more networks implemented by any viable communication technology, such as wired and/or wireless modalities and/or technologies. The networked environment102and networks108may each may include any combination of Personal Area Networks (PANs), Local Area Networks (LANs), Campus Area Networks (CANs), Metropolitan Area Networks (MANs), extranets, intranets, the Internet, short-range wireless communication networks (e.g., ZigBee, Bluetooth, etc.), Virtual Private Networks (VPNs), Wide Area Networks (WANs) — both centralized and/or distributed — and/or any combination, permutation, and/or aggregation thereof. The networked environment102may include devices, virtual resources, or other nodes that relay packets from one network segment to another by nodes in the computer network. In some examples, the networked environment102may provide, host, provide connectivity to, or otherwise support one or more services110for client devices106to connect to and use. The client devices106may comprise any type of device configured to communicate using various communication protocols (e.g., VPN, SSL, TLS, DTLS, and/or any other protocol) over the networks108. 
For instance, the client device106may comprise a personal user device (e.g., desktop computers, laptop computers, phones, tablets, wearable devices, entertainment devices such as televisions, etc.), network devices (e.g., servers, routers, switches, access points, etc.), and/or any other type of computing device. In some examples, the networked environment102may include edge routers112(1) and112(2) (hereinafter referred to collectively as “edge routers112”), load balancers114(1)-114(N) (hereinafter referred to collectively as “load balancers114”) (where N represents any number greater than or equal to one), data nodes116(1)-116(N), control nodes118(1)-118(N), firewall nodes120(1)-120(N), a key-value store122, and a controller124. In some examples, the edge routers112and the load balancers114may use ECMP, which is a strategy where next-hop packet forwarding to a single destination can occur over multiple “best paths” which tie for top place in routing metric calculations. Further, any routing strategy may be used by the edge routers112and the load balancers114, such as Open Shortest Path First (OSPF), Intermediate System to Intermediate System (ISIS), Enhanced Interior Gateway Routing Protocol (EIGRP), and/or Border Gateway Protocol (BGP) in conjunction with ECMP routing. Although shown inFIG.1as separate entities, it is to be appreciated that in some instances the edge routers112and the load balancers114may reside on a same hardware device and/or node. The edge routers112may, in some instances, balance traffic126based on a hash of a network 5-tuple in order to route packets to the load balancers114. The traffic126may include both control-plane traffic128and data-plane traffic130. Additionally, the load balancers114may balance traffic126based on a hash of a network 6-tuple in order to route control-plane traffic128to the control nodes118and to route data-plane traffic130to the data nodes116. The network 6-tuple of a packet may include a packet's SPI value, source IP address, source port, destination IP address, destination port, and protocol. As shown, the networked environment102may include data nodes116(1)-116(N) (hereinafter referred to collectively as “data nodes116”) (where N represents any number greater than or equal to one). In some examples, the data nodes116may process data-plane traffic130on behalf of the networked environment102. The data-plane traffic130may comprise ESP traffic associated with an IPsec connection. In some examples a data node116(1) of the data nodes116may be associated with one or more IPsec security associations. Additionally, the data nodes116may forward data plane traffic130to one or more downstream nodes and/or devices, such as the firewall nodes120(1)-120(N) (hereinafter referred to collectively as “firewall nodes120”) (where N represents any number greater than or equal to one). In some examples, a first data node of the data nodes116may be associated with a first traffic class, a second data node of the data nodes116may be associated with a second traffic class, and so forth. Additionally, or alternatively, a first interface of a first data node of the data nodes116may be associated with a first traffic class, a second interface of the first data node of the data nodes116may be associated with a second traffic class, and so forth. 
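As a hedged illustration of the tuple-based balancing described above, a load balancer might map a flow's network 5-tuple or 6-tuple to a node as sketched below; the specific hash function and the modulo mapping are illustrative stand-ins rather than the mechanism actually used by the edge routers112or load balancers114.

```python
import hashlib

# Minimal sketch of hash-based flow steering over a 5-tuple or 6-tuple. SHA-256
# over a joined string is an illustrative stand-in for the actual hash used.

def pick_node(flow_tuple, nodes):
    """flow_tuple: e.g. (spi, src_ip, src_port, dst_ip, dst_port, protocol)."""
    digest = hashlib.sha256("|".join(str(f) for f in flow_tuple).encode()).digest()
    return nodes[int.from_bytes(digest[:8], "big") % len(nodes)]
```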
In some examples, the data nodes116may determine their predicted capacities during various periods of time and send indications of their predicted capacities to the load balancers114so that the load balancers114may adjust (e.g., increase or decrease) a number of data flows of the data-plane traffic130that the load balancer114are sending to respective data nodes116. The data nodes116may perform these techniques as part of a feedback control loop to ensure that the computing resources of each of the data nodes116are being used to their maximum potential or capacity. In some examples, the choice of algorithm used for the feedback control loop may determine how efficiently or smoothly a data node reaches its maximum potential or capacity. The networked environment102may also include one or more control nodes118(1)-118(N) (hereinafter referred to collectively as “control nodes118”) (where N represents any number greater than or equal to one). In some examples, the control nodes118may process control-plane traffic128on behalf of the networked environment102. The control-plane traffic128may comprise IKE traffic associated with an IPsec connection. As shown, both the data nodes116and the control nodes118may perform direct server return (DSR) to send return traffic132back to the client devices106. That is, the data nodes116and the control nodes118may send return traffic132to the client devices106via the edge router112(1), bypassing the load balancers114. Additionally, or alternatively, the data nodes116and the control nodes118may send the return traffic132directly to the client devices, bypassing the edge router112(1). The networked environment102may also include a key-value store122and a controller124. The key-value store122may include one or more databases that are accessible to the various nodes and devices of the networked environment102. In some examples, the load balancers114, the data nodes116, the control nodes118, and other nodes and/or devices of the networked environment102may read data from and/or write data to the key-value store122. The key-value store122may store associations between SPI values and SAs, SPI values and sets of 5-tuple values, and the like. In some examples, the controller124may receive telemetry data from the data nodes116and/or the control nodes118and, based at least in part on the telemetry data, determine statuses associated with individual ones of the data nodes116and/or the control nodes118. For instance, the controller124may receive telemetry data indicating a load capacity associated with the data node116(1). The controller124may also determine if the load capacity meets or exceeds a threshold load capacity and, if so, the controller124may prompt the data node116(1) to send a notification to the load balancer114(1) to request that the load balancer114(1) adjust where it is sending the data-plane traffic130. For instance, the controller124may send an indication to the load balancer114(1) to upgrade one or more data flows of the data-plane traffic130from a first traffic class to a second traffic class by, for instance, sending the data flows to the data node116(N) rather than the data node116(1). Although depicted inFIG.1as separate hardware components, it should be understood that the edge routers112, the load balancers114, the data nodes116, the control nodes118, the firewall nodes120, the key-value store122, and/or the controller124may be software components at least partially residing in memory. 
In this way, one or more processors may execute instructions that cause the one or more processors to perform all of the operations described herein with respect to the edge routers112, the load balancers114, the data nodes116, the control nodes118, the firewall nodes120, the key-value store122, and/or the controller124. In some instances, edge routers112, the load balancers114, the data nodes116, the control nodes118, the firewall nodes120, the key-value store122, and/or the controller124may be individual hardware components and/or software components that reside in a standalone device or a system of standalone devices. Additionally, or alternatively, the edge routers112, the load balancers114, the data nodes116, the control nodes118, the firewall nodes120, the key-value store122, and/or the controller124may include any type of networking device, such as servers, switches, routers, hubs, bridges, gateways, modems, repeaters, access points, etc. FIGS.2A and2Bcollectively illustrate a schematic view of an example traffic flow200in which a data node116(1) sends, to a load balancer114, a request for the load balancer114to increase the number of data flows being sent to the data node116(1). At “1,” the client devices106(1),106(2), and106(N) (hereinafter referred to collectively as “client devices106”) (where N represents any number greater than or equal to one) may send traffic202(e.g., control plane and data plane traffic) to the load balancer114, and the load balancer114may forward the traffic204(e.g., data plane traffic) to the respective data nodes116according to, for instance, an ECMP routing strategy based on a network 5-tuple. For instance, the load balancer114may send node116(1) traffic204(1) (e.g., data flows) to the data node116(1), node116(2) traffic204(2) to the data node116(2), and node116(N) traffic204(N) to the data node116(N). As shown inFIG.2A, each of the data nodes116may be operating at a different capacity based at least in part on a number of data flows currently being sent to each of the data nodes116. For instance, data node116(1) is shown operating at 65% capacity, data node116(2) is shown operating at 98% capacity, and data node116(N) is shown operating at 96% capacity. At “2,” the data node116(1) may send one or more optimization requests206to the load balancer114. The data node116(1) may send the optimization request(s)206to the load balancer114based at least in part on the data node116(1) operating at 65% capacity. For instance, the optimization request(s)206may indicate to the load balancer114that the data node116(1) is operating at less than full capacity, and that the load balancer114may send additional data flows to the data node116(1). Although shown inFIGS.2A and2Bas a request to increase the number of data flows sent to the data node116(1), the optimization request(s)206may also be used to indicate that a data node is operating above full capacity and that the load balancer114should redirect one or more data flows away from that data node. At “3,” the load balancer114(1) may send additional traffic208(e.g., additional data flows) to the data node116(1) to increase the capacity of the data node116(1). For instance, the capacity of the data node116(1) is increased to 94% based on receiving the additional traffic208shown inFIG.2B. The load balancer114(1) may send the additional traffic208to the data node116(1) based at least in part on receiving the optimization request206from the data node116(1) as part of a feedback control loop. 
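The optimization request at step “2” can be sketched as follows; the utilization band and message fields are illustrative assumptions rather than part of the disclosure. A data node compares its current utilization to a target band and asks the load balancer for more flows when it is under-loaded or fewer flows when it is over-loaded.
```
# Hedged sketch of the optimization-request decision in FIGS. 2A/2B: compare
# current utilization to an assumed target band and emit a request.
def build_optimization_request(node_id, utilization, low=0.90, high=0.97):
    if utilization < low:
        return {"node": node_id, "action": "increase_flows",
                "headroom": round(high - utilization, 2)}
    if utilization > high:
        return {"node": node_id, "action": "decrease_flows",
                "overload": round(utilization - high, 2)}
    return None  # within the target band; no request needed

print(build_optimization_request("data-node-116-1", 0.65))  # asks for more flows
print(build_optimization_request("data-node-116-2", 0.98))  # asks for fewer flows
```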
FIGS.3A and3Bcollectively illustrate a schematic view of an example traffic flow300in which one or more data node(s)116send telemetry data306to a controller124, and the controller124uses the telemetry data306to determine to upgrade one or more data flows from a first traffic class to a second traffic class. At “1,” the client devices106may send traffic302(e.g., data plane and control plane traffic) to the load balancer114, and the load balancer114may forward the traffic304(e.g., data plane traffic) to the respective data nodes116according to, for instance, an ECMP routing strategy based on a network 5-tuple. For instance, the load balancer114may send node116(1) traffic304(1) (e.g., data flows of a first traffic class) to the data node116(1), node116(2) traffic304(2) (e.g., data flows of a second traffic class) to the data node116(2), and node116(N) traffic304(N) (e.g., data flows of a third traffic class) to the data node116(N). As shown inFIG.3A, each of the data nodes116may be operating at a different capacity based at least in part on a number of data flows currently being sent to each of the data nodes116. For instance, data node116(1) is shown operating at 34% capacity, data node116(2) is shown operating at 100% capacity, and data node116(N) is shown operating at 94% capacity. At “2,” the data nodes116may send telemetry data306to the controller124. The telemetry data306may be indicative of the load capacities of the data nodes116. For instance, the controller124may receive first telemetry data306from the data node116(1) indicating that the current load capacity of the data node116(1) is 34%, second telemetry data306from the data node116(2) indicating that the current load capacity of the data node116(2) is 100%, and so forth. In some examples, the data node116(1) may be associated with the first traffic class, the data node116(2) may be associated with the second traffic class, and the data node116(N) may be associated with the third traffic class. At “3,” the controller124may send a traffic class upgrade indication308to the load balancer114. The traffic class upgrade indication308may indicate that the load balancer114is to redirect some of the node116(2) traffic of the second traffic class to the data node116(1) so that the node116(2) traffic may be handled according to the first traffic class. For example, based at least in part on receiving the telemetry data306from the data nodes116, the controller124may determine that the data node116(1), which is associated with a first traffic class, has additional capacity and/or resources to receive additional data flows. In addition, the controller124may determine, based at least in part on the telemetry data306, that the data node116(2), which is associated with a second, lower traffic class, is operating at full capacity. Based on this, the controller124may send the traffic class upgrade indication308to cause the load balancer114to upgrade one or more data flows, which are being sent to node116(2) and handled according to the second traffic class, to be sent to the data node116(1) so that the data flows may be handled according to the first traffic class. At “4,” the load balancer114may send a portion of the node116(2) traffic310of the second traffic class to the data node116(1) such that the portion of the node116(2) traffic310may be handled according to the first traffic class.
For instance, one or more data flows that are typically sent to the data node116(2) and handled according to the second traffic class may be sent to the data node116(1) so that the data flows may be handled according to the first, higher traffic class since the data node116(1) has spare capacity and/or resources. Additionally, upgrading the one or more data flows may further be based at least in part on the capacity of the data node116(2) operating at full capacity. FIG.4illustrates a data flow diagram of an example traffic flow400between a load balancer114, a data node116, and a controller124to perform some of the techniques described herein for dynamic load adjustment and dynamic traffic class upgrading. The operations404-418shown inFIG.4may be performed at various instances or periods of time with respect to the timeline402. However, it is to be understood that the operations404-418may be performed at different times, and that the times shown inFIG.4are merely used for illustration purposes. The timeline402and the times T0, T1, T2, and T3, may represent different values or units of time. For instance, the timeline402may be in units of milliseconds and time T0may represent 0 milliseconds, time T1may represent 1 millisecond, time T2may represent 2 milliseconds, and time T3may represent 3 milliseconds. However, this is merely an example and other units of time may be used (e.g., microseconds, seconds, minutes, hours, etc.). Furthermore, the intervals between the times T0, T1, T2, and T3, may not be equal (e.g., time T0may represent 0 seconds, time T1may represent 1 second, time T2may represent 4 seconds, and time T3may represent 7 seconds, etc.). At time T0the data node116may send telemetry data404to the controller124. The telemetry data404may be indicative of an actual or current capacity of the data node116at time T0. For instance, the telemetry data404may indicate a current number of available or unavailable computing resources of the data node116at time T0. Between times T0and T1, the data node116and/or the controller124may perform operation(s)406and compare the actual capacity of the data node116during the period of time from T0to T1with the predicted capacity of the data node116during the period of time from T0to T1. At time T1, the data node116may send telemetry data408to the controller124. The telemetry data408may be indicative of an actual or current capacity of the data node116at time T1. For instance, the telemetry data408may indicate a current number of available or unavailable computing resources of the data node116at time T1. Additionally, at time T1the data node116may also send a request410to the load balancer114to increase or decrease the number of data flows being sent to the data node116. For instance, based on the data node116performing operation406, the data node116may determine that its actual capacity during the period of time from T0to T1is greater than or less than the predicted capacity of the data node116during the period of time from T0to T1. As such, the data node116may send the request410to the load balancer114to increase or decrease the number of data flows being sent to the data node116based at least in part on comparing the actual capacity and the predicted capacity. In response to receiving the request410, the load balancer114may increase or decrease the number of data flows being sent to the data node116during the period of time from T1to T2.
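A rough sketch of the per-interval check in FIG.4, under the assumption that capacity is expressed as a 0-1 utilization and that a fixed tolerance separates "increase" from "decrease" requests:
```
# Assumed semantics, not the patent's algorithm: at each interval the data
# node compares measured capacity against the prediction and, when the gap
# exceeds a tolerance, asks the load balancer for more or fewer flows.
def interval_action(predicted, actual, tolerance=0.05):
    gap = predicted - actual
    if gap > tolerance:
        return "request_increase"   # cooler than predicted: can take more flows
    if gap < -tolerance:
        return "request_decrease"   # hotter than predicted: shed some flows
    return "no_change"

timeline = [("T0-T1", 0.80, 0.62), ("T1-T2", 0.80, 0.79), ("T2-T3", 0.80, 0.91)]
for interval, predicted, actual in timeline:
    print(interval, interval_action(predicted, actual))
```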
Between times T1and T2, the data node116and/or the controller124may perform operation(s)412and compare the actual capacity of the data node116during the period of time from T1to T2with the predicted capacity of the data node116during the period of time from T1to T2. At time T2, the data node116may send telemetry data414to the controller124. The telemetry data414may be indicative of an actual or current capacity of the data node116at time T2. For instance, the telemetry data414may indicate a current number of available or unavailable computing resources of the data node116at time T2. Between times T2and T3, the data node116and/or the controller124may perform operation(s)416and compare the actual capacity of the data node116during the period of time from T2to T3with the predicted capacity of the data node116during the period of time from T2to T3. Based on the controller124performing operation416, the controller124may determine that the data node116has additional capacity. As such, the controller124may send the request418to the load balancer114to upgrade a traffic class of a data flow by sending the data flow to the data node116. For instance, the data node116may be associated with a higher traffic class than a current data node where the data flow is being sent. In response to receiving the request418, the load balancer114may redirect a data flow of a lower traffic class to be sent to the data node116such that the data flow may be handled according to the higher traffic class of the data node116during a period of time after T3. FIGS.5and6illustrate logic flow diagrams of various example methods associated with the technologies presented herein for load balancing encrypted traffic based on SPI values. The logical operations described herein with respect toFIGS.5and6may be implemented (1) as a sequence of computer-implemented acts or program modules running on a computing system and/or (2) as interconnected machine logic circuits or circuit modules within a computing system. The implementation of the various components described herein is a matter of choice dependent on the performance and other requirements of the computing system. Accordingly, the logical operations described herein are referred to variously as operations, structural devices, acts, or modules. These operations, structural devices, acts, and modules can be implemented in software, in firmware, in special purpose digital logic, and any combination thereof. It should also be appreciated that more or fewer operations might be performed than shown inFIGS.5and6and described herein. These operations can also be performed in parallel, or in a different order than those described herein. Some or all of these operations can also be performed by components other than those specifically identified. Although the techniques described in this disclosure are presented with reference to specific components, in other examples, the techniques may be implemented by fewer components, more components, different components, or any configuration of components. FIG.5illustrates a logic flow diagram of an example method500for dynamic load adjustment that may be performed at least partially by a data node of a network, such as one of the data nodes116. The method500begins at operation502, which includes determining, by a data node of a network, a predicted capacity of the data node during a period of time.
For instance, the data node116(1) of the networked environment102may determine its predicted capacity during a period of time (e.g., from 5:00 PM to 8:00 PM on a Friday). At operation504, the method500includes sending, to a load balancer of the network, an indication of the predicted capacity to prompt the load balancer to send a first number of data flows to the data node during the period of time. For instance, the data node116(1) may send the indication of the predicted capacity to the load balancer114(1). In response to receiving the indication, the load balancer114(1) may send a first number of data flows of data-plane traffic130to the data node116(1) during the period of time (e.g., starting at 5:00 PM on Friday). At operation506, the method500includes determining, by the data node and during the period of time, a difference between the predicted capacity of the data node and an actual capacity of the data node. For instance, the data node116(1) may determine the difference between the predicted capacity of the data node during the period of time (e.g., 5:00 PM to 8:00 PM on Friday) and the actual capacity of the data node measured at some instance of time during the period of time (e.g., at 5:15 PM on Friday). At operation508, the method500includes, based at least in part on the difference, prompting the load balancer to send a second number of the data flows to the data node during the period of time. For instance, the data node116(1) may prompt the load balancer114(1) to send the second number of the data flows to the data node116(1) during the period of time (e.g., from 5:00 PM to 8:00 PM on Friday). In some examples, the second number of the data flows may be less than the first number of the data flows in order to decrease the load of the data node116(1). In other examples, the second number of the data flows may be greater than the first number of the data flows in order to increase the load of the data node116(1). FIG.6illustrates a logic flow diagram of an example method600for dynamic traffic class upgrading that may be performed at least partially by a controller of a network, such as the controller124of the networked environment102. The method600begins at operation602, which includes determining, by a controller of a network, a predicted capacity of a first data node of the network during a period of time, the first data node being associated with a first traffic class. For instance, the controller124may determine a predicted capacity of a first data node116(1) during a period of time (e.g., from 5:00 PM to 8:00 PM on a Friday). At operation604, the method600includes receiving, at the controller and during the period of time, telemetry data indicating an actual capacity of the first data node during the period of time. For instance, the controller124may receive telemetry data306from the data nodes116, and the telemetry data306may indicate the actual capacity of each of the data nodes116of the networked environment102during the period of time (e.g., at 5:15 PM on Friday). At operation606, the method600includes determining, by the controller, that a difference between the actual capacity of the first data node and the predicted capacity of the first data node is greater than a threshold difference. For example, the controller124may determine that the difference between the actual capacity of the first data node116(1) and the predicted capacity is greater than the threshold difference. 
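The controller-side check can be sketched as follows; the threshold value, message fields, and class numbering are illustrative assumptions rather than the controller's actual interface. The controller treats the gap between a higher-class node's predicted and actual capacity as available headroom and, when the gap exceeds the threshold, asks the load balancer to redirect a lower-class flow to that node.
```
# Hedged sketch of the threshold test and the resulting redirect request.
def maybe_request_upgrade(node_id, predicted, actual, threshold=0.50):
    headroom = predicted - actual          # e.g., 0.95 - 0.34 = 0.61
    if headroom > threshold:
        return {"to": "load-balancer", "redirect_flow_of_class": 2,
                "target_node": node_id, "handle_as_class": 1}
    return None  # difference within the threshold; leave flows where they are

print(maybe_request_upgrade("data-node-116-1", predicted=0.95, actual=0.34))
```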
In some instances, the threshold difference may be a percentage of available computing resources and/or capacity of the data nodes116. For example, the threshold difference may be that at least 40%, 50%, 60%, etc. of resources of a data node are available. At operation608, the method600includes sending, by the controller and to a load balancer of the network, a request to redirect a data flow associated with a second traffic class to the first data node during the period of time such that the data flow is handled according to the first traffic class. For instance, the controller124may send the request to the load balancer114(1). In response, the load balancer114(1) may redirect the data flow associated with the second traffic class to the data node116(1), which may be associated with the first traffic class. For instance, the data flow may have been previously sent to the data node116(N), which may be associated with the second traffic class, and the load balancer114(1) may redirect that data flow to the data node116(1) during the period of time. FIG.7illustrates a schematic view of an example computer-hardware architecture for implementing a network node and/or computing device, such as a load balancer114, control node118, data node116, controller124, etc. that can be utilized to implement aspects of the various technologies presented herein. The computer architecture shown inFIG.7illustrates a conventional server computer, network device, workstation, desktop computer, laptop, tablet, network appliance, e-reader, smartphone, and/or other computing device, and can be utilized to execute any of the software components presented herein. The computer700may comprise networked devices such as servers, switches, routers, hubs, bridges, gateways, modems, repeaters, access points, etc. The computer700includes a baseboard702, or “motherboard,” which is a printed circuit board to which a multitude of components or devices can be connected by way of a system bus or other electrical communication paths. In one illustrative configuration, one or more central processing units (“CPUs”)704operate in conjunction with a chipset706. The CPUs704can be standard programmable processors that perform arithmetic and logical operations necessary for the operation of the computer700. The CPUs704perform operations by transitioning from one discrete, physical state to the next through the manipulation of switching elements that differentiate between and change these states. Switching elements generally include electronic circuits that maintain one of two binary states, such as flip-flops, and electronic circuits that provide an output state based on the logical combination of the states of one or more other switching elements, such as logic gates. These basic switching elements can be combined to create more complex logic circuits, including registers, adders-subtractors, arithmetic logic units, floating-point units, and the like. The chipset706provides an interface between the CPUs704and the remainder of the components and devices on the baseboard702. The chipset706can provide an interface to a RAM708, used as the main memory in the computer700. The chipset706can further provide an interface to a computer-readable storage medium such as a read-only memory (“ROM”)710or non-volatile RAM (“NVRAM”) for storing basic routines that help to startup the computer700and to transfer information between the various components and devices. 
The ROM710or NVRAM can also store other software components necessary for the operation of the computer700in accordance with the configurations described herein. The computer700can operate in a networked environment using logical connections to remote computing devices and computer systems through a network, such as the network(s)108and/or the network(s)724. The chipset706can include functionality for providing network connectivity through a NIC712, such as a gigabit Ethernet adapter. The NIC712is capable of connecting the computer700to other computing devices over the network. It should be appreciated that multiple NICs712can be present in the computer700, connecting the computer to other types of networks and remote computer systems. In some examples, the NIC712may be configured to perform at least some of the techniques described herein and may include components for performing the techniques described herein. The computer700can be connected to a storage device718that provides non-volatile storage for the computer. The storage device718can store an operating system720, programs722, and data, which have been described in greater detail herein. The storage device718can be connected to the computer700through a storage controller714connected to the chipset706. The storage device718can consist of one or more physical storage units. The storage controller714can interface with the physical storage units through a serial attached SCSI (“SAS”) interface, a serial advanced technology attachment (“SATA”) interface, a fiber channel (“FC”) interface, or other type of interface for physically connecting and transferring data between computers and physical storage units. The computer700can store data on the storage device718by transforming the physical state of the physical storage units to reflect the information being stored. The specific transformation of physical state can depend on various factors, in different embodiments of this description. Examples of such factors can include, but are not limited to, the technology used to implement the physical storage units, whether the storage device718is characterized as primary or secondary storage, and the like. For example, the computer700can store information to the storage device718by issuing instructions through the storage controller714to alter the magnetic characteristics of a particular location within a magnetic disk drive unit, the reflective or refractive characteristics of a particular location in an optical storage unit, or the electrical characteristics of a particular capacitor, transistor, or other discrete component in a solid-state storage unit. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this description. The computer700can further read information from the storage device718by detecting the physical states or characteristics of one or more particular locations within the physical storage units. In addition to the mass storage device718described above, the computer700can have access to other computer-readable storage media to store and retrieve information, such as program modules, data structures, or other data. It should be appreciated by those skilled in the art that computer-readable storage media is any available media that provides for the non-transitory storage of data and that can be accessed by the computer700. 
In some examples, the operations performed by the system-architecture100and/or any components included therein may be supported by one or more devices similar to computer700. Stated otherwise, some or all of the operations performed by the system-architecture100, and/or any components included therein, may be performed by one or more computer devices700operating in a cloud-based arrangement. By way of example, and not limitation, computer-readable storage media can include volatile and non-volatile, removable and non-removable media implemented in any method or technology. Computer-readable storage media includes, but is not limited to, RAM, ROM, erasable programmable ROM (“EPROM”), electrically-erasable programmable ROM (“EEPROM”), flash memory or other solid-state memory technology, compact disc ROM (“CD-ROM”), digital versatile disk (“DVD”), high definition DVD (“HD-DVD”), BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information in a non-transitory fashion. As mentioned briefly above, the storage device718can store an operating system720utilized to control the operation of the computer700. According to one embodiment, the operating system comprises the LINUX operating system. According to another embodiment, the operating system comprises the WINDOWS® SERVER operating system from MICROSOFT Corporation of Redmond, Washington. According to further embodiments, the operating system can comprise the UNIX operating system or one of its variants. It should be appreciated that other operating systems can also be utilized. The storage device718can store other system or application programs and data utilized by the computer700. In one embodiment, the storage device718or other computer-readable storage media is encoded with computer-executable instructions which, when loaded into the computer700, transform the computer from a general-purpose computing system into a special-purpose computer capable of implementing the embodiments described herein. These computer-executable instructions transform the computer700by specifying how the CPUs704transition between states, as described above. According to one embodiment, the computer700has access to computer-readable storage media storing computer-executable instructions which, when executed by the computer700, perform the various processes described above with regard toFIGS.1-6. The computer700can also include computer-readable storage media having instructions stored thereupon for performing any of the other computer-implemented operations described herein. The computer700can also include one or more input/output controllers716for receiving and processing input from a number of input devices, such as a keyboard, a mouse, a touchpad, a touch screen, an electronic stylus, or other type of input device. Similarly, an input/output controller716can provide output to a display, such as a computer monitor, a flat-panel display, a digital projector, a printer, or other type of output device. It will be appreciated that the computer700might not include all of the components shown inFIG.7, can include other components that are not explicitly shown inFIG.7, or might utilize an architecture completely different than that shown inFIG.7. As described herein, the computer700may comprise one or more of data nodes, control nodes, firewall nodes, edge routers, and/or key-value stores.
The computer700may include one or more hardware processors704(processors) configured to execute one or more stored instructions. The processor(s)704may comprise one or more cores. Further, the computer700may include one or more network interfaces (e.g., NIC712) configured to provide communications between the computer700and other devices over a network, such as the network(s)108and/or724. The network interfaces may include devices configured to couple to personal area networks (PANs), wired and wireless local area networks (LANs), wired and wireless wide area networks (WANs), and so forth. For example, the network interfaces may include devices compatible with Ethernet, Wi-Fi™, and so forth. The programs722may comprise any type of programs or processes to perform the techniques described in this disclosure for dynamically load balancing traffic based on predicted and actual load capacities of backend server nodes, as well as dynamically upgrading traffic classes of data flows based on available resources of data nodes. While the invention is described with respect to the specific examples, it is to be understood that the scope of the invention is not limited to these specific examples. For instance, while many of the examples are described with respect to IPsec protocols, it should be understood that the techniques described are applicable to other protocols. Since other modifications and changes varied to fit particular operating requirements and environments will be apparent to those skilled in the art, the invention is not considered limited to the example chosen for purposes of disclosure, and covers all changes and modifications which do not constitute departures from the true spirit and scope of this invention. Although the application describes embodiments having specific structural features and/or methodological acts, it is to be understood that the claims are not necessarily limited to the specific features or acts described. Rather, the specific features and acts are merely illustrative of some embodiments that fall within the scope of the claims of the application.
57,450
11863454
DETAILED DESCRIPTION Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure. Overview Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or can be learned by the practice of the principles set forth herein. Disclosed herein are systems, methods, and computer-readable media for a scalable process for validating multiple paths (e.g., Equal Cost Multiple Paths (ECMPs)) used for routing network traffic in a network (e.g., in an IPv6 network) using segment routing (SRv6). In one aspect, a method of validating packet forwarding on multiple paths includes identifying, by a first network hop, one or more second network hops, for each of the one or more second network hops, determining a corresponding flow label, the corresponding flow label including a corresponding test packet for validating packet forwarding between the first network hop and a corresponding second network hop, and performing a validation process for validating packet forwarding from the first network hop to the corresponding second network hop using at least the corresponding flow label. The method further includes determining a queue of additional network hops to be validated based on a result of the validation process, and iteratively validating packet forwarding for each additional network hop in the queue. In another aspect, the validation process includes generating a validation data packet for the corresponding second network hop, the validation data packet including the corresponding flow label and a segment routing header identifying a Segment Identifier list (SID-list), sending the validation data packet through the first network hop, receiving a response data packet from the corresponding second network hop, and determining the result of the validation process as one of (1) successful packet forwarding from the first network hop to the corresponding second network hop when the response data packet includes a confirmation message, or (2) failure of packet forwarding from the first network hop to the corresponding second network hop when the response data packet does not include the confirmation message. In another aspect, determining the queue includes adding the corresponding second network hop to the queue if the result of the validation process is the successful packet forwarding from the first network hop to the corresponding second network hop.
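The validation process and queue construction summarized above might look like the following sketch; the field names and packet structures are assumptions for illustration, not a wire format.
```
# Illustrative rendering of building a validation data packet (flow label plus
# an SRH carrying the SID-list) and mapping the response to success or failure.
def build_validation_packet(flow_label, sid_list):
    return {"flow_label": flow_label,
            "srh": {"sid_list": list(sid_list)},
            "payload": "validation-probe"}

def interpret_response(response):
    # Success only when the response data packet carries the confirmation.
    return bool(response) and response.get("confirmation") is True

packet = build_validation_packet(0x12345, ["fc00::4", "fc00::6", "fc00::c"])
print(packet)
print(interpret_response({"confirmation": True}))   # validated: add hop to queue
print(interpret_response({"error": "timeout"}))     # failed: do not extend queue
```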
In another aspect, iteratively validating the packet forwarding for each additional network hop in the queue includes selecting a network hop from the queue to yield a selected network hop, generating a corresponding query data packet for the selected network hop, receiving a response data packet in response to sending the query data packet to the selected network hop, and validating packet forwarding of the selected network hop based on the response data packet. In another aspect, the corresponding query data packet includes a path instruction of a previous network hop of the selected network hop, and the response data packet includes a path instruction of the selected network hop, an identifier of one or more next hops of the selected network hop and a corresponding flow label for each of the one or more next hops of the selected network hop. In another aspect, validating the packet forwarding of the selected network hop includes generating a validation data packet for validating packet forwarding of the selected network hop to each of the one or more next hops of the selected network hop, the validation data packet including the corresponding flow label and the path instruction of the previous network hop of the selected network hop, sending the validation data packet through the selected network hop, receiving a response data packet from a corresponding next hop of the selected network hop, and validating the packet forwarding of the selected network hop based on the response data packet. In another aspect, each of the first network hop and the one or more second network hops is one of a router or a switch for routing data plane traffic of an IPv6 network using segment routing. In one aspect, a device includes one or more memories including computer-readable instructions stored therein and one or more processors. The one or more processors are configured to execute the computer readable instructions to identify one or more network hops, for each of the one or more network hops, determine a corresponding flow label, the corresponding flow label including a corresponding test packet for validating packet forwarding between the device and a corresponding network hop, and perform a validation process for validating packet forwarding from the device to the corresponding network hop using at least the corresponding flow label. The one or more processors are further configured to execute the computer-readable instructions to determine a queue of additional network hops to be validated based on a result of the validation process, and iteratively validate packet forwarding for each additional network hop in the queue. In one aspect, one or more non-transitory computer-readable media include computer-readable instructions, which when executed by one or more processors of a first network hop, cause the first network hop to identify one or more second network hops, for each of the one or more second network hops, determine a corresponding flow label, the corresponding flow label including a corresponding test packet for validating packet forwarding between the first network hop and a corresponding second network hop, and perform a validation process for validating packet forwarding from the first network hop to the corresponding second network hop using at least the corresponding flow label. 
The execution of the computer-readable instructions by the one or more processors further causes the one or more processors to determine a queue of additional network hops to be validated based on a result of the validation process, and iteratively validate packet forwarding for each additional network hop in the queue. Description of Example Embodiments As noted above, large volumes of data can be difficult to manage and route within a network. Segment routing is an example of a process used for routing traffic within a given network (e.g., in an IPv6 network such as an IPv6 centric data center). Segment Routing over IPv6 (SRv6) can steer data packets between two network nodes in a network using a Segment Identifier (SID) in the SR Header (SRH). A SID-list can result in multiple paths between a source node and a destination node of data packets (e.g., Equal Cost Multiple Paths (ECMPs)). When a SID-list results in ECMPs, network nodes (which may be referred to as hops throughout this disclosure) can determine next hops for the data packets typically based on a hash value of a data packet. The hash values can be determined based on source and destination IP addresses, a flow label, source and destination ports, next header, etc. (e.g., this is an example 6-tuple used by Junos). Thus, different data packets traverse different end-to-end ECMPs based on the content of the data packets. Before steering data packets using a SID-list, the SID-list may be validated. When validating a SID-list, each end-to-end ECMP of the SID-list is validated. Validation of an end-to-end ECMP can ensure links (between hops) on the end-to-end ECMP are operational, can ensure each hop on the end-to-end ECMP correctly forwards data packets with the SID-list, and can ensure reachability of the destination node via the end-to-end ECMP using the SID-list. Validation of a SID-list is performed by sending testing data packets with the SID-list via all end-to-end ECMPs, and confirming that the testing data packets can reach the destination node. Currently, methods such as traceroute and ping may be used for validating a SID-list and associated ECMPs, which require calculation of flow labels for each egress interface of each node on the ECMPs and are computationally intensive as different vendors and router platforms use different hashing algorithms, load balancing algorithms, etc. As an example, when using a traceroute/ping method, the number of ping/traceroute packets increases exponentially with the number of ECMPs that are in series. For example, in a 5 node scenario (e.g., a headend node, a tailend node and 3 intermediary nodes) and assuming that there are 8 ECMPs between nodes 1 and 3, and 10 ECMPs between nodes 3 and 5, the total number of end-to-end ECMPs is 8×10=80. Hence, 80 ping packets are sent out. To test using traceroute over each end-to-end ECMP, 8×10×4=320 traceroute packets are generated and sent. Moreover, if there is an ECMP path with nodes or links that are not in the headend's topology database due to static configurations, etc., such ECMP may be missed and hence not tested. Another available method is MPLS LSP multipath tree trace, which is not applicable for SRv6 data plane. As described in more detail below, the present disclosure provides systems, methods and computer-readable media for validating end to end packet forwarding on SRv6 data plane for a SID-list that utilizes one or more ECMPs.
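A schematic, assumption-laden rendering of the queue-driven validation follows. A simulated topology and a stub validate_link function stand in for real probe packets; the actual mechanism uses SRv6 headers and per-next-hop flow labels as described elsewhere in this disclosure.
```
# Toy breadth-first walk of the ECMP graph: validate each single-hop link with
# its flow label, and only extend the queue through hops that validated.
from collections import deque

topology = {            # hop -> next hops reachable over ECMP links
    "headend": ["n1", "n2"],
    "n1": ["tail"], "n2": ["tail"], "tail": [],
}
flow_labels = {("headend", "n1"): 0x11, ("headend", "n2"): 0x12,
               ("n1", "tail"): 0x21, ("n2", "tail"): 0x22}

def validate_link(hop, next_hop):
    # Stand-in: pretend every probe sent with a known flow label is echoed back.
    return (hop, next_hop) in flow_labels

def validate_sid_list(headend):
    queue, visited, results = deque([headend]), {headend}, []
    while queue:
        hop = queue.popleft()
        for nxt in topology[hop]:          # one flow label per next hop
            ok = validate_link(hop, nxt)
            results.append((hop, nxt, ok))
            if ok and nxt not in visited:  # only extend validated branches
                visited.add(nxt)
                queue.append(nxt)
    return results

for hop, nxt, ok in validate_sid_list("headend"):
    print(f"{hop} -> {nxt}: {'validated' if ok else 'failed'}")
```
Under a per-link scheme of this kind, the probe count grows roughly with the number of single-hop links (on the order of 8 + 10 in the five-node example above) rather than with the product of the per-segment ECMP counts (80 ping or 320 traceroute packets).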
Example advantages of the disclosed technologies for validating multiple paths include a significant reduction in the number of flow labels required for a complete validation (e.g., one flow label per next hop), a linear increase in the number of test packets with an increase in the number of single-hop links on the end to end ECMPs of a SID list (as opposed to an exponential increase associated with methods described above), and using the same encapsulation as the data traffic for testing all paths between a headend and a tailend. These advantages are more evident in massively scaled networks that have 128 or more ECMPs between two network nodes. The present technology will be described in the following disclosure as follows. The discussion begins with an overview of SRv6 and IPv6. A description of an example cloud data center architecture and an example multi-cloud environment with an SRv6 overlay, as illustrated inFIGS.1and2, will then follow. Example IPv6 and SRv6 packets and their corresponding flow will then be described with reference toFIGS.3and4. The present technology and the example validation processes are then described with reference toFIGS.5A-5HandFIG.6. The discussion concludes with a description of an example network device and example computing devices, as illustrated inFIGS.7A-B, including example hardware components suitable for hosting software applications and performing computing operations. The disclosure now turns to an overview discussion of IPv6 and SRv6. The approaches herein can utilize segment routing (SR) to steer connection or communication requests between two network nodes such as servers or nodes on different clouds or cloud regions. IPv6 and SR, which are further described below, can be used to steer requests efficiently while limiting state information. The request will be routed to the nodes identified in the SR packet based on the IPv6 and SRv6 headers. The IPv6 header can include a Source Address (SA) and a Destination Address (DA), such as a destination server or node. An SR Header (SRH) can include a SID-list of SR nodes (e.g., S1, S2, S3, etc.) and a Segments Left (SL) counter which identifies the number of remaining destination servers or nodes. IPv6 Environment In an IPv6 environment, such as an IPv6-centric data center, network nodes (e.g., servers) can be reached via an IPv6 physical prefix. For example, servers can run application services in isolated environments, such as virtual machines (VMs) or software containers, which can be assigned an IPv6 virtual address (VIP). In some cases, a virtual switch (e.g., Open vSwitch, vector packet processing, etc.) can be deployed on a server to route packets between physical and virtual interfaces on the server. This allows the network (e.g., data center) to be fully Layer-3 routed, without having to deploy Layer-2 tunnels such as VLANs or VXLANs. Routing the VIPs corresponding to the different applications running in the data center can be achieved in several manners. In some examples, the virtual switches can run Interior Gateway Protocol (IGP) to propagate direct routes to the VIPs. Other examples may use a mobility protocol, such as Identifier-Locator Addressing for IPv6, wherein edge routers perform the translation between physical and virtual addresses. Moreover, network devices can use Border Gateway Protocol (BGP) to exchange routing information.
As will be further explained below, the approaches herein implement segment routing to establish and manage connectivity between clouds. Segment Routing (SR) SR is a source-routing paradigm, initially designed for traffic engineering, which allows for a packet to follow a predefined path, defined by a list of segments (a SID list), inside an SR domain. The approaches herein leverage an SRv6 architecture and IPv6 connectivity to efficiently create and manage multi-cloud connectivity. SRv6 and IPv6 can be leveraged together by implementing an IPv6 and SRv6 header in an IPv6 packet. For example, in some cases, an IPv6 extension header can be implemented to identify a list of segments for SR and a counter Segments Left, indicating the number of remaining segments to be processed until the final destination of the packet is reached. In an SRv6 packet, the IPv6 destination address can be overwritten with the address of the next segment. This way, the packet can go through SR-capable routers until reaching the next intended SR hop. Upon receipt of an SRv6 packet, an SR-capable router will set the destination address to the address of the next segment, and decrease the Segments Left counter. When the packet reaches the last SR hop, the final destination of the packet is copied to the IPv6 destination address field. Depending on the value of a flag in the header, the SRv6 header can be stripped by the last SR hop so that the destination receives a vanilla IPv6 packet.
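This hop-by-hop behavior can be mimicked with a toy model. The dictionaries below are stand-ins for real IPv6 and SR headers, the addresses are made up, and the code is a hedged sketch rather than RFC-conformant SRH processing; it assumes the common encoding in which the entry at index Segments Left is the active segment.
```
# Toy SR-capable hop: decrement Segments Left, copy the next SID into the IPv6
# destination address, and strip the SRH at the last SR hop.
def process_at_sr_hop(packet, strip_srh_at_end=True):
    srh = packet["srh"]
    if srh["segments_left"] == 0:
        return packet                      # already at the final segment
    srh["segments_left"] -= 1
    packet["ipv6"]["dst"] = srh["sid_list"][srh["segments_left"]]
    if srh["segments_left"] == 0 and strip_srh_at_end:
        packet.pop("srh")                  # deliver a vanilla IPv6 packet
    return packet

pkt = {"ipv6": {"src": "2001:db8::1", "dst": "2001:db8::a1"},      # at hop S1
       "srh": {"sid_list": ["2001:db8::d", "2001:db8::a2", "2001:db8::a1"],
               "segments_left": 2}}
pkt = process_at_sr_hop(pkt)   # now destined to S2
pkt = process_at_sr_hop(pkt)   # now destined to the final destination, SRH stripped
print(pkt)
```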
FIG.1illustrates a diagram of an example cloud data center architecture, according to some aspects of the present disclosure. Architecture100can be implemented by one or more clouds in a multi-cloud environment. Architecture100is an example of a network in which SRv6 and IPv6 can be leveraged. However, the present disclosure is not limited thereto and can be applied to any other type of known or to be developed network in which SRv6 and IPv6 can be leveraged. The cloud data center architecture100can include a cloud104, which can be a private cloud, a public cloud, a hybrid cloud, a virtual private cloud (VPC), a cloud region, etc. The cloud104can host one or more data centers and/or networks. For example, the cloud104can include a single data center or a plurality of data centers. The cloud104can be physically located in one geographic location or distributed throughout multiple geographic locations. Moreover, the cloud104can include forwarder-side and server-side architectures or components. The cloud104can include switches106-1through106-N (collectively “106” hereinafter) and108-1through108-N (collectively “108” hereinafter) configured to route traffic in the cloud data center architecture100. The switches106,108can include any network device with layer 2 (L2) and/or layer 3 (L3) capabilities. In this example, the switches106represent spine switches and the switches108represent leaf switches. The client102can connect to the cloud104and access application servers110-1through110-N (collectively “110” hereinafter) via the switches106,108. The client102can be a network, such as a cloud network or data center (e.g., a private cloud, a public cloud, a hybrid cloud, a cloud region or segment, a virtual private cloud, etc.), or any computing device, such as a laptop, a desktop, a tablet computer, a mobile phone, a server, a smart device (e.g., smart television, smart watch, etc.), an internet of things (IoT) device, etc. The switches106can serve as edge devices in the cloud104, and route traffic to and from the cloud104. The switches106can thus serve as the egress and ingress point for the cloud104. The switches106can also route traffic to the switches108in the cloud104, which can route traffic to other nodes (e.g., appliances, firewalls, load balancers, etc.) and application servers110in the cloud104. The application servers110can represent physical machines and/or resources hosting applications, isolated environments, or services in the cloud104. For example, the application servers110can be physical servers running various applications in the cloud104. The application servers110can run some or all of their applications in isolated environments, such as VMs or software containers. In some cases, an application can be hosted by, and/or run on, multiple application servers110in the cloud104. For example, multiple application servers110can run instances of an application (e.g., virtual instances, replicas, parallel instances, mirror instances, etc.). The application servers110can include a physical network interface (e.g., NIC) to communicate with other devices or services (e.g., devices or services in the cloud data center architecture100). The physical network interface can be assigned a physical prefix or network address for such communications. The application servers110can also include one or more virtual interfaces (e.g., vNICs) which can provide virtualized or abstract representations of network interfaces and connections. Virtual interfaces can provide added flexibility and network capabilities, as well as various other benefits or services, such as aggregation of links or data, isolation of data or networks, decoupling of application and system traffic, expansion of network interfaces, network redundancy, dedicated links, and so forth. Virtual interfaces can be assigned virtual addresses (e.g., VIPs) in the cloud104. The virtual addresses can identify the virtual interfaces as well as any applications or isolated environments associated with the virtual addresses on the application servers110. For example, an application can be assigned a virtual address in the cloud104, which can be used to identify the application in the cloud104and route traffic to and from the application. The virtual address can be used to steer traffic to and from a virtual instance of the application running on one or more of the application servers110. In some cases, the virtual address can be mapped to the same application on multiple application servers110, and can be used to communicate with an instance of the application on any of the multiple application servers110. In some cases, the application servers110can include a virtual switch, such as OVS or VPP, which can route traffic to and from the application servers110. For example, a virtual switch can route traffic between physical and virtual network interfaces on an application server, between applications and/or isolated environments on the application server, and between the application server and devices or applications outside of the application server. To illustrate, an application server can run multiple workloads (e.g., applications in different VMs or containers) assigned to different virtual interfaces and virtual addresses. A virtual switch on the application server can route traffic to and from the different workloads by translating the virtual addresses of the workloads and communicating with the virtual interfaces as well as other network interfaces such as the physical network interface(s) on the application server.
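As a toy illustration of that translation (hypothetical addresses and interface names, not the OVS or VPP API), a virtual switch can be modeled as a table from VIPs to virtual interfaces, with the physical interface as the default for non-local destinations.
```
# Simplified model of VIP-to-interface translation at a virtual switch.
class VirtualSwitch:
    def __init__(self, physical_if):
        self.physical_if = physical_if
        self.vip_table = {}                 # VIP -> virtual interface (vNIC)

    def attach_workload(self, vip, vnic):
        self.vip_table[vip] = vnic

    def forward(self, dst_vip):
        # Local workload: deliver on its vNIC; otherwise send out the NIC.
        return self.vip_table.get(dst_vip, self.physical_if)

vswitch = VirtualSwitch(physical_if="eth0")
vswitch.attach_workload("2001:db8:abcd::10", "vnic-container-1")
print(vswitch.forward("2001:db8:abcd::10"))   # vnic-container-1
print(vswitch.forward("2001:db8:ffff::99"))   # eth0 (non-local destination)
```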
FIG.2illustrates a diagram of an example multi-cloud environment with an SRv6 overlay, according to some aspects of the present disclosure. The multi-cloud environment200includes clouds104A-G interconnected through an SRv6 overlay202which routes traffic between the clouds104A-G using SRv6. In this example, cloud104A represents a private cloud or site, and clouds104B-G represent public clouds. Moreover, the clouds104B,104C,104D include virtual private clouds (VPCs)206,208,210configured for cloud104A and hosted by the clouds104B,104C,104D. Clouds104E-G, as illustrated in this example, do not include VPCs associated with cloud104A. However, as described below, the approaches herein can allow VPCs to be created for cloud104A on any of the clouds104E-G. A controller212can interact with gateways216A-G on clouds104A-G to collect topology information, perform path computation, propagate routes across the clouds104A-G and/or VPCs206-210, propagate segment routing identifiers (SIDs) and policies across the clouds104A-G and/or VPCs206-210, perform traffic engineering, etc. The controller212can be, for example, a BGP controller with a path computation engine. The controller212can reside on cloud104A or any other network or cloud. The gateways216A-G can be, for example, virtual gateways available at the clouds104A-G. In some cases, the virtual gateways can include a vector packet processing engine (VPP). The controller212can collect topology information from the clouds104A-G and/or VPCs206-210and propagate forwarding rules and SR IDs (e.g., SIDs) and policies using one or more protocols such as OSPF (Open Shortest Path First), IS-IS (Intermediate System to Intermediate System), BGP Link-State (BGP-LS), BGP Traffic Engineering (BGP-TE), etc. For example, the controller212can collect topology information for the clouds104A-G and/or VPCs206-210from gateways216A-G using BGP-LS protocol. The controller212can also include a path computation engine (PCE) for computing the best paths between the gateways216A-G. The controller212can use the collected topology and/or cloud information to perform the path computation. The controller212can then use BGP-TE to populate reachability information, such as forwarding rules and SR IDs and policies, on the gateways216A-G. The gateways216A-G can include a control plane that interfaces with BGP-LS and BGP-TE to receive the forwarding rules and SR IDs and policies from the controller212. The gateways216A-G can also include a data plane that processes IPv4 and/or IPv6 packets and is able to encapsulate/decapsulate IPv4 or IPv6 packets into SRv6 packets. Moreover, the gateways216A-G can include BGP agents218A-G, such as GoBGP agents, to interact with the controller212or any BGP peers. In some cases, the gateways216A-G can also include an active measurement system based on IP SLA (Internet Protocol Service Level Agreement) to collect network performance information and monitor quality-of-service (QoS) between the gateways216A-G. The controller212can communicate with the clouds104A-G via IPv4 or IPv6. The SRv6 overlay202can include SRv6-capable nodes that can route traffic over the SRv6 overlay202using SRv6, as further explained below. FIG.3Aillustrates an example SRv6 packet300for traffic routed via the SRv6 overlay202, according to some aspects of the present disclosure. The SRv6 packet300includes a payload302, an IPv6 header304, and an SR header306. The SR header306can include a segments field312containing a list of segments314or SR list.
The list of segments314can include a set of destination nodes for the SRv6 packet300. For example, the list of segments314can include application server110-1(S1) and application server110-2(S2) from the cloud104shown inFIG.1. The destination nodes in the list of segments314can reside on one cloud (e.g.,104) or multiple clouds (e.g.,104A-G). The list of segments314can also include a respective function for each segment, as further described below with reference toFIG.3B. The list of segments314in the SR header306can be used by nodes in the SRv6 overlay202to steer the packet300to the destination nodes (e.g., application servers110-1and110-2) in the list of segments314. The list of segments314identifies each segment (e.g., SRv6-capable node) along a path for the packet. Each SRv6-capable node can maintain a list of SRv6 segments instantiated at the node. The SRv6-capable node can use its list of SRv6 segments to route the packet to the next segment in the list of segments314. The segments field312can also include a counter318, known as the Segments Left, which identifies the active segment. The value of the counter318is decreased by 1 each time it is received by an SRv6-capable node as the packet travels through the IPv6 network. The IPv6 header304can include a source address field310and a destination address field308. The source address field310can identify the source of the packet300, such as client102. The source address field310can include a network address of the original source of the packet300, a return destination for the packet300, and/or a current source or sender of the packet300. The source field310can also include commands or functions to be implemented by the node identified in the source field310, as will be further described below. The destination address field308can identify the next segment or node from the list of segments314. In this example, the destination address field308identifies server110-1(S1) which is the first destination node in the list of segments314for the packet300. The destination address field308can be used to steer the packet300to the next destination. The destination field308in the IPv6 header304can allow the packet300to be routed even if the packet300traverses SR-unaware nodes. The destination address field308can include a network prefix of the identified node or segment. For example, the destination address field308can include the physical prefix of server110-1(S1). This can ensure that the packet300is transmitted to that node or segment (e.g., server110-1(S1)), as the first destination for the packet300. After the server110-1(S1) processes the packet300, the server110-1(S1) can forward the packet300to the next segment in the list of segments314, which in this example is server110-2(S2). When forwarding the packet, the server110-1(S1) can overwrite the destination address field308on the IPv6 header304to identify the server110-2(S2) as the destination, which ensures that the packet300is routed to server110-2(S2). Server110-2(S2) can then receive the packet300based on the destination address field308. This way, the list of segments314in the SR header306as well as the destination address field308in the IPv6 header304can be used to push the packet300to the destination nodes in the list of segments314. As will be further explained, the list of segments314and/or destination address field308can include functions or commands (hereinafter “SR functions”) to be implemented by associated nodes or segments. 
For example, the destination address field308can identify application server110-1(S1) and include a function to be applied by application server110-1(S1), such as a connect function which application server110-1(S1) can interpret as a request to connect with an application or node associated with the function. The destination address field308can contain the state of the packet300, including the next destination of the packet, the source or return node, and any commands or functions for such nodes or segments. Similarly, the list of segments314can include commands or functions for the segments in the list of segments314. For example, the list of segments314can include a connect function for each of the destination node or segment, a force connect function for the last segment in the list of segments314, one or more parameters for one or more segments (e.g., resource identifier, flow identifier, etc.), state information, and so forth. SR functions can encode actions to be taken by a node directly in the SR header306and/or the IPv6 header304. SR functions are executed locally by the SRv6-capable nodes. Example SR functions include, without limitation, End (i.e., endpoint function), End.X (i.e., endpoint function with Layer-3 cross-connect), End.T (i.e., endpoint function with specific IPv6 table lookup), End.S (i.e., endpoint in search of a target in table T), End.B6 (i.e., endpoint bound to an SRv6 policy), etc. For example, in an SR header (306) containing s::cj, s::cj denotes the shortest-path to the node s and an x-connect function (function c) to the neighbor j. In some examples, each node can be assigned an entire IPv6 prefix. Accordingly, the lower-order bytes in the prefix can be used to designate different SR functions. In some cases, the SR functions may depend on the address of the first segment in the list of segments314(e.g., the “sender” of the function). To illustrate, when a node whose physical prefix is s receives a packet with the SR header306containing (x, . . . , s::ƒ, . . . ), the SR header306will trigger node s to perform a function ƒ with argument x, denoted by s.f(x). FIG.3Billustrates a schematic diagram of an example destination address field in an IPv6 header, according to some aspects of the present disclosure. Destination address field308can include 128 bits, which can be segmented to include a first segment320from the first 64 bits for the node prefix326, a second segment322from the next 32 bits for an SR function328, and a third segment324from the next 32 bits to include any arguments330for the SR function328. While this example illustrates the destination address field308segmented into a segment of 64 bits, a segment of 32 bits, and a segment of 32 bits, it should be noted that the destination address field308allows for flexible bit selection and thus can be segmented in other ways. The example inFIG.3Bis provided for illustration and explanation purposes. The node prefix326can include the physical prefix of the next segment or node. The SR function328can include a command or function associated with the node prefix326. In some cases, the third segment324can be further segmented into sub-segments which can include arguments for the SR function328. The arguments can be used to pass specific parameters for the SR function328. FIG.4illustrates an example flow of SRv6 traffic, according to some aspects of the present disclosure.FIG.4illustrates an example flow of SRv6 traffic (e.g., SRv6 packet300) based on corresponding IPv6 and SRv6 headers404,406,408. 
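Before walking through the FIG.4 flow, the following sketch makes the FIG.3B segmentation described above concrete by packing and unpacking a 128-bit destination address into a 64-bit node prefix326, a 32-bit SR function328, and 32 bits of arguments330. The 64/32/32 split is only the example layout of FIG.3B (the field allows flexible bit selection), and the helper names and example values are assumptions used purely for illustration.

import ipaddress

PREFIX_BITS, FUNC_BITS, ARG_BITS = 64, 32, 32   # example FIG.3B split; other splits are possible

def encode_sid(node_prefix: str, sr_function: int, args: int) -> str:
    """Build a 128-bit destination address from a node prefix (326), SR function (328), and args (330)."""
    prefix = int(ipaddress.IPv6Address(node_prefix)) >> (FUNC_BITS + ARG_BITS)
    value = (prefix << (FUNC_BITS + ARG_BITS)) | (sr_function << ARG_BITS) | args
    return str(ipaddress.IPv6Address(value))

def decode_sid(sid: str) -> tuple:
    """Split a destination address back into (node_prefix, sr_function, args)."""
    value = int(ipaddress.IPv6Address(sid))
    args = value & ((1 << ARG_BITS) - 1)
    sr_function = (value >> ARG_BITS) & ((1 << FUNC_BITS) - 1)
    prefix = str(ipaddress.IPv6Address((value >> (FUNC_BITS + ARG_BITS)) << (FUNC_BITS + ARG_BITS)))
    return prefix, sr_function, args

# Example: a node prefix with a hypothetical function number and one argument word.
sid = encode_sid("2001:db8:0:1::", sr_function=0x2A, args=0x01)
print(sid, decode_sid(sid))

Because the lower-order bytes carry the function and arguments, a node that owns the /64 prefix can interpret different addresses under that prefix as different local SR functions, which is the behavior the examples above rely on.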
In this example, a client102sends a packet402to switch108-N. The packet402can identify the client device102as the source (which can be referred to as a source node) and a destination address for the traffic (which can be referred to as a destination node). The switch108-N can receive the packet402and forward the packet to application server110-1(S1) based on the IPv6 and SRv6 headers404. The SRv6 header in the headers404can include a list of segments (a SID-list)410identifying application servers110-1,110-2,110-3as the destination segments. The SRv6 header in the headers404can also include a segments left (SL) counter412identifying the number of remaining segments or hops in the list of segments410. The application server110-1(S1) can receive the packet402from the switch108-N and process it. The application server110-1(S1) can then forward the packet402to application server110-2(S2), which is the next segment in the list of segments410, based on the list of segments410in the headers406. The application server110-1(S1) can also decrease the SL counter412identifying the number of remaining segments or hops in the list of segments410. The application server110-2(S2) can receive the packet402from the application server110-1(S1) and process it. The application server110-2(S2) can then forward the packet402to application server110-3(S3), which is the next segment in the list of segments410, based on the list of segments410in the headers408. The application server110-2(S2) can also decrease the SL counter412identifying the number of remaining segments or hops in the list of segments410. The application server110-3(S3) can receive the packet402from the application server110-2(S2) and process it. The application server110-3(S3) is the last segment in the list of segments410. Accordingly, the application server110-3(S3) can decrease the SL counter412identifying the number of remaining segments or hops in the list of segments410, without forwarding the packet to another destination segment. With example networks and the flow of SRv6/IPv6 packets described above with reference toFIGS.1-4, the disclosure now turns to systems and techniques for validating multiple paths (e.g., Equal Cost Multiple Paths (ECMPs)) in the SRv6 data plane before using such multiple paths for packet forwarding. FIGS.5A-5Hillustrate various stages of validating multiple paths for packet forwarding over the SRv6 data plane, according to some aspects of the present disclosure. FIG.5Aillustrates a non-limiting example structure500where node502can be a source node (e.g., a user terminal, a server, etc.) with data traffic to be forwarded to node514that can be a destination node (e.g., a different user terminal, a different server, etc.). Nodes504,506,508,510and512can be network elements (e.g., switches and/or routers) that use segment routing for carrying network traffic from source node502to destination node514. Each of nodes504,506,508,510and/or512may be referred to as a hop. Node504may be referred to as a headend node (or headend hop). Nodes506,508and510may be referred to as intermediate nodes (or intermediate hops). Node512may be referred to as a tailend node (or a tailend hop). In the example ofFIG.5A, each of nodes502to514has a second alternative designation that corresponds to the prevalent literature on segment routing.
For example, node502may also be designated and/or referred to as R1, node504may also be designated and/or referred to as R2, node506may also be designated and/or referred to as R3, node508may also be designated and/or referred to as R4, node510may also be designated and/or referred to as R5, node512may also be designated and/or referred to as R6 and node514may also be designated and/or referred to as R7. In this non-limiting example ofFIG.5A, headend node504may utilize an example SRv6 policy with SID list <S3,S6> for sending network traffic from node502to an egress of node512to be forwarded to node514. A SID-list <S3,S6> can identify the nodes that may form one or more multiple paths that will be traversed by packets originating from node502and destined for node514, once validated. Example SID-list <S3,S6> can indicate for headend node504that the initial hop is S3 (node506) and the final hop is S6 (node512). Within the context ofFIG.5A, three examples of multiple paths (e.g., ECMPs) are shown, including a path formed by nodes504,506,508and512; a path formed by nodes504,506,510and512using the top link513between nodes510and512; and a path formed by nodes504,506,510and512using the bottom link515between510and512, etc. As described above, the present disclosure provides systems and techniques for validating all multiple paths between a source node and a destination node over an SRv6 data plane before routing network traffic therethrough. Various stages of validation techniques of the present disclosure will be described with reference toFIGS.5A-H. As will be described below, node504performs a two-step process on each of nodes506,508,510and512to complete an end-to-end validation of multiple paths (ECMPs). The first step is a query process for identifying information on one or more next hops. The second step is a validation (testing) process for validating their respective packet forwarding capabilities. Initially, node504can validate its own packet forwarding capabilities. In doing so, node504can query the packet forwarding interface of node504to identify one or more next hops for the first node in the SID-list (e.g., S3 in SID list <S3, S6>). In this example, there is one next hop, node506, with address A3_21. Address A3_21 can be derived from a format Axyn, designating the IPv6 interface IP of node x for the nth link between node x and node y. Then node504can determine (calculate) a flow label for test data P2 to be forwarded to node506for testing IP forwarding of node504. For example, node504can calculate the flow label as F2_P2_31. Flow label F2_P2_31 can be derived from Fx_p_yn, that is, a flow label resulting in node x forwarding packet p to node y via the nth link. Moreover, node504can determine a path instruction (e.g. END.X SID with decapsulation) for node504, which can be S2_31. END.X SID S2_31 can be derived from Sx_yn, that is, the END.X SID (with decapsulation) of node x for the nth adjacency between x and y. FIG.5Billustrates an example process whereby node504can validate packet forwarding capability of node504. In doing so, node504can generate a testing data packet T2 (shown as data packet516inFIG.5B) that includes P2 and the flow label determined for node506, as described above. Testing data packet516can also include testing data (e.g., User Datagram Protocol (UDP) used as P2). In another example, testing data packet516can be provided to node504(e.g., by a network controller, e.g. controller212ofFIG.2).
For example, testing data packet T2 can be determined as: T2: (A2, S3; HL=2; F2_P2_31)(S6, S3; SL=1) where A2 is the IPv6 loopback of node504, S3 is the segment ID of node506, HL is hop limit; F2_P2_31 is the flow label calculated above, S6 is the segment ID of the last hop (e.g., node512) in SID list, and SL is the number of segments left. As shown inFIG.5B, node504can provide testing data packet516to IP forwarding of node504and wait for a data packet518. In one example, data packet518can include an IPv6 header with SA (e.g., router ID of node506), a DA (router ID of node504) and HL parameter (e.g., HL=64). Data packet518can also include an ICMPv6 TTL expiry message from node506. Reception of data packet518with a TTL expiry message from node506and source IP of A3_21 can validate that node504can correctly forward packets with the SID-list <S3, S6>. If a TTL expiry message is not received with data packet518, an error message may be generated by node504indicating that packet forwarding to node506has failed. Having validated packet forwarding to node506, node504can then perform a two-step process to query node506and identify next hops of node506(as described below with reference toFIG.5C) followed by validating packet forwarding of node506to such nodes if applicable (as described below with reference toFIG.5D). As shown inFIG.5C, node504can send a query data packet520to node506to request information on multiple paths (ECMPs) of node506. Data packet520can be generated at node504or can be provided to node504(e.g., by a network controller, e.g. controller212ofFIG.2). Data packet520can include a SA (e.g., router ID of node504), a DA that can be path information of previous hop of node506(e.g., END.X SID of node504) and other information such as HL parameter (e.g., HL=64) in an outer IPv6 header of data packet520. Data packet520can also include an inner IPv6 header with information including SA (e.g., router ID of node504), interface IP for a link between nodes504and506, HL parameter (e.g., HL=1), testing data P2. Outer IPv6 header of data packet520may be removed at IP forwarding interface of node504before being forwarded (shown as data packet522inFIG.5C) to node506. Node506can then send a data packet524back to node504that includes information on next nodes (hops) of node506(e.g., node508(R4) and node510(R5)). In one example, data packet524can include an IPv6 header with SA (e.g., IPv6 interface IP of node506), a DA (router ID of node504) and HL parameter (e.g., HL=64). Data packet524can also include a reply message with identification of router ID of node506, interface IP of its next hop(s) (e.g., node508and node510), flow labels for its next hop(s) (e.g., node508and node510), and a test data P3 (similar to test data P2). Test data P3 may be generated by node506by updating P2's DA, SRH and SL as if P2 is processed by IP forwarding interface of node506. For example, a P2 such as (A2, S3; HL=2) (S6, S3; SL=1) can be modified as shown below to generate: P3: (A2, S6; HL=2) (S6, S3; SL=0). Next and upon receiving reply data packet524, node504can validate IP forwarding of node506. As shown inFIG.5D, node504can generate testing data packets such as testing data packet526(T31). In another example, testing data packet526can be provided to node504(e.g., by a network controller, e.g. controller212ofFIG.2). Data packet526can include an outer IPv6 header that is the same as outer IPv6 header of data packet520described above with reference toFIG.5C. 
The inner IPv6 header of testing data packet526can include SA (e.g., router ID of node504) and DA (e.g., SID of node506), HL parameter (e.g., HL=2), and the flow label for node508(received by node504as part of the reply message in data packet524described above with reference toFIG.5C). Data packet526can also include an SRH with SID-list and SL parameter (e.g., SL=1), and testing data (e.g., User Datagram Protocol (UDP)). After the IP forwarding interface of node504removes the outer IPv6 header of data packet526, node504can send data packet528shown inFIG.5Dto node506for forwarding toward node508, which is one of the next hops of node506. Node506can then update the DA and HL parameter in the IPv6 header of data packet528(e.g., set DA to S6, and decrement HL by one to HL=1) and update the SL parameter (e.g., change the SL parameter to SL=0). This updated data packet is shown as data packet530, which is forwarded by node506to node508. While not shown inFIG.5D, node504can generate and send data packets similar to data packets526,528and530for node510, which is another one of the next hops of node506. Testing data packet526(T31) for node508described above, and similarly a testing data packet T32 for node510, can be as shown below: T31: (A2, S2_31) (A2, S3; HL=2; F3_P2_41) (S6, S3; SL=1) T32: (A2, S2_31) (A2, S3; HL=2; F3_P2_51) (S6, S3; SL=1). After forwarding data packet530to node508, node506can wait for response data packet532from node508. Data packet532can include an IPv6 header with SA (e.g., IPv6 interface IP of node508), a DA (router ID of node504) and HL parameter (e.g., HL=64). Data packet532can include a reply message (e.g., an ICMPv6 TTL expiry message that indicates validation of IP forwarding of node506to node508). Data packet532is then sent back to node504by node506. A similar data packet may also be received from node510(not shown inFIG.5D). At this stage of the end-to-end validation of multiple paths, node504is aware of nodes506,508and510and has validated the IP forwarding of node506to node508(and/or node510, assuming receipt of corresponding ICMPv6 TTL expiry messages from node510in a data packet similar to response data packet532). Next, node504may repeat the above two-step process for nodes508and510to validate IP forwarding of nodes508and510.FIGS.5E and5Fdescribe the two-step process for node508(which can be similarly replicated for node510). As shown inFIG.5E, node504generates a data packet534for querying node508for next hops of node508(or alternatively receives the query data packet from a network controller (e.g., controller212ofFIG.2)). Data packet534can include a SA (e.g., router ID of node504), a DA that can be path information of the previous hop of node508(e.g., END.X SID of node506) and other information such as HL parameter (e.g., HL=64) in an outer IPv6 header of data packet534. Data packet534can also include an inner IPv6 header with information including SA (e.g., router ID of node504), interface IP for a link between nodes506and508, HL parameter (e.g., HL=1), and testing data P3 (as described above). Node504can forward data packet534to node506after updating the HL parameter in the outer IP header (shown as data packet536inFIG.5E). Upon receipt, node506can remove the outer IPv6 header of data packet536at the IP forwarding interface of node506before forwarding the same to node508(shown as data packet538inFIG.5E). In response, node508can send data packet540back to node506(to be sent back to node504). Data packet540can include an IPv6 header with SA (e.g., IPv6 interface IP of node508), a DA (router ID of node504) and HL parameter (e.g., HL=64).
Data packet540can also include a reply message that includes testing data P4 for testing IP forwarding of node508, the interface IP of node512for the link between nodes508and512, a flow label for testing IP forwarding of node508, and path information (e.g., END.X SID) of node508. In one example, using path information (e.g., END.X SID with decapsulation) can provide the following advantages. When node504receives a data packet with a reply message from node508such as data packet540(not an ICMP TTL expiry message from node508or any other node) for the query data packet534sent from node504to node508, node504can confirm that (1) END.X SID S3_41 (of node506) has forwarded decapsulated packet538to node508and (2) END.X SID S3_41 (of node506) has been configured at the immediate upstream node of node508, because data packet534uses HL=1 in the inner IPv6 header, as shown inFIG.5E. Based on (1) and (2), node504can confirm that once data packet536was decapsulated, the decapsulated data packet538was directly sent to node508without going through any other node. Thus, when the same END.X SID (with decapsulation) is used for steering the testing data P3 (with corresponding flow label) to node508in the testing/validation phase (described below with reference toFIG.5F), testing data P3 is guaranteed to first reach node508. If node508correctly forwards packets with SID-list <S3, S6>, testing data P3 will reach node512(which is the correct next hop of node508) and node504subsequently receives an ICMP TTL expiry message from node512. Thus, to confirm traversal of testing packets via the node being tested (i.e., node508), the proposed method eliminates the requirement of an ICMP TTL expiry message from node508being tested, which reduces the number of required testing packets by fifty percent. As shown inFIG.5F, node504can generate testing data packets such as data packet542(T4). In another example, data packet542can be provided to node504(e.g., by a network controller, e.g. controller212ofFIG.2). Testing data packet542can be as follows: T4: (A2, S3_41) (A2, S6; HL=2; F4_P3_61) (S6, S3; SL=0). Data packet542can include an outer IPv6 header that is the same as the outer IPv6 header of data packet534described above with reference toFIG.5E. The inner IPv6 header of testing data packet542can include SA (e.g., router ID of node504) and a DA (e.g., SID of node512), HL parameter (e.g., HL=2), and the flow label for node512(received by node504as part of the reply message in data packet540described above with reference toFIG.5E). Data packet542can also include an SRH with SID-list and SL parameter (e.g., SL=0), and testing data (e.g., User Datagram Protocol (UDP)). Node504can send testing data packet542to node506after updating the HL parameter in the outer IP header (shown as data packet544inFIG.5F). Upon receipt, node506may remove the outer IPv6 header of data packet544and transmit it as data packet546to node508. Node508may then update one or more parameters such as HL (e.g., decrement HL by one to HL=1) and send it as data packet548to node512. In response to receiving data packet548, node512may generate data packet550to be sent back to node504. Data packet550can include an IPv6 header with a SA (e.g., IPv6 interface IP of node512), a DA (e.g., address of node504) and a message. Since the next hop of node508is the destination node (tailend node512), the message received by node504as part of data packet550may be an ICMP parameter problem message with error code "SR Upper-layer Header Error" from node512.
Reception of the ICMP parameter problem message within data packet550can confirm that node508correctly forwards packets with SID-list <S3, S6>. The same process as described above for validating packet forwarding of node508with reference toFIGS.5E and5Fcan be implemented to validate packet forwarding of node510. Once IP forwarding of node508(and/or node510) is validated, node504can perform a similar process to query and validate node512. In doing so, node504can generate a query data packet, with appropriate header information, and send the query data packet to node512requesting information on ECMP paths of node512. Such query data packet can be, for example, message Q shown below: Q: (A2, S4_61) (A2, A6_41; HL=1) [P4] Node512can then reply with a data packet including appropriate header information and a message to indicate that node512is the destination node in the SID-list <S3,S6>. For example, such reply message can be message M shown below: M: (A6_51, A2) [A6, is_destination=True] To ensure the last SID of the SID-list is programmed in IP forwarding of node512, node504can send testing message T, shown below: T: (A2, S4_61) (A2, S6; HL=1) (S6, S3; SL=0) Since S6 is the last SID and is a local SID of node512, node512can send node504an ICMP parameter problem message with error code "SR Upper-layer Header Error". Reception of this ICMP parameter problem message confirms that S6 is programmed in IP forwarding of node512. The above examples described with reference toFIGS.5A-5Fare based on the assumption that nodes506,508,510and/or512are SRv6 nodes.FIGS.5G and5Hdescribe examples where such nodes may be non-SRv6 nodes. In a non-SRv6 scenario, path information (e.g., an END.X SID) may not be available at a previous hop. Therefore, instead of relying on an END.X SID as described above with reference toFIGS.5A-5F, node504can construct query and test packets and encapsulate them in an Operations, Administration and Management (OAM) header (e.g., IPv6 UDP) destined to the previous-hop of a hop (node) being queried/validated. The OAM header includes the IPv6 address of the interface at a node connected to the corresponding previous node. The previous node can use that interface address to pre-route the encapsulated query and test packets to the intended node. In the non-limiting example ofFIG.5G, node506is a non-SRv6 node. Query data packet552constructed for querying node508is encapsulated in OAM header553of the previous-hop of node508, which is node506(e.g., OAM (A4_31)). Aside from the encapsulation of a query message (e.g., the flow label for node508, SRH, testing data, etc.) in OAM header553, the remaining process of querying node508may be performed as described above (e.g., sending packet554to node506, which is decapsulated and forwarded to node508as data packet556, followed by reception of reply data packet558that can include revised testing data P4, the interface IP address of node512and the END.X SID of node508if node508is an SRv6 node). Similarly,FIG.5Hdescribes a process of validating IP forwarding of node508when node506is a non-SRv6 node. The process ofFIG.5His the same as that described above with reference toFIG.5Fexcept that the testing data packet is encapsulated in OAM header561.
For example, data packet560is the same as data packet542ofFIG.5F(with the exception of OAM header561), data packet562is the same as data packet544ofFIG.5F(with the exception of OAM header561), data packet564is the same as data packet546ofFIG.5F, data packet566is the same as data packet548ofFIG.5F, and data packet568is the same as data packet550ofFIG.5F. Accordingly, the process ofFIG.5Hwill not be further described for the sake of brevity. Having described several non-limiting examples of two-step processes for validating end-to-end multiple paths including SRv6 and/or non-SRv6 nodes with reference toFIGS.5A-5H, an overview of such a two-step process is described below with reference toFIG.6. FIG.6is a flow chart of a multiple path validation process for segment routing, according to some aspects of the present disclosure.FIG.6is described with reference to node504. However, it should be understood that node504may have one or more memories having computer-readable instructions stored therein and one or more processors configured to execute the computer-readable instructions to perform the steps ofFIG.6.FIG.6will also be described with reference to example network structure500ofFIGS.5A-5H. However, the validation process is equally applicable to any other network structure with multiple paths (e.g., ECMPs) between a headend node and a tailend node thereof. Node504may be configured to route network traffic and corresponding data packets from a source node (e.g., node502) to a destination node (e.g., node514). In order to do so, node504performs a process to validate all possible multiple paths (e.g., ECMPs), which in the examples ofFIGS.5A-5Hare formed of nodes504,506,508,510and512. At S600, node504(first network hop) can identify one or more second network hops. The one or more second network hops include nodes that are adjacent to (directly connected to) node504in a downstream direction toward a destination node (e.g., node514). In the examples ofFIGS.5A-5H, node504has one second network hop, which is node506. In one example, node504identifies the one or more second network hops by querying its IP forwarding for a given SID-list (e.g., SID-list <S3,S6> described above). The process at S600is described above with reference toFIG.5A. At S602, node504can determine, for each second network hop identified at S600(e.g., node506), a corresponding flow label. For example, node504can determine a flow label for node506using the example process described above with reference toFIG.5A. A flow label can be included in a test packet for validating packet forwarding of node504for forwarding data packets to node506. Next, node504may perform a validation process to validate packet forwarding of node504to each of the second network hops identified at S600(e.g., node506) using the corresponding flow label and the corresponding path instructions for that particular second network hop. This validation process may be performed through steps S604, S606, S608and S610. This validation process can be performed as described above with reference toFIG.5B. More specifically, at S604, node504may generate a data packet (a validation data packet) for each second network hop for validating the packet forwarding of node504. Such validation data packet can be generated based on the corresponding flow label for each second network hop (e.g., the flow label for node506), as described above with reference toFIG.5B(e.g., data packet516).
At S606, node504can send a corresponding validation data packet generated for each second network hop to the corresponding second network hop. For example, node504can send a validation data packet determined at S604for node506, to node506. At S608, node504may receive a response data packet after sending a corresponding validation data packet to the corresponding one of the one or more second network hops. For example, node506can generate a response data packet upon receiving the validation data packet sent thereto by node504at S606. At S610, node504can validate packet forwarding of node504to each second network hop from which a corresponding response data packet is received at S608. For example, if the response data packet received from node506includes a confirmation message (e.g., an ICMPv6 TTL expiry message), then node504determines that packet forwarding of node504to node506has been successful. Otherwise, node504can generate and output an error message (e.g., "Packet forwarding to this next-hop failed."). At S612, node504can generate a queue of network hops (a queue of network nodes) based on a validation result (successful packet forwarding or failure of packet forwarding) of performing the validation process. The queue can include any one of the one or more second network hops that node504has successfully forwarded packets to, as determined at S610. In the example ofFIG.5B, node504validates its packet forwarding to node506and thus node506is included in the queue. If there is more than one second network hop to which packet forwarding of node504is validated, then such nodes can be included in the queue as well. Depending on the outcome of the remaining steps ofFIG.6described below, the queue may be updated to include additional network hops. The remaining steps ofFIG.6described below are directed to performing a two-step process to (1) query each hop in the queue to obtain information on corresponding subsequent hops (e.g., adjacent hops) of that node and (2) validate packet forwarding of each hop in the queue. Examples of this two-step process are described above with reference toFIGS.5C-5F. At S614, node504can determine whether the queue is empty. If the queue is empty, the process proceeds to S632, where the process ends. If the queue is not empty, the process proceeds to S616and node504can iteratively perform the processes of steps S616to S630to validate packet forwarding for each node in the queue. At S616, node504can select a next hop from the queue and generate a query data packet to be sent to the selected hop to obtain information on next hop(s) (adjacent hop(s)) of the selected hop in the queue. In one example, network hops in the queue may have an assigned order and each time node504performs S616, node504selects a "first in line" of the network hops in the queue for generation of a corresponding query data packet. For example, node504can generate a query data packet for node506in order to obtain identification of next hop(s) of node506(e.g., nodes508and510). In one example, node504may generate the query data packet as described above with reference toFIG.5C(e.g., data packet520ofFIG.5C). In some cases, node506may not be an SRv6 node and thus may not have a corresponding END.X SID with decapsulation. As described above, OAM headers may be used instead for the validation of packet forwarding of node506. At S618, node504can transmit the query data packet to the selected next hop (e.g., node506).
At S620, node504can receive a response data packet from the selected next hop to which the query data packet is transmitted at S618. At S622, node504can determine, based on the response data packet received at S620, whether the selected next hop is a destination of a SID-list included in the query data packet sent to the selected next hop at S616and whether the selected next hop has a corresponding next hop. A response data packet can indicate that the selected next hop is a destination of the SID-list (e.g., when the selected next hop is node512in the example ofFIGS.5A-5H). This may be indicated via, for example, an "is_destination=True" message. Node504can further determine if the selected network hop has a corresponding next hop (e.g., node508and/or node510relative to node506). In one example, node504can determine that the selected network hop does not have a corresponding next hop if the response data packet received at S620does not include an identification (e.g., a router ID or an interface IP) of a corresponding next hop. If at S622, node504determines that the selected network hop (e.g., node506) is not the destination of the SID-list and the selected network hop does not have a corresponding next hop (or determines that the selected network hop is the destination of the SID-list and the selected network hop has a corresponding next hop), then at S624, node504can output an error message (e.g., "Destination cannot be reached via this ECMP path"). Thereafter, the process reverts back to S614, where node504can select a next hop in the queue to be queried (per S614-S624), if the queue is not empty, or can end the process if the queue is empty. However, if at S622, node504determines that the selected network hop is the destination of the SID-list or the selected next hop has corresponding next hop(s), then at S626, node504can perform a validation process to validate packet forwarding of the selected network hop to each of the corresponding next hop(s) of the selected network hop. For example, node504can perform a validation process to determine whether node506can forward data packets to node508and/or node510. This validation process can be performed in the same manner as described above with reference toFIGS.5D and5Eand can be similarly repeated for node510as well (e.g., by generating and sending a testing data packet such as data packet526and receiving reply data packet540). At S628, node504can determine whether packet forwarding of the selected node to a corresponding next hop of the selected node is successful or not. As noted above, if a reply data packet received from a corresponding next hop of the selected node includes an "ICMPv6 TTL expiry message," then node504can determine that packet forwarding of the selected node to its corresponding next hop is successful (e.g., inFIG.5D, reply data packet532includes an "ICMPv6 TTL expiry message."). If such a message is not included in the reply data packet, then at S628, node504determines that packet forwarding is not successful and may output an error message (e.g., "Packet forwarding to this next hop failed."). Otherwise, when the packet forwarding of the selected node to its corresponding next hop is successful, at S630, node504adds the corresponding next hop of the selected node to the queue. Thereafter, the process reverts back to S614and node504may iteratively repeat S614to S630until the queue is empty.
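Viewed as a whole, steps S600-S630 amount to a breadth-first, queue-driven traversal of the multipath topology. The sketch below is only an illustrative paraphrase of that loop in Python, not the disclosed pseudocode: the helpers query_next_hops() and forwards_correctly() are assumed stand-ins for the query and test exchanges of FIGS.5A-5H, and the patent's own headend pseudocode follows below.

from collections import deque

def validate_multipaths(headend, query_next_hops, forwards_correctly):
    """Illustrative paraphrase of FIG.6: validate every ECMP hop reachable from the headend.

    query_next_hops(hop)        -> (is_destination, [next_hop, ...])   # query step (roughly S616-S622)
    forwards_correctly(hop, nh) -> bool                                # test step (roughly S626-S628)
    """
    failed = []
    queue = deque()
    seen = {headend}

    # Roughly S600-S612: validate the headend's own forwarding toward its next hops.
    _, first_hops = query_next_hops(headend)
    for nh in first_hops:
        if forwards_correctly(headend, nh):
            queue.append(nh)             # enqueue validated hops
            seen.add(nh)
        else:
            failed.append((headend, nh))

    # Roughly S614-S630: iterate until the queue is empty.
    while queue:
        hop = queue.popleft()            # select the next hop to query
        is_destination, next_hops = query_next_hops(hop)
        if not is_destination and not next_hops:
            failed.append((hop, None))   # destination unreachable via this ECMP path
            continue
        for nh in next_hops:             # test forwarding and grow the queue
            if forwards_correctly(hop, nh):
                if nh not in seen:
                    queue.append(nh)
                    seen.add(nh)
            else:
                failed.append((hop, nh))

    return ("Success", []) if not failed else ("Failed", failed)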
One example iteration of this process (e.g., for a corresponding next hop (e.g., node512) of a selected network hop (e.g., node508)) is described above with reference toFIGS.5E and5G. A non-limiting example of a set of computer-readable instructions for performing the process ofFIG.6is provided below.

Algorithm 1: Algorithm executed at the headend
Input: SID-list=<S1, S2,...,Sn>, headend
Output: "Success" when all nodes in all ECMP paths forward packets correctly; or "Failed" and a list of nodes failed to forward packets correctly

typedef struct {
    in6_addr_t ip;           /* interface ip (initially) or router id (later) */
    uint32_t flow_label;     /* used for testing upstream node */
    in6_addr_t end_x_sid;    /* used for testing this node */
    ipv6_packet test_packet; /* used for testing this node */
} node_t;
typedef struct {
    in6_addr_t router_id;
    bool is_destination;
    /* next hops programmed for the appropriate SID of the SID-list */
    uint8_t num_next_hops;
    node_t *next_hops;
} query_node_t; /* stores info in the node being queried */
/* NOTE: test_packet for all the next hops of a node is the same. It has been added to node_t, instead of query_node_t, to simplify the illustration of the algorithm. */

nodes_to_query_and_test = [ ];
failed_next_hops = [ ];
query_node_t query_node;
ipv6_packet P1;
/* Let H_RID denote the router ID of headend */
Initialize P1 such that P1: (H_RID, S1; HL=2)(Sn,...,S2,S1; SL=n−1).

/* Query headend and find info on ECMP paths */
Headend queries itself and finds next hops for P1.
For each next hop, it finds an END.X SID with decapsulation.
For each next hop, headend calculates a flow label based on P1 s.t. with this flow label, P1 is forwarded to the next hop.
Headend fills query_node structure with next hops, END.X SID's, and flow labels. test_packet of each node_t structure in query_node.next_hops is initialized to P1. Then, test_packets' DA and SRH are updated if S1 is local SID of headend.
/* NOTE: updated test_packet in node_t is used for testing packet forwarding of the node which corresponds to the node_t */

/* Test headend's forwarding */
for all node in query_node.next_hops:
    Headend crafts a testing data packet T which is P1 with node.flow_label.
    /* T:(H_RID, S1; HL=2; node.flow_label)(Sn,...,S2,S1; SL=n−1) */
    Headend submits T to its IP forwarding.
    if received an ICMPv6 TTL expiry message with SA=node.ip:
        /* Forwarding to this next hop validated */
        nodes_to_query_and_test.add(node) /* enqueue at tail */
    else:
        /* Forwarding to this next hop failed */
        failed_next_hops.add((headend, node))

/* Query and test transit and destination nodes */
node = nodes_to_query_and_test.get_first()
while node is not NULL:
    Headend sends query message Q and waits for a reply, where
        Q:(H_RID, node.end_x_sid)(H_RID, node.ip; HL=1)[node.test_packet].
    When END.X SID is not available, use OAM control plane method to send the query message.
    Headend receives reply M, and it fills a new query_node_t structure called query_node with data from payload of M.
    if there is node-t in nodes_to_query_and_test s.t. node-t.ip == query_node.router_id:
        /* This node has already been queried and tested */
        node = nodes_to_query_and_test.get_next()
        Continue
    Set node.ip = query_node.router_id
    if (query_node.is_destination is False) and (query_node.next_hops is empty):
        /* Error: Destination is not reachable via this ECMP path */
        failed_next_hops.add((node))
        node = nodes_to_query_and_test.get_next()
        Continue
    /* Validate destination node's forwarding */
    if query_node.is_destination is True:
        Create a testing data packet P by copying node.test_packet, and setting HL=1.
        Headend sends data packet T which is created by encapsulating P with an outer IPv6 header, where
            T:(H_RID, node.end_x_sid)(P).
        When END.X SID is not available, use OAM control plane method to send test packet.
        if received an ICMP parameter problem message with error code "SR Upper-layer Header Error" and SA=node.ip:
            /* Validated this node's forwarding */
            /* Note: info on ICMP parameter problem message with error code "SR Upper-layer Header Error" is in section 4.3.1 of SRH draft */
        else:
            /* Forwarding at this node failed */
            failed_next_hops.add((node))
        node = nodes_to_query_and_test.get_next()
        Continue
    /* Validate node's ability to forward packets to each next hop (this is a transit node) */
    for all node-nh in query_node.next_hops:
        Create a testing data packet P by copying node.test_packet and setting its flow label to be node-nh.flow_label.
        /* HL=2 in P */
        Headend sends data packet T which is created by encapsulating P with an outer IPv6 header, where
            T:(H_RID, node.end_x_sid)(P).
        When END.X SID is not available, use OAM control plane method to send test packet.
        if received (ICMP parameter problem message with error code "SR Upper-layer Header Error" and SA=node-nh.ip) or (ICMPv6 TTL expiry message with SA=node-nh.ip):
            /* Forwarding to this next hop validated */
            nodes_to_query_and_test.add(node-nh) /* enqueue at tail */
        else:
            /* Forwarding to this next hop failed */
            failed_next_hops.add((node, node-nh))
    node = nodes_to_query_and_test.get_next()
/* end of while loop */
if failed_next_hops is empty:
    return "Success"
else:
    return "Failed" and failed_next_hops

Algorithm 2: Algorithm executed at all nodes except headend
Input: Query message Q′ received by node node
Output: Reply to the query message
/* When headend sends query message Q,
   Q:(H_RID, node.end_x_sid)(H_RID, node.ip; HL=1)[node.test_packet],
   node which has an interface ip of node.ip receives Q′,
   Q′:(H_RID, node.ip; HL=1)[node.test_packet].
   This is because the immediate upstream node of node strips the outer IPv6 header with the END.X (with decapsulation) SID. */
if (DA of node.test_packet is a SID of node) and (node.test_packet does not have an SRH or last SID of SRH is DA):
    /* node is the destination node */
    node fills a query_node_t structure with its router id and is_destination=True.
else:
    if DA of node.test_packet is a SID of node:
        node updates DA and SRH of node.test_packet, as it would normally process node.test_packet in forwarding.
    node finds next hops for DA of (updated) node.test_packet.
    If node is an SRv6 node, for each next hop, node finds an END.X SID with decapsulation.
    For each next hop, node calculates a flow label based on the (updated) node.test_packet s.t. with this flow label, (updated) node.test_packet is forwarded to the next hop.
    node fills a query_node_t structure with its router id, next hops' IP's, flow labels, END.X SID's with decapsulation (if available), and (updated) node.test_packet.
node sends reply M to the headend, where
    M:(node.ip, H_RID)[data in query_node_t structure]

Examples of validating end-to-end multiple paths for the SRv6 data plane described above can provide the following advantages compared to available methods described above. For example, the validation process of the present disclosure is computationally efficient by having each network hop determine one flow label per next hop (e.g., an egress interface of the next hop) and eliminating the need to determine a flow label that is common among flow label sets corresponding to network hops on an end-to-end ECMP. A further example advantage is the ability to test all end-to-end ECMPs, including end-to-end ECMPs that may be traversed by data packets when the data packets' destination IP, next header or flow label is changed by an intermediate node, and end-to-end ECMPs due to static routes. Other example advantages include validation of end-to-end ECMPs when some network hops on a path are non-SRv6 nodes, validation of the hardware forwarding path, testing of each physical member of a bundle/port channel, and no need for local routing information of a network hop (therefore, inter-Autonomous System (AS) paths may be queried and verified). Further example advantages include the actual testing data packets (validation data packets) being a part of the query data packets, and the flow labels being calculated based on the actual testing data packets, with the testing data packets being any type of message such as UDP, Transmission Control Protocol (TCP), Internet Control Message Protocol (ICMP) and any other type of known or to be developed testing data packets. Another advantage of the validation process described herein is that the last SID can be of any type including END, END.X and VPN SID. For example, when the tailend node (e.g., node512ofFIG.5) is being validated, the tailend node can send to the headend node (e.g., node504) an ICMP parameter problem message with error code "SR Upper-layer Header Error" as the upper layer header of the testing data packet is neither IPv4 nor IPv6 (instead such upper layer header can be one of UDP, TCP or ICMP). Finally, the validation process described herein allows for validation of partial SID-lists. With various examples of validating end-to-end multiple paths for the SRv6 data plane described with reference toFIGS.5A-HandFIG.6, the disclosure now turns toFIGS.7and8, which illustrate example network devices and computing devices, such as switches, routers, load balancers, client devices, and so forth. Such example network and computing devices may be used to implement various components described above with reference toFIGS.1-6including, but not limited to, network controller212, any one of nodes502,504,506,508,510,512and/or514, etc. FIG.7illustrates an example network device, according to some aspects of the present disclosure. Network device700can be suitable for performing switching, routing, load balancing, and other networking operations. Network device700includes a central processing unit (CPU)704, interfaces702, and a connection710(e.g., a PCI bus). When acting under the control of appropriate software or firmware, the CPU704is responsible for executing packet management, error detection, and/or routing functions.
The CPU704preferably accomplishes all these functions under the control of software including an operating system and any appropriate applications software. CPU704may include one or more processors708, such as a processor from the INTEL X86 family of microprocessors. In some cases, processor708can be specially designed hardware for controlling the operations of network device700. In some cases, a memory706(e.g., non-volatile RAM, ROM, etc.) also forms part of CPU704. However, there are many different ways in which memory could be coupled to the system. The interfaces702are typically provided as modular interface cards (sometimes referred to as “line cards”). Generally, they control the sending and receiving of data packets over the network and sometimes support other peripherals used with the network device700. Among the interfaces that may be provided are Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, and the like. In addition, various very high-speed interfaces may be provided such as fast token ring interfaces, wireless interfaces, Ethernet interfaces, Gigabit Ethernet interfaces, ATM interfaces, HSSI interfaces, POS interfaces, FDDI interfaces, WIFI interfaces, 3G/4G/5G cellular interfaces, CAN BUS, LoRA, and the like. Generally, these interfaces may include ports appropriate for communication with the appropriate media. In some cases, they may also include an independent processor and, in some instances, volatile RAM. The independent processors may control such communications intensive tasks as packet switching, media control, signal processing, crypto processing, and management. By providing separate processors for the communications intensive tasks, these interfaces allow the master microprocessor704to efficiently perform routing computations, network diagnostics, security functions, etc. Although the system shown inFIG.7is one specific network device of the present technologies, it is by no means the only network device architecture on which the present technologies can be implemented. For example, an architecture having a single processor that handles communications as well as routing computations, etc., is often used. Further, other types of interfaces and media could also be used with the network device700. Regardless of the network device's configuration, it may employ one or more memories or memory modules (including memory706) configured to store program instructions for the general-purpose network operations and mechanisms for roaming, route optimization and routing functions described herein. The program instructions may control the operation of an operating system and/or one or more applications, for example. The memory or memories may also be configured to store tables such as mobility binding, registration, and association tables, etc. Memory706could also hold various software containers and virtualized execution environments and data. The network device700can also include an application-specific integrated circuit (ASIC), which can be configured to perform routing and/or switching operations. The ASIC can communicate with other components in the network device700via the connection710, to exchange data and signals and coordinate various types of operations by the network device700, such as routing, switching, and/or data storage operations, for example. FIG.8illustrates a computing system architecture, according to some aspects of the present disclosure. 
Architecture800can have components that are in electrical communication with each other using a connection805, such as a bus. Exemplary system800includes a processing unit (CPU or processor)810and a system connection805that couples various system components including the system memory815, such as read only memory (ROM)820and random access memory (RAM)825, to the processor810. The system800can include a cache of high-speed memory connected directly with, in close proximity to, or integrated as part of the processor810. The system800can copy data from the memory815and/or the storage device830to the cache812for quick access by the processor810. In this way, the cache can provide a performance boost that avoids processor810delays while waiting for data. These and other modules can control or be configured to control the processor810to perform various actions. Other system memory815may be available for use as well. The memory815can include multiple different types of memory with different performance characteristics. The processor810can include any general purpose processor and a hardware or software service, such as service 1832, service 2834, and service 3836stored in storage device830, configured to control the processor810as well as a special-purpose processor where software instructions are incorporated into the actual processor design. The processor810may be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric. To enable user interaction with the computing device800, an input device845can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. An output device835can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input to communicate with the computing device800. The communications interface840can generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed. Storage device830is a non-volatile memory and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs)825, read only memory (ROM)820, and hybrids thereof. The storage device830can include services832,834,836for controlling the processor810. Other hardware or software modules are contemplated. The storage device830can be connected to the system connection805. In one aspect, a hardware module that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as the processor810, connection805, output device835, and so forth, to carry out the function. For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software. 
In some embodiments the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se. Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on. Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include laptops, smart phones, small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example. The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures. Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further and although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims. Claim language reciting “at least one of” a set indicates that one member of the set or multiple members of the set satisfy the claim. For example, claim language reciting “at least one of A and B” means A, B, or A and B.
11863455
DETAILED DESCRIPTION
In the following description, various embodiments will be described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the embodiments. However, it will also be apparent to one skilled in the art that the embodiments may be practiced without the specific details. Furthermore, well-known features may be omitted or simplified in order not to obscure the embodiment being described. Embodiments of the present disclosure provide techniques for implementing a cloud based cross-domain solution. A cross-domain solution can, in some examples, restrict the access or transfer of information between two or more security domains. The proposed system may be implemented with a network interface card (NIC) associated with a disconnected network. A disconnected network can be a secure computer network that is isolated from communication with unsecured networks. Disconnected networks can be configured to permit inbound traffic while prohibiting outbound traffic. In one implementation, a message intended for the disconnected network can be received at a first node of the NIC. The received message can be sent with a first communication protocol (e.g., transmission control protocol (TCP), user datagram protocol (UDP), or message queue telemetry transport (MQTT)). In some implementations, multiple communication protocols can be supported. The received message can be forwarded to a second node of the network interface card using a second communication protocol. The second communication protocol can be any protocol configured for one-way communication, including UDP. Messages can be received from the first node at the second node, but the second protocol will not permit traffic in the other direction. Once the traffic is received at the second node, the message is forwarded to a destination node in the disconnected network using a third communication protocol. Cross-domain solutions can include disconnected networks that are separated from unsecure networks (e.g., the Internet or other public networks) by physical isolation (e.g., an air gap) or by hardware that enforces one-way communication (e.g., a bump-in-the-wire/data diode). While such networks are secure, the systems are unwieldy and expensive to maintain, and, because of the specialized hardware involved, the networks are generally used in limited circumstances (e.g., military or governmental networks, industrial control systems, or life-critical systems). Additionally, physical isolation or hardware implemented one-way communication is not feasible for cloud networks. A software implemented cross-domain solution can be used to create a cloud based disconnected network without the inconvenience of physically moving data to the disconnected network (e.g., air gap) or hardware to physically enforce one-way communication (e.g., data diode). Traffic traveling into a NIC of a disconnected region can be intercepted and transmitted within the NIC using a one-way protocol. The protocol enforces one-way traffic to ensure information within the disconnected network is less susceptible to compromise. In some circumstances, the cross-domain solution can include a separate one-way communication pathway from the disconnected network to a trusted source outside of the network. Before reaching the destination node within the disconnected network, messages sent from the second node can pass through a series of filters.
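Before turning to the filters, the following is a minimal sketch of the two-node, one-way relay just described. It is only an illustrative sketch: the addresses, port numbers, and the use of plain TCP and UDP sockets are assumptions, not the disclosed implementation. The first node terminates the inbound TCP connection as if it were the destination and then re-emits the data toward the second node over UDP, which provides no return channel.

import socket

INNER_NODE = ("10.0.0.2", 5005)   # hypothetical address of the second (inner) node
LISTEN_ADDR = ("0.0.0.0", 8080)   # hypothetical TCP listener on the first (outer) node

def first_node_relay():
    """Accept inbound TCP traffic and re-emit it one-way over UDP toward the second node.

    The TCP connection is fully terminated here, so no acknowledgements or other traffic
    originating inside the disconnected network flow back toward the sender.
    """
    udp_out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(LISTEN_ADDR)
        srv.listen()
        while True:
            conn, _ = srv.accept()
            with conn:
                while True:
                    chunk = conn.recv(4096)              # store-and-forward: data is accepted here
                    if not chunk:
                        break
                    udp_out.sendto(chunk, INNER_NODE)    # one-way hop; nothing is read back

def second_node_receiver(deliver):
    """Receive the one-way UDP stream and hand each message to the in-network delivery step."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(INNER_NODE)
        while True:
            data, _ = sock.recvfrom(65535)
            deliver(data)   # e.g., re-send toward the destination node using the third protocol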
The filters can analyze the messages in an effort to protect the disconnected network from infiltration. The filters can be configurable via an application programming interface (API) so that a client can select an appropriate set of filters based on the client's need for security. The client can also select a time period for the cloud based domain system. In some implementations, the order of filters, or the individual filters used, can be changed between messages in an attempt to counter attempts to infiltrate the network. Traditional cross-domain solutions are implemented using custom hardware. The hardware can be expensive to design and difficult to maintain. To add a filter or change the order of the filters in a traditional cross-domain solution, the hardware containing the filters would have to be removed, altered, and replaced. A cloud based cross-domain system can be fully or partially implemented in the cloud. For instance, a cross-domain solution can use hardware enforced one-way communication, such as a data diode, and cloud implemented content filters. Alternatively, the cross-domain solution may include software enforced one-way communication and hardware implemented content filters. A cloud based cross-domain solution allows for flexibility in constructing a cross-domain solution, and such a system is adaptable for different use cases. For instance, different message configurations can be applied to traffic from different sources, with fewer filters applied to messages from trustworthy sources. The order of filters can be altered between messages, or at regular intervals, to complicate attempts by attackers to design messages that can evade the filters. In some circumstances, one-way communication can also be enforced only on a subset of the messages received at the cross-domain solution. Data about the messages received at the cross-domain solution can be used to train an artificial intelligence and/or machine learning (AI/ML) content filter model. The data can include the packet origin, characteristics of known viruses or malware, or traffic patterns. The AI/ML content filter can determine that packets from certain sources are suspect or trustworthy based on information supplied by the other content filters. For example, if traffic from a particular internet protocol (IP) address is consistently flagged as containing malware, the AI/ML filter may subject packets from that IP address or the same origin to extra filtering. The AI/ML filter can use information obtained from packets flagged by content filters as containing malware or viruses to identify known or unknown viruses so that the cross-domain solution can adapt to new threats. The AI/ML filter can also use traffic patterns to identify threats. For example, a substantial increase in traffic from a source can indicate a potential threat. The AI/ML model can be continuously trained by data marked as “test or learning data” that is sent from a trusted source. The test data can contain reference data that should be blocked or allowed to pass. When new malware or disallowed content is detected, the test data can contain the signature of the malware or another characteristic, such as its origin, and a hint for the learning algorithm to block such data when it is transferred as a real payload into the trusted network. 
Test data can indicate malware patterns or define specific attributes in structured data, e.g., an MQTT data value exceeding a certain range. In some circumstances, the source of the learning data has to be trusted. Using cryptographic methods, the authenticity of the source (sender) of test data can be established. In one embodiment, the test data can be encrypted using the public key of the AI/ML algorithm and then signed with a private key known only to the sender. The AI/ML algorithm that is associated with the filter can have the corresponding public key of the test data source configured, allowing it to verify the signature of the training data after using its own private key to decrypt the data itself. The AI/ML learning can be extended to content filtering on payloads such as images, for restricting the resolution, metadata, or content to known patterns. The AI/ML algorithm can further instruct the filter to change or re-encode the image to remove hidden malware or other undesirable content. An advantage of a cloud implemented cross-domain solution is that the cross-domain solution can be exposed as a service to a client. The cross-domain solution can allow a customer (e.g., a client) to monitor or audit the cloud domain service. A customer can configure the cross-domain solution to select filters and/or the order of filters, and the customer can designate what traffic passes through the cross-domain solution. For example, the customer can whitelist certain sources so that two-way communication is possible between the disconnected network and the whitelisted sources. A cloud-based cross-domain solution allows for flexibility of use that is not possible in a hardware-based cross-domain solution. Additionally, a cloud-based cross-domain solution can be implemented without expensive and inflexible specialized hardware. In an illustrative example, a customer is presented with an API for configuring a cloud based cross-domain solution and selects a time period for the cloud based cross-domain solution and a series of filters. In this case, the customer selects a one-month time period for the cloud based disconnected network and a malware filter followed by a content filter. After configuration, a message, intended for a destination node inside of the disconnected network, is sent from a source node. The message is sent using transmission control protocol/Internet protocol (TCP/IP) and the message is received at a first node of the NIC. In order to pass across the NIC, the message can be converted from TCP/IP to a protocol suitable for one-way communication. A communication protocol can be modified so that the communication protocol is configured for only one-way communication. The NIC, at the first node, converts the message to a one-way communication protocol, in this case User Datagram Protocol (UDP), and forwards the message to a second node in the NIC. In circumstances where the message is sent via a streaming protocol (e.g., Real Time Messaging Protocol (RTMP)), the entire message is intercepted at the first node, as if the first node were the destination node, before the message is forwarded to the second node. In this case, the message is not streamed onward; instead, the message packets are accepted, stored, and forwarded to the second node via a connectionless protocol such as UDP as the packets are received. At the second node, the message is forwarded to a destination node inside of the secured network using a network protocol that is employed in the secured network. 
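The cryptographic check on test data described above (encrypt for the filter, sign by the sender) could be sketched as follows. This is a hedged illustration using the third-party Python "cryptography" package; the key sizes, the choice of RSA with OAEP and PSS, and the decision to sign the ciphertext are assumptions made for the example rather than details taken from this disclosure, and a real deployment would likely use a hybrid scheme for payloads larger than one RSA block.

# Hedged sketch of the sign-and-encrypt check on AI/ML test data described above.
# Key handling and padding choices are assumptions; only small payloads fit in a
# single RSA/OAEP block, so real test data would typically be wrapped hybridly.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.exceptions import InvalidSignature

# Hypothetical key pairs; in practice these would be provisioned out-of-band.
sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
filter_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def package_test_data(plaintext: bytes):
    # Sender side: encrypt for the AI/ML filter, then sign the ciphertext.
    ciphertext = filter_key.public_key().encrypt(
        plaintext,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
    )
    signature = sender_key.sign(
        ciphertext,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    return ciphertext, signature

def accept_test_data(ciphertext: bytes, signature: bytes) -> bytes | None:
    # Filter side: verify provenance first, then decrypt with its private key.
    try:
        sender_key.public_key().verify(
            signature,
            ciphertext,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
    except InvalidSignature:
        return None  # untrusted source: discard the training data
    return filter_key.decrypt(
        ciphertext,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
    )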
In the illustrative example above, the secured network uses TCP/IP, but the network could use a third protocol, such as File Transfer Protocol (FTP), TCP/IP, User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP), Post Office Protocol (POP3), Internet Message Access Protocol (IMAP), Simple Mail Transfer Protocol (SMTP), etc., for internal communication. After leaving the second node, but before reaching the destination node, the message is passed through a series of configurable filters. In this case, the message is scanned by a malware filter, to ensure that the message does not contain any malware that may compromise the network, and a content filter, to check that the message reaching the network is the appropriate type of content. After the message passes through all the filters, the message is forwarded to its destination node. FIG.1shows a simplified diagram100of a hardware implemented disconnected network according to some embodiments. A disconnected network can be a computer network that is physically isolated from other networks by removing physical and wireless network connections. Data is moved between these air-gapped networks using physical storage media such as thumb drives. While these networks are secure, transferring data with thumb drives is cumbersome. Other disconnected networks use data diodes that permit one-way traffic into the disconnected network, while preventing the broadcast of sensitive information from the disconnected network. Simplified diagram100shows computer device A102connected to a router A104according to some embodiments. Computer device A102can be a personal computer, a server computer, a virtual machine, a tablet device, a mobile phone, or any other computer device. Computer device A102can be physically connected to router A104, for example, by a network cable, or computer device A102can be connected to router A104wirelessly (e.g., WiFi). In some implementations, computer device A102can be connected to the internet or a private network through router A104. Computer A102can be connected to computer B106through communication between router A104and router B110. A network cable112containing a data diode108can connect router A104and router B110. Hardware data diodes can enforce the one-way direction by physical means, e.g., an optical link comprising an optical sender, often a laser or light emitting diode (LED), and a receiver, a photosensitive semiconductor such as a photoelectric transistor108. Other one-way systems can be utilized to implement the functionality of a one-way transfer device. Messages received at a first terminal114of data diode108can be passed to the diode's second terminal116, but a message cannot be sent from the second terminal116to the first terminal114. In some implementations, the disconnected network exists behind the second terminal116of the data diode108. Messages can be sent across data diode108into the disconnected network. However, messages cannot leave the disconnected network via the data diode. In these implementations, router B110and computer device B106are isolated from outside networks, but computer device B106can still be connected to other devices inside the disconnected network through router B110. For example, computer B could be part of a network containing confidential information where the ability to send information outside of the network could pose a security threat. In other implementations, the disconnected network exists behind the first terminal114of the data diode108. 
In these implementations, messages can be sent from the disconnected network to an outside network via data diode108, but messages cannot be received by the disconnected network. Such a network could be used in an electronic voting system where the system should be able to provide results to the public while being immune from inbound attacks. FIG.2shows a process for communicating with a hardware implemented disconnected network according to certain embodiments. This process is illustrated as a logical flow diagram, each operation of which can be implemented in hardware, computer instructions, or a combination thereof. In the context of computer instructions, the operations may represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures and the like that perform particular functions or implement particular data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the process. Turning to process200in greater detail, at block202, a message is generated by a computer device A102. The message can be sent from computer device A102to a router A104that can forward the message to the message's destination. Computer device A102can be a personal computer, a mobile device, a tablet, or a server computer. Router A104can be physically connected to computer device A102by a cable that permits message transmission (e.g., by an Ethernet cable), or the message can be sent from computer device A102to router A104via radio waves (e.g., WiFi). At block204, the message, sent by computer device A102, is sent to the second computer device B106after passing through a data diode108. The message can be forwarded from router A104to a router B110via an Ethernet cable112containing data diode108(e.g., bump-in-the-wire). Data diode108can permit the data comprising the message sent by router A104to pass through data diode108to router B110because the data diode can allow transfer of data one way. Router B110can forward the message that was received from router A104to computer device B106. At block206, responses generated by computer device B106are blocked by data diode108. While messages passing from router A104to router B110can pass through data diode108, messages passing from router B110to router A104are blocked by the one-way transfer restriction of data diode108. Accordingly, computer device B106can be disconnected from other computer devices because computer device B106can be prevented from sending outgoing messages. FIG.3shows a simplified representation of a cloud based cross-domain solution300that can be used to control access between domains according to certain embodiments. Cross-domain solutions can include implementations that allow restricted two-way communication between networks, or implementations that include disconnected networks. FIG.4shows a process for controlling access between domains using a cloud based domain service according to certain embodiments. This process is illustrated as a logical flow diagram, each operation of which can be implemented in hardware, computer instructions, or a combination thereof. 
In the context of computer instructions, the operations may represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures and the like that perform particular functions or implement particular data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the process. Turning to process400in greater detail, at block402, a message can be sent from domain A302to the restrictive gateway304via the network306. The message can be generated within domain A302at a customer premise308. The restrictive gateway304can be a smart network interface card (Smart NIC) and the network306can be a private network or the Internet. At block404, the message can be analyzed by the restrictive gateway to determine if the message from domain A302should be permitted access to domain B310. Restrictive gateway304can determine if the message should be permitted access using a predetermined access policy. Restrictive gateway304can also use filters to analyze messages before permitting access to domain B310. The filters can include a malware filter to check for malware and viruses in the messages. Restrictive gateway304can also include a signature filter to determine if the message has cryptographically verifiable signatures that attest to the message's provenance. The filters can also include a content analyzer to determine the message's validity. The content analyzer can, for instance, check checksums received out-of-band or in-band with the apparently related payload. The data in the message can contain a checksum to prove the validity of the data. The checksum can be attached to the data itself. The checksum can also be transferred as part of data in a separate message. The filters can also include an artificial intelligence or machine learning filter that has been trained to determine if a message should be permitted access to domain B310. At block406, the restrictive gateway304can forward the message to domain B310after determining that the message should be permitted access. The second domain can be a virtual cloud network312. In some implementations, the destination node for the message can be a workload314in virtual cloud network312. Workloads314can include virtual machines, databases, containers, and applications. FIG.5shows a simplified diagram500of the user datagram protocol (UDP) according to certain embodiments. Communications protocols can permit one-way or two-way communication; however, disconnected networks may use hardware to enforce one-way communication. In the example ofFIG.5, one-way communication can be enforced by a protocol, such as UDP. Turning to diagram500in greater detail, sender506and receiver502can be computing devices that are capable of network communication. Sender506and receiver502can be a personal computer, a server computer, a mobile device, a tablet device, etc. Sender506and receiver502can comprise a cross-domain solution. Sender506can be a first domain in a cross-domain solution and receiver502can be a second domain in a cross-domain solution. Receiver502can be part of a disconnected region508. Disconnected region508can be a network that is isolated from other networks. 
Devices in disconnected region508can be configured so that the devices are capable of receiving traffic from other networks but not capable of sending traffic from disconnected region508to other networks. Sender506and receiver502can be connected by any communication link including a physical connection (e.g., connected by a network cable or fiber optic cable). Sender506and receiver502can be wirelessly connected (e.g., connected by WiFi). Messages504a-ccan be traffic that is sent between the sender506and the receiver502. Traffic sent via UDP, including messages504a-c, can be sent without handshaking dialogs. Sender506can send messages504a-cto receiver502without a request from receiver502. FIG.6shows a process for communicating with user datagram protocol (UDP) according to certain embodiments. This process is illustrated as a logical flow diagram, each operation of which can be implemented in hardware, computer instructions, or a combination thereof. In the context of computer instructions, the operations may represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures and the like that perform particular functions or implement particular data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the process. Turning to process600in greater detail, at block602, sender506can initiate a UDP stream. The stream can consist of a series of packets sent from sender506to receiver502. Sender506can initiate a transmission without receiving a request from the receiver502. Receiver502can be configured to receive packets sent by sender506. Receiver502can be configured so that receiver502is incapable of sending messages to sender506. The UDP stream can be a stream of messages sent with any communication protocol that can be configured for one-way communication. At block604, the messages504a-csent by sender506are received by receiver502. Receiver502can receive messages504a-cwithout receiver502providing a response to sender506. Messages504a-ccan be packets with a source port number, a destination port number, and checksums, for error checking and security. Sender506can send messages504a-cin a continuous stream, beginning with message504a, without any communication from receiver502. Once sender506has sent the messages, sender506can stop transmission without receiving confirmation that the messages arrived at receiver502. Sender506can be configured so that sender506is incapable of receiving any messages. FIG.7shows a diagram700of a data pipeline including a software implemented cross-domain solution according to certain embodiments. Turning to diagram700in greater detail, as part of the data pipeline smart network interface card (Smart NIC)706contains two sets of nodes: first nodes704a-cand second nodes708a-c. Communication between first nodes704a-cand second nodes708a-ccan occur using a communication protocol configured for one-way traffic (e.g., UDP). Messages702a-creceived at first nodes704a-ccan be passed to second nodes708a-c, but first nodes704a-ccan be configured to ignore messages sent from second nodes708a-c. Smart NIC706, in some implementations, can contain a secure pathway for communicating from Smart NIC706to trusted repositories722. 
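As a receiver-side companion to the relay sketch above, the following hypothetical Python snippet shows how a receiver such as receiver502could be realized in software so that it only ever reads datagrams and never writes any, mirroring the behavior described forFIG.6. The port and buffer size are assumptions.

# Hedged sketch of a receive-only UDP endpoint: it binds, reads datagrams, and
# never calls send/sendto, so no traffic can originate from it on this socket.
import socket

RECEIVER_PORT = 9100  # assumed port matching the relay sketch above

def run_receiver(handle_payload):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", RECEIVER_PORT))
    while True:
        datagram, _addr = sock.recvfrom(2048)  # no reply is ever sent
        handle_payload(datagram)

if __name__ == "__main__":
    run_receiver(lambda payload: print(f"received {len(payload)} bytes"))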
Within this secure pathway, messages received from host machine718at secure first node726can be passed to secure second node728using a one-way communication protocol. Once the message is received at secure second node728, the message can be forwarded to trusted repository722using a one-way or two-way communication protocol. Host machine718can contain one or more filters including malware filters710, content analyzers714, content filters730, content recreation filters732, validators734, artificial intelligence/machine learning filters716, and signature filters712. The filters can be arranged in a chain with messages received from second nodes708a-cbeing passed through the filters in sequential order. Host machine718can be a virtual computer device or a bare metal computer device. In some circumstances, a message can pass through one or more filters before the message arrives at the first node. One or more filters can be arranged between the first node and the second node. A message traveling from the first node to the second node can pass through the one or more filters. Malware filter710can check for malware or viruses in the messages passing through the data pipeline. Messages containing malware or viruses can be rejected before the message reaches the disconnected network. Content filter730can check for banned words, banned byte sequences, fragments of files or other content that is banned by the content filter's logic. Content filter730can remove the banned content from the message before forwarding the message or content filter730can reject the message. Signature filter712can check a message to determine if the message has cryptographically verifiable signatures that attest to the message's provenance. Content analyzers714can analyze the message to determine the message's validity. For instance, content analyzer714can check checksums received out-of-band or in-band with the related message. An artificial intelligence/machine learning filter716can be a filter that uses a trained machine learning algorithm to determine whether a message should be allowed to pass through the data pipeline. In hardware implemented cross-domain solutions, the filters, such as the ones contained in host machine718, can be in a fixed order that is difficult to rearrange. In a software implemented cross-domain solution, the order of individual filters can be changed depending on the type of message and the message source. Messages from trustworthy sources can be passed through fewer filters, while messages from less trustworthy sources can be passed through more filters. In some circumstances, the filter order, or the list of filters in the filter chain, can be changed between messages. Host machine718can also include a logging network720to provide information about events occurring in the data pipeline between Smart NIC706and host machine718. In Smart NIC706, information can be provided to the logging network from second nodes708a-cor secure first node726. In host machine718, information about events occurring in the filters can be provided to the logging network. The logging network can be a network bus for shipping logs from components to a security information and event management (SIEM) system for accepting logs of events taking place in the data pipeline at the operating system (OS) level, the application level, and the payload level. The SIEM system can use the logs to perform analyses, to raise the alarm about potential malware in the data stream, and to take remedial action such as quarantining the data in question. 
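To make the chained-filter idea concrete, the sketch below composes a configurable, reorderable filter chain in Python and records a simple event for each step. It is an assumption-laden illustration rather than the disclosed host-machine implementation; the filter logic shown (a byte-signature malware check and a checksum-verifying content analyzer) is deliberately simplistic.

# Hedged sketch: a reorderable chain of message filters with event logging.
# Filter behaviors and names are illustrative stand-ins for the filters
# described above (malware filter, content filter, content analyzer, etc.).
import hashlib
from dataclasses import dataclass, field

@dataclass
class Message:
    source_ip: str
    payload: bytes
    declared_sha256: str | None = None  # checksum carried in-band (assumed)

@dataclass
class PipelineResult:
    accepted: bool
    events: list[str] = field(default_factory=list)

KNOWN_BAD_SIGNATURES = [b"EICAR", b"\x90\x90\x90\x90"]  # toy examples only

def malware_filter(msg: Message) -> bool:
    return not any(sig in msg.payload for sig in KNOWN_BAD_SIGNATURES)

def content_analyzer(msg: Message) -> bool:
    # Verify an in-band checksum when one is present, as described above.
    if msg.declared_sha256 is None:
        return True
    return hashlib.sha256(msg.payload).hexdigest() == msg.declared_sha256

def run_chain(msg: Message, chain) -> PipelineResult:
    result = PipelineResult(accepted=True)
    for f in chain:  # the chain (and its order) can differ per message
        ok = f(msg)
        result.events.append(f"{f.__name__}: {'pass' if ok else 'reject'}")
        if not ok:
            result.accepted = False
            break
    return result

# Example: a shorter chain for trusted sources, a longer one otherwise.
default_chain = [malware_filter, content_analyzer]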
Host machine718can also include an independent reverse pipeline that provides messages to trusted repositories722through Smart NIC706via first secure node726and second secure node728. Messages for the reverse pipeline are provided from filters to a secure hash algorithm (SHA) validation system724in host machine718. SHA validation724can provide messages to trusted repositories722through Smart NIC706. The independent reverse pipeline is separate from the data pipeline and the reverse pipeline can be used to help a trusted system using trusted repositories722to learn about messages that are weeded out by the filters. Information provided by the reverse pipeline can also be used to learn about valid messages that are inappropriately excluded by the filters. A trusted system can use information about inappropriately excluded messages to increase throughput by fixing the issues causing the inappropriate exclusion. FIG.8shows a process for communicating using a data pipeline that includes a software implemented cross-domain solution according to certain embodiments. This process is illustrated as a logical flow diagram, each operation of which can be implemented in hardware, computer instructions, or a combination thereof. In the context of computer instructions, the operations may represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures and the like that perform particular functions or implement particular data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the process. Turning to the method800in greater detail, at block802, messages702a-ccan be received, as part of a data pipeline, at first nodes704a-cof a smart network interface card706(Smart NIC) from a first domain. A Smart NIC device can be a device that contains two logical and/or physical interfaces and a processing system comprising hardware to process data entering on one of the interfaces and forwarding it to the other interface. Processing the data can mean that the data is analyzed, re-formatted, aggregated, etc. The hardware can comprise micro-processors running software based algorithms that are invoked by the data. Messages702a-ccan be sent using a first communication protocol and, in some implementations, first nodes704a-ccan be configured to receive messages sent with more than one communication protocol. In some implementations, first nodes704a-ccan be configured to receive all incoming traffic. At block804, messages702a-ccan be sent from first nodes704a-cto second nodes708a-cusing a one-way communication protocol (e.g., UDP). Smart NIC706can be a cross-domain solution because messages702a-ccan be received at first nodes704a-cfrom a first domain and the messages can be forwarded from second nodes708a-cto a second domain. In some implementations, the one-way communication protocol can allow messages in the data pipeline to be sent from first nodes704a-cto second nodes708a-c, but messages are prevented from being sent from second nodes708a-cto first nodes704a-c. In some implementations, any messages sent from second nodes708a-cto first nodes704a-cwill not be accepted. 
Messages702a-creceived at second nodes708a-cas packets sent using a one-way communication protocol can be unpacked and reconstructed as forwardable payloads that can be sent to destination nodes in the second domain. Non-streaming messages received at first nodes704a-ccan be accepted, stored and forwarded to second nodes708a-cas the messages are received. Streaming messages can be intercepted at first nodes704a-cas if first nodes704a-cwere the destination nodes. The streaming messages can be repackaged into a format defined by the one-way communication protocol and forwarded to second nodes708a-c. Streaming messages can be reconstructed and forwarded from second nodes708a-cto destination nodes as if the messages originated at second nodes708a-c. At block806, the messages, as part of the data pipeline, can be passed through a sequence of filters before the message reaches the second domain. The filters can include malware filters710to check for malware and viruses in the messages, signature filters712to determine if the message has cryptographically verifiable signatures that attest to the message's provenance, content analyzers714to determine the message's validity, and artificial intelligence or machine learning filters716that have been trained to determine if a message should be permitted access to the second domain. The content filters710-716can be hosted in a host machine718, where the host machine can be a virtual machine or a bare metal server. The filters can be modules that can accept a message payload, reject a message payload, or transform a message payload into a different format. In some implementations, an application programming interface (API) can be provided to the client so that the client can generate the data pipeline. The data pipeline can include a sequence of content filters that can be used to analyze messages. The client can select the sequence of content filters via the API. As part of generating the data pipeline, the client can define the attributes of a cross domain solution (CDS) via the API. The data pipeline can be constructed based at least in part on the defined attributes. In an additional implementation, the client can select, using the API, an order for the sequence of content filters. The order for the content filters can be variable and the order for content filters can change between messages. The client can also select multiple sequences of content filters where the sequence of filters for a given message can change based on indicators of trustworthiness for that message. For example, messages from known internet protocol (IP) addresses can be analyzed by fewer content filters. In some implementations, events generated by the content filters710-716can be provided to a logging network720as part of the data pipeline. The events received at logging network720can be provided by host machine718as a log of events occurring in the data pipeline. The log of events can be accepted at a security information and event management (SIEM) system and the logs, or information about the logs, can be provided to the client via the API. At block808, the message in the data pipeline can be forwarded to a destination node in the second domain. In some implementations, after receiving the message, the client can terminate the data pipeline using the API. In some implementations, the client can generate, and terminate, a data pipeline for individual messages. 
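The interception-and-repackaging step described for block804might look something like the following sketch, in which an already-terminated payload is split into sequence-numbered datagrams at the first node and reassembled at the second node. The header layout, chunk size, and in-memory reassembly are assumptions made purely for illustration.

# Hedged sketch: packetize a payload into sequence-numbered chunks suitable for
# a connectionless one-way hop, then reassemble them on the far side.
import struct

CHUNK = 1400                      # assumed datagram payload size
HEADER = struct.Struct("!IIH")    # message id, sequence number, flags (assumed)
LAST_FLAG = 1

def packetize(message_id: int, payload: bytes):
    chunks = [payload[i:i + CHUNK] for i in range(0, len(payload), CHUNK)] or [b""]
    for seq, chunk in enumerate(chunks):
        flags = LAST_FLAG if seq == len(chunks) - 1 else 0
        yield HEADER.pack(message_id, seq, flags) + chunk

def reassemble(datagrams) -> bytes:
    # Assumes all datagrams for one message arrive and belong to the same id;
    # a real second node would buffer per message id and tolerate reordering.
    parts = {}
    for dgram in datagrams:
        _mid, seq, _flags = HEADER.unpack(dgram[:HEADER.size])
        parts[seq] = dgram[HEADER.size:]
    return b"".join(parts[i] for i in sorted(parts))

# Example round trip:
# original = b"x" * 5000
# assert reassemble(list(packetize(7, original))) == original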
In some implementations, information about the messages can be provided to trusted repositories722using a secure pipeline. In one embodiment, information about the message, in this case secure hash algorithm validation724information, can be provided to a secure first node726in the secure pipeline. Secure first node726can be configured like first nodes704a-cand messages can be sent from secure first node726to a secure second node728using a one-way communication protocol. The message can be received at secure second node728and secure second node728can be configured like second nodes708a-c. Messages received at the secure second node728can be forwarded to trusted repositories722. FIG.9Ashows a user interface (UI)900for configuring a cloud network according to an embodiment. A user can configure the cloud network by accessing the user interface with a computing device. The cloud network can be configured to include a cross domain solution gateway. A user can select a cross domain solution gateway menu by selecting the cross domain solution gateway button902. FIG.9Bshows a user interface (UI)901for configuring a cross domain solution according to an embodiment. The cross domain solution can be a virtual cross domain solution. The virtual cross domain solution can be an appliance created via an application programming interface (API). A user can create a cross domain solution gateway by selecting the “create cross domain solution gateway” button904. The user can configure the gateway using the user interface901. For instance, the user can select the direction for the cross domain solution. The user can also select which networks, or subnetworks, are connected by the cross domain solution. A user can also select one or more filters that can scan messages received at the cross domain system through the UI. The user can provide a filter sequence through the UI. FIG.10shows a method for a software implemented cross-domain solution according to certain embodiments. In some implementations, one or more process blocks ofFIG.10may be performed by a network interface card. In some implementations, the network interface card can be a smart network interface card (e.g., Smart NIC). In some implementations, one or more process blocks ofFIG.10may be performed by another device or a group of devices separate from or including the network interface card. Turning to process1000in further detail, at block1010, a message intended for the disconnected network and sent using a first communication protocol is received at a first node of a network interface card (NIC) associated with a disconnected network. The first node can be similar to first nodes704a-cfromFIG.7and the message can be received from a private network or a public network such as the Internet. The first node can be configured so that the first node cannot receive messages sent by the second node. At block1020, the message is sent from the first node to a second node of the network interface card using a second communication protocol. The second communication protocol can be configured for unidirectional (e.g., one-way) communication. In some implementations, the second communication protocol can be user datagram protocol (UDP). The second communication protocol can be any communication protocol that can be configured to allow communication exclusively in one direction. The second node can be similar to second nodes708a-cdescribed above in relation toFIG.7. 
The first node and the second node can be connected by a network cable such as an Ethernet cable or fiber optic cable. In some implementations, the network cable connecting the first node and the second node does not include a diode. In some implementations, the second communication protocol can be the same as the first communication protocol. In some implementations the first node and the second node are connected wirelessly. The first node and the second node can be located on separate devices. At block1030, the message is received at the second node. In some implementations, the second node is configured so that messages cannot be sent from the second node to the first node. In some implementations, the first node and the second node can be located on different devices. The first node and second node can communicate via a wireless connection. At block1040, the message is sent from the second node to a destination node of the disconnected network using a third communication protocol. In some implementations, the disconnected network can be isolated from a public network (e.g., the Internet). In some implementations, the disconnected network is configured to only receive messages and cannot send messages to destination nodes outside of the disconnected network. In some implementations, the disconnected network comprises a virtual cloud network. In some implementations, the message, after leaving the second node, passes through a filter chain before arriving at the destination node. The filter chain can include one or more of a malware filter, a content filter, a signature filter, or a content analyzer. The aforementioned filters can use artificial intelligence and/or machine learning (AI/ML) to adapt to new malware or attacks. In some embodiments, training or test data is sent inline from a trusted source. In other embodiments, pre-trained AI/ML models produced elsewhere are uploaded from a trusted source to perform the filtering. In some implementations, the third communication protocol can be the same protocol as the first or second communication protocol. Process1000may include additional implementations, such as any single implementation or any combination of implementations described below and/or in connection with one or more other processes described elsewhere herein. AlthoughFIG.10shows example blocks of process1000, in some implementations, process1000may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.10. Additionally, or alternatively, two or more of the blocks of process1000may be performed in parallel. FIG.11shows a method for a software as a service (SaaS) based cross-domain solution according to certain embodiments. In some implementations, one or more process blocks ofFIG.11may be performed by a computer device of a virtual cloud network. In some implementations, one or more process blocks ofFIG.11may be performed by another device or a group of devices separate from or including the network interface card. At block1110, one or more filters are selected by a computer device of a virtual cloud network from a plurality of filters for a data pipeline, the plurality of filters comprising at least one of: a malware filter; a content filter; a signature filter; a content analyzer; or an AI/ML filter. The filters, and the ability to update the filters, can be exposed to a customer via an API. The customer can send marked training (test) data through the system. 
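Purely as an illustration of blocks1110and1120, the hypothetical helper below validates a customer's filter selection and ordering against the available filter types before a pipeline is built. The configuration shape and function names are assumptions; no particular cloud provider API is implied.

# Hedged sketch: validate a customer-supplied filter selection and ordering.
AVAILABLE_FILTERS = {
    "malware_filter",
    "content_filter",
    "signature_filter",
    "content_analyzer",
    "ai_ml_filter",
}

def build_pipeline_config(selected: list[str], order: list[str]) -> list[str]:
    """Return the ordered filter list for the data pipeline, or raise."""
    unknown = set(selected) - AVAILABLE_FILTERS
    if unknown:
        raise ValueError(f"unknown filter types: {sorted(unknown)}")
    if sorted(order) != sorted(selected):
        raise ValueError("order must be a permutation of the selected filters")
    return list(order)

# Example: a customer selects two filters and wants malware scanning first.
# pipeline = build_pipeline_config(
#     selected=["content_filter", "malware_filter"],
#     order=["malware_filter", "content_filter"],
# )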
In another embodiment, other sources such as the cloud service provider, the owner of the disconnected network, security analysts, or other trusted sources can likewise send learning and training data into the AI/ML system. The customer may select the sources and may define the criteria, such as frequency, applicable filter, and/or audit period. A customer (e.g., client or user) can select the plurality of filters for a data pipeline. In some other embodiments, the customer may pre-train the AI/ML model and send the trained model instead of the training data. In some implementations, the virtual cloud network is a virtual machine. In some implementations, the one or more selected filters are selected based at least in part on a source of the message. In some implementations, a plurality of the one or more filters are selected for a same source of the message. At block1120, a sequential order for the one or more selected filters in the data pipeline is determined. A customer (e.g., client or user) can determine the sequential order. In some implementations, the determined sequential order is determined based at least in part on a source of the message. In some implementations, the order of the one or more selected filters is determined based at least in part on a source of the message. The filters can include an artificial intelligence and/or machine learning (AI/ML) filter. The AI/ML filter can use a pretrained artificial intelligence or machine learning model. The AI/ML filter can also use an artificial intelligence or machine learning model that is trained on training data obtained from the disconnected network. The training data can include the packet origin, characteristics of known viruses or malware, or traffic patterns of traffic received at the disconnected network. The AI/ML filter can be trained on training data including packets flagged by content filters. The flagged packets can be packets that were identified as containing malware or a virus. The AI/ML filter can be trained to identify packets containing malware or a virus using the flagged packets. At block1130, a message in the data pipeline from a network interface card (NIC) is received, the network interface card being configured as a one-way transfer device. In some implementations, the network interface card comprises a software-based one-way transfer device. The network interface card can be a single device or one or more devices. At block1140, the message in the data pipeline is filtered by passing the message through the one or more selected filters in the determined sequential order. The sequential order can change based on the source of a message. In some circumstances, the number of filters can depend on the message. The sequential order of the filters can also vary between messages. The number of filters can also vary from message to message. At block1150, logs of events occurring in the data pipeline are provided via a logging network. The logs can be provided to a set of trusted repositories and in some implementations, information from the logs can be provided to the client via the application programming interface (API). Process1100may include additional implementations, such as any single implementation or any combination of implementations described below and/or in connection with one or more other processes described elsewhere herein. 
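As a deliberately simplified stand-in for the AI/ML training described above (not the disclosed model), the sketch below builds a toy signature set from packets flagged by the other content filters and uses it to screen later packets.

# Hedged sketch: learn byte signatures from packets that content filters have
# flagged, then block later packets carrying any learned signature. A real
# AI/ML filter would use far richer features (origin, traffic patterns, etc.).
SIGNATURE_LEN = 8  # assumed length of the byte signature extracted per packet

class ToySignatureModel:
    def __init__(self):
        self.bad_signatures: set[bytes] = set()

    def train_on_flagged(self, flagged_packets: list[bytes]) -> None:
        for pkt in flagged_packets:
            if len(pkt) >= SIGNATURE_LEN:
                self.bad_signatures.add(pkt[:SIGNATURE_LEN])

    def allows(self, packet: bytes) -> bool:
        return not any(sig in packet for sig in self.bad_signatures)

# model = ToySignatureModel()
# model.train_on_flagged([b"MALWAREXpayload..."])
# model.allows(b"clean data")       -> True
# model.allows(b"MALWAREX...more")  -> False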
In some implementations, process1100includes removing the one or more selected filters from the data pipeline after the message is processed by the one or more selected filters in the determined sequential order. AlthoughFIG.11shows example blocks of process1100, in some implementations, process1100may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.11. Additionally, or alternatively, two or more of the blocks of process1100may be performed in parallel. FIG.12shows a method1200for a cross-domain solution with disaggregated parts according to certain embodiments. In some implementations, one or more process blocks ofFIG.12may be performed by a computing device of a disconnected network. In some implementations, one or more process blocks ofFIG.12may be performed by another device or a group of devices separate from or including the network interface card. At block1210, an application programming interface (API) configured to present a set of filter types is generated by a computing device of a disconnected network. In some implementations, the API can be a user interface (e.g., console) such as the user interface described above in relation toFIG.9. The filter types can include one or more of a malware filter, a content filter, a signature filter, a content analyzer, a machine learning filter, or an artificial intelligence filter. The API can be part of providing a cross-domain solution as a service. The filters can include one or more artificial intelligence and/or machine learning (AI/ML) filters. The AI/ML filter can use a pretrained artificial intelligence and/or machine learning model. The AI/ML filter can also use an artificial intelligence and/or machine learning model that is trained on training data obtained from the disconnected network. The training data can include the packet origin, characteristics of known viruses or malware, or traffic patterns of traffic received at the disconnected network. The AI/ML filter can be trained on training data including packets flagged by content filters. The flagged packets can be packets that were identified as containing malware or a virus. The AI/ML filter can be trained to identify packets containing malware or a virus using the flagged packets. At block1220, a selection of one or more filter types, from the set of filter types, is received via the application programming interface. The selection of one or more filter types can be provided by a customer (e.g., client or user). The one or more filter types can be selected as part of configuring a cross-domain solution. The cross-domain solution can be configured via an application programming interface (API). The API can be provided to a user through a web service (e.g., cross-domain solution as a service (CDSaaS)). The API can be used to construct, generate or modify one or more cross domain solution instances. In some implementations, the selection of filter types can change between messages. In some implementations, the selection of the filter types can be based in part on the source of the message. At block1230, a sequential order for the selected filter types is received via the application programming interface. The sequential order of the one or more filter types can be provided by a customer (e.g., client or user). The order of the one or more filter types can be selected as part of configuring a cross-domain solution. The cross-domain solution can be provided as a cross-domain solution as a service. 
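Blocks1240,1260,1270, and1280, discussed further below, amount to a create/inspect/terminate lifecycle for the pipeline. The hypothetical wrapper below sketches that lifecycle in Python; the class, method names, and in-memory event log are assumptions for illustration and do not correspond to any particular provider's API.

# Hedged sketch of a pipeline lifecycle: generate a pipeline from the selected
# filter types and order, record events, expose the log, and terminate.
import time

class CrossDomainPipeline:
    def __init__(self, filter_types: list[str]):
        self.filter_types = list(filter_types)  # order as received at block 1230
        self.events: list[dict] = []
        self.active = True
        self._log("pipeline_created", filters=self.filter_types)

    def _log(self, kind: str, **details) -> None:
        self.events.append({"time": time.time(), "kind": kind, **details})

    def analyze(self, message: bytes) -> bool:
        if not self.active:
            raise RuntimeError("pipeline has been terminated")
        for name in self.filter_types:
            # Placeholder: each filter type would run its real check here.
            self._log("filter_ran", filter=name, size=len(message))
        return True

    def event_log(self) -> list[dict]:
        return list(self.events)        # block 1270: present the log

    def terminate(self) -> None:        # block 1280: terminate on command
        self.active = False
        self._log("pipeline_terminated")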
In some implementations, the sequential order of filter types can change between messages. In some implementations, the sequential order of the filter types can be based in part on the source of the message. At block1240, a data pipeline, with the selection of filters in the sequential order, is generated by the computing device of the disconnected network and in response to a command received via the application programming interface. In some implementations, the disconnected network can be a virtual cloud network. The customer (e.g., client or user) can configure the virtual cloud network as part of providing a cross-domain solution as a service. At block1250, a message received at a one-way transfer device is analyzed by the computing device of the disconnected network by passing the message through the selected filters in the sequential order. The one-way transfer device can be a software-based one-way transfer device. In some implementations, the one-way transfer device can be a smart network interface card (Smart NIC). At block1260, a log of events occurring in the data pipeline is received by a logging network of the disconnected network. The log of events can include events taking place at an operating system (OS) level, an application level, and a payload level. At block1270, the log of events is presented via the application programming interface. At block1280, the data pipeline is terminated upon receiving a termination command via the application programming interface. Process1200may include additional implementations, such as any single implementation or any combination of implementations described below and/or in connection with one or more other processes described elsewhere herein. In some implementations, process1200includes sending messages from the disconnected network to a trusted repository via a one-way transfer device. AlthoughFIG.12shows example blocks of process1200, in some implementations, process1200may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.12. Additionally, or alternatively, two or more of the blocks of process1200may be performed in parallel. The term cloud service is generally used to refer to a service that is made available by a cloud services provider (CSP) to users or customers on demand (e.g., via a subscription model) using systems and infrastructure (cloud infrastructure) provided by the CSP. Typically, the servers and systems that make up the CSP's infrastructure are separate from the customer's own on-premise servers and systems. Customers can thus avail themselves of cloud services provided by the CSP without having to purchase separate hardware and software resources for the services. Cloud services are designed to provide a subscribing customer easy, scalable access to applications and computing resources without the customer having to invest in procuring the infrastructure that is used for providing the services. There are several cloud service providers that offer various types of cloud services. There are various different types or models of cloud services including Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), Infrastructure-as-a-Service (IaaS), and others. A customer can subscribe to one or more cloud services provided by a CSP. The customer can be any entity such as an individual, an organization, an enterprise, and the like. 
When a customer subscribes to or registers for a service provided by a CSP, a tenancy or an account is created for that customer. The customer can then, via this account, access the subscribed-to one or more cloud resources associated with the account. As noted above, infrastructure as a service (IaaS) is one particular type of cloud computing service. In an IaaS model, the CSP provides infrastructure (referred to as cloud services provider infrastructure or CSPI) that can be used by customers to build their own customizable networks and deploy customer resources. The customer's resources and networks are thus hosted in a distributed environment by infrastructure provided by a CSP. This is different from traditional computing, where the customer's resources and networks are hosted by infrastructure provided by the customer. The CSPI may comprise interconnected high-performance compute resources including various host machines, memory resources, and network resources that form a physical network, which is also referred to as a substrate network or an underlay network. The resources in CSPI may be spread across one or more data centers that may be geographically spread across one or more geographical regions. Virtualization software may be executed by these physical resources to provide a virtualized distributed environment. The virtualization creates an overlay network (also known as a software-based network, a software-defined network, or a virtual network) over the physical network. The CSPI physical network provides the underlying basis for creating one or more overlay or virtual networks on top of the physical network. The virtual or overlay networks can include one or more virtual cloud networks (VCNs). The virtual networks are implemented using software virtualization technologies (e.g., hypervisors, functions performed by network virtualization devices (NVDs) (e.g., smartNICs), top-of-rack (TOR) switches, smart TORs that implement one or more functions performed by an NVD, and other mechanisms) to create layers of network abstraction that can be run on top of the physical network. Virtual networks can take on many forms, including peer-to-peer networks, IP networks, and others. Virtual networks are typically either Layer-3 IP networks or Layer-2 VLANs. This method of virtual or overlay networking is often referred to as virtual or overlay Layer-3 networking. Examples of protocols developed for virtual networks include IP-in-IP (or Generic Routing Encapsulation (GRE)), Virtual Extensible LAN (VXLAN—IETF RFC 7348), Virtual Private Networks (VPNs) (e.g., MPLS Layer-3 Virtual Private Networks (RFC 4364)), VMware's NSX, GENEVE (Generic Network Virtualization Encapsulation), and others. For IaaS, the infrastructure (CSPI) provided by a CSP can be configured to provide virtualized computing resources over a public network (e.g., the Internet). In an IaaS model, a cloud computing services provider can host the infrastructure components (e.g., servers, storage devices, network nodes (e.g., hardware), deployment software, platform virtualization (e.g., a hypervisor layer), or the like). In some cases, an IaaS provider may also supply a variety of services to accompany those infrastructure components (e.g., billing, monitoring, logging, security, load balancing and clustering, etc.). Thus, as these services may be policy-driven, IaaS users may be able to implement policies to drive load balancing to maintain application availability and performance. 
CSPI provides infrastructure and a set of complementary cloud services that enable customers to build and run a wide range of applications and services in a highly available hosted distributed environment. CSPI offers high-performance compute resources and capabilities and storage capacity in a flexible virtual network that is securely accessible from various networked locations such as from a customer's on-premises network. When a customer subscribes to or registers for an IaaS service provided by a CSP, the tenancy created for that customer is a secure and isolated partition within the CSPI where the customer can create, organize, and administer their cloud resources. Customers can build their own virtual networks using compute, memory, and networking resources provided by CSPI. One or more customer resources or workloads, such as compute instances, can be deployed on these virtual networks. For example, a customer can use resources provided by CSPI to build one or multiple customizable and private virtual network(s) referred to as virtual cloud networks (VCNs). A customer can deploy one or more customer resources, such as compute instances, on a customer VCN. Compute instances can take the form of virtual machines, bare metal instances, and the like. The CSPI thus provides infrastructure and a set of complementary cloud services that enable customers to build and run a wide range of applications and services in a highly available virtual hosted environment. The customer does not manage or control the underlying physical resources provided by CSPI but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., firewalls). The CSP may provide a console that enables customers and network administrators to configure, access, and manage resources deployed in the cloud using CSPI resources. In certain embodiments, the console provides a web-based user interface that can be used to access and manage CSPI. In some implementations, the console is a web-based application provided by the CSP. CSPI may support single-tenancy or multi-tenancy architectures. In a single tenancy architecture, a software (e.g., an application, a database) or a hardware component (e.g., a host machine or a server) serves a single customer or tenant. In a multi-tenancy architecture, a software or a hardware component serves multiple customers or tenants. Thus, in a multi-tenancy architecture, CSPI resources are shared between multiple customers or tenants. In a multi-tenancy situation, precautions are taken and safeguards put in place within CSPI to ensure that each tenant's data is isolated and remains invisible to other tenants. In a physical network, a network endpoint (“endpoint”) refers to a computing device or system that is connected to a physical network and communicates back and forth with the network to which it is connected. A network endpoint in the physical network may be connected to a Local Area Network (LAN), a Wide Area Network (WAN), or other type of physical network. Examples of traditional endpoints in a physical network include modems, hubs, bridges, switches, routers, and other networking devices, physical computers (or host machines), and the like. Each physical device in the physical network has a fixed network address that can be used to communicate with the device. This fixed network address can be a Layer-2 address (e.g., a MAC address), a fixed Layer-3 address (e.g., an IP address), and the like. 
In a virtualized environment or in a virtual network, the endpoints can include various virtual endpoints such as virtual machines that are hosted by components of the physical network (e.g., hosted by physical host machines). These endpoints in the virtual network are addressed by overlay addresses such as overlay Layer-2 addresses (e.g., overlay MAC addresses) and overlay Layer-3 addresses (e.g., overlay IP addresses). Network overlays enable flexibility by allowing network managers to move around the overlay addresses associated with network endpoints using software management (e.g., via software implementing a control plane for the virtual network). Accordingly, unlike in a physical network, in a virtual network, an overlay address (e.g., an overlay IP address) can be moved from one endpoint to another using network management software. Since the virtual network is built on top of a physical network, communications between components in the virtual network involve both the virtual network and the underlying physical network. In order to facilitate such communications, the components of CSPI are configured to learn and store mappings that map overlay addresses in the virtual network to actual physical addresses in the substrate network, and vice versa. These mappings are then used to facilitate the communications. Customer traffic is encapsulated to facilitate routing in the virtual network. Accordingly, physical addresses (e.g., physical IP addresses) are associated with components in physical networks and overlay addresses (e.g., overlay IP addresses) are associated with entities in virtual networks. Both the physical IP addresses and overlay IP addresses are types of real IP addresses. These are separate from virtual IP addresses, where a virtual IP address maps to multiple real IP addresses. A virtual IP address provides a 1-to-many mapping between the virtual IP address and multiple real IP addresses. The cloud infrastructure or CSPI is physically hosted in one or more data centers in one or more regions around the world. The CSPI may include components in the physical or substrate network and virtualized components (e.g., virtual networks, compute instances, virtual machines, etc.) that are in a virtual network built on top of the physical network components. In certain embodiments, the CSPI is organized and hosted in realms, regions and availability domains. A region is typically a localized geographic area that contains one or more data centers. Regions are generally independent of each other and can be separated by vast distances, for example, across countries or even continents. For example, a first region may be in Australia, another one in Japan, yet another one in India, and the like. CSPI resources are divided among regions such that each region has its own independent subset of CSPI resources. Each region may provide a set of core infrastructure services and resources, such as compute resources (e.g., bare metal servers, virtual machines, containers and related infrastructure, etc.); storage resources (e.g., block volume storage, file storage, object storage, archive storage); networking resources (e.g., virtual cloud networks (VCNs), load balancing resources, connections to on-premise networks); database resources; edge networking resources (e.g., DNS); and access management and monitoring resources, and others. Each region generally has multiple paths connecting it to other regions in the realm. 
Generally, an application is deployed in a region (i.e., deployed on infrastructure associated with that region) where it is most heavily used, because using nearby resources is faster than using distant resources. Applications can also be deployed in different regions for various reasons, such as redundancy to mitigate the risk of region-wide events such as large weather systems or earthquakes, to meet varying requirements for legal jurisdictions, tax domains, and other business or social criteria, and the like. The data centers within a region can be further organized and subdivided into availability domains (ADs). An availability domain may correspond to one or more data centers located within a region. A region can be composed of one or more availability domains. In such a distributed environment, CSPI resources are either region-specific, such as a virtual cloud network (VCN), or availability domain-specific, such as a compute instance. ADs within a region are isolated from each other, fault tolerant, and are configured such that they are very unlikely to fail simultaneously. This is achieved by the ADs not sharing critical infrastructure resources such as networking, physical cables, cable paths, cable entry points, etc., such that a failure at one AD within a region is unlikely to impact the availability of the other ADs within the same region. The ADs within the same region may be connected to each other by a low latency, high bandwidth network, which makes it possible to provide high-availability connectivity to other networks (e.g., the Internet, customers' on-premise networks, etc.) and to build replicated systems in multiple ADs for both high-availability and disaster recovery. Cloud services use multiple ADs to ensure high availability and to protect against resource failure. As the infrastructure provided by the IaaS provider grows, more regions and ADs may be added with additional capacity. Traffic between availability domains is usually encrypted. In certain embodiments, regions are grouped into realms. A realm is a logical collection of regions. Realms are isolated from each other and do not share any data. Regions in the same realm may communicate with each other, but regions in different realms cannot. A customer's tenancy or account with the CSP exists in a single realm and can be spread across one or more regions that belong to that realm. Typically, when a customer subscribes to an IaaS service, a tenancy or account is created for that customer in the customer-specified region (referred to as the “home” region) within a realm. A customer can extend the customer's tenancy across one or more other regions within the realm. A customer cannot access regions that are not in the realm where the customer's tenancy exists. An IaaS provider can provide multiple realms, each realm catered to a particular set of customers or users. For example, a commercial realm may be provided for commercial customers. As another example, a realm may be provided for a specific country for customers within that country. As yet another example, a government realm may be provided for a government, and the like. For example, the government realm may be catered for a specific government and may have a heightened level of security than a commercial realm. For example, Oracle Cloud Infrastructure (OCI) currently offers a realm for commercial regions and two realms (e.g., FedRAMP authorized and IL5 authorized) for government cloud regions. 
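As a simplified, hypothetical illustration of the realm, region, and availability domain hierarchy described above, the following Python sketch represents that hierarchy as nested data and shows that only the regions of the tenancy's own realm are reachable; the realm, region, and AD names are illustrative assumptions, not actual identifiers of any cloud provider.

    # Hypothetical representation of the realm / region / availability-domain hierarchy.
    realms = {
        "commercial": {
            "us-region-1": ["AD-1", "AD-2", "AD-3"],
            "ap-region-1": ["AD-1"],
        },
        "government": {
            "gov-region-1": ["AD-1", "AD-2", "AD-3"],
        },
    }

    def regions_reachable_from(realm):
        """A tenancy exists in a single realm, so only that realm's regions are reachable."""
        return sorted(realms.get(realm, {}))

    print(regions_reachable_from("commercial"))   # ['ap-region-1', 'us-region-1']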
In certain embodiments, an AD can be subdivided into one or more fault domains. A fault domain is a grouping of infrastructure resources within an AD to provide anti-affinity. Fault domains allow for the distribution of compute instances such that the instances are not on the same physical hardware within a single AD. This is known as anti-affinity. A fault domain refers to a set of hardware components (computers, switches, and more) that share a single point of failure. A compute pool is logically divided up into fault domains. Due to this, a hardware failure or compute hardware maintenance event that affects one fault domain does not affect instances in other fault domains. Depending on the embodiment, the number of fault domains for each AD may vary. For instance, in certain embodiments each AD contains three fault domains. A fault domain acts as a logical data center within an AD. When a customer subscribes to an IaaS service, resources from CSPI are provisioned for the customer and associated with the customer's tenancy. The customer can use these provisioned resources to build private networks and deploy resources on these networks. The customer networks that are hosted in the cloud by the CSPI are referred to as virtual cloud networks (VCNs). A customer can set up one or more virtual cloud networks (VCNs) using CSPI resources allocated for the customer. A VCN is a virtual or software defined private network. The customer resources that are deployed in the customer's VCN can include compute instances (e.g., virtual machines, bare-metal instances) and other resources. These compute instances may represent various customer workloads such as applications, load balancers, databases, and the like. A compute instance deployed on a VCN can communicate with public accessible endpoints (“public endpoints”) over a public network such as the Internet, with other instances in the same VCN or other VCNs (e.g., the customer's other VCNs, or VCNs not belonging to the customer), with the customer's on-premise data centers or networks, and with service endpoints, and other types of endpoints. The CSP may provide various services using the CSPI. In some instances, customers of CSPI may themselves act like service providers and provide services using CSPI resources. A service provider may expose a service endpoint, which is characterized by identification information (e.g., an IP Address, a DNS name and port). A customer's resource (e.g., a compute instance) can consume a particular service by accessing a service endpoint exposed by the service for that particular service. These service endpoints are generally endpoints that are publicly accessible by users using public IP addresses associated with the endpoints via a public communication network such as the Internet. Network endpoints that are publicly accessible are also sometimes referred to as public endpoints. In certain embodiments, a service provider may expose a service via an endpoint (sometimes referred to as a service endpoint) for the service. Customers of the service can then use this service endpoint to access the service. In certain implementations, a service endpoint provided for a service can be accessed by multiple customers that intend to consume that service. In other implementations, a dedicated service endpoint may be provided for a customer such that only that customer can access the service using that dedicated service endpoint. 
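By way of a simplified, hypothetical sketch of the anti-affinity idea described above (assuming three fault domains per AD, as in the example given), the following Python snippet spreads compute instances round-robin across fault domains; the instance and fault-domain names are illustrative only.

    # Minimal sketch of anti-affinity placement across fault domains within one AD.
    from itertools import cycle

    FAULT_DOMAINS = ["FAULT-DOMAIN-1", "FAULT-DOMAIN-2", "FAULT-DOMAIN-3"]

    def place_instances(instance_names):
        """Spread instances round-robin so consecutive instances land in different fault domains."""
        assignment = {}
        fd_cycle = cycle(FAULT_DOMAINS)
        for name in instance_names:
            assignment[name] = next(fd_cycle)
        return assignment

    print(place_instances(["web-1", "web-2", "web-3", "web-4"]))
    # {'web-1': 'FAULT-DOMAIN-1', 'web-2': 'FAULT-DOMAIN-2',
    #  'web-3': 'FAULT-DOMAIN-3', 'web-4': 'FAULT-DOMAIN-1'}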
In certain embodiments, when a VCN is created, it is associated with a private overlay Classless Inter-Domain Routing (CIDR) address space, which is a range of private overlay IP addresses that are assigned to the VCN (e.g., 10.0/16). A VCN includes associated subnets, route tables, and gateways. A VCN resides within a single region but can span one or more or all of the region's availability domains. A gateway is a virtual interface that is configured for a VCN and enables communication of traffic to and from the VCN to one or more endpoints outside the VCN. One or more different types of gateways may be configured for a VCN to enable communication to and from different types of endpoints. A VCN can be subdivided into one or more sub-networks such as one or more subnets. A subnet is thus a unit of configuration or a subdivision that can be created within a VCN. A VCN can have one or multiple subnets. Each subnet within a VCN is associated with a contiguous range of overlay IP addresses (e.g., 10.0.0.0/24 and 10.0.1.0/24) that do not overlap with other subnets in that VCN and which represent an address space subset within the address space of the VCN. Each compute instance is associated with a virtual network interface card (VNIC) that enables the compute instance to participate in a subnet of a VCN. A VNIC is a logical representation of a physical Network Interface Card (NIC). In general, a VNIC is an interface between an entity (e.g., a compute instance, a service) and a virtual network. A VNIC exists in a subnet and has one or more associated IP addresses and associated security rules or policies. A VNIC is equivalent to a Layer-2 port on a switch. A VNIC is attached to a compute instance and to a subnet within a VCN. A VNIC associated with a compute instance enables the compute instance to be a part of a subnet of a VCN and enables the compute instance to communicate (e.g., send and receive packets) with endpoints that are on the same subnet as the compute instance, with endpoints in different subnets in the VCN, or with endpoints outside the VCN. The VNIC associated with a compute instance thus determines how the compute instance connects with endpoints inside and outside the VCN. A VNIC for a compute instance is created and associated with that compute instance when the compute instance is created and added to a subnet within a VCN. For a subnet comprising a set of compute instances, the subnet contains the VNICs corresponding to the set of compute instances, each VNIC attached to a compute instance within the set of compute instances. Each compute instance is assigned a private overlay IP address via the VNIC associated with the compute instance. This private overlay IP address is assigned to the VNIC that is associated with the compute instance when the compute instance is created, and is used for routing traffic to and from the compute instance. All VNICs in a given subnet use the same route table, security lists, and DHCP options. As described above, each subnet within a VCN is associated with a contiguous range of overlay IP addresses (e.g., 10.0.0.0/24 and 10.0.1.0/24) that do not overlap with other subnets in that VCN and which represent an address space subset within the address space of the VCN. For a VNIC on a particular subnet of a VCN, the private overlay IP address that is assigned to the VNIC is an address from the contiguous range of overlay IP addresses allocated for the subnet. 
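The addressing constraints described above (each subnet CIDR falls within the VCN CIDR and does not overlap any other subnet in that VCN) can be sketched with Python's standard ipaddress module as follows; the CIDR values follow the examples in the text, and the function name is hypothetical.

    # Sketch of the subnet addressing constraints: subnets must fall within the VCN CIDR
    # and must not overlap one another.
    import ipaddress

    def validate_subnets(vcn_cidr, subnet_cidrs):
        vcn = ipaddress.ip_network(vcn_cidr)
        subnets = [ipaddress.ip_network(c) for c in subnet_cidrs]
        for s in subnets:
            if not s.subnet_of(vcn):
                raise ValueError(str(s) + " is not within VCN CIDR " + str(vcn))
        for i, a in enumerate(subnets):
            for b in subnets[i + 1:]:
                if a.overlaps(b):
                    raise ValueError("subnets " + str(a) + " and " + str(b) + " overlap")

    validate_subnets("10.0.0.0/16", ["10.0.0.0/24", "10.0.1.0/24"])   # passes silently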
In certain embodiments, a compute instance may optionally be assigned additional overlay IP addresses in addition to the private overlay IP address, such as, for example, one or more public IP addresses if in a public subnet. These multiple addresses are assigned either on the same VNIC or over multiple VNICs that are associated with the compute instance. Each instance however has a primary VNIC that is created during instance launch and is associated with the overlay private IP address assigned to the instance—this primary VNIC cannot be removed. Additional VNICs, referred to as secondary VNICs, can be added to an existing instance in the same availability domain as the primary VNIC. All the VNICs are in the same availability domain as the instance. A secondary VNIC can be in a subnet in the same VCN as the primary VNIC, or in a different subnet that is either in the same VCN or a different one. A compute instance may optionally be assigned a public IP address if it is in a public subnet. A subnet can be designated as either a public subnet or a private subnet at the time the subnet is created. A private subnet means that the resources (e.g., compute instances) and associated VNICs in the subnet cannot have public overlay IP addresses. A public subnet means that the resources and associated VNICs in the subnet can have public IP addresses. A customer can designate a subnet to exist either in a single availability domain or across multiple availability domains in a region or realm. As described above, a VCN may be subdivided into one or more subnets. In certain embodiments, a Virtual Router (VR) configured for the VCN (referred to as the VCN VR or just VR) enables communications between the subnets of the VCN. For a subnet within a VCN, the VR represents a logical gateway for that subnet that enables the subnet (i.e., the compute instances on that subnet) to communicate with endpoints on other subnets within the VCN, and with other endpoints outside the VCN. The VCN VR is a logical entity that is configured to route traffic between VNICs in the VCN and virtual gateways (“gateways”) associated with the VCN. Gateways are further described below with respect toFIG.1. A VCN VR is a Layer-3/IP Layer concept. In one embodiment, there is one VCN VR for a VCN where the VCN VR has potentially an unlimited number of ports addressed by IP addresses, with one port for each subnet of the VCN. In this manner, the VCN VR has a different IP address for each subnet in the VCN that the VCN VR is attached to. The VR is also connected to the various gateways configured for a VCN. In certain embodiments, a particular overlay IP address from the overlay IP address range for a subnet is reserved for a port of the VCN VR for that subnet. For example, consider a VCN having two subnets with associated address ranges 10.0/16 and 10.1/16, respectively. For the first subnet within the VCN with address range 10.0/16, an address from this range is reserved for a port of the VCN VR for that subnet. In some instances, the first IP address from the range may be reserved for the VCN VR. For example, for the subnet with overlay IP address range 10.0/16, IP address 10.0.0.1 may be reserved for a port of the VCN VR for that subnet. For the second subnet within the same VCN with address range 10.1/16, the VCN VR may have a port for that second subnet with IP address 10.1.0.1. The VCN VR has a different IP address for each of the subnets in the VCN. 
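As a minimal sketch of the convention described above, in which the first address of a subnet's range may be reserved for the VCN VR port (matching the 10.0.0.1 and 10.1.0.1 examples), the following snippet computes that reserved address; the helper name is hypothetical.

    # Sketch of reserving the first usable address in each subnet for the VCN VR port.
    import ipaddress

    def vr_port_address(subnet_cidr):
        """Return the first host address of the subnet, reserved here for the VCN VR port."""
        subnet = ipaddress.ip_network(subnet_cidr)
        return str(next(subnet.hosts()))

    print(vr_port_address("10.0.0.0/16"))   # 10.0.0.1
    print(vr_port_address("10.1.0.0/16"))   # 10.1.0.1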
In some other embodiments, each subnet within a VCN may have its own associated VR that is addressable by the subnet using a reserved or default IP address associated with the VR. The reserved or default IP address may, for example, be the first IP address from the range of IP addresses associated with that subnet. The VNICs in the subnet can communicate (e.g., send and receive packets) with the VR associated with the subnet using this default or reserved IP address. In such an embodiment, the VR is the ingress/egress point for that subnet. The VR associated with a subnet within the VCN can communicate with other VRs associated with other subnets within the VCN. The VRs can also communicate with gateways associated with the VCN. The VR function for a subnet is run on or executed by one or more NVDs executing VNIC functionality for VNICs in the subnet. Route tables, security rules, and DHCP options may be configured for a VCN. Route tables are virtual route tables for the VCN and include rules to route traffic from subnets within the VCN to destinations outside the VCN by way of gateways or specially configured instances. A VCN's route tables can be customized to control how packets are forwarded/routed to and from the VCN. DHCP options refer to configuration information that is automatically provided to the instances when they boot up. Security rules configured for a VCN represent overlay firewall rules for the VCN. The security rules can include ingress and egress rules, and specify the types of traffic (e.g., based upon protocol and port) that is allowed in and out of the instances within the VCN. The customer can choose whether a given rule is stateful or stateless. For instance, the customer can allow incoming SSH traffic from anywhere to a set of instances by setting up a stateful ingress rule with source CIDR 0.0.0.0/0 and destination TCP port 22. Security rules can be implemented using network security groups or security lists. A network security group consists of a set of security rules that apply only to the resources in that group. A security list, on the other hand, includes rules that apply to all the resources in any subnet that uses the security list. A VCN may be provided with a default security list with default security rules. DHCP options configured for a VCN provide configuration information that is automatically provided to the instances in the VCN when the instances boot up. In certain embodiments, the configuration information for a VCN is determined and stored by a VCN Control Plane. The configuration information for a VCN may include, for example, information about: the address range associated with the VCN, subnets within the VCN and associated information, one or more VRs associated with the VCN, compute instances in the VCN and associated VNICs, NVDs executing the various virtualization network functions (e.g., VNICs, VRs, gateways) associated with the VCN, state information for the VCN, and other VCN-related information. In certain embodiments, a VCN Distribution Service publishes the configuration information stored by the VCN Control Plane, or portions thereof, to the NVDs. The distributed information may be used to update information (e.g., forwarding tables, routing tables, etc.) stored and used by the NVDs to forward packets to and from the compute instances in the VCN. In certain embodiments, the creation of VCNs and subnets is handled by a VCN Control Plane (CP) and the launching of compute instances is handled by a Compute Control Plane. 
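As a simplified, hypothetical sketch of the stateful SSH ingress rule described above (source CIDR 0.0.0.0/0, destination TCP port 22), the following snippet evaluates whether a given flow matches the rule; the rule fields and function names are illustrative assumptions and do not correspond to any particular security list format.

    # Minimal sketch of evaluating an ingress rule such as "allow TCP/22 from 0.0.0.0/0".
    import ipaddress

    ingress_rules = [
        {"protocol": "tcp", "source": "0.0.0.0/0", "dest_port": 22, "stateful": True},
    ]

    def ingress_allowed(src_ip, protocol, dest_port):
        for rule in ingress_rules:
            if (rule["protocol"] == protocol
                    and rule["dest_port"] == dest_port
                    and ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["source"])):
                return True
        return False

    print(ingress_allowed("203.0.113.7", "tcp", 22))    # True
    print(ingress_allowed("203.0.113.7", "tcp", 3389))  # False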
The Compute Control Plane is responsible for allocating the physical resources for the compute instance and then calls the VCN Control Plane to create and attach VNICs to the compute instance. The VCN CP also sends VCN data mappings to the VCN data plane that is configured to perform packet forwarding and routing functions. In certain embodiments, the VCN CP provides a distribution service that is responsible for providing updates to the VCN data plane. Examples of a VCN Control Plane are also depicted inFIGS.18,19,20, and21(see references1816,1916,2016, and2116) and described below. A customer may create one or more VCNs using resources hosted by CSPI. A compute instance deployed on a customer VCN may communicate with different endpoints. These endpoints can include endpoints that are hosted by CSPI and endpoints outside CSPI. Various different architectures for implementing cloud-based services using CSPI are depicted inFIGS.13,14,15,16,17,18,19,20, and22, and are described below.FIG.13is a high level diagram of a distributed environment1300showing an overlay or customer VCN hosted by CSPI according to certain embodiments. The distributed environment depicted inFIG.13includes multiple components in the overlay network. Distributed environment1300depicted inFIG.13is merely an example and is not intended to unduly limit the scope of claimed embodiments. Many variations, alternatives, and modifications are possible. For example, in some implementations, the distributed environment depicted inFIG.13may have more or fewer systems or components than those shown inFIG.13, may combine two or more systems, or may have a different configuration or arrangement of systems. As shown in the example depicted inFIG.13, distributed environment1300comprises CSPI1301that provides services and resources that customers can subscribe to and use to build their virtual cloud networks (VCNs). In certain embodiments, CSPI1301offers IaaS services to subscribing customers. The data centers within CSPI1301may be organized into one or more regions. One example region “Region US”1302is shown inFIG.13. A customer has configured a customer VCN1304for region1302. The customer may deploy various compute instances on VCN1304, where the compute instances may include virtual machines or bare metal instances. Examples of instances include applications, databases, load balancers, and the like. In the embodiment depicted inFIG.13, customer VCN1304comprises two subnets, namely, "Subnet-1" and "Subnet-2", each subnet with its own CIDR IP address range. InFIG.13, the overlay IP address range for Subnet-1 is 10.0/16 and the address range for Subnet-2 is 10.1/16. A VCN Virtual Router1305represents a logical gateway for the VCN that enables communications between subnets of the VCN1304, and with other endpoints outside the VCN. VCN VR1305is configured to route traffic between VNICs in VCN1304and gateways associated with VCN1304. VCN VR1305provides a port for each subnet of VCN1304. For example, VR1305may provide a port with IP address 10.0.0.1 for Subnet-1 and a port with IP address 10.1.0.1 for Subnet-2. Multiple compute instances may be deployed on each subnet, where the compute instances can be virtual machine instances and/or bare metal instances. The compute instances in a subnet may be hosted by one or more host machines within CSPI1301. A compute instance participates in a subnet via a VNIC associated with the compute instance. 
For example, as shown inFIG.13, a compute instance C1 is part of Subnet-1 via a VNIC associated with the compute instance. Likewise, compute instance C2 is part of Subnet-1 via a VNIC associated with C2. In a similar manner, multiple compute instances, which may be virtual machine instances or bare metal instances, may be part of Subnet-1. Via its associated VNIC, each compute instance is assigned a private overlay IP address and a MAC address. For example, inFIG.13, compute instance C1 has an overlay IP address of 10.0.0.2 and a MAC address of M1, while compute instance C2 has a private overlay IP address of 10.0.0.3 and a MAC address of M2. Each compute instance in Subnet-1, including compute instances C1 and C2, has a default route to VCN VR1305using IP address 10.0.0.1, which is the IP address for a port of VCN VR1305for Subnet-1. Subnet-2 can have multiple compute instances deployed on it, including virtual machine instances and/or bare metal instances. For example, as shown inFIG.13, compute instances D1 and D2 are part of Subnet-2 via VNICs associated with the respective compute instances. In the embodiment depicted inFIG.13, compute instance D1 has an overlay IP address of 10.1.0.2 and a MAC address of MM1, while compute instance D2 has a private overlay IP address of 10.1.0.3 and a MAC address of MM2. Each compute instance in Subnet-2, including compute instances D1 and D2, has a default route to VCN VR1305using IP address 10.1.0.1, which is the IP address for a port of VCN VR1305for Subnet-2. VCN1304may also include one or more load balancers. For example, a load balancer may be provided for a subnet and may be configured to load balance traffic across multiple compute instances on the subnet. A load balancer may also be provided to load balance traffic across subnets in the VCN. A particular compute instance deployed on VCN1304can communicate with various different endpoints. These endpoints may include endpoints that are hosted by CSPI1301and endpoints outside CSPI1301. Endpoints that are hosted by CSPI1301may include: an endpoint on the same subnet as the particular compute instance (e.g., communications between two compute instances in Subnet-1); an endpoint on a different subnet but within the same VCN (e.g., communication between a compute instance in Subnet-1 and a compute instance in Subnet-2); an endpoint in a different VCN in the same region (e.g., communications between a compute instance in Subnet-1 and an endpoint in a VCN in the same region1306or1310, communications between a compute instance in Subnet-1 and an endpoint in service network1310in the same region); or an endpoint in a VCN in a different region (e.g., communications between a compute instance in Subnet-1 and an endpoint in a VCN in a different region1308). A compute instance in a subnet hosted by CSPI1301may also communicate with endpoints that are not hosted by CSPI1301(i.e., are outside CSPI1301). These outside endpoints include endpoints in the customer's on-premise network1316, endpoints within other remote cloud hosted networks1318, public endpoints1314accessible via a public network such as the Internet, and other endpoints. Communications between compute instances on the same subnet are facilitated using VNICs associated with the source compute instance and the destination compute instance. For example, compute instance C1 in Subnet-1 may want to send packets to compute instance C2 in Subnet-1. 
For a packet originating at a source compute instance and whose destination is another compute instance in the same subnet, the packet is first processed by the VNIC associated with the source compute instance. Processing performed by the VNIC associated with the source compute instance can include determining destination information for the packet from the packet headers, identifying any policies (e.g., security lists) configured for the VNIC associated with the source compute instance, determining a next hop for the packet, performing any packet encapsulation/decapsulation functions as needed, and then forwarding/routing the packet to the next hop with the goal of facilitating communication of the packet to its intended destination. When the destination compute instance is in the same subnet as the source compute instance, the VNIC associated with the source compute instance is configured to identify the VNIC associated with the destination compute instance and forward the packet to that VNIC for processing. The VNIC associated with the destination compute instance is then executed and forwards the packet to the destination compute instance. For a packet to be communicated from a compute instance in a subnet to an endpoint in a different subnet in the same VCN, the communication is facilitated by the VNICs associated with the source and destination compute instances and the VCN VR. For example, if compute instance C1 in Subnet-1 inFIG.13wants to send a packet to compute instance D1 in Subnet-2, the packet is first processed by the VNIC associated with compute instance C1. The VNIC associated with compute instance C1 is configured to route the packet to VCN VR1305using the default route or port 10.0.0.1 of the VCN VR. VCN VR1305is configured to route the packet to Subnet-2 using port 10.1.0.1. The packet is then received and processed by the VNIC associated with D1, and the VNIC forwards the packet to compute instance D1. For a packet to be communicated from a compute instance in VCN1304to an endpoint that is outside VCN1304, the communication is facilitated by the VNIC associated with the source compute instance, VCN VR1305, and gateways associated with VCN1304. One or more types of gateways may be associated with VCN1304. A gateway is an interface between a VCN and another endpoint, where that other endpoint is outside the VCN. A gateway is a Layer-3/IP layer concept and enables a VCN to communicate with endpoints outside the VCN. A gateway thus facilitates traffic flow between a VCN and other VCNs or networks. Various different types of gateways may be configured for a VCN to facilitate different types of communications with different types of endpoints. Depending upon the gateway, the communications may be over public networks (e.g., the Internet) or over private networks. Various communication protocols may be used for these communications. For example, compute instance C1 may want to communicate with an endpoint outside VCN1304. The packet may be first processed by the VNIC associated with source compute instance C1. The VNIC processing determines that the destination for the packet is outside Subnet-1 of C1. The VNIC associated with C1 may forward the packet to VCN VR1305for VCN1304. VCN VR1305then processes the packet and, as part of the processing, based upon the destination for the packet, determines a particular gateway associated with VCN1304as the next hop for the packet. VCN VR1305may then forward the packet to the particular identified gateway. 
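The forwarding decision made by a source VNIC, as described above, can be summarized in a simplified, hypothetical Python sketch: a destination in the same subnet is handed to the destination VNIC, a destination in another subnet of the VCN is handed to the VCN VR, and anything else goes to a gateway selected by the VR. The subnet CIDRs follow the FIG.13example; the assumed VCN-wide range of 10.0.0.0/8 and the function and labels are illustrative assumptions only.

    # Sketch of the next-hop decision a source VNIC makes for an outbound packet.
    import ipaddress

    VCN_CIDR = ipaddress.ip_network("10.0.0.0/8")    # assumed VCN-wide range, for illustration
    SUBNET_1 = ipaddress.ip_network("10.0.0.0/16")
    SUBNET_2 = ipaddress.ip_network("10.1.0.0/16")

    def next_hop(src_ip, dst_ip):
        src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
        for subnet in (SUBNET_1, SUBNET_2):
            if src in subnet and dst in subnet:
                return "destination VNIC (same subnet)"
        if dst in VCN_CIDR:
            return "VCN VR port (different subnet, same VCN)"
        return "gateway selected by VCN VR (destination outside VCN)"

    print(next_hop("10.0.0.2", "10.0.0.3"))      # destination VNIC (same subnet)
    print(next_hop("10.0.0.2", "10.1.0.2"))      # VCN VR port (different subnet, same VCN)
    print(next_hop("10.0.0.2", "198.51.100.9"))  # gateway selected by VCN VR (destination outside VCN)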
For example, if the destination is an endpoint within the customer's on-premise network, then the packet may be forwarded by VCN VR1305to the Dynamic Routing Gateway (DRG)1322configured for VCN1304. The packet may then be forwarded from the gateway to a next hop to facilitate communication of the packet to its final intended destination. Various different types of gateways may be configured for a VCN. Examples of gateways that may be configured for a VCN are depicted inFIG.13and described below. Examples of gateways associated with a VCN are also depicted inFIGS.18,19,20, and21(for example, gateways referenced by reference numbers1834,1836,1838,1934,1936,1938,2034,2036,2038,2134,2136, and2138) and described below. As shown in the embodiment depicted inFIG.13, a Dynamic Routing Gateway (DRG)1322may be added to or be associated with customer VCN1304and provides a path for private network traffic communication between customer VCN1304and another endpoint, where the other endpoint can be the customer's on-premise network1316, a VCN1308in a different region of CSPI1301, or other remote cloud networks1318not hosted by CSPI1301. Customer on-premise network1316may be a customer network or a customer data center built using the customer's resources. Access to customer on-premise network1316is generally very restricted. For a customer that has both a customer on-premise network1316and one or more VCNs1304deployed or hosted in the cloud by CSPI1301, the customer may want their on-premise network1316and their cloud-based VCN1304to be able to communicate with each other. This enables a customer to build an extended hybrid environment encompassing the customer's VCN1304hosted by CSPI1301and their on-premises network1316. DRG1322enables this communication. To enable such communications, a communication channel1324is set up where one endpoint of the channel is in customer on-premise network1316and the other endpoint is in CSPI1301and connected to customer VCN1304. Communication channel1324can be over public communication networks such as the Internet or private communication networks. Various different communication protocols may be used, such as IPsec VPN technology over a public communication network such as the Internet, Oracle's FastConnect technology that uses a private network instead of a public network, and others. The device or equipment in customer on-premise network1316that forms one end point for communication channel1324is referred to as the customer premise equipment (CPE), such as CPE1326depicted inFIG.13. On the CSPI1301side, the endpoint may be a host machine executing DRG1322. In certain embodiments, a Remote Peering Connection (RPC) can be added to a DRG, which allows a customer to peer one VCN with another VCN in a different region. Using such an RPC, customer VCN1304can use DRG1322to connect with a VCN1308in another region. DRG1322may also be used to communicate with other remote cloud networks1318not hosted by CSPI1301, such as a Microsoft Azure cloud, Amazon AWS cloud, and others. As shown inFIG.13, an Internet Gateway (IGW)1320may be configured for customer VCN1304that enables a compute instance on VCN1304to communicate with public endpoints1314accessible over a public network such as the Internet. IGW1320is a gateway that connects a VCN to a public network such as the Internet. 
IGW1320enables a public subnet (where the resources in the public subnet have public overlay IP addresses) within a VCN, such as VCN1304, to have direct access to public endpoints1312on a public network1314such as the Internet. Using IGW1320, connections can be initiated from a subnet within VCN1304or from the Internet. A Network Address Translation (NAT) gateway1328can be configured for the customer's VCN1304and enables cloud resources in the customer's VCN that do not have dedicated public overlay IP addresses to access the Internet, and it does so without exposing those resources to direct incoming Internet connections (e.g., L4-L7 connections). This provides a private subnet within a VCN, such as private Subnet-1 in VCN1304, with private access to public endpoints on the Internet. With a NAT gateway, connections can be initiated only from the private subnet to the public Internet and not from the Internet to the private subnet. In certain embodiments, a Service Gateway (SGW)1326can be configured for customer VCN1304and provides a path for private network traffic between VCN1304and supported services endpoints in a service network1310. In certain embodiments, service network1310may be provided by the CSP and may provide various services. An example of such a service network is Oracle's Services Network, which provides various services that can be used by customers. For example, a compute instance (e.g., a database system) in a private subnet of customer VCN1304can back up data to a service endpoint (e.g., Object Storage) without needing public IP addresses or access to the Internet. In certain embodiments, a VCN can have only one SGW, and connections can only be initiated from a subnet within the VCN and not from service network1310. If a VCN is peered with another, resources in the other VCN typically cannot access the SGW. Resources in on-premises networks that are connected to a VCN with FastConnect or VPN Connect can also use the service gateway configured for that VCN. In certain implementations, SGW1326uses the concept of a service Classless Inter-Domain Routing (CIDR) label, which is a string that represents all the regional public IP address ranges for the service or group of services of interest. The customer uses the service CIDR label when they configure the SGW and related route rules to control traffic to the service. The customer can optionally utilize it when configuring security rules without needing to adjust them if the service's public IP addresses change in the future. A Local Peering Gateway (LPG)1332is a gateway that can be added to customer VCN1304and enables VCN1304to peer with another VCN in the same region. Peering means that the VCNs communicate using private IP addresses, without the traffic traversing a public network such as the Internet or without routing the traffic through the customer's on-premises network1316. In preferred embodiments, a VCN has a separate LPG for each peering it establishes. Local Peering or VCN Peering is a common practice used to establish network connectivity between different applications or infrastructure management functions. Service providers, such as providers of services in service network1310, may provide access to services using different access models. According to a public access model, services may be exposed as public endpoints that are publicly accessible by compute instances in a customer VCN via a public network such as the Internet, or may be privately accessible via SGW1326. 
According to a specific private access model, services are made accessible as private IP endpoints in a private subnet in the customer's VCN. This is referred to as Private Endpoint (PE) access and enables a service provider to expose their service as an instance in the customer's private network. A Private Endpoint resource represents a service within the customer's VCN. Each PE manifests as a VNIC (referred to as a PE-VNIC, with one or more private IPs) in a subnet chosen by the customer in the customer's VCN. A PE thus provides a way to present a service within a private customer VCN subnet using a VNIC. Since the endpoint is exposed as a VNIC, all the features associated with a VNIC, such as routing rules, security lists, etc., are now available for the PE VNIC. A service provider can register their service to enable access through a PE. The provider can associate policies with the service that restrict the service's visibility to the customer tenancies. A provider can register multiple services under a single virtual IP address (VIP), especially for multi-tenant services. There may be multiple such private endpoints (in multiple VCNs) that represent the same service. Compute instances in the private subnet can then use the PE VNIC's private IP address or the service DNS name to access the service. Compute instances in the customer VCN can access the service by sending traffic to the private IP address of the PE in the customer VCN. A Private Access Gateway (PAGW)1330is a gateway resource that can be attached to a service provider VCN (e.g., a VCN in service network1310) that acts as an ingress/egress point for all traffic from/to customer subnet private endpoints. PAGW1330enables a provider to scale the number of PE connections without utilizing its internal IP address resources. A provider need only configure one PAGW for any number of services registered in a single VCN. Providers can represent a service as a private endpoint in multiple VCNs of one or more customers. From the customer's perspective, the PE VNIC, instead of being attached to a customer's instance, appears to be attached to the service with which the customer wishes to interact. The traffic destined to the private endpoint is routed via PAGW1330to the service. These are referred to as customer-to-service private connections (C2S connections). The PE concept can also be used to extend the private access for the service to the customer's on-premises networks and data centers, by allowing the traffic to flow through FastConnect/IPsec links and the private endpoint in the customer VCN. Private access for the service can also be extended to the customer's peered VCNs, by allowing the traffic to flow between LPG1332and the PE in the customer's VCN. A customer can control routing in a VCN at the subnet level, so the customer can specify which subnets in the customer's VCN, such as VCN1304, use each gateway. A VCN's route tables are used to decide if traffic is allowed out of a VCN through a particular gateway. For example, in a particular instance, a route table for a public subnet within customer VCN1304may send non-local traffic through IGW1320. The route table for a private subnet within the same customer VCN1304may send traffic destined for CSP services through SGW1326. All remaining traffic may be sent via the NAT gateway1328. Route tables only control traffic going out of a VCN. Security lists associated with a VCN are used to control traffic that comes into a VCN via a gateway, via inbound connections. 
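The subnet-level route selection described above (service traffic sent to the SGW and all remaining non-local traffic sent to the NAT gateway for a private subnet) can be sketched, purely hypothetically, as a longest-prefix lookup over ordered route rules; the CIDRs, the stand-in service range, and the gateway labels are illustrative assumptions only.

    # Sketch of route-rule selection for a hypothetical private subnet's route table.
    import ipaddress

    private_subnet_routes = [
        ("192.0.2.0/24", "SGW"),      # stand-in for a service CIDR label's address range
        ("0.0.0.0/0",    "NAT-GW"),   # all remaining non-local traffic
    ]

    def egress_target(dst_ip, routes):
        dst = ipaddress.ip_address(dst_ip)
        best = None
        for cidr, target in routes:
            net = ipaddress.ip_network(cidr)
            if dst in net and (best is None or net.prefixlen > best[0]):
                best = (net.prefixlen, target)   # keep the longest (most specific) match
        return best[1] if best else "drop"

    print(egress_target("192.0.2.10", private_subnet_routes))   # SGW
    print(egress_target("203.0.113.5", private_subnet_routes))  # NAT-GW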
All resources in a subnet use the same route table and security lists. Security lists may be used to control specific types of traffic allowed in and out of instances in a subnet of a VCN. Security list rules may comprise ingress (inbound) and egress (outbound) rules. For example, an ingress rule may specify an allowed source address range, while an egress rule may specify an allowed destination address range. Security rules may specify a particular protocol (e.g., TCP, ICMP), a particular port (e.g., 22 for SSH, 3389 for Windows RDP), etc. In certain implementations, an instance's operating system may enforce its own firewall rules that are aligned with the security list rules. Rules may be stateful (e.g., a connection is tracked and the response is automatically allowed without an explicit security list rule for the response traffic) or stateless. Access from a customer VCN (i.e., by a resource or compute instance deployed on VCN1304) can be categorized as public access, private access, or dedicated access. Public access refers to an access model where a public IP address or a NAT is used to access a public endpoint. Private access enables customer workloads in VCN1304with private IP addresses (e.g., resources in a private subnet) to access services without traversing a public network such as the Internet. In certain embodiments, CSPI1301enables customer VCN workloads with private IP addresses to access the (public service endpoints of) services using a service gateway. A service gateway thus offers a private access model by establishing a virtual link between the customer's VCN and the service's public endpoint residing outside the customer's private network. Additionally, CSPI may offer dedicated public access using technologies such as FastConnect public peering where customer on-premises instances can access one or more services in a customer VCN using a FastConnect connection and without traversing a public network such as the Internet. CSPI also may also offer dedicated private access using FastConnect private peering where customer on-premises instances with private IP addresses can access the customer's VCN workloads using a FastConnect connection. FastConnect is a network connectivity alternative to using the public Internet to connect a customer's on-premise network to CSPI and its services. FastConnect provides an easy, elastic, and economical way to create a dedicated and private connection with higher bandwidth options and a more reliable and consistent networking experience when compared to Internet-based connections. FIG.13and the accompanying description above describes various virtualized components in an example virtual network. As described above, the virtual network is built on the underlying physical or substrate network.FIG.14depicts a simplified architectural diagram of the physical components in the physical network within CSPI1400that provide the underlay for the virtual network according to certain embodiments. As shown, CSPI1400provides a distributed environment comprising components and resources (e.g., compute, memory, and networking resources) provided by a cloud service provider (CSP). These components and resources are used to provide cloud services (e.g., IaaS services) to subscribing customers, i.e., customers that have subscribed to one or more services provided by the CSP. Based upon the services subscribed to by a customer, a subset of resources (e.g., compute, memory, and networking resources) of CSPI1400are provisioned for the customer. 
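The stateful behavior described above, in which a tracked connection's response traffic is admitted without a separate rule, can be illustrated with the following simplified, hypothetical sketch; the flow-tuple layout and helper names are illustrative assumptions, not an actual firewall implementation.

    # Sketch of stateful rule behavior: replies to tracked flows are admitted automatically.
    tracked_connections = set()

    def admit_ingress(src, src_port, dst, dst_port, rule_allows):
        if rule_allows:
            # Remember the flow so the reply direction is admitted automatically.
            tracked_connections.add((dst, dst_port, src, src_port))
            return True
        return False

    def admit_egress(src, src_port, dst, dst_port, rule_allows):
        # Allowed either by an explicit egress rule or because it replies to a tracked flow.
        return rule_allows or (src, src_port, dst, dst_port) in tracked_connections

    admit_ingress("203.0.113.7", 50000, "10.0.0.2", 22, rule_allows=True)
    print(admit_egress("10.0.0.2", 22, "203.0.113.7", 50000, rule_allows=False))   # True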
Customers can then build their own cloud-based (i.e., CSPI-hosted) customizable and private virtual networks using physical compute, memory, and networking resources provided by CSPI1400. As previously indicated, these customer networks are referred to as virtual cloud networks (VCNs). A customer can deploy one or more customer resources, such as compute instances, on these customer VCNs. Compute instances can be in the form of virtual machines, bare metal instances, and the like. CSPI1400provides infrastructure and a set of complementary cloud services that enable customers to build and run a wide range of applications and services in a highly available hosted environment. In the example embodiment depicted inFIG.14, the physical components of CSPI1400include one or more physical host machines or physical servers (e.g.,1402,1406,1408), network virtualization devices (NVDs) (e.g.,1410,1412), top-of-rack (TOR) switches (e.g.,1414,1416), and a physical network (e.g.,1418), and switches in physical network1418. The physical host machines or servers may host and execute various compute instances that participate in one or more subnets of a VCN. The compute instances may include virtual machine instances, and bare metal instances. For example, the various compute instances depicted inFIG.13may be hosted by the physical host machines depicted inFIG.14. The virtual machine compute instances in a VCN may be executed by one host machine or by multiple different host machines. The physical host machines may also host virtual host machines, container-based hosts or functions, and the like. The VNICs and VCN VR depicted inFIG.13may be executed by the NVDs depicted inFIG.14. The gateways depicted inFIG.13may be executed by the host machines and/or by the NVDs depicted inFIG.14. The host machines or servers may execute a hypervisor (also referred to as a virtual machine monitor or VMM) that creates and enables a virtualized environment on the host machines. The virtualization or virtualized environment facilitates cloud-based computing. One or more compute instances may be created, executed, and managed on a host machine by a hypervisor on that host machine. The hypervisor on a host machine enables the physical computing resources of the host machine (e.g., compute, memory, and networking resources) to be shared between the various compute instances executed by the host machine. For example, as depicted inFIG.14, host machines1402and1408execute hypervisors1460and1466, respectively. These hypervisors may be implemented using software, firmware, or hardware, or combinations thereof. Typically, a hypervisor is a process or a software layer that sits on top of the host machine's operating system (OS), which in turn executes on the hardware processors of the host machine. The hypervisor provides a virtualized environment by enabling the physical computing resources (e.g., processing resources such as processors/cores, memory resources, networking resources) of the host machine to be shared among the various virtual machine compute instances executed by the host machine. For example, inFIG.14, hypervisor1460may sit on top of the OS of host machine1402and enables the computing resources (e.g., processing, memory, and networking resources) of host machine1402to be shared between compute instances (e.g., virtual machines) executed by host machine1402. 
A virtual machine can have its own operating system (referred to as a guest operating system), which may be the same as or different from the OS of the host machine. The operating system of a virtual machine executed by a host machine may be the same as or different from the operating system of another virtual machine executed by the same host machine. A hypervisor thus enables multiple operating systems to be executed alongside each other while sharing the same computing resources of the host machine. The host machines depicted inFIG.14may have the same or different types of hypervisors. A compute instance can be a virtual machine instance or a bare metal instance. InFIG.14, compute instances1468on host machine1402and1474on host machine1408are examples of virtual machine instances. Host machine1406is an example of a bare metal instance that is provided to a customer. In certain instances, an entire host machine may be provisioned to a single customer, and all of the one or more compute instances (either virtual machines or bare metal instance) hosted by that host machine belong to that same customer. In other instances, a host machine may be shared between multiple customers (i.e., multiple tenants). In such a multi-tenancy scenario, a host machine may host virtual machine compute instances belonging to different customers. These compute instances may be members of different VCNs of different customers. In certain embodiments, a bare metal compute instance is hosted by a bare metal server without a hypervisor. When a bare metal compute instance is provisioned, a single customer or tenant maintains control of the physical CPU, memory, and network interfaces of the host machine hosting the bare metal instance and the host machine is not shared with other customers or tenants. As previously described, each compute instance that is part of a VCN is associated with a VNIC that enables the compute instance to become a member of a subnet of the VCN. The VNIC associated with a compute instance facilitates the communication of packets or frames to and from the compute instance. A VNIC is associated with a compute instance when the compute instance is created. In certain embodiments, for a compute instance executed by a host machine, the VNIC associated with that compute instance is executed by an NVD connected to the host machine. For example, inFIG.14, host machine1402executes a virtual machine compute instance1468that is associated with VNIC1476, and VNIC1476is executed by NVD1410connected to host machine1402. As another example, bare metal instance1472hosted by host machine1406is associated with VNIC1480that is executed by NVD1412connected to host machine1406. As yet another example, VNIC1484is associated with compute instance1474executed by host machine1408, and VNIC1484is executed by NVD1412connected to host machine1408. For compute instances hosted by a host machine, an NVD connected to that host machine also executes VCN VRs corresponding to VCNs of which the compute instances are members. For example, in the embodiment depicted inFIG.14, NVD1410executes VCN VR1477corresponding to the VCN of which compute instance1468is a member. NVD1412may also execute one or more VCN VRs1483corresponding to VCNs corresponding to the compute instances hosted by host machines1406and1408. A host machine may include one or more network interface cards (NIC) that enable the host machine to be connected to other devices. 
A NIC on a host machine may provide one or more ports (or interfaces) that enable the host machine to be communicatively connected to another device. For example, a host machine may be connected to an NVD using one or more ports (or interfaces) provided on the host machine and on the NVD. A host machine may also be connected to other devices such as another host machine. For example, inFIG.14, host machine1402is connected to NVD1410using link1420that extends between a port1434provided by a NIC1432of host machine1402and between a port1436of NVD1410. Host machine1406is connected to NVD1412using link1424that extends between a port1446provided by a NIC1444of host machine1406and between a port1448of NVD1412. Host machine1408is connected to NVD1412using link1426that extends between a port1452provided by a NIC1450of host machine1408and between a port1454of NVD1412. The NVDs are in turn connected via communication links to top-of-the-rack (TOR) switches, which are connected to physical network1418(also referred to as the switch fabric). In certain embodiments, the links between a host machine and an NVD, and between an NVD and a TOR switch are Ethernet links. For example, inFIG.14, NVDs1410and1412are connected to TOR switches1414and1416, respectively, using links1428and1430. In certain embodiments, the links1420,1424,1426,1428, and1430are Ethernet links. The collection of host machines and NVDs that are connected to a TOR is sometimes referred to as a rack. Physical network1418provides a communication fabric that enables TOR switches to communicate with each other. Physical network1418can be a multi-tiered network. In certain implementations, physical network1418is a multi-tiered Clos network of switches, with TOR switches1414and1416representing the leaf level nodes of the multi-tiered and multi-node physical switching network1418. Different Clos network configurations are possible including but not limited to a 2-tier network, a 3-tier network, a 4-tier network, a 9-tier network, and in general a “n”-tiered network. An example of a Clos network is depicted inFIG.17and described below. Various different connection configurations are possible between host machines and NVDs such as one-to-one configuration, many-to-one configuration, one-to-many configuration, and others. In a one-to-one configuration implementation, each host machine is connected to its own separate NVD. For example, inFIG.14, host machine1402is connected to NVD1410via NIC1432of host machine1402. In a many-to-one configuration, multiple host machines are connected to one NVD. For example, inFIG.14, host machines1406and1408are connected to the same NVD1412via NICs1444and1450, respectively. In a one-to-many configuration, one host machine is connected to multiple NVDs.FIG.15shows an example within CSPI1500where a host machine is connected to multiple NVDs. As shown inFIG.15, host machine1502comprises a network interface card (NIC)1504that includes multiple ports1506and1508. Host machine1502is connected to a first NVD1510via port1506and link1520, and connected to a second NVD1512via port1508and link1522. Ports1506and1508may be Ethernet ports and the links1520and1522between host machine1502and NVDs1510and1512may be Ethernet links. NVD1510is in turn connected to a first TOR switch1514and NVD1512is connected to a second TOR switch1516. The links between NVDs1510and1512, and TOR switches1514and1516may be Ethernet links. TOR switches1514and1516represent the Tier-0 switching devices in multi-tiered physical network1518. 
The arrangement depicted inFIG.15provides two separate physical network paths to and from physical switch network1518to host machine1502: a first path traversing TOR switch1514to NVD1510to host machine1502, and a second path traversing TOR switch1516to NVD1512to host machine1502. The separate paths provide for enhanced availability (referred to as high availability) of host machine1502. If there are problems in one of the paths (e.g., a link in one of the paths goes down) or devices (e.g., a particular NVD is not functioning), then the other path may be used for communications to/from host machine1502. In the configuration depicted inFIG.15, the host machine is connected to two different NVDs using two different ports provided by a NIC of the host machine. In other embodiments, a host machine may include multiple NICs that enable connectivity of the host machine to multiple NVDs. Referring back toFIG.14, an NVD is a physical device or component that performs one or more network and/or storage virtualization functions. An NVD may be any device with one or more processing units (e.g., CPUs, Network Processing Units (NPUs), FPGAs, packet processing pipelines, etc.), memory including cache, and ports. The various virtualization functions may be performed by software/firmware executed by the one or more processing units of the NVD. An NVD may be implemented in various different forms. For example, in certain embodiments, an NVD is implemented as an interface card referred to as a smartNIC or an intelligent NIC with an embedded processor onboard. A smartNIC is a separate device from the NICs on the host machines. InFIG.14, the NVDs1410and1412may be implemented as smartNICs that are connected to host machines1402, and host machines1406and1408, respectively. A smartNIC is however just one example of an NVD implementation. Various other implementations are possible. For example, in some other implementations, an NVD or one or more functions performed by the NVD may be incorporated into or performed by one or more host machines, one or more TOR switches, and other components of CSPI1400. For example, an NVD may be embodied in a host machine where the functions performed by an NVD are performed by the host machine. As another example, an NVD may be part of a TOR switch or a TOR switch may be configured to perform functions performed by an NVD that enables the TOR switch to perform various complex packet transformations that are used for a public cloud. A TOR that performs the functions of an NVD is sometimes referred to as a smart TOR. In yet other implementations, where virtual machines (VMs) instances, but not bare metal (BM) instances, are offered to customers, functions performed by an NVD may be implemented inside a hypervisor of the host machine. In some other implementations, some of the functions of the NVD may be offloaded to a centralized service running on a fleet of host machines. In certain embodiments, such as when implemented as a smartNIC as shown inFIG.14, an NVD may comprise multiple physical ports that enable it to be connected to one or more host machines and to one or more TOR switches. A port on an NVD can be classified as a host-facing port (also referred to as a “south port”) or a network-facing or TOR-facing port (also referred to as a “north port”). A host-facing port of an NVD is a port that is used to connect the NVD to a host machine. Examples of host-facing ports inFIG.14include port1436on NVD1410, and ports1448and1454on NVD1412. 
A network-facing port of an NVD is a port that is used to connect the NVD to a TOR switch. Examples of network-facing ports inFIG.14include port1456on NVD1410, and port1458on NVD1412. As shown inFIG.14, NVD1410is connected to TOR switch1414using link1428that extends from port1456of NVD1410to the TOR switch1414. Likewise, NVD1412is connected to TOR switch1416using link1430that extends from port1458of NVD1412to the TOR switch1416. An NVD receives packets and frames from a host machine (e.g., packets and frames generated by a compute instance hosted by the host machine) via a host-facing port and, after performing the necessary packet processing, may forward the packets and frames to a TOR switch via a network-facing port of the NVD. An NVD may receive packets and frames from a TOR switch via a network-facing port of the NVD and, after performing the necessary packet processing, may forward the packets and frames to a host machine via a host-facing port of the NVD. In certain embodiments, there may be multiple ports and associated links between an NVD and a TOR switch. These ports and links may be aggregated to form a link aggregator group of multiple ports or links (referred to as a LAG). Link aggregation allows multiple physical links between two end-points (e.g., between an NVD and a TOR switch) to be treated as a single logical link. All the physical links in a given LAG may operate in full-duplex mode at the same speed. LAGs help increase the bandwidth and reliability of the connection between two endpoints. If one of the physical links in the LAG goes down, traffic is dynamically and transparently reassigned to one of the other physical links in the LAG. The aggregated physical links deliver higher bandwidth than each individual link. The multiple ports associated with a LAG are treated as a single logical port. Traffic can be load-balanced across the multiple physical links of a LAG. One or more LAGs may be configured between two endpoints. The two endpoints may be between an NVD and a TOR switch, between a host machine and an NVD, and the like. An NVD implements or performs network virtualization functions. These functions are performed by software/firmware executed by the NVD. Examples of network virtualization functions include without limitation: packet encapsulation and de-capsulation functions; functions for creating a VCN network; functions for implementing network policies such as VCN security list (firewall) functionality; functions that facilitate the routing and forwarding of packets to and from compute instances in a VCN; and the like. In certain embodiments, upon receiving a packet, an NVD is configured to execute a packet processing pipeline for processing the packet and determining how the packet is to be forwarded or routed. As part of this packet processing pipeline, the NVD may execute one or more virtual functions associated with the overlay network, such as executing VNICs associated with compute instances in the VCN, executing a Virtual Router (VR) associated with the VCN, the encapsulation and decapsulation of packets to facilitate forwarding or routing in the virtual network, execution of certain gateways (e.g., the Local Peering Gateway), the implementation of Security Lists, Network Security Groups, network address translation (NAT) functionality (e.g., the translation of Public IP to Private IP on a host by host basis), throttling functions, and other functions. 
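As a minimal, hypothetical sketch of the encapsulation and decapsulation functions mentioned above, the following snippet wraps an overlay (customer) packet in an outer header addressed to the substrate network and unwraps it on the receiving side; the field names, addresses, and dictionary layout are illustrative assumptions, not an actual wire format.

    # Sketch of VCN encapsulation/decapsulation performed by an NVD.
    def encapsulate(overlay_packet, substrate_dst, substrate_src):
        return {
            "outer_src": substrate_src,   # physical IP of the sending NVD/host
            "outer_dst": substrate_dst,   # physical IP of the receiving NVD/host
            "payload": overlay_packet,    # original overlay (customer) packet, unchanged
        }

    def decapsulate(substrate_packet):
        return substrate_packet["payload"]

    overlay = {"src": "10.0.0.2", "dst": "10.1.0.3", "data": b"hello"}
    wire = encapsulate(overlay, substrate_dst="192.168.20.12", substrate_src="192.168.20.11")
    assert decapsulate(wire) == overlay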
In certain embodiments, the packet processing data path in an NVD may comprise multiple packet pipelines, each composed of a series of packet transformation stages. In certain implementations, upon receiving a packet, the packet is parsed and classified to a single pipeline. The packet is then processed in a linear fashion, one stage after another, until the packet is either dropped or sent out over an interface of the NVD. These stages provide basic functional packet processing building blocks (e.g., validating headers, enforcing throttle, inserting new Layer-2 headers, enforcing L4 firewall, VCN encapsulation/decapsulation, etc.) so that new pipelines can be constructed by composing existing stages, and new functionality can be added by creating new stages and inserting them into existing pipelines. An NVD may perform both control plane and data plane functions corresponding to a control plane and a data plane of a VCN. Examples of a VCN Control Plane are also depicted inFIGS.18,19,20, and21(see references1816,1916,2016, and2116) and described below. Examples of a VCN Data Plane are depicted inFIGS.18,19,20, and21(see references1818,1918,2018, and2118) and described below. The control plane functions include functions used for configuring a network (e.g., setting up routes and route tables, configuring VNICs, etc.) that controls how data is to be forwarded. In certain embodiments, a VCN Control Plane is provided that computes all the overlay-to-substrate mappings centrally and publishes them to the NVDs and to the virtual network edge devices, such as the various gateways (e.g., the DRG, the SGW, the IGW, etc.). Firewall rules may also be published using the same mechanism. In certain embodiments, an NVD only gets the mappings that are relevant for that NVD. The data plane functions include functions for the actual routing/forwarding of a packet based upon the configuration set up using the control plane. A VCN data plane is implemented by encapsulating the customer's network packets before they traverse the substrate network. The encapsulation/decapsulation functionality is implemented on the NVDs. In certain embodiments, an NVD is configured to intercept all network packets in and out of host machines and perform network virtualization functions. As indicated above, an NVD executes various virtualization functions including VNICs and VCN VRs. An NVD may execute VNICs associated with the compute instances hosted by one or more host machines connected to the NVD. For example, as depicted inFIG.14, NVD1410executes the functionality for VNIC1476that is associated with compute instance1468hosted by host machine1402connected to NVD1410. As another example, NVD1412executes VNIC1480that is associated with bare metal compute instance1472hosted by host machine1406, and executes VNIC1484that is associated with compute instance1474hosted by host machine1408. A host machine may host compute instances belonging to different VCNs, which belong to different customers, and the NVD connected to the host machine may execute the VNICs (i.e., execute VNIC-related functionality) corresponding to the compute instances. An NVD also executes VCN Virtual Routers corresponding to the VCNs of the compute instances. For example, in the embodiment depicted inFIG.14, NVD1410executes VCN VR1477corresponding to the VCN to which compute instance1468belongs. NVD1412executes one or more VCN VRs1483corresponding to one or more VCNs to which compute instances hosted by host machines1406and1408belong.
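To make the staged-pipeline idea above concrete, the following sketch chains a few stages (header validation, a VNIC stage, a VCN VR stage, and VCN encapsulation using an overlay-to-substrate mapping) and stops as soon as a stage drops the packet. The stage names, the dictionary-based packet, and the mapping table are all assumptions chosen for illustration; they are not the actual NVD pipeline or its control-plane data.

```python
# Illustrative staged packet pipeline: a packet is classified to one pipeline and
# processed stage by stage until it is dropped or ready to be sent out.
from typing import Callable, Optional

Packet = dict
Stage = Callable[[Packet], Optional[Packet]]  # a stage returns None to drop the packet

# Hypothetical mapping published by a control plane: overlay IP -> substrate
# address of the NVD executing the destination VNIC.
OVERLAY_TO_SUBSTRATE = {"10.0.1.7": "substrate-nvd-1412"}

def validate_headers(pkt: Packet) -> Optional[Packet]:
    return pkt if "src" in pkt and "dst" in pkt else None

def vnic_stage(pkt: Packet) -> Optional[Packet]:
    # stand-in for security-list style checks applied by the source VNIC
    return None if pkt.get("blocked") else pkt

def vcn_vr_stage(pkt: Packet) -> Optional[Packet]:
    # decide where the packet should go next inside the VCN
    pkt["next_hop_nvd"] = OVERLAY_TO_SUBSTRATE.get(pkt["dst"])
    return pkt if pkt["next_hop_nvd"] else None

def vcn_encapsulate(pkt: Packet) -> Optional[Packet]:
    pkt["outer_dst"] = pkt["next_hop_nvd"]  # wrap for transit over the substrate
    return pkt

def run_pipeline(pkt: Packet, stages: list[Stage]) -> Optional[Packet]:
    for stage in stages:
        pkt = stage(pkt)
        if pkt is None:          # dropped by this stage
            return None
    return pkt                   # ready to send out over an interface

if __name__ == "__main__":
    pipeline = [validate_headers, vnic_stage, vcn_vr_stage, vcn_encapsulate]
    print(run_pipeline({"src": "10.0.0.4", "dst": "10.0.1.7"}, pipeline))
```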
In certain embodiments, the VCN VR corresponding to that VCN is executed by all the NVDs connected to host machines that host at least one compute instance belonging to that VCN. If a host machine hosts compute instances belonging to different VCNs, an NVD connected to that host machine may execute VCN VRs corresponding to those different VCNs. In addition to VNICs and VCN VRs, an NVD may execute various software (e.g., daemons) and include one or more hardware components that facilitate the various network virtualization functions performed by the NVD. For purposes of simplicity, these various components are grouped together as “packet processing components” shown inFIG.14. For example, NVD1410comprises packet processing components1486and NVD1412comprises packet processing components1488. For example, the packet processing components for an NVD may include a packet processor that is configured to interact with the NVD's ports and hardware interfaces to monitor all packets received by and communicated using the NVD and store network information. The network information may, for example, include network flow information identifying different network flows handled by the NVD and per flow information (e.g., per flow statistics). In certain embodiments, network flows information may be stored on a per VNIC basis. The packet processor may perform packet-by-packet manipulations as well as implement stateful NAT and L4 firewall (FW). As another example, the packet processing components may include a replication agent that is configured to replicate information stored by the NVD to one or more different replication target stores. As yet another example, the packet processing components may include a logging agent that is configured to perform logging functions for the NVD. The packet processing components may also include software for monitoring the performance and health of the NVD and, also possibly of monitoring the state and health of other components connected to the NVD. FIG.13shows the components of an example virtual or overlay network including a VCN, subnets within the VCN, compute instances deployed on subnets, VNICs associated with the compute instances, a VR for a VCN, and a set of gateways configured for the VCN. The overlay components depicted inFIG.13may be executed or hosted by one or more of the physical components depicted inFIG.14. For example, the compute instances in a VCN may be executed or hosted by one or more host machines depicted inFIG.14. For a compute instance hosted by a host machine, the VNIC associated with that compute instance is typically executed by an NVD connected to that host machine (i.e., the VNIC functionality is provided by the NVD connected to that host machine). The VCN VR function for a VCN is executed by all the NVDs that are connected to host machines hosting or executing the compute instances that are part of that VCN. The gateways associated with a VCN may be executed by one or more different types of NVDs. For example, certain gateways may be executed by smartNICs, while others may be executed by one or more host machines or other implementations of NVDs. As described above, a compute instance in a customer VCN may communicate with various different endpoints, where the endpoints can be within the same subnet as the source compute instance, in a different subnet but within the same VCN as the source compute instance, or with an endpoint that is outside the VCN of the source compute instance. 
These communications are facilitated using VNICs associated with the compute instances, the VCN VRs, and the gateways associated with the VCNs. For communications between two compute instances on the same subnet in a VCN, the communication is facilitated using VNICs associated with the source and destination compute instances. The source and destination compute instances may be hosted by the same host machine or by different host machines. A packet originating from a source compute instance may be forwarded from a host machine hosting the source compute instance to an NVD connected to that host machine. On the NVD, the packet is processed using a packet processing pipeline, which can include execution of the VNIC associated with the source compute instance. Since the destination endpoint for the packet is within the same subnet, execution of the VNIC associated with the source compute instance results in the packet being forwarded to an NVD executing the VNIC associated with the destination compute instance, which then processes and forwards the packet to the destination compute instance. The VNICs associated with the source and destination compute instances may be executed on the same NVD (e.g., when both the source and destination compute instances are hosted by the same host machine) or on different NVDs (e.g., when the source and destination compute instances are hosted by different host machines connected to different NVDs). The VNICs may use routing/forwarding tables stored by the NVD to determine the next hop for the packet. For a packet to be communicated from a compute instance in a subnet to an endpoint in a different subnet in the same VCN, the packet originating from the source compute instance is communicated from the host machine hosting the source compute instance to the NVD connected to that host machine. On the NVD, the packet is processed using a packet processing pipeline, which can include execution of one or more VNICs, and the VR associated with the VCN. For example, as part of the packet processing pipeline, the NVD executes or invokes functionality corresponding to the VNIC (also referred to as executes the VNIC) associated with source compute instance. The functionality performed by the VNIC may include looking at the VLAN tag on the packet. Since the packet's destination is outside the subnet, the VCN VR functionality is next invoked and executed by the NVD. The VCN VR then routes the packet to the NVD executing the VNIC associated with the destination compute instance. The VNIC associated with the destination compute instance then processes the packet and forwards the packet to the destination compute instance. The VNICs associated with the source and destination compute instances may be executed on the same NVD (e.g., when both the source and destination compute instances are hosted by the same host machine) or on different NVDs (e.g., when the source and destination compute instances are hosted by different host machines connected to different NVDs). If the destination for the packet is outside the VCN of the source compute instance, then the packet originating from the source compute instance is communicated from the host machine hosting the source compute instance to the NVD connected to that host machine. The NVD executes the VNIC associated with the source compute instance. Since the destination end point of the packet is outside the VCN, the packet is then processed by the VCN VR for that VCN. 
The NVD invokes the VCN VR functionality, which may result in the packet being forwarded to an NVD executing the appropriate gateway associated with the VCN. For example, if the destination is an endpoint within the customer's on-premise network, then the packet may be forwarded by the VCN VR to the NVD executing the DRG gateway configured for the VCN. The VCN VR may be executed on the same NVD as the NVD executing the VNIC associated with the source compute instance or by a different NVD. The gateway may be executed by an NVD, which may be a smartNIC, a host machine, or other NVD implementation. The packet is then processed by the gateway and forwarded to a next hop that facilitates communication of the packet to its intended destination endpoint. For example, in the embodiment depicted inFIG.14, a packet originating from compute instance1468may be communicated from host machine1402to NVD1410over link1420(using NIC1432). On NVD1410, VNIC1476is invoked since it is the VNIC associated with source compute instance1468. VNIC1476is configured to examine the encapsulated information in the packet, and determine a next hop for forwarding the packet with the goal of facilitating communication of the packet to its intended destination endpoint, and then forward the packet to the determined next hop. A compute instance deployed on a VCN can communicate with various different endpoints. These endpoints may include endpoints that are hosted by CSPI1400and endpoints outside CSPI1400. Endpoints hosted by CSPI1400may include instances in the same VCN or other VCNs, which may be the customer's VCNs, or VCNs not belonging to the customer. Communications between endpoints hosted by CSPI1400may be performed over physical network1418. A compute instance may also communicate with endpoints that are not hosted by CSPI1400, or are outside CSPI1400. Examples of these endpoints include endpoints within a customer's on-premise network or data center, or public endpoints accessible over a public network such as the Internet. Communications with endpoints outside CSPI1400may be performed over public networks (e.g., the Internet) (not shown inFIG.14) or private networks (not shown inFIG.14) using various communication protocols. The architecture of CSPI1400depicted inFIG.14is merely an example and is not intended to be limiting. Variations, alternatives, and modifications are possible in alternative embodiments. For example, in some implementations, CSPI1400may have more or fewer systems or components than those shown inFIG.14, may combine two or more systems, or may have a different configuration or arrangement of systems. The systems, subsystems, and other components depicted inFIG.14may be implemented in software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors, cores) of the respective systems, using hardware, or combinations thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device). FIG.16depicts connectivity between a host machine and an NVD for providing I/O virtualization for supporting multitenancy according to certain embodiments. As depicted inFIG.16, host machine1602executes a hypervisor1604that provides a virtualized environment. Host machine1602executes two virtual machine instances, VM11606belonging to customer/tenant #1 and VM21608belonging to customer/tenant #2. Host machine1602comprises a physical NIC1610that is connected to an NVD1612via link1614. 
Each of the compute instances is attached to a VNIC that is executed by NVD1612. In the embodiment inFIG.16, VM11606is attached to VNIC-VM11620and VM21608is attached to VNIC-VM21622. As shown inFIG.16, NIC1610comprises two logical NICs, logical NIC A1616and logical NIC B1618. Each virtual machine is attached to and configured to work with its own logical NIC. For example, VM11606is attached to logical NIC A1616and VM21608is attached to logical NIC B1618. Even though host machine1602comprises only one physical NIC1610that is shared by the multiple tenants, due to the logical NICs, each tenant's virtual machine believes it has its own host machine and NIC. In certain embodiments, each logical NIC is assigned its own VLAN ID. Thus, a specific VLAN ID is assigned to logical NIC A1616for Tenant #1 and a separate VLAN ID is assigned to logical NIC B1618for Tenant #2. When a packet is communicated from VM11606, a tag assigned to Tenant #1 is attached to the packet by the hypervisor and the packet is then communicated from host machine1602to NVD1612over link1614. In a similar manner, when a packet is communicated from VM21608, a tag assigned to Tenant #2 is attached to the packet by the hypervisor and the packet is then communicated from host machine1602to NVD1612over link1614. Accordingly, a packet1624communicated from host machine1602to NVD1612has an associated tag1626that identifies a specific tenant and associated VM. On the NVD, for a packet1624received from host machine1602, the tag1626associated with the packet is used to determine whether the packet is to be processed by VNIC-VM11620or by VNIC-VM21622. The packet is then processed by the corresponding VNIC. The configuration depicted inFIG.16enables each tenant's compute instance to believe that it owns its own host machine and NIC. The setup depicted inFIG.16provides for I/O virtualization for supporting multi-tenancy. FIG.17depicts a simplified block diagram of a physical network1700according to certain embodiments. The embodiment depicted inFIG.17is structured as a Clos network. A Clos network is a particular type of network topology designed to provide connection redundancy while maintaining high bisection bandwidth and maximum resource utilization. A Clos network is a type of non-blocking, multistage or multi-tiered switching network, where the number of stages or tiers can be two, three, four, five, etc. The embodiment depicted inFIG.17is a 3-tiered network comprising tiers 1, 2, and 3. The TOR switches1704represent Tier-0 switches in the Clos network. One or more NVDs are connected to the TOR switches. Tier-0 switches are also referred to as edge devices of the physical network. The Tier-0 switches are connected to Tier-1 switches, which are also referred to as leaf switches. In the embodiment depicted inFIG.17, a set of “n” Tier-0 TOR switches are connected to a set of “n” Tier-1 switches and together form a pod. Each Tier-0 switch in a pod is interconnected to all the Tier-1 switches in the pod, but there is no connectivity of switches between pods. In certain implementations, two pods are referred to as a block. Each block is served by or connected to a set of “n” Tier-2 switches (sometimes referred to as spine switches). There can be several blocks in the physical network topology. The Tier-2 switches are in turn connected to “n” Tier-3 switches (sometimes referred to as super-spine switches). Communication of packets over physical network1700is typically performed using one or more Layer-3 communication protocols.
Typically, all the layers of the physical network, except for the TORs layer, are n-ways redundant, thus allowing for high availability. Policies may be specified for pods and blocks to control the visibility of switches to each other in the physical network so as to enable scaling of the physical network. A feature of a Clos network is that the maximum hop count to reach from one Tier-0 switch to another Tier-0 switch (or from an NVD connected to a Tier-0 switch to another NVD connected to a Tier-0 switch) is fixed. For example, in a 3-tiered Clos network, at most seven hops are needed for a packet to reach from one NVD to another NVD, where the source and target NVDs are connected to the leaf tier of the Clos network. Likewise, in a 4-tiered Clos network, at most nine hops are needed for a packet to reach from one NVD to another NVD, where the source and target NVDs are connected to the leaf tier of the Clos network. Thus, a Clos network architecture maintains consistent latency throughout the network, which is important for communication within and between data centers. A Clos topology scales horizontally and is cost effective. The bandwidth/throughput capacity of the network can be easily increased by adding more switches at the various tiers (e.g., more leaf and spine switches) and by increasing the number of links between the switches at adjacent tiers. In certain embodiments, each resource within CSPI is assigned a unique identifier called a Cloud Identifier (CID). This identifier is included as part of the resource's information and can be used to manage the resource, for example, via a Console or through APIs. An example syntax for a CID is: ocid1.<RESOURCE TYPE>.<REALM>.[REGION][.FUTURE USE].<UNIQUE ID>, where:
ocid1: The literal string indicating the version of the CID;
resource type: The type of resource (for example, instance, volume, VCN, subnet, user, group, and so on);
realm: The realm the resource is in. Example values are “c1” for the commercial realm, “c2” for the Government Cloud realm, or “c3” for the Federal Government Cloud realm, etc. Each realm may have its own domain name;
region: The region the resource is in. If the region is not applicable to the resource, this part might be blank;
future use: Reserved for future use;
unique ID: The unique portion of the ID. The format may vary depending on the type of resource or service.
As noted above, infrastructure as a service (IaaS) is one particular type of cloud computing. IaaS can be configured to provide virtualized computing resources over a public network (e.g., the Internet). In an IaaS model, a cloud computing provider can host the infrastructure components (e.g., servers, storage devices, network nodes (e.g., hardware), deployment software, platform virtualization (e.g., a hypervisor layer), or the like). In some cases, an IaaS provider may also supply a variety of services to accompany those infrastructure components (e.g., billing, monitoring, logging, security, load balancing and clustering, etc.). Thus, as these services may be policy-driven, IaaS users may be able to implement policies to drive load balancing to maintain application availability and performance. In some instances, IaaS customers may access resources and services through a wide area network (WAN), such as the Internet, and can use the cloud provider's services to install the remaining elements of an application stack.
For example, the user can log in to the IaaS platform to create virtual machines (VMs), install operating systems (OSs) on each VM, deploy middleware such as databases, create storage buckets for workloads and backups, and even install enterprise software into that VM. Customers can then use the provider's services to perform various functions, including balancing network traffic, troubleshooting application issues, monitoring performance, managing disaster recovery, etc. In most cases, a cloud computing model will require the participation of a cloud provider. The cloud provider may be, but need not be, a third-party service that specializes in providing (e.g., offering, renting, selling) IaaS. An entity might also opt to deploy a private cloud, becoming its own provider of infrastructure services. In some examples, IaaS deployment is the process of putting a new application, or a new version of an application, onto a prepared application server or the like. It may also include the process of preparing the server (e.g., installing libraries, daemons, etc.). This is often managed by the cloud provider, below the hypervisor layer (e.g., the servers, storage, network hardware, and virtualization). Thus, the customer may be responsible for handling the operating system (OS), middleware, and/or application deployment (e.g., on self-service virtual machines that can be spun up on demand), or the like. In some examples, IaaS provisioning may refer to acquiring computers or virtual hosts for use, and even installing needed libraries or services on them. In most cases, deployment does not include provisioning, and the provisioning may need to be performed first. In some cases, there are two different challenges for IaaS provisioning. First, there is the initial challenge of provisioning the initial set of infrastructure before anything is running. Second, there is the challenge of evolving the existing infrastructure (e.g., adding new services, changing services, removing services, etc.) once everything has been provisioned. In some cases, these two challenges may be addressed by enabling the configuration of the infrastructure to be defined declaratively. In other words, the infrastructure (e.g., what components are needed and how they interact) can be defined by one or more configuration files. Thus, the overall topology of the infrastructure (e.g., what resources depend on which, and how they each work together) can be described declaratively. In some instances, once the topology is defined, a workflow can be generated that creates and/or manages the different components described in the configuration files. In some examples, an infrastructure may have many interconnected elements. For example, there may be one or more virtual private clouds (VPCs) (e.g., a potentially on-demand pool of configurable and/or shared computing resources), also known as a core network. In some examples, there may also be one or more security group rules provisioned to define how the security of the network will be set up and one or more virtual machines (VMs). Other infrastructure elements may also be provisioned, such as a load balancer, a database, or the like. As more and more infrastructure elements are desired and/or added, the infrastructure may incrementally evolve. In some instances, continuous deployment techniques may be employed to enable deployment of infrastructure code across various virtual computing environments. Additionally, the described techniques can enable infrastructure management within these environments.
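As a way to picture how a declaratively defined topology can drive a provisioning workflow, the short sketch below lists resources with their dependencies and derives an order in which they could be created. The resource names and the dictionary-based format are assumptions made for this example; real IaaS tooling uses its own configuration languages and planners.

```python
# Illustrative declarative configuration: the desired infrastructure is written
# down as resources and their dependencies, and a provisioning order is derived
# so that each resource is created only after the resources it depends on.
from graphlib import TopologicalSorter

# Hypothetical declarative description: resource -> resources it depends on.
DESIRED_INFRASTRUCTURE = {
    "vcn": [],
    "subnet": ["vcn"],
    "load_balancer": ["subnet"],
    "database": ["subnet"],
    "vm": ["subnet", "database"],
}

def provisioning_plan(config: dict[str, list[str]]) -> list[str]:
    """Return an order in which the declared resources can be provisioned."""
    return list(TopologicalSorter(config).static_order())

if __name__ == "__main__":
    # e.g., ['vcn', 'subnet', ...] with every dependency appearing before its dependents
    print(provisioning_plan(DESIRED_INFRASTRUCTURE))
```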
In some examples, service teams can write code that is desired to be deployed to one or more, but often many, different production environments (e.g., across various different geographic locations, sometimes spanning the entire world). However, in some examples, the infrastructure on which the code will be deployed must first be set up. In some instances, the provisioning can be done manually, a provisioning tool may be utilized to provision the resources, and/or deployment tools may be utilized to deploy the code once the infrastructure is provisioned. FIG.18is a block diagram1800illustrating an example pattern of an IaaS architecture, according to at least one embodiment. Service operators1802can be communicatively coupled to a secure host tenancy1804that can include a virtual cloud network (VCN)1806and a secure host subnet1808. In some examples, the service operators1802may be using one or more client computing devices, which may be portable handheld devices (e.g., an iPhone®, cellular telephone, an iPad®, computing tablet, a personal digital assistant (PDA)) or wearable devices (e.g., a Google Glass® head mounted display), running software such as Microsoft Windows Mobile®, and/or a variety of mobile operating systems such as iOS, Windows Phone, Android, BlackBerry 13, Palm OS, and the like, and being Internet, e-mail, short message service (SMS), Blackberry®, or other communication protocol enabled. Alternatively, the client computing devices can be general purpose personal computers including, by way of example, personal computers and/or laptop computers running various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems. The client computing devices can be workstation computers running any of a variety of commercially-available UNIX® or UNIX-like operating systems, including without limitation the variety of GNU/Linux operating systems, such as for example, Google Chrome OS. Alternatively, or in addition, client computing devices may be any other electronic device, such as a thin-client computer, an Internet-enabled gaming system (e.g., a Microsoft Xbox gaming console with or without a Kinect® gesture input device), and/or a personal messaging device, capable of communicating over a network that can access the VCN1806and/or the Internet. The VCN1806can include a local peering gateway (LPG)1810that can be communicatively coupled to a secure shell (SSH) VCN1812via an LPG1810contained in the SSH VCN1812. The SSH VCN1812can include an SSH subnet1814, and the SSH VCN1812can be communicatively coupled to a control plane VCN1816via the LPG1810contained in the control plane VCN1816. Also, the SSH VCN1812can be communicatively coupled to a data plane VCN1818via an LPG1810. The control plane VCN1816and the data plane VCN1818can be contained in a service tenancy1819that can be owned and/or operated by the IaaS provider. The control plane VCN1816can include a control plane demilitarized zone (DMZ) tier1820that acts as a perimeter network (e.g., portions of a corporate network between the corporate intranet and external networks). The DMZ-based servers may have restricted responsibilities and help keep security breaches contained. Additionally, the DMZ tier1820can include one or more load balancer (LB) subnet(s)1822, a control plane app tier1824that can include app subnet(s)1826, a control plane data tier1828that can include database (DB) subnet(s)1830(e.g., frontend DB subnet(s) and/or backend DB subnet(s)). 
The LB subnet(s)1822contained in the control plane DMZ tier1820can be communicatively coupled to the app subnet(s)1826contained in the control plane app tier1824and an Internet gateway1834that can be contained in the control plane VCN1816, and the app subnet(s)1826can be communicatively coupled to the DB subnet(s)1830contained in the control plane data tier1828and a service gateway1836and a network address translation (NAT) gateway1838. The control plane VCN1816can include the service gateway1836and the NAT gateway1838. The control plane VCN1816can include a data plane mirror app tier1840that can include app subnet(s)1826. The app subnet(s)1826contained in the data plane mirror app tier1840can include a virtual network interface controller (VNIC)1842that can execute a compute instance1844. The compute instance1844can communicatively couple the app subnet(s)1826of the data plane mirror app tier1840to app subnet(s)1826that can be contained in a data plane app tier1846. The data plane VCN1818can include the data plane app tier1846, a data plane DMZ tier1848, and a data plane data tier1850. The data plane DMZ tier1848can include LB subnet(s)1822that can be communicatively coupled to the app subnet(s)1826of the data plane app tier1846and the Internet gateway1834of the data plane VCN1818. The app subnet(s)1826can be communicatively coupled to the service gateway1836of the data plane VCN1818and the NAT gateway1838of the data plane VCN1818. The data plane data tier1850can also include the DB subnet(s)1830that can be communicatively coupled to the app subnet(s)1826of the data plane app tier1846. The Internet gateway1834of the control plane VCN1816and of the data plane VCN1818can be communicatively coupled to a metadata management service1852that can be communicatively coupled to public Internet1854. Public Internet1854can be communicatively coupled to the NAT gateway1838of the control plane VCN1816and of the data plane VCN1818. The service gateway1836of the control plane VCN1816and of the data plane VCN1818can be communicatively coupled to cloud services1856. In some examples, the service gateway1836of the control plane VCN1816or of the data plane VCN1818can make application programming interface (API) calls to cloud services1856without going through public Internet1854. The API calls to cloud services1856from the service gateway1836can be one-way: the service gateway1836can make API calls to cloud services1856, and cloud services1856can send requested data to the service gateway1836. But, cloud services1856may not initiate API calls to the service gateway1836. In some examples, the secure host tenancy1804can be directly connected to the service tenancy1819, which may be otherwise isolated. The secure host subnet1808can communicate with the SSH subnet1814through an LPG1810that may enable two-way communication over an otherwise isolated system. Connecting the secure host subnet1808to the SSH subnet1814may give the secure host subnet1808access to other entities within the service tenancy1819. The control plane VCN1816may allow users of the service tenancy1819to set up or otherwise provision desired resources. Desired resources provisioned in the control plane VCN1816may be deployed or otherwise used in the data plane VCN1818.
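The one-way calling pattern described above (the service gateway may initiate API calls to cloud services and receive the responses, while the cloud services do not initiate calls back) can be sketched as follows. The class and method names are hypothetical and exist only to illustrate the directionality; they do not correspond to any actual gateway API.

```python
# Illustrative sketch of one-way API calls between a service gateway and cloud services.
class ServiceGatewaySketch:
    def __init__(self, reachable_services: set[str]) -> None:
        self._reachable_services = reachable_services

    def call(self, service: str, request: str) -> str:
        """Outbound call from the gateway to a cloud service (allowed direction)."""
        if service not in self._reachable_services:
            raise ValueError(f"unknown cloud service: {service}")
        return f"response from {service} to request {request!r}"

    def handle_inbound_call(self, service: str) -> None:
        """Inbound call initiated by a cloud service (not permitted in this pattern)."""
        raise PermissionError(f"{service} may not initiate API calls to the service gateway")

if __name__ == "__main__":
    gateway = ServiceGatewaySketch({"object-storage"})
    print(gateway.call("object-storage", "list buckets"))   # allowed direction
    try:
        gateway.handle_inbound_call("object-storage")       # disallowed direction
    except PermissionError as err:
        print(err)
```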
In some examples, the control plane VCN1816can be isolated from the data plane VCN1818, and the data plane mirror app tier1840of the control plane VCN1816can communicate with the data plane app tier1846of the data plane VCN1818via VNICs1842that can be contained in the data plane mirror app tier1840and the data plane app tier1846. In some examples, users of the system, or customers, can make requests, for example create, read, update, or delete (CRUD) operations, through public Internet1854that can communicate the requests to the metadata management service1852. The metadata management service1852can communicate the request to the control plane VCN1816through the Internet gateway1834. The request can be received by the LB subnet(s)1822contained in the control plane DMZ tier1820. The LB subnet(s)1822may determine that the request is valid, and in response to this determination, the LB subnet(s)1822can transmit the request to app subnet(s)1826contained in the control plane app tier1824. If the request is validated and requires a call to public Internet1854, the call to public Internet1854may be transmitted to the NAT gateway1838that can make the call to public Internet1854. Memory that may be desired to be stored by the request can be stored in the DB subnet(s)1830. In some examples, the data plane mirror app tier1840can facilitate direct communication between the control plane VCN1816and the data plane VCN1818. For example, changes, updates, or other suitable modifications to configuration may be desired to be applied to the resources contained in the data plane VCN1818. Via a VNIC1842, the control plane VCN1816can directly communicate with, and can thereby execute the changes, updates, or other suitable modifications to configuration to, resources contained in the data plane VCN1818. In some embodiments, the control plane VCN1816and the data plane VCN1818can be contained in the service tenancy1819. In this case, the user, or the customer, of the system may not own or operate either the control plane VCN1816or the data plane VCN1818. Instead, the IaaS provider may own or operate the control plane VCN1816and the data plane VCN1818, both of which may be contained in the service tenancy1819. This embodiment can enable isolation of networks that may prevent users or customers from interacting with other users', or other customers', resources. Also, this embodiment may allow users or customers of the system to store databases privately without needing to rely on public Internet1854, which may not have a desired level of security, for storage. In other embodiments, the LB subnet(s)1822contained in the control plane VCN1816can be configured to receive a signal from the service gateway1836. In this embodiment, the control plane VCN1816and the data plane VCN1818may be configured to be called by a customer of the IaaS provider without calling public Internet1854. Customers of the IaaS provider may desire this embodiment since database(s) that the customers use may be controlled by the IaaS provider and may be stored on the service tenancy1819, which may be isolated from public Internet1854. FIG.19is a block diagram1900illustrating another example pattern of an IaaS architecture, according to at least one embodiment. Service operators1902(e.g. service operators1802ofFIG.18) can be communicatively coupled to a secure host tenancy1904(e.g. the secure host tenancy1804ofFIG.18) that can include a virtual cloud network (VCN)1906(e.g. the VCN1806ofFIG.18) and a secure host subnet1908(e.g. 
the secure host subnet1808ofFIG.18). The VCN1906can include a local peering gateway (LPG)1910(e.g. the LPG1810ofFIG.18) that can be communicatively coupled to a secure shell (SSH) VCN1912(e.g. the SSH VCN1812ofFIG.18) via an LPG1810contained in the SSH VCN1912. The SSH VCN1912can include an SSH subnet1914(e.g. the SSH subnet1814ofFIG.18), and the SSH VCN1912can be communicatively coupled to a control plane VCN1916(e.g. the control plane VCN1816ofFIG.18) via an LPG1910contained in the control plane VCN1916. The control plane VCN1916can be contained in a service tenancy1919(e.g. the service tenancy1819ofFIG.18), and the data plane VCN1918(e.g. the data plane VCN1818ofFIG.18) can be contained in a customer tenancy1921that may be owned or operated by users, or customers, of the system. The control plane VCN1916can include a control plane DMZ tier1920(e.g. the control plane DMZ tier1820ofFIG.18) that can include LB subnet(s)1922(e.g. LB subnet(s)1822ofFIG.18), a control plane app tier1924(e.g. the control plane app tier1824ofFIG.18) that can include app subnet(s)1926(e.g. app subnet(s)1826ofFIG.18), a control plane data tier1928(e.g. the control plane data tier1828ofFIG.18) that can include database (DB) subnet(s)1930(e.g. similar to DB subnet(s)1830ofFIG.18). The LB subnet(s)1922contained in the control plane DMZ tier1920can be communicatively coupled to the app subnet(s)1926contained in the control plane app tier1924and an Internet gateway1934(e.g. the Internet gateway1834ofFIG.18) that can be contained in the control plane VCN1916, and the app subnet(s)1926can be communicatively coupled to the DB subnet(s)1930contained in the control plane data tier1928and a service gateway1936(e.g. the service gateway ofFIG.18) and a network address translation (NAT) gateway1938(e.g. the NAT gateway1838ofFIG.18). The control plane VCN1916can include the service gateway1936and the NAT gateway1938. The control plane VCN1916can include a data plane mirror app tier1940(e.g. the data plane mirror app tier1840ofFIG.18) that can include app subnet(s)1926. The app subnet(s)1926contained in the data plane mirror app tier1940can include a virtual network interface controller (VNIC)1942(e.g. the VNIC1842ofFIG.18) that can execute a compute instance1944(e.g. similar to the compute instance1844ofFIG.18). The compute instance1944can facilitate communication between the app subnet(s)1926of the data plane mirror app tier1940and the app subnet(s)1926that can be contained in a data plane app tier1946(e.g. the data plane app tier1846ofFIG.18) via the VNIC1942contained in the data plane mirror app tier1940and the VNIC1942contained in the data plane app tier1946. The Internet gateway1934contained in the control plane VCN1916can be communicatively coupled to a metadata management service1952(e.g. the metadata management service1852ofFIG.18) that can be communicatively coupled to public Internet1954(e.g. public Internet1854ofFIG.18). Public Internet1954can be communicatively coupled to the NAT gateway1938contained in the control plane VCN1916. The service gateway1936contained in the control plane VCN1916can be communicatively coupled to cloud services1956(e.g. cloud services1856ofFIG.18). In some examples, the data plane VCN1918can be contained in the customer tenancy1921. In this case, the IaaS provider may provide the control plane VCN1916for each customer, and the IaaS provider may, for each customer, set up a unique compute instance1944that is contained in the service tenancy1919.
Each compute instance1944may allow communication between the control plane VCN1916, contained in the service tenancy1919, and the data plane VCN1918that is contained in the customer tenancy1921. The compute instance1944may allow resources, that are provisioned in the control plane VCN1916that is contained in the service tenancy1919, to be deployed or otherwise used in the data plane VCN1918that is contained in the customer tenancy1921. In other examples, the customer of the IaaS provider may have databases that live in the customer tenancy1921. In this example, the control plane VCN1916can include the data plane mirror app tier1940that can include app subnet(s)1926. The data plane mirror app tier1940can reside in the data plane VCN1918, but the data plane mirror app tier1940may not live in the data plane VCN1918. That is, the data plane mirror app tier1940may have access to the customer tenancy1921, but the data plane mirror app tier1940may not exist in the data plane VCN1918or be owned or operated by the customer of the IaaS provider. The data plane mirror app tier1940may be configured to make calls to the data plane VCN1918but may not be configured to make calls to any entity contained in the control plane VCN1916. The customer may desire to deploy or otherwise use resources in the data plane VCN1918that are provisioned in the control plane VCN1916, and the data plane mirror app tier1940can facilitate the desired deployment, or other usage of resources, of the customer. In some embodiments, the customer of the IaaS provider can apply filters to the data plane VCN1918. In this embodiment, the customer can determine what the data plane VCN1918can access, and the customer may restrict access to public Internet1954from the data plane VCN1918. The IaaS provider may not be able to apply filters or otherwise control access of the data plane VCN1918to any outside networks or databases. Applying filters and controls by the customer onto the data plane VCN1918, contained in the customer tenancy1921, can help isolate the data plane VCN1918from other customers and from public Internet1954. In some embodiments, cloud services1956can be called by the service gateway1936to access services that may not exist on public Internet1954, on the control plane VCN1916, or on the data plane VCN1918. The connection between cloud services1956and the control plane VCN1916or the data plane VCN1918may not be live or continuous. Cloud services1956may exist on a different network owned or operated by the IaaS provider. Cloud services1956may be configured to receive calls from the service gateway1936and may be configured to not receive calls from public Internet1954. Some cloud services1956may be isolated from other cloud services1956, and the control plane VCN1916may be isolated from cloud services1956that may not be in the same region as the control plane VCN1916. For example, the control plane VCN1916may be located in “Region 1,” and cloud service “Deployment 18,” may be located in Region 1 and in “Region 2.” If a call to Deployment 18 is made by the service gateway1936contained in the control plane VCN1916located in Region 1, the call may be transmitted to Deployment 18 in Region 1. In this example, the control plane VCN1916, or Deployment 18 in Region 1, may not be communicatively coupled to, or otherwise in communication with, Deployment 18 in Region 2. FIG.20is a block diagram2000illustrating another example pattern of an IaaS architecture, according to at least one embodiment. Service operators2002(e.g. 
service operators1802ofFIG.18) can be communicatively coupled to a secure host tenancy2004(e.g. the secure host tenancy1804ofFIG.18) that can include a virtual cloud network (VCN)2006(e.g. the VCN1806ofFIG.18) and a secure host subnet2008(e.g. the secure host subnet1808ofFIG.18). The VCN2006can include an LPG2010(e.g. the LPG1810ofFIG.18) that can be communicatively coupled to an SSH VCN2012(e.g. the SSH VCN1812ofFIG.18) via an LPG2010contained in the SSH VCN2012. The SSH VCN2012can include an SSH subnet2014(e.g. the SSH subnet1814ofFIG.18), and the SSH VCN2012can be communicatively coupled to a control plane VCN2016(e.g. the control plane VCN1816ofFIG.18) via an LPG2010contained in the control plane VCN2016and to a data plane VCN2018(e.g. the data plane1818ofFIG.18) via an LPG2010contained in the data plane VCN2018. The control plane VCN2016and the data plane VCN2018can be contained in a service tenancy2019(e.g. the service tenancy1819ofFIG.18). The control plane VCN2016can include a control plane DMZ tier2020(e.g. the control plane DMZ tier1820ofFIG.18) that can include load balancer (LB) subnet(s)2022(e.g. LB subnet(s)1822ofFIG.18), a control plane app tier2024(e.g. the control plane app tier1824ofFIG.18) that can include app subnet(s)2026(e.g. similar to app subnet(s)1826ofFIG.18), a control plane data tier2028(e.g. the control plane data tier1828ofFIG.18) that can include DB subnet(s)2030. The LB subnet(s)2022contained in the control plane DMZ tier2020can be communicatively coupled to the app subnet(s)2026contained in the control plane app tier2024and to an Internet gateway2034(e.g. the Internet gateway1834ofFIG.18) that can be contained in the control plane VCN2016, and the app subnet(s)2026can be communicatively coupled to the DB subnet(s)2030contained in the control plane data tier2028and to a service gateway2036(e.g. the service gateway ofFIG.18) and a network address translation (NAT) gateway2038(e.g. the NAT gateway1838ofFIG.18). The control plane VCN2016can include the service gateway2036and the NAT gateway2038. The data plane VCN2018can include a data plane app tier2046(e.g. the data plane app tier1846ofFIG.18), a data plane DMZ tier2048(e.g. the data plane DMZ tier1848ofFIG.18), and a data plane data tier2050(e.g. the data plane data tier1850ofFIG.18). The data plane DMZ tier2048can include LB subnet(s)2022that can be communicatively coupled to trusted app subnet(s)2060and untrusted app subnet(s)2062of the data plane app tier2046and the Internet gateway2034contained in the data plane VCN2018. The trusted app subnet(s)2060can be communicatively coupled to the service gateway2036contained in the data plane VCN2018, the NAT gateway2038contained in the data plane VCN2018, and DB subnet(s)2030contained in the data plane data tier2050. The untrusted app subnet(s)2062can be communicatively coupled to the service gateway2036contained in the data plane VCN2018and DB subnet(s)2030contained in the data plane data tier2050. The data plane data tier2050can include DB subnet(s)2030that can be communicatively coupled to the service gateway2036contained in the data plane VCN2018. The untrusted app subnet(s)2062can include one or more primary VNICs2064(1)-(N) that can be communicatively coupled to tenant virtual machines (VMs)2066(1)-(N). Each tenant VM2066(1)-(N) can be communicatively coupled to a respective app subnet2067(1)-(N) that can be contained in respective container egress VCNs2068(1)-(N) that can be contained in respective customer tenancies2070(1)-(N). 
Respective secondary VNICs2072(1)-(N) can facilitate communication between the untrusted app subnet(s)2062contained in the data plane VCN2018and the app subnet contained in the container egress VCNs2068(1)-(N). Each container egress VCN2068(1)-(N) can include a NAT gateway2038that can be communicatively coupled to public Internet2054(e.g. public Internet1854ofFIG.18). The Internet gateway2034contained in the control plane VCN2016and contained in the data plane VCN2018can be communicatively coupled to a metadata management service2052(e.g. the metadata management system1852ofFIG.18) that can be communicatively coupled to public Internet2054. Public Internet2054can be communicatively coupled to the NAT gateway2038contained in the control plane VCN2016and contained in the data plane VCN2018. The service gateway2036contained in the control plane VCN2016and contained in the data plane VCN2018can be communicatively coupled to cloud services2056. In some embodiments, the data plane VCN2018can be integrated with customer tenancies2070. This integration can be useful or desirable for customers of the IaaS provider in some cases, such as when support is desired while executing code. The customer may provide code to run that may be destructive, may communicate with other customer resources, or may otherwise cause undesirable effects. In response to this, the IaaS provider may determine whether to run code given to the IaaS provider by the customer. In some examples, the customer of the IaaS provider may grant temporary network access to the IaaS provider and request a function to be attached to the data plane app tier2046. Code to run the function may be executed in the VMs2066(1)-(N), and the code may not be configured to run anywhere else on the data plane VCN2018. Each VM2066(1)-(N) may be connected to one customer tenancy2070. Respective containers2071(1)-(N) contained in the VMs2066(1)-(N) may be configured to run the code. In this case, there can be a dual isolation (e.g., the containers2071(1)-(N) running code, where the containers2071(1)-(N) may be contained in at least the VM2066(1)-(N) that are contained in the untrusted app subnet(s)2062), which may help prevent incorrect or otherwise undesirable code from damaging the network of the IaaS provider or from damaging a network of a different customer. The containers2071(1)-(N) may be communicatively coupled to the customer tenancy2070and may be configured to transmit or receive data from the customer tenancy2070. The containers2071(1)-(N) may not be configured to transmit or receive data from any other entity in the data plane VCN2018. Upon completion of running the code, the IaaS provider may kill or otherwise dispose of the containers2071(1)-(N). In some embodiments, the trusted app subnet(s)2060may run code that may be owned or operated by the IaaS provider. In this embodiment, the trusted app subnet(s)2060may be communicatively coupled to the DB subnet(s)2030and be configured to execute CRUD operations in the DB subnet(s)2030. The untrusted app subnet(s)2062may be communicatively coupled to the DB subnet(s)2030, but in this embodiment, the untrusted app subnet(s) may be configured to execute read operations in the DB subnet(s)2030. The containers2071(1)-(N) that can be contained in the VM2066(1)-(N) of each customer and that may run code from the customer may not be communicatively coupled with the DB subnet(s)2030. In other embodiments, the control plane VCN2016and the data plane VCN2018may not be directly communicatively coupled.
In this embodiment, there may be no direct communication between the control plane VCN2016and the data plane VCN2018. However, communication can occur indirectly through at least one method. An LPG2010may be established by the IaaS provider that can facilitate communication between the control plane VCN2016and the data plane VCN2018. In another example, the control plane VCN2016or the data plane VCN2018can make a call to cloud services2056via the service gateway2036. For example, a call to cloud services2056from the control plane VCN2016can include a request for a service that can communicate with the data plane VCN2018. FIG.21is a block diagram2100illustrating another example pattern of an IaaS architecture, according to at least one embodiment. Service operators2102(e.g. service operators1802ofFIG.18) can be communicatively coupled to a secure host tenancy2104(e.g. the secure host tenancy1804ofFIG.18) that can include a virtual cloud network (VCN)2106(e.g. the VCN1806ofFIG.18) and a secure host subnet2108(e.g. the secure host subnet1808ofFIG.18). The VCN2106can include an LPG2110(e.g. the LPG1810ofFIG.18) that can be communicatively coupled to an SSH VCN2112(e.g. the SSH VCN1812ofFIG.18) via an LPG2110contained in the SSH VCN2112. The SSH VCN2112can include an SSH subnet2114(e.g. the SSH subnet1814ofFIG.18), and the SSH VCN2112can be communicatively coupled to a control plane VCN2116(e.g. the control plane VCN1816ofFIG.18) via an LPG2110contained in the control plane VCN2116and to a data plane VCN2118(e.g. the data plane1818ofFIG.18) via an LPG2110contained in the data plane VCN2118. The control plane VCN2116and the data plane VCN2118can be contained in a service tenancy2119(e.g. the service tenancy1819ofFIG.18). The control plane VCN2116can include a control plane DMZ tier2120(e.g. the control plane DMZ tier1820ofFIG.18) that can include LB subnet(s)2122(e.g. LB subnet(s)1822ofFIG.18), a control plane app tier2124(e.g. the control plane app tier1824ofFIG.18) that can include app subnet(s)2126(e.g. app subnet(s)1826ofFIG.18), a control plane data tier2128(e.g. the control plane data tier1828ofFIG.18) that can include DB subnet(s)2130(e.g. DB subnet(s)2030ofFIG.20). The LB subnet(s)2122contained in the control plane DMZ tier2120can be communicatively coupled to the app subnet(s)2126contained in the control plane app tier2124and to an Internet gateway2134(e.g. the Internet gateway1834ofFIG.18) that can be contained in the control plane VCN2116, and the app subnet(s)2126can be communicatively coupled to the DB subnet(s)2130contained in the control plane data tier2128and to a service gateway2136(e.g. the service gateway ofFIG.18) and a network address translation (NAT) gateway2138(e.g. the NAT gateway1838ofFIG.18). The control plane VCN2116can include the service gateway2136and the NAT gateway2138. The data plane VCN2118can include a data plane app tier2146(e.g. the data plane app tier1846ofFIG.18), a data plane DMZ tier2148(e.g. the data plane DMZ tier1848ofFIG.18), and a data plane data tier2150(e.g. the data plane data tier1850ofFIG.18). The data plane DMZ tier2148can include LB subnet(s)2122that can be communicatively coupled to trusted app subnet(s)2160(e.g. trusted app subnet(s)2060ofFIG.20) and untrusted app subnet(s)2162(e.g. untrusted app subnet(s)2062ofFIG.20) of the data plane app tier2146and the Internet gateway2134contained in the data plane VCN2118. 
The trusted app subnet(s)2160can be communicatively coupled to the service gateway2136contained in the data plane VCN2118, the NAT gateway2138contained in the data plane VCN2118, and DB subnet(s)2130contained in the data plane data tier2150. The untrusted app subnet(s)2162can be communicatively coupled to the service gateway2136contained in the data plane VCN2118and DB subnet(s)2130contained in the data plane data tier2150. The data plane data tier2150can include DB subnet(s)2130that can be communicatively coupled to the service gateway2136contained in the data plane VCN2118. The untrusted app subnet(s)2162can include primary VNICs2164(1)-(N) that can be communicatively coupled to tenant virtual machines (VMs)2166(1)-(N) residing within the untrusted app subnet(s)2162. Each tenant VM2166(1)-(N) can run code in a respective container2167(1)-(N), and be communicatively coupled to an app subnet2126that can be contained in a data plane app tier2146that can be contained in a container egress VCN2168. Respective secondary VNICs2172(1)-(N) can facilitate communication between the untrusted app subnet(s)2162contained in the data plane VCN2118and the app subnet contained in the container egress VCN2168. The container egress VCN can include a NAT gateway2138that can be communicatively coupled to public Internet2154(e.g. public Internet1854ofFIG.18). The Internet gateway2134contained in the control plane VCN2116and contained in the data plane VCN2118can be communicatively coupled to a metadata management service2152(e.g. the metadata management system1852ofFIG.18) that can be communicatively coupled to public Internet2154. Public Internet2154can be communicatively coupled to the NAT gateway2138contained in the control plane VCN2116and contained in the data plane VCN2118. The service gateway2136contained in the control plane VCN2116and contained in the data plane VCN2118can be communicatively coupled to cloud services2156. In some examples, the pattern illustrated by the architecture of block diagram2100ofFIG.21may be considered an exception to the pattern illustrated by the architecture of block diagram2000ofFIG.20and may be desirable for a customer of the IaaS provider if the IaaS provider cannot directly communicate with the customer (e.g., a disconnected region). The respective containers2167(1)-(N) that are contained in the VMs2166(1)-(N) for each customer can be accessed in real-time by the customer. The containers2167(1)-(N) may be configured to make calls to respective secondary VNICs2172(1)-(N) contained in app subnet(s)2126of the data plane app tier2146that can be contained in the container egress VCN2168. The secondary VNICs2172(1)-(N) can transmit the calls to the NAT gateway2138that may transmit the calls to public Internet2154. In this example, the containers2167(1)-(N) that can be accessed in real-time by the customer can be isolated from the control plane VCN2116and can be isolated from other entities contained in the data plane VCN2118. The containers2167(1)-(N) may also be isolated from resources from other customers. In other examples, the customer can use the containers2167(1)-(N) to call cloud services2156. In this example, the customer may run code in the containers2167(1)-(N) that requests a service from cloud services2156. The containers2167(1)-(N) can transmit this request to the secondary VNICs2172(1)-(N) that can transmit the request to the NAT gateway that can transmit the request to public Internet2154.
Public Internet2154can transmit the request to LB subnet(s)2122contained in the control plane VCN2116via the Internet gateway2134. In response to determining the request is valid, the LB subnet(s) can transmit the request to app subnet(s)2126that can transmit the request to cloud services2156via the service gateway2136. It should be appreciated that IaaS architectures1800,1900,2000,2100depicted in the figures may have other components than those depicted. Further, the embodiments shown in the figures are only some examples of a cloud infrastructure system that may incorporate an embodiment of the disclosure. In some other embodiments, the IaaS systems may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration or arrangement of components. In certain embodiments, the IaaS systems described herein may include a suite of applications, middleware, and database service offerings that are delivered to a customer in a self-service, subscription-based, elastically scalable, reliable, highly available, and secure manner. An example of such an IaaS system is the Oracle Cloud Infrastructure (OCI) provided by the present assignee. FIG.22illustrates an example computer system2200, in which various embodiments may be implemented. The system2200may be used to implement any of the computer systems described above. As shown in the figure, computer system2200includes a processing unit2204that communicates with a number of peripheral subsystems via a bus subsystem2202. These peripheral subsystems may include a processing acceleration unit2206, an I/O subsystem2208, a storage subsystem2218and a communications subsystem2224. Storage subsystem2218includes tangible computer-readable storage media2222and a system memory2210. Bus subsystem2202provides a mechanism for letting the various components and subsystems of computer system2200communicate with each other as intended. Although bus subsystem2202is shown schematically as a single bus, alternative embodiments of the bus subsystem may utilize multiple buses. Bus subsystem2202may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. For example, such architectures may include an Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, which can be implemented as a Mezzanine bus manufactured to the IEEE P1386.1 standard. Processing unit2204, which can be implemented as one or more integrated circuits (e.g., a conventional microprocessor or microcontroller), controls the operation of computer system2200. One or more processors may be included in processing unit2204. These processors may include single core or multicore processors. In certain embodiments, processing unit2204may be implemented as one or more independent processing units2232and/or2234with single or multicore processors included in each processing unit. In other embodiments, processing unit2204may also be implemented as a quad-core processing unit formed by integrating two dual-core processors into a single chip. In various embodiments, processing unit2204can execute a variety of programs in response to program code and can maintain multiple concurrently executing programs or processes. 
At any given time, some or all of the program code to be executed can be resident in processor(s)2204and/or in storage subsystem2218. Through suitable programming, processor(s)2204can provide various functionalities described above. Computer system2200may additionally include a processing acceleration unit2206, which can include a digital signal processor (DSP), a special-purpose processor, and/or the like. I/O subsystem2208may include user interface input devices and user interface output devices. User interface input devices may include a keyboard, pointing devices such as a mouse or trackball, a touchpad or touch screen incorporated into a display, a scroll wheel, a click wheel, a dial, a button, a switch, a keypad, audio input devices with voice command recognition systems, microphones, and other types of input devices. User interface input devices may include, for example, motion sensing and/or gesture recognition devices such as the Microsoft Kinect® motion sensor that enables users to control and interact with an input device, such as the Microsoft Xbox® 360 game controller, through a natural user interface using gestures and spoken commands. User interface input devices may also include eye gesture recognition devices such as the Google Glass® blink detector that detects eye activity (e.g., ‘blinking’ while taking pictures and/or making a menu selection) from users and transforms the eye gestures as input into an input device (e.g., Google Glass®). Additionally, user interface input devices may include voice recognition sensing devices that enable users to interact with voice recognition systems (e.g., Siri® navigator), through voice commands. User interface input devices may also include, without limitation, three dimensional (3D) mice, joysticks or pointing sticks, gamepads and graphic tablets, and audio/visual devices such as speakers, digital cameras, digital camcorders, portable media players, webcams, image scanners, fingerprint scanners, barcode reader 3D scanners, 3D printers, laser rangefinders, and eye gaze tracking devices. Additionally, user interface input devices may include, for example, medical imaging input devices such as computed tomography, magnetic resonance imaging, positron emission tomography, and medical ultrasonography devices. User interface input devices may also include, for example, audio input devices such as MIDI keyboards, digital musical instruments and the like. User interface output devices may include a display subsystem, indicator lights, or non-visual displays such as audio output devices, etc. The display subsystem may be a cathode ray tube (CRT), a flat-panel device, such as that using a liquid crystal display (LCD) or plasma display, a projection device, a touch screen, and the like. In general, use of the term “output device” is intended to include all possible types of devices and mechanisms for outputting information from computer system2200to a user or other computer. For example, user interface output devices may include, without limitation, a variety of display devices that visually convey text, graphics and audio/video information such as monitors, printers, speakers, headphones, automotive navigation systems, plotters, voice output devices, and modems. Computer system2200may comprise a storage subsystem2218that comprises software elements, shown as being currently located within a system memory2210.
System memory2210may store program instructions that are loadable and executable on processing unit2204, as well as data generated during the execution of these programs. Depending on the configuration and type of computer system2200, system memory2210may be volatile (such as random access memory (RAM)) and/or non-volatile (such as read-only memory (ROM), flash memory, etc.). The RAM typically contains data and/or program modules that are immediately accessible to and/or presently being operated and executed by processing unit2204. In some implementations, system memory2210may include multiple different types of memory, such as static random access memory (SRAM) or dynamic random access memory (DRAM). In some implementations, a basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within computer system2200, such as during start-up, may typically be stored in the ROM. By way of example, and not limitation, system memory2210is also illustrated as including application programs2212, which may include client applications, Web browsers, mid-tier applications, relational database management systems (RDBMS), etc., program data2214, and an operating system2216. By way of example, operating system2216may include various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems, a variety of commercially-available UNIX® or UNIX-like operating systems (including without limitation the variety of GNU/Linux operating systems, the Google Chrome® OS, and the like) and/or mobile operating systems such as iOS, Windows® Phone, Android® OS, BlackBerry® 22 OS, and Palm® OS operating systems. Storage subsystem2218may also provide a tangible computer-readable storage medium for storing the basic programming and data constructs that provide the functionality of some embodiments. Software (programs, code modules, instructions) that when executed by a processor provide the functionality described above may be stored in storage subsystem2218. These software modules or instructions may be executed by processing unit2204. Storage subsystem2218may also provide a repository for storing data used in accordance with the present disclosure. Storage subsystem2218may also include a computer-readable storage media reader2220that can further be connected to computer-readable storage media2222. Together and, optionally, in combination with system memory2210, computer-readable storage media2222may comprehensively represent remote, local, fixed, and/or removable storage devices plus storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information. Computer-readable storage media2222containing code, or portions of code, can also include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information. This can include tangible computer-readable storage media such as RAM, ROM, electronically erasable programmable ROM (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disk (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible computer readable media.
This can also include nontangible computer-readable media, such as data signals, data transmissions, or any other medium which can be used to transmit the desired information and which can be accessed by computing system2200. By way of example, computer-readable storage media2222may include a hard disk drive that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk, and an optical disk drive that reads from or writes to a removable, nonvolatile optical disk such as a CD ROM, DVD, and Blu-Ray® disk, or other optical media. Computer-readable storage media2222may include, but is not limited to, Zip® drives, flash memory cards, universal serial bus (USB) flash drives, secure digital (SD) cards, DVD disks, digital video tape, and the like. Computer-readable storage media2222may also include solid-state drives (SSD) based on non-volatile memory such as flash-memory based SSDs, enterprise flash drives, solid state ROM, and the like, SSDs based on volatile memory such as solid state RAM, dynamic RAM, static RAM, DRAM-based SSDs, magnetoresistive RAM (MRAM) SSDs, and hybrid SSDs that use a combination of DRAM and flash memory based SSDs. The disk drives and their associated computer-readable media may provide non-volatile storage of computer-readable instructions, data structures, program modules, and other data for computer system2200. Communications subsystem2224provides an interface to other computer systems and networks. Communications subsystem2224serves as an interface for receiving data from and transmitting data to other systems from computer system2200. For example, communications subsystem2224may enable computer system2200to connect to one or more devices via the Internet. In some embodiments, communications subsystem2224can include radio frequency (RF) transceiver components for accessing wireless voice and/or data networks (e.g., using cellular telephone technology, advanced data network technology, such as 3G, 4G or EDGE (enhanced data rates for global evolution), WiFi (IEEE 802.11 family standards), or other mobile communication technologies, or any combination thereof), global positioning system (GPS) receiver components, and/or other components. In some embodiments, communications subsystem2224can provide wired network connectivity (e.g., Ethernet) in addition to or instead of a wireless interface. In some embodiments, communications subsystem2224may also receive input communication in the form of structured and/or unstructured data feeds2226, event streams2228, event updates2230, and the like on behalf of one or more users who may use computer system2200. By way of example, communications subsystem2224may be configured to receive data feeds2226in real-time from users of social networks and/or other communication services such as Twitter® feeds, Facebook® updates, web feeds such as Rich Site Summary (RSS) feeds, and/or real-time updates from one or more third party information sources. Additionally, communications subsystem2224may also be configured to receive data in the form of continuous data streams, which may include event streams2228of real-time events and/or event updates2230, that may be continuous or unbounded in nature with no explicit end. Examples of applications that generate continuous data may include, for example, sensor data applications, financial tickers, network performance measuring tools (e.g.
network monitoring and traffic management applications), clickstream analysis tools, automobile traffic monitoring, and the like. Communications subsystem2224may also be configured to output the structured and/or unstructured data feeds2226, event streams2228, event updates2230, and the like to one or more databases that may be in communication with one or more streaming data source computers coupled to computer system2200. Computer system2200can be one of various types, including a handheld portable device (e.g., an iPhone® cellular phone, an iPad® computing tablet, a PDA), a wearable device (e.g., a Google Glass® head mounted display), a PC, a workstation, a mainframe, a kiosk, a server rack, or any other data processing system. Due to the ever-changing nature of computers and networks, the description of computer system2200depicted in the figure is intended only as a specific example. Many other configurations having more or fewer components than the system depicted in the figure are possible. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, firmware, software (including applets), or a combination. Further, connection to other computing devices, such as network input/output devices, may be employed. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments. Although specific embodiments have been described, various modifications, alterations, alternative constructions, and equivalents are also encompassed within the scope of the disclosure. Embodiments are not restricted to operation within certain specific data processing environments, but are free to operate within a plurality of data processing environments. Additionally, although embodiments have been described using a particular series of transactions and steps, it should be apparent to those skilled in the art that the scope of the present disclosure is not limited to the described series of transactions and steps. Various features and aspects of the above-described embodiments may be used individually or jointly. Further, while embodiments have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are also within the scope of the present disclosure. Embodiments may be implemented only in hardware, or only in software, or using combinations thereof. The various processes described herein can be implemented on the same processor or different processors in any combination. Accordingly, where components or modules are described as being configured to perform certain operations, such configuration can be accomplished, e.g., by designing electronic circuits to perform the operation, by programming programmable electronic circuits (such as microprocessors) to perform the operation, or any combination thereof. Processes can communicate using a variety of techniques including but not limited to conventional techniques for inter process communication, and different pairs of processes may use different techniques, or the same pair of processes may use different techniques at different times. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. 
It will, however, be evident that additions, subtractions, deletions, and other modifications and changes may be made thereunto without departing from the broader spirit and scope as set forth in the claims. Thus, although specific disclosure embodiments have been described, these are not intended to be limiting. Various modifications and equivalents are within the scope of the following claims. The use of the terms “a” and “an” and “the” and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. The term “connected” is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments and does not pose a limitation on the scope of the disclosure unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the disclosure. Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is intended to be understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present. Preferred embodiments of this disclosure are described herein, including the best mode known for carrying out the disclosure. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. Those of ordinary skill should be able to employ such variations as appropriate and the disclosure may be practiced otherwise than as specifically described herein. Accordingly, this disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the disclosure unless otherwise indicated herein. All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.
In the foregoing specification, aspects of the disclosure are described with reference to specific embodiments thereof, but those skilled in the art will recognize that the disclosure is not limited thereto. Various features and aspects of the above-described disclosure may be used individually or jointly. Further, embodiments can be utilized in any number of environments and applications beyond those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive.
DETAILED DESCRIPTION In general, this disclosure is directed to techniques and systems that may increase the network utility of a computer network through autonomous network and mission optimization. Specifically, the techniques of this disclosure include the use of a network template that includes network-wide quality of service settings to configure nodes within the network and to enable nodes of the network to perform flow admission control of data flows in ways that increase the number of data flows admitted by nodes of the network. A complex joint network may be a network made up of nodes and a variety of different types of subnetworks across a wide geographical area that are connected for transmitting data such as video data, audio data, textual data, Internet data traffic, and the like, in the form of data flows of associated network packets. An organization, such as a corporation, a military unit, or any other collection of users, may use the complex joint network to transmit data flows to perform one or more tasks to accomplish a mission, such as a military operation. During the mission, users may use the complex joint network to send and receive data flows related to performing the mission. For example, users may send and receive such data flows to and from mission elements, which may be traffic flow endpoints such as users, devices, machines, servers, and the like, across the complex joint network. Nodes of the network, such as router devices, that perform routing of the data flows may receive data flows and may route the received data flows to their destinations in the network. To route the received data flows, router devices in the network may make real-time quality of service (QoS) decisions to perform flow admission control to determine whether to admit a received data flow for routing to the data flow's destination or to deny admission of a received data flow. In some examples, a router device in the network may perform flow admission control based on mission utilities associated with data flows, where the mission utility associated with a data flow may correspond to the relative priority of the data flow. The router device may prioritize data flows associated with higher mission utilities over data flows with lower mission utilities to reduce possible delays, packet loss, and the like, in transmitting data flows associated with higher mission utilities. However, a router device may not be able to determine the mission utilities associated with data flows for several reasons. For example, in order to minimize the size of a data flow, network packets of a data flow may not indicate the mission utility associated with the data flow. In another example, the mission utility of a data flow may change over time during the mission based on several different factors, such that certain data flows may become relatively more important, and thus increase in their associated mission utilities, or may become relatively less important, and thus decrease in their associated mission utilities. A network having router devices that are not able to determine, in real-time, the mission utilities associated with data flows may not be able to maximize the Normalized Cumulative Network Performance (CNP) of the network or the network utility of the network, which may be the sum of the mission utilities of data flows admitted in the network divided by the sum of the mission utilities associated with all requested data flows in the network.
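As an illustration of the Normalized Cumulative Network Performance measure described above, the short sketch below computes it as the sum of the mission utilities of admitted data flows divided by the sum of the mission utilities of all requested data flows. The function name and the example values are illustrative assumptions, not part of the disclosure.

```python
# Minimal sketch of the Normalized Cumulative Network Performance (CNP)
# measure described above: admitted utility divided by requested utility.
def normalized_cnp(requested_utilities, admitted_utilities):
    """Return CNP in the range [0, 1]; 0.0 if nothing was requested."""
    total_requested = sum(requested_utilities)
    if total_requested == 0:
        return 0.0
    return sum(admitted_utilities) / total_requested

# Example (assumed values): three flows requested with mission utilities
# 70, 40, and 10, of which only the first two were admitted.
print(normalized_cnp([70, 40, 10], [70, 40]))  # approximately 0.917
```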
In accordance with aspects of the present disclosure, nodes in a network may be able to make real-time quality of service (QoS) decisions based on a network template that includes various real-time information associated with the network, such as the flow types of data flows in the network, the mission elements in the network, the traffic classes of data flows in the network, and the like. The network template may be continuously updated with the latest information regarding the network. A router device in the network may utilize the network template to derive the mission utility associated with a data flow by determining, based on information contained in network packets of the data flow, information associated with the data flow, such as one or more flow types associated with the data flow and one or more source mission elements of the data flow. The router device may therefore determine the mission utility associated with the data flow based on such information associated with the data flow and perform flow admission control of data flows based on the derived mission utilities associated with the data flows. The techniques of this disclosure may increase the performance of a computer network. By enabling router devices of the network to derive, in real-time, the current mission utilities of data flows received by the router device and to perform quality of service and/or network quality techniques, such as flow admission control of the data flows, based on the current mission utilities of data flows, the techniques of this disclosure thereby increase the network utility of the network by enabling router devices to increase the number of higher priority data flows that are admitted by the router devices and to reduce possible delays, packet loss, and the like, in transmitting higher priority data flows in the network. FIGS.1A and1Bare block diagrams illustrating a system2for performing flow admission control in accordance with aspects of this disclosure. As shown inFIG.1A, system2includes a plurality of internal networks6A-6E (collectively “internal networks6”), such as LANs or other internal networks, connected by complex joint network4. While there could be other configurations, an example of a basic structure for system2is illustrated inFIGS.1A and1B. Complex joint network4may be a wide area network (WAN) that includes a global scale network of interconnected subnets and routers. In the example ofFIG.1A, complex joint network4includes subnetworks8A-8B (hereafter “subnets8”), such as wired or wireless LANs, WANs (e.g., radio WANs), optical-based networks, satellite communication (SATCOM) networks, route radio networks, bridge radio networks, and the like that are interconnected via one or more router devices, such as router device14. Subnets8may be based on or use Transmission Control Protocol/Internet Protocol (TCP/IP) or any other protocol that can be tunneled or wrapped over TCP/IP networks. In addition, complex joint network4may also include satellite communication (SATCOM) networks, such as SATCOM network16, that are interconnected with subnets8via, e.g., router device14. In other examples, complex joint network4may include any number of subnets connected via any number of routers as well as additional communication networks. Each of internal networks6may include one or more client devices12A-12C (collectively, “client devices12”).
Client devices12could be stationary computing devices, mobile computing devices, or any other suitable computing devices that send and receive data packets via router devices10. Each of internal networks6is connected to complex joint network4via respective router devices10A-10E (collectively “router devices10”). Router devices10may be network appliances that control the forwarding of data packets between computer networks. For example, router devices10may control the forwarding of data packets between complex joint network4and internal networks6. Further, router devices10may also control the forwarding of data packets between internal networks6. For example, router devices10may be able to control the forwarding of data packets from a first subnet (e.g., subnetwork8A) to a second subnet (e.g., subnetwork8B), such as by selecting the network links in complex joint network4over which the data packets are forwarded from the first subnet to the second subnet. Router devices10may connect to complex joint network4via links such as fiber optic links, SATCOM links, wireless radio links, and the like. For example, router device10A may connect to SATCOM network16via a SATCOM link and may connect to subnet8A via a fiber optic link. System2may include network management system20, which may be one or more computing devices, server devices, and the like that are connected to nodes in complex joint network4, such as being connected to router devices10. Network management system20may coordinate and manage the operations and settings of router devices10. Network management system20may include network template18for coordinating the settings of router devices10throughout system2. In some examples, router devices10may each store a copy of network template18. In other examples, network template18may be stored in network management systems connected to router devices10, such as network management system20, or elsewhere (e.g., other servers) in system2. When the network template18is edited, the edits made in network template18may be propagated across system2, such as being propagated throughout router devices10, complex joint network4, router device11(shown inFIG.1B), and the like, so that router devices10may operate using the latest version of network template18. As shown inFIG.1B, internal networks6may, in some examples, be configured to receive encrypted data packets from complex joint network4. For example, internal network6A may include enclave7A that includes client devices12that are connected to router device11to send and receive data packets to and from Inline Network Encryptor (INE)9. Inline Network Encryptor (INE)9is an encryption device that communicates with router device10A to receive encrypted data packets from complex joint network4. INE9may decrypt the received data packets and forward them to one of client devices12via router device11. Similarly, INE9may receive plain-text (i.e., unencrypted) data packets from client devices12via router device11, encrypt the data packets according to an encryption protocol, and send the data packets to another one of internal networks6via complex joint network4. In general, INE9fronting enclave7A encrypts all IP traffic originating from that enclave and transports the IP traffic over secure Internet protocol security (IPsec) tunnels to the respective INEs fronting respective destination internal networks, which decrypt these data packets before forwarding them to the hosts residing behind them.
INE9is configured to prevent bypass of any data from a plain-text (PT) network interface to a cipher-text (CT) interface, except for multicast join messages and tags associated with data flows generated by router device11, as described in more detail below. Examples of INE9include High Assurance IP Encryptors (HAIPEs) or commercial solutions for classified (CSfC) virtual private network (VPN) gateways. Network management system20may use network template18to manage complex joint network4regardless of whether complex joint network4transports encrypted data packets or unencrypted data packets. Network template18may include one or more files that include information regarding naming conventions, subnets, links, flow types, quality of service (QoS) settings, and other types of information that can be used by router devices10as well as any other suitable router devices and systems in system2, such as router device14. For example, network template18may include information that can be used by a visualizer tool provided by network management system20in system2to auto-configure items and flows that can be tracked and displayed by the visualizer tool. Network template18may also include information for configuring the data used by router devices10to perform dynamic flow admission control and make real-time QoS decisions throughout complex joint network4. For example, network template18may define the types of flows that are identified and/or permitted throughout complex joint network4and may, for each identified flow, include information that can be used by router devices10to make real-time QoS decisions regarding flow admissions. In accordance with aspects of the present disclosure, a router device such as router device10A of router devices10, router device14, or router device11may perform flow admission control based on network template18. Specifically, router device10A may receive a copy of network template18from network management system20, such as at the start of a mission, and may, in response to receiving a data flow, determine whether to admit the data flow based on information associated with the data flow in network template18. While this disclosure is described with respect to router device10A and/or router device11, the techniques of this disclosure can equally be performed by any router device in complex joint network4, such as any of router devices10, router device14, router device11, any routing devices in subnets8, and the like. A data flow may be a sequence of associated data packets that are transmitted from a source node in complex joint network4to a destination node in complex joint network4. In some examples, router device10A may receive a data flow from internal network6A at an interface of router device10A, and router device10A may perform flow admission control to determine whether to perform routing functionality to transmit the data flow on a network link to route the data flow through complex joint network4to one of the other router devices10(e.g., one of router devices10B-10E). In some examples, router device10A may receive a control data flow from complex joint network4at an interface of router device10A, and router device10A may perform flow admission control to determine whether to perform routing functionality to transmit the data flow on a network link to route the data flow to one or more of client devices12in internal network6A.
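The disclosure does not prescribe a concrete file format for network template 18. Purely as an assumption for illustration, the sketch below models a template as a Python dictionary holding the kinds of entries discussed in this description: traffic classes, flow types (ports, expected bandwidths, and flow type mission utilities), mission elements (network addresses and mission element utilities), and defaults. All field names and values are hypothetical, and later sketches reuse this layout.

```python
# Hypothetical, minimal model of a network template such as network
# template 18. The field names and values are assumptions chosen for
# illustration; the description only states that the template specifies
# traffic classes, flow types, mission elements, QoS settings, and defaults.
NETWORK_TEMPLATE = {
    "traffic_classes": {
        # traffic class -> ports that map flows into it and the share of
        # link bandwidth reserved for the class
        "voip":  {"ports": range(5060, 5062), "bandwidth_share": 0.25},
        "video": {"ports": range(554, 555),   "bandwidth_share": 0.50},
        "web":   {"ports": range(80, 81),     "bandwidth_share": 0.125},
    },
    "default_traffic_class": "default",
    "flow_types": [
        # each flow type: matching ports, expected bandwidth (bits per
        # second), and a flow type mission utility
        {"name": "voip", "ports": range(5060, 5062),
         "bandwidth_bps": 64_000, "mission_utility": 60},
        {"name": "www", "ports": range(80, 81),
         "bandwidth_bps": 500_000, "mission_utility": 20},
    ],
    "default_flow_type": {"name": "default", "bandwidth_bps": 100_000,
                          "mission_utility": 10},
    "mission_elements": [
        # network address -> mission element utility
        {"address": "10.1.0.5", "utility": 70},
        {"address": "10.2.0.9", "utility": 30},
    ],
    "default_mission_element": {"utility": 5},
}
```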
Router device10A may divide the bandwidth of a network link amongst different traffic classes, such as traffic classes defined by network template18, and may, for each of a plurality of traffic classes, reserve a specific amount of bandwidth of a network link for router device10A to transmit data flows of the traffic class. As such, router device10A may determine whether to admit a data flow of a specific traffic class based on whether there is sufficient available bandwidth in the bandwidth of the network link reserved for the specific traffic class to transmit the data flow via the network link. Data flows within a traffic class are prioritized based on the mission utility associated with the data flows. That is, if there is insufficient available bandwidth in the bandwidth of the network link reserved for the specific traffic class to transmit an incoming data flow of the traffic class, router device10A may drop (i.e., cease to receive and transmit) one or more data flows of the traffic class having lower mission utility than the mission utility of the data flow to create sufficient available bandwidth in the bandwidth of the network link reserved for the specific traffic class to transmit the data flow. As such, router device10A may, in response to receiving a data flow, determine a mission utility associated with the data flow and a traffic class associated with the data flow in order to control admission of the data flow based on the mission utility associated with the data flow and the traffic class associated with the data flow. To determine the mission utility associated with the data flow and the traffic class associated with the data flow, router device10A may inspect data packets of a data flow to determine various information and may use such information to determine the mission utility associated with the data flow and the traffic class associated with the data flow. The contents of a data packet of a data flow, such as the contents of the header of the data packet, may include any one or combination of the source network address and source port of the data packet, the destination network address and the destination port, the transport protocol identifier for the data packet, as well as other relevant information associated with the data packet. In examples where router device10A receives encrypted data flows, such as in the example ofFIG.1Bwhere router device10A is on the encrypted side of INE9, router device10A may be unable to inspect the data packets of a data flow because the data packets are encrypted. For example, a data flow originating from one of client devices12may flow from enclave7A to router device11to INE9, where INE9encrypts the data flow before the data flow reaches router device10A. In these examples, because router device11on the plaintext side of INE9may encounter data flows before the data flows are encrypted by INE9, router device11may, in response to receiving a data flow, inspect the unencrypted data packets of the data flow prior to INE9encrypting the data flow and may generate a tag associated with the data flow. Specifically, router device11may inspect data packets of a data flow to determine various information and may generate a tag associated with the data flow that may include any one or combination of the source network address and source port of the data packet, the destination network address and the destination port, the transport protocol identifier for the data packet, as well as other relevant information associated with the data packet.
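One simple way to realize the per-traffic-class bandwidth reservation described above is sketched below: each class is reserved a fraction of the link, and the router checks how much of that reservation is still unused before admitting another flow of the class. The link capacity, shares, and usage figures are assumptions chosen only for the example.

```python
# Illustrative sketch of per-traffic-class bandwidth reservation on a
# single network link; the capacity, shares, and usage are assumed values.
LINK_CAPACITY_BPS = 10_000_000  # assumed 10 Mb/s link

reserved_share = {"voip": 0.25, "video": 0.50, "web": 0.125, "default": 0.125}
in_use_bps = {"voip": 1_500_000, "video": 4_900_000, "web": 0, "default": 0}

def available_in_class(traffic_class):
    """Bandwidth still free inside the class's reservation, in bits/s."""
    allocation = reserved_share[traffic_class] * LINK_CAPACITY_BPS
    return max(0.0, allocation - in_use_bps[traffic_class])

print(available_in_class("voip"))   # 1000000.0 -> room for more VoIP flows
print(available_in_class("video"))  # 100000.0 -> video reservation nearly full
```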
Router device11may also determine, based on such information, the mission utility associated with the data flow and the traffic class associated with the data flow. Router device11may therefore generate, for a data flow, associated tags, the mission utility associated with the data flow, and the traffic class associated with the data flow. In some examples, router device11may, upon generating the tag associated with a data flow and upon determining the mission utility and the traffic class associated with the data flow, transmit the tag, the mission utility, and the traffic class associated with the data flow to router device10A and to network management system20. For example, router device11may transmit the tag associated with the data flow, the mission utility, and the traffic class associated with the data flow to network management system20. In some examples, router device11may insert the tag, the mission utility, and the traffic class associated with the data flow in data packets of the data flow and forward the data packets to INE9. When INE9receives the data packets, INE9may refrain from encrypting the portions of the data packets that contain the tag generated and determined by router device11, the mission utility, and the traffic class associated with the data flow. Thus, when router device10A receives the encrypted data flow from INE9, the data packets of the encrypted data flow may each include an unencrypted portion that contains the tag, the mission utility, and the traffic class associated with the data flow generated and determined by router device11. Router device10A may receive the tags generated by router device11, either via INE9or via network management system20. Router device10A may, for a set of encrypted data flows received by router device10A, receive a set of tags associated with the set of encrypted data flows. Router device10A may split up the set of encrypted data flows into individual encrypted data flows, determine the tag associated with each of the individual encrypted data flows, and perform quality of service techniques for each of the individual encrypted data flows, such as flow admission control, based on the associated tags, as described in this disclosure. Router device10A or router device11may determine the traffic class associated with the data flow based on at least one of: the source port of the data flow or the destination port of the data flow. Examples of traffic classes may include a chat traffic class, a video traffic class, a VoIP traffic class, a bulk traffic class (e.g., for file transfer protocol data flows), a web traffic class, a control traffic class, and the like. Network template18may specify a plurality of traffic classes, where each traffic class in the plurality of traffic classes specifies one or more data flows of the traffic class. Specifically, each traffic class in the plurality of traffic classes may specify the one or more data flows of the traffic class by specifying one or more ports that are associated with the traffic class. Router device10A or router device11may therefore determine a traffic class that is associated with the data flow out of the plurality of traffic classes specified by network template18as the traffic class that specifies a port that matches the source port of the data flow or the destination port of the data flow. In some examples, network template18may also include a default traffic class.
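The description above lists what the tag generated by router device 11 carries but does not define an encoding. As an assumption, the sketch below represents such a tag as a small data class holding those fields, so that an encrypted-side router could read them from the unencrypted portion of a packet; the class and field names are illustrative, not a defined wire format.

```python
# Hypothetical representation of the per-flow tag described above.
from dataclasses import dataclass

@dataclass
class FlowTag:
    src_addr: str
    src_port: int
    dst_addr: str
    dst_port: int
    protocol: str          # e.g., "udp" or "tcp"
    mission_utility: int   # derived on the plaintext side of the INE
    traffic_class: str     # derived on the plaintext side of the INE

# Example tag for an assumed VoIP flow between two mission elements.
tag = FlowTag("10.1.0.5", 5060, "10.2.0.9", 5060, "udp",
              mission_utility=130, traffic_class="voip")
print(tag)
```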
If router device10A or router device11cannot match a data flow with one of the traffic classes specified by network template18, router device10A or router device11may determine that the data flow is an unmatched data flow that is associated with the default traffic class, and any remaining unallocated bandwidth of router device10A may be used to admit such unmatched data flows. If router device10A does not have any unallocated bandwidth or if the remaining unallocated bandwidth of router device10A is less than an expected bandwidth used by the unmatched data flow, then router device10A may refrain from admitting the unmatched data flow. Network template18may, for each traffic class, also specify a default optimization basis (also referred to as “OptimizationBasis”). The optimization basis may be used by routers (e.g., router device10A and router device11) as well as network management system20to perform autonomous optimization of, e.g., complex joint network4. The optimization basis for a traffic class may include 1) a throughput size or speed of the traffic class, 2) a response latency or jitter of the traffic class, 3) a stability (e.g., associated with a reduction of errors) of the traffic class, and 4) any other suitable information. To determine the mission utility associated with the data flow, router device10A or router device11may determine the flow type associated with the data flow. In some examples, network template18may specify a plurality of flow types, where each flow type indicates a type of data flow associated with the flow type. Examples of flow types may include a voice over IP (VoIP) flow type, a world wide web (WWW) flow type, a chat flow type, and the like. Network template18may, for each flow type in the list of flow types, specify at least a port, a bandwidth, and a flow type mission utility. The port may be a TCP or UDP port or range of TCP or UDP ports associated with the flow type, and may be used to match a data flow with a flow type. The bandwidth may be the expected bandwidth usage of the flow, such as in bits per second. The flow type mission utility may correspond to the network priority of the flow type, where a higher flow type mission utility may indicate a higher priority. In some examples, the mission utility may be expressed as a numerical value, such as an integer from 0 to 100, from 0 to 70, and the like. In some examples, network template18may also specify a default flow type. Data flows that are not matched to any flow types in the list of flow types may be assigned to the default flow type. The default flow type may specify a default flow type mission utility that may be assigned to such unmatched flows. In some examples, each flow type may be associated with a default optimization basis associated with a parent traffic class of the flow type. A flow type may override the default optimization basis by specifying, in network template18, values for 1) a throughput size or speed of the traffic class, 2) a response latency or jitter of the traffic class, 3) a stability (e.g., associated with a reduction of errors) of the traffic class, and 4) any other suitable information. The optimization basis may be used by routers (e.g., router device10A and router device11) as well as network management system20to perform autonomous optimization of, e.g., complex joint network4. Router device10A or router device11may determine the flow type of the data flow based on at least one of: the source port of the data flow or the destination port of the data flow.
That is, router device10A or router device11may determine whether a flow type in the plurality of flow types specified by network template18specifies a port or a range of ports that matches at least one of the source port of the data flow or the destination port of the data flow. As described above, a flow type may have an associated flow type mission utility. If router device10A or router device11determines that the data flow is associated with a flow type in the plurality of flow types specified by network template18, router device10A or router device11may determine a flow type mission utility associated with the data flow as the flow type mission utility of the flow type specified in network template18. In some examples, router device10A or router device11may use the source port of the data flow to determine the flow type of the data flow and a source flow type mission utility associated with the data flow. Router device10A or router device11may determine whether a flow type in the plurality of flow types specified by network template18matches the source port of the data flow. If router device10A or router device11determines that a flow type in the plurality of flow types specified by network template18matches the source port of the data flow, router device10A or router device11may determine a source flow type mission utility associated with the data flow as the flow type mission utility of the flow type specified in network template18. In some examples, router device10A or router device11may also use the destination port of the data flow to determine a destination flow type of the data flow and the flow type mission utility associated with the data flow. Router device10A or router device11may determine whether a flow type in the plurality of flow types specified by network template18matches the destination port of the data flow. If router device10A or router device11determines that a flow type in the plurality of flow types specified by network template18matches the destination port of the data flow, router device10A or router device11may determine a destination flow type mission utility associated with the data flow as the flow type mission utility of the flow type specified in network template18. If router device10A or router device11determines both a source flow type mission utility and a destination flow type mission utility, router device10A or router device11may select the greater of the source flow type mission utility and the destination flow type mission utility as the flow type mission utility associated with the data flow. If router device10A or router device11determines that the data flow is not associated with any of the flow types in the plurality of flow types specified by network template18, router device10A or router device11may determine the flow type mission utility associated with the data flow as the flow type mission utility of the default flow type. In some examples, to determine the mission utility associated with a data flow, router device10A or router device11may determine a source mission element associated with the data flow and a destination mission element associated with the data flow based on network template18and the contents of data packets of the data flow. Mission elements are endpoint users or the names of endpoint computing devices, such as one of client devices12or an endpoint machine server.
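A minimal sketch of the port-based flow type lookup just described follows: the source port and the destination port are matched separately against the flow types in the template, the greater of the two flow type mission utilities is kept, and unmatched flows fall back to the default flow type. The template layout and values are the same illustrative assumptions used in the earlier sketches.

```python
# Illustrative flow type lookup following the description above.
def flow_type_mission_utility(template, src_port, dst_port):
    def lookup(port):
        for flow_type in template["flow_types"]:
            if port in flow_type["ports"]:
                return flow_type["mission_utility"]
        return None

    src_utility = lookup(src_port)   # source flow type mission utility
    dst_utility = lookup(dst_port)   # destination flow type mission utility
    matched = [u for u in (src_utility, dst_utility) if u is not None]
    if not matched:
        return template["default_flow_type"]["mission_utility"]
    return max(matched)

# Small self-contained example with assumed values.
template = {
    "flow_types": [{"name": "voip", "ports": range(5060, 5062), "mission_utility": 60}],
    "default_flow_type": {"name": "default", "mission_utility": 10},
}
print(flow_type_mission_utility(template, 5060, 32000))   # 60 (matched on source port)
print(flow_type_mission_utility(template, 40000, 32000))  # 10 (default flow type)
```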
As such, a source mission element may correspond to the mission element associated with the source of the data flow, and a destination mission element may correspond to the mission element associated with the destination of the data flow. Endpoint users may have an associated rank, title, name, and the like, while endpoint devices may have names (e.g., domain names), IP addresses, and the like. The mission elements may be associated with priorities based on, for example, the rank of the endpoint user, the usage of the endpoint device, and the like. In some examples, network template18may specify a plurality of mission elements, where each mission element in the plurality of mission elements may specify a network address and a mission element utility. The network address may be, for example, an IP address or other network address, and may be used for matching data flows to the mission element. The mission element utility may correspond to the network priority of the mission element, where a higher mission element utility may indicate a higher priority, such that flows from the mission element and/or to a mission element may be prioritized based at least in part on the associated mission element utility. In some examples, the mission element utility may be expressed as a numerical value, such as an integer. One example of a range of values for the mission element utility may be from 0 to 70, 0 to 100, or any other suitable range of values. In some examples, network template18may also include a default mission element having a default mission utility. If a data flow is not matched with one of the plurality of mission elements specified by network template18, the data flow may be associated with the default mission element. To determine the source mission element associated with the data flow and the destination mission element associated with the data flow, router device10A or router device11may determine, from the contents of data packets of the data flow, the source network address and the destination network address of a data packet of the data flow. The source network address and the destination network address of a data packet of the data flow may be considered the source network address and the destination network address of the data flow. Router device10A or router device11may determine the source mission element associated with the data flow based on determining whether the plurality of mission elements specified by network template18includes a mission element having a network address that matches the source network address of the data flow. That is, router device10A or router device11may determine whether the plurality of mission elements in network template18includes a mission element having a network address that is the same value as the source network address of the data flow. Router device10A or router device11may determine the destination mission element associated with the data flow based on determining whether the list of mission elements in network template18includes a mission element having a network address that matches the destination network address of the data flow. That is, router device10A or router device11may determine whether the plurality of mission elements specified by network template18includes a mission element having a network address that is the same value as the destination network address of the data flow.
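The address-matching step described above can be sketched as a simple lookup against the mission elements in the template, falling back to the default mission element when an address does not match; the template layout and addresses are assumptions carried over from the earlier sketches.

```python
# Illustrative mission element lookup: return the mission element utility
# of the mission element whose network address equals the given address,
# or the default mission element's utility if there is no match.
def mission_element_utility(template, address):
    for element in template["mission_elements"]:
        if element["address"] == address:
            return element["utility"]
    return template["default_mission_element"]["utility"]

# Small self-contained example with assumed values.
template = {
    "mission_elements": [{"address": "10.1.0.5", "utility": 70}],
    "default_mission_element": {"utility": 5},
}
print(mission_element_utility(template, "10.1.0.5"))  # 70 (matched)
print(mission_element_utility(template, "10.9.9.9"))  # 5 (default)
```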
Router device10A or router device11may also determine a source mission element utility associated with the data flow as the mission element utility of the source mission element of the data flow. If router device10A or router device11determines that the plurality of mission elements specified by network template18includes a mission element having a network address that matches the source network address of the data flow, router device10A or router device11may determine a source mission element utility associated with the data flow as the mission element utility of the mission element having the network address that matches the source network address of the data flow. Router device10A or router device11may also determine a destination mission element utility associated with the data flow as the mission element utility of the destination mission element of the data flow. If router device10A or router device11determines that the list of mission elements in network template18includes a mission element having a network address that matches the destination network address of the data flow, router device10A or router device11may determine a destination mission element utility associated with the data flow as the mission element utility of the mission element having the network address that matches the destination network address of the data flow. Router device10A or router device11may determine the mission element utility associated with the data flow as the greater of the source mission element utility and the destination mission element utility. If network template18does not include a mission element in the list of mission elements having a network address that matches the destination network address of the data flow or the source network address of the data flow, router device10A may determine a mission element utility associated with the data flow as the mission utility of the default mission element specified in network template18. Router device10A or router device11may determine a mission utility associated with the data flow based on one or more of the flow type mission utility associated with the data flow and the mission element utility associated with the data flow. In some examples, router device10A or router device11may determine the mission utility associated with the data flow as the sum of the flow type mission utility associated with the data flow and the mission element utility associated with the data flow. In other examples, router device10A or router device11may determine the mission utility associated with the data flow as the average (e.g., mean) of the flow type mission utility associated with the data flow and the mission element utility associated with the data flow, the greater of the flow type mission utility associated with the data flow and the mission element utility, and the like. Router device10A may control the admission of the data flow based at least in part on the mission utility associated with the data flow and the traffic class associated with the data flow. As described above, a traffic class is associated with a bandwidth allocation, which may be the amount of available bandwidth at one or more interfaces of router device10A that can be allocated for all data flows associated with the traffic class that router device10A encounters. For example, a traffic class may be allocated a percentage of the total bandwidth at one or more interfaces of router device10A as specified by the bandwidth allocation associated with the traffic class.
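Tying the two lookups together, the sketch below derives a mission utility for a data flow by taking the greater of the source and destination mission element utilities and combining the result with the flow type mission utility. The description allows the combination to be a sum, an average, or a maximum; the sum is used here purely as one of those options, and the numbers are assumed.

```python
# Illustrative derivation of a data flow's mission utility from its flow
# type mission utility and its mission element utilities, as described
# above. The combination function defaults to a sum but could equally be
# an average or a maximum.
def derive_mission_utility(flow_type_utility, src_element_utility,
                           dst_element_utility, combine=lambda a, b: a + b):
    mission_element_utility = max(src_element_utility, dst_element_utility)
    return combine(flow_type_utility, mission_element_utility)

# Example: a VoIP flow (assumed flow type utility 60) from a high-priority
# mission element (utility 70) to a lower-priority one (utility 30).
print(derive_mission_utility(60, 70, 30))  # 130
```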
Router device10A may determine whether to admit the data flow based at least in part on determining whether the available bandwidth in the bandwidth allocation associated with the traffic class is sufficient for the data flow. That is, router device10A or router device11may determine the expected bandwidth of the data flow, and router device10A may determine whether the amount of bandwidth not currently being used to receive and route data flows in the bandwidth allocation associated with the traffic class is greater than or equal to the expected bandwidth of the data flow. As described above, network template18may specify, for a flow type, the expected bandwidth of the flow. As such, router device10A or router device11may determine the flow type of the data flow from network template18based on the source port and/or destination port of the data flow and may determine, based on network template18, the expected bandwidth for the flow type associated with the data flow as the expected bandwidth of the data flow. If the expected bandwidth of the data flow is less than or equal to the amount of bandwidth not currently being used to receive and route data flows in the bandwidth allocation associated with the traffic class, router device10A or router device11may admit the data flow. If the expected bandwidth of the data flow is greater than the amount of bandwidth not currently being used to receive and route data flows in the bandwidth allocation associated with the traffic class, router devices along the path of the data flow, such as router device10A, router device11, router device14, router device10D, and the like, may determine whether to admit the data flow based at least in part on the mission utility associated with the data flow. Specifically, router device10A may compare the mission utility associated with the data flow with the mission utility associated with the other data flows associated with the traffic class that are currently being admitted by router device10A. If the mission utility associated with the data flow is not greater than the mission utility of any data flow associated with the traffic class that is currently being admitted by router device10A, router device10A may refrain from admitting the data flow. If the mission utility associated with the data flow is greater than the mission utility of at least one data flow associated with the traffic class that is currently being admitted by router device10A, router device10A may determine whether dropping (i.e., ceasing admission) a currently admitted data flow associated with the traffic class having the lowest mission utility would increase the amount of bandwidth not currently being used to be greater than or equal to the expected bandwidth of the data flow. If router device10A determines that dropping the currently admitted data flow associated with the traffic class having the lowest mission utility would increase the amount of bandwidth not currently being used to be greater than or equal to the expected bandwidth of the data flow, router device10A may admit the data flow.
If router device10A determines that dropping the currently admitted data flow associated with the traffic class having the lowest mission utility would not increase the amount of bandwidth not currently being used to be greater than or equal to the expected bandwidth of the data flow, router device10A may determine whether the mission utility associated with the data flow is greater than the second lowest mission utility of the one or more data flows associated with the traffic class that is currently being admitted by router device10A. If router device10A determines that the mission utility associated with the data flow is not greater than the second lowest mission utility of the one or more data flows associated with the traffic class that is currently being admitted by router device10A, router device10A may refrain from admitting the data flow. If router device10A determines that the mission utility associated with the data flow is greater than the second lowest mission utility of the one or more data flows associated with the traffic class that is currently being admitted by router device10A, router device10A may determine whether dropping currently admitted data flows associated with the traffic class having the lowest mission utility and the second lowest mission utility would increase the amount of bandwidth not currently being used to be greater than or equal to the expected bandwidth of the data flow. If router device10A determines that dropping the two currently admitted data flows would increase the amount of bandwidth not currently being used to be greater than or equal to the expected bandwidth of the data flow, router device10A may admit the data flow. In this way, router device10A may use the mission utility and the traffic class associated with a data flow to determine whether to admit the data flow. FIG.2is a block diagram illustrating router device200that performs flow admission control in accordance with one or more techniques of this disclosure. Router device200ofFIG.2is an example of one of router devices10A-10E, router device11, router device14, and the like ofFIGS.1A and1B, as well as any other router device described throughout this disclosure and is described below within the context of system2ofFIGS.1A and1B.FIG.2illustrates only one particular example of router device200and many other examples of router device200may be used in other instances. Router device200ofFIG.2may include a subset of the components included in example router device200or may include additional components not shown inFIG.2. As shown in the example ofFIG.2, router device200includes one or more processors240, one or more communication units244, and one or more storage devices248. Storage devices248of router device200also include routing module220, communication module222, and flow admission module224. One or more processors240may implement functionality and/or execute instructions within router device200. For example, processors240on router device200may receive and execute instructions stored by storage devices248that execute the functionality of routing module220, communication module222, and flow admission module224. These instructions executed by processors240may cause router device200to perform any quality of service technique such as flow admission control during program execution.
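Returning to the admission behavior described above for router device 10A, a check of the kind flow admission module 224 might perform can be sketched as follows: a new flow is admitted outright if its expected bandwidth fits within the free portion of its traffic class's allocation; otherwise the lowest-utility admitted flows of that class are considered for dropping, one at a time, but only while the new flow's mission utility exceeds theirs and until enough bandwidth is freed. All names and numbers below are illustrative assumptions, not the disclosure's implementation.

```python
# Illustrative per-traffic-class admission check following the behavior
# described above. Currently admitted flows are modeled as
# (mission_utility, bandwidth_bps) tuples; values are assumed.
def admit_flow(new_utility, new_bw, admitted, class_allocation_bps):
    """Return (admit, flows_to_drop) for one traffic class."""
    free = class_allocation_bps - sum(bw for _, bw in admitted)
    if new_bw <= free:
        return True, []                  # fits without dropping anything

    to_drop = []
    # Consider currently admitted flows from lowest to highest utility.
    for utility, bw in sorted(admitted):
        if new_utility <= utility:
            return False, []             # new flow does not outrank the rest
        to_drop.append((utility, bw))
        free += bw
        if new_bw <= free:
            return True, to_drop         # dropping these frees enough room
    return False, []                     # even dropping everything is not enough

# Example: a 1 Mb/s class allocation with two admitted flows; a new flow
# with higher mission utility arrives needing 400 kb/s.
ok, drops = admit_flow(new_utility=130, new_bw=400_000,
                       admitted=[(20, 300_000), (50, 600_000)],
                       class_allocation_bps=1_000_000)
print(ok, drops)  # True [(20, 300000)] -- drop only the lowest-utility flow
```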
That is, routing module220, communication module222, and flow admission module224may be operable by processors240to perform various actions or functions of router device200, for instance, routing data flows to and from internal network6A and complex joint network4and performing flow admission control of data flows to and from internal network6A and complex joint network4. Routing module220, communication module222, and flow admission module224may rely on information received by communication units244. In other words, as is described in more detail below, modules220-224may be operable by processors240to perform operations on information received by communication units244from an outside computing device, such as network management system20or complex joint network4. Although shown as software modules in the example ofFIG.2, router device200may execute the functions for performing the techniques of this disclosure using firmware, an application-specific integrated circuit (ASIC), or some combination of firmware, software, and ASICs. Communication channels250may interconnect each of the components220,222,224,240,244, and248for inter-component communications (physically, communicatively, and/or operatively). In some examples, communication channels250may include a system bus, a network connection, an inter-process communication data structure, or any other method for communicating data. One or more communication units244of router device200may communicate with external devices via one or more wired and/or wireless networks by transmitting and/or receiving network signals on the one or more networks. Each communication unit244may include multiple ports for receiving and/or sending traffic flows to outside devices, such as a client device or one or more nodes in complex joint network4. Examples of communication unit244include a network interface card (e.g., an Ethernet card), an optical transceiver, a radio frequency transceiver, a GPS receiver, or any other type of device that can send and/or receive information. Other examples of communication units244may include short wave radios, cellular data radios, wireless network radios, as well as universal serial bus (USB) controllers. One or more storage devices248within router device200may store information for processing during operation of router device200(e.g., router device200may store data that modules220,222, and224access during execution at router device200). In some examples, storage device248may function as a temporary memory, meaning that one purpose of storage device248is not long-term storage. Storage devices248on router device200may be configured for short-term storage of information as volatile memory and therefore may not retain stored contents if powered off. Examples of volatile memories include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art. Storage devices248may also be configured to store larger amounts of information than volatile memory. Storage devices248may further be configured for long-term storage of information as non-volatile memory space and retain information after power on/off cycles. Examples of non-volatile memories include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. 
Storage devices248may store program instructions and/or information (e.g., data) associated with modules200,222, and224. In accordance with techniques of this disclosure, communication module222of router device200may receive, via communication units244, data flows. Communication module222may receive, via communication units244, the data flows from internal network6A for router device200to route across complex joint network4to another of internal networks6, or communication module222may receive, via communication units244, the data flows from complex joint network4for router device200to route to one of client devices12. Flow admission module224may, in response to receiving a data flow, determine, based on network template18, the mission utility associated with the data flow and the traffic class associated with the data flow. Flow admission module224may, based on the mission utility associated with the data flow and the traffic class associated with the data flow, determine whether to admit the data flow. In some examples, if router device200receives encrypted data flows, such as router device10A inFIG.1B, flow admission module224may receive, from network management system20or a router device (e.g., router device11inFIG.1B) in a plaintext portion of the network, tags associated with data flows that may include any one or combination of the source network address and source port of the data packet, the destination network address and the destination port, the transport protocol identifier for the data packet, as well as other relevant information associated with the data packet. In some examples, the tags may include an indication of the mission utility associated with the data flow and the traffic class associated with the data flow. In some examples, if router device200is a router device in a plaintext portion of a crypto-partitioned network, such as router device11inFIG.1B, flow admission module224may determine tags associated with the data flow, and may transmit the tags to network management system20and/or to a corresponding router device, such as router device10A inFIG.1B, in the encrypted portion of the crypto-partitioned network. The tag associated with a data flow may include any one or combination of the source network address and source port of the data packet, the destination network address and the destination port, the transport protocol identifier for the data packet, as well as other relevant information associated with the data packet. In some examples, the tags may include an indication of the mission utility associated with the data flow and the traffic class associated with the data flow. Router device200may store a copy of network template18in storage devices248. For example, when router device200begins its operations, routing device may communicate with network management system20to receive the last version of network template18and may store network template18in storage devices248. To determine the mission utility associated with the data flow and the traffic class associated with the data flow, flow admission module224may determine the flow type associated with the data flow. 
For example, flow admission module224may determine, from the header of the data packets of the data flow or a tag associated with the data flow, the source network address, the source network port, the destination network address, and the destination port associated with the data flow, and flow admission module224may determine, based on the source network port and/or destination network port, the flow type associated with the data flow. Flow admission module224determine, based on network template18, the flow type associated with the data flow. For example, flow admission module224may determine the flow type associated with the data flow as the flow type specified in network template18as having a port that matches the source network port associated with the data flow and/or the destination network port associated with the data flow. Flow admission module224may determine a flow type mission utility associated with the data flow as the flow type mission utility associated with the determined flow type specified by network template18. In some examples, flow admission module224may determine, based on network template18, a source flow type associated with the data flow as the flow type specified in network template18as having a port that matches the source port associated with the data flow. Flow admission module224may therefore determine a source flow type mission utility associated with the data flow as the flow type mission utility associated with the determined source flow type specified by network template18. Similarly, flow admission module224may determine, based on network template18, a destination flow type associated with the data flow as the flow type specified in network template18as having a port that matches the destination port associated with the data flow. Flow admission module224may therefore determine a destination flow type mission utility associated with the data flow as the flow type mission utility associated with the determined destination flow type specified by network template18. In some examples, flow admission module224may determine, based on network template18, mission elements associated with the data flow. For example, flow admission module224may determine the mission element associated with the data flow as the mission element specified in network template18as having a network address that matches the source network address associated with the data flow and/or the destination network address associated with the data flow. Flow admission module224may determine a mission element utility associated with the data flow as the mission element utility associated with the determined mission element specified by network template18. In some examples, flow admission module224may determine, based on network template18, a source mission element associated with the data flow as the mission element specified in network template18as having a network address that matches the source network address associated with the data flow. Flow admission module224may therefore determine a source mission element utility associated with the data flow as the mission element utility associated with the determined source mission element specified by network template18. Similarly, flow admission module224may determine, based on network template18, a destination mission element associated with the data flow as the mission element specified in network template18as having a network address that matches the destination network address associated with the data flow. 
Flow admission module224may therefore determine a destination mission element utility associated with the data flow as the mission element utility associated with the determined destination mission element specified by network template18. Flow admission module224may therefore determine the mission utility associated with the data flow based at least in part on a flow type mission utility and the mission element utility associated with the data flow. For example, flow admission module224may determine a flow type mission utility associated with the data flow as the greater of the source flow type mission utility and the destination flow type mission utility, or may determine the flow type mission utility as the flow type mission utility associated with a default flow type if the flow type associated with the data flow is the default flow type. Similarly, flow admission module224may determine a mission element utility associated with the data flow as the greater of the source mission element utility and the destination mission element utility, or may determine the mission element utility as the mission element utility associated with a default mission element if the mission element associated with the data flow is the default mission element. Flow admission module224may therefore determine the mission utility associated with the data flow as the sum of the flow type mission utility associated with the data flow and the mission element utility associated with the data flow. Flow admission module224may determine, based on network template18, the traffic class associated with the data flow. For example, flow admission module224may determine the traffic class associated with the data flow as the traffic class specified in network template18as having a port that matches the source network port associated with the data flow and/or the destination network port associated with the data flow. Flow admission module224may determine, based on the determined mission utility associated with a data flow and the traffic class associated with the data flow, whether to admit the data flow. Flow admission module224may determine the amount of available bandwidth in the bandwidth of a network link to allocated to the traffic class and may determine whether the expected bandwidth usage associated with the flow type is less than or equal to the amount of available bandwidth in the bandwidth of the network link allocated to the traffic class. If the expected bandwidth usage associated with the flow type is less than or equal to the amount of available bandwidth in the bandwidth in the network link allocated to the traffic class, flow admission module224may admit the data flow. If the expected bandwidth usage associated with the flow type is greater than the amount of available bandwidth in the bandwidth in the network link allocated to the traffic class, flow admission module224may determine whether one or more data flows associated with the traffic class can be dropped to increase the amount of available bandwidth in the bandwidth in the network link allocated to the traffic class to accommodate the data flow. Flow admission module224may determine whether the mission utility of the data flow is greater than the mission utilities of one or more other data flows in the traffic class currently being admitted by router device200. 
If flow admission module224determines that the mission utility of the data flow is greater than the mission utilities of one or more other data flows in the traffic class currently being admitted by router device200, flow admission module224may determine whether dropping the one or more other data flows in the traffic class that are associated with lower mission utilities would increase the amount of available bandwidth in the bandwidth allocated to the traffic class to be equal to or greater than the expected bandwidth usage associated with the flow type. If flow admission module224determines that dropping one or more other data flows in the traffic class that are associated with lower mission utilities would increase the amount of available bandwidth in the bandwidth allocated to the traffic class to be equal to or greater than the expected bandwidth usage associated with the flow type, flow admission module224may drop the one or more other data flows in the traffic class that are associated with lower mission utilities and may admit the data flow. Conversely, if flow admission module224determines that even dropping every other data flows in the traffic class that are associated with lower mission utilities would not increase the amount of available bandwidth in the bandwidth allocated to the traffic class to be equal to or greater than the expected bandwidth usage associated with the flow type, flow admission module224may refrain from admitting the data flow. In this way, flow admission module224may perform admission control of data flows. FIG.3is a block diagram illustrating network management system20that manages a complex joint network4in accordance with one or more techniques of this disclosure. Network management system20ofFIG.3is described below within the context of system2ofFIGS.1A and1B.FIG.3illustrates only one particular example of network management system20, and many other examples of network management system20may be used in other instances. Network management system20ofFIG.3may include a subset of the components included in example network management system20or may include additional components not shown inFIG.3, and may be used to manage and optimize networks that include an encrypted core with unencrypted edge enclaves or an unencrypted network. As shown in the example ofFIG.3, network management system20includes one or more processors340, one or more communication units344, and one or more storage devices348. Storage devices348of network management system20also include monitoring module320, communication module322, visualizer module324, and network template18. One or more processors340may implement functionality and/or execute instructions within network management system20. For example, processors340on network management system20may receive and execute instructions stored by storage devices348that execute the functionality of monitoring module320, communication module322, and visualizer module324. These instructions executed by processors340may cause network management system20to monitor complex joint network4. That is, monitoring module320, communication module322, and visualizer module324may be operable by processors340to perform various actions or functions of network management system20, for instance, monitoring complex joint network4, deriving tags associated with active flows in complex joint network4, and providing a visualization of complex joint network4. 
Monitoring module320, communication module322, and visualizer module324may rely on information received by communication units344. In other words, as is described in more detail below, modules320-324may be operable by processors340to perform operations on information received by communication units344from an outside computing device, such as router devices10of complex joint network4. Although shown as software modules in the example ofFIG.3, network management system20may execute the functions for performing the techniques of this disclosure using firmware, an application-specific integrated circuit (ASIC), or some combination of firmware, software, and ASICs. Communication channels350may interconnect each of the components320,322,324,340,344, and348for inter-component communications (physically, communicatively, and/or operatively). In some examples, communication channels350may include a system bus, a network connection, an inter-process communication data structure, or any other method for communicating data. One or more communication units344of network management system20may communicate with external devices via one or more wired and/or wireless networks by transmitting and/or receiving network signals on the one or more networks. Each communication unit344may include multiple ports for receiving and/or sending traffic flows to outside devices, such as a client device or one or more nodes in complex joint network4. Examples of communication unit344include a network interface card (e.g., an Ethernet card), an optical transceiver, a radio frequency transceiver, a GPS receiver, or any other type of device that can send and/or receive information. Other examples of communication units344may include short wave radios, cellular data radios, wireless network radios, as well as universal serial bus (USB) controllers. One or more storage devices348within network management system20may store information for processing during operation of network management system20(e.g., network management system20may store data that modules320,322, and324access during execution at network management system20). In some examples, storage device348may function as a temporary memory, meaning that one purpose of storage device348is not long-term storage. Storage devices348on network management system20may be configured for short-term storage of information as volatile memory and therefore may not retain stored contents if powered off. Examples of volatile memories include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art. Storage devices348may also be configured to store larger amounts of information than volatile memory. Storage devices348may further be configured for long-term storage of information as non-volatile memory space and retain information after power on/off cycles. Examples of non-volatile memories include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. Storage devices348may store program instructions and/or information (e.g., data) associated with modules320,322, and324. Network management system20may use network template18to manage complex joint network4regardless of whether complex joint network4transports encrypted data packets or unencrypted data packets. 
Network template18may include one or more files that include information regarding naming conventions, subnets, links, flow types, quality of service (QoS) settings, and other types of information that can be used by router devices10as well as any other suitable devices and systems in system2. For example, network template18may include information that can be used by a visualizer tool in system2to auto-configure items and flows that can be tracked and displayed by the visualizer tool. Network template18may be created and stored in network management system20, and nodes of complex joint network4may be able to retrieve network template18during initialization of the nodes. In some examples, network template18may include definitions and associations such as global lookup settings for items (e.g., users, nodes, link types, networks, subnetworks, etc.) within complex joint network4, including defining DNS servers, LDAP servers, mission databases, and the like used to lookup such items. Network template18may also include information for configuring the data used by router devices10to perform dynamic flow admission control and make real-time QoS decisions throughout complex joint network4. For example, network template18may defines the types of flows that are identified and/or permitted throughout complex joint network4and may, for each identified flow, include information that can be used by router devices10to make real-time QoS decisions regarding flow admissions. In some examples, network template18may specify a list of flow types, where a flow type indicates the type of data flow associated with the flow type. Examples of flow types may include a voice over IP (VoIP) flow type, a world wide web (WWW) flow type, a chat flow type, and the like. Network template18may, for each flow type in the list of flow types, specify one or more of: a port, a name, a bandwidth, a mission utility, and a color. The port may be a TCP or UDP port or range of ports associated with the flow type, and may be used to match a flow with a flow type. The name may be a name used to identify the flow. Examples of names may include “FMV”, “Video-conference”, “VoIP”, “web”, “YouTube”, and the like. The bandwidth may be the expected bandwidth usage of the flow, such as in bits per second. The mission utility may correspond to the network priority of the flow type, where a higher mission utility may indicate a higher priority. Thus, a first data flow having a higher mission utility than a second flow may be prioritized over the second data flow. In some examples, the mission utility may be expressed as a numerical value, such as integers. One example of a range of values for the mission utility may be from 0 to 100, 0 to 70, −10 to 10, and the like, although other ranges of values are also contemplated in this disclosure. The color may be the color of the flow as displayed by a visualizer, such as a visualizer in network management system20. In some examples, the color may be expressed as any valid CSS color string, such as “ff0000”, “red”, or “rgb(255,0,0)”. In some examples, network template18may also include a default flow type. Flows that are not matched to any flow types in the list of flow types may be assigned to the default flow type. The default flow type may specify a default mission utility that may be assigned to such unmatched flows. In some examples, network template18may specify whether a flow type in the list of flow types is to be used as the default flow type for unmatched flows. 
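As a concrete illustration of the flow-type fields just described, a device could match a data flow to a flow type by comparing the flow's source or destination port against the ports listed in network template18, falling back to the default flow type for unmatched flows. The Python sketch below is a simplified assumption made for illustration; the entries shown and the helper name match_flow_type are hypothetical and not taken from this disclosure.

from typing import Optional

# Hypothetical in-memory subset of the flow-type list, mirroring the fields
# described above (port, name, expected bandwidth, mission utility, color).
FLOW_TYPES = [
    {"Port": 80, "Name": "WWW", "Bandwidth": 5001, "MissionUtility": 70, "Color": "grey"},
    {"Port": 9987, "Name": "voip", "Bandwidth": 2002, "MissionUtility": 70, "Color": "grey"},
]
FLOW_TYPE_DEFAULT = {"Name": "default", "Bandwidth": 5000, "MissionUtility": 10, "Color": "lightgrey"}

def match_flow_type(source_port: int, destination_port: Optional[int] = None) -> dict:
    # Return the flow type whose port matches either port; unmatched flows
    # fall back to the default flow type.
    for flow_type in FLOW_TYPES:
        if flow_type["Port"] in (source_port, destination_port):
            return flow_type
    return FLOW_TYPE_DEFAULT

# Example: a flow with destination port 9987 matches the "voip" flow type.
assert match_flow_type(51515, 9987)["Name"] == "voip"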
In some examples, network template18may specify a list of traffic classes. Network template18may, for each traffic class in the list of traffic classes, specify one or more of: a name, a bandwidth allocation, and one or more data flows that fall under the traffic classes. The name may be a name used to identify the traffic class. Examples of names may include “video”, “voice”, etc. The bandwidth allocation may be the amount of available bandwidth to allocate for the traffic class. The bandwidth allocation may be specified as a percentage of available bandwidth, the amount of bandwidth in, for example, bits per second, or any other suitable expression of the amount of bandwidth allocated to the traffic class. In some examples, the bandwidth allocation may be the amount of available bandwidth in complex joint network4to allocate for the traffic class. The one or more data flows may be the flows that are assigned to the traffic class. The flows that are assigned to the traffic class may be specified as ports associated with the flow. Flows assigned to the traffic class may be admitted under the traffic class's bandwidth allocation. In some examples, only flows assigned to the traffic class may be admitted under the traffic class's bandwidth allocation, unless the traffic class has available bandwidth allocation for accommodating flows that are not assigned to the traffic. In some examples, network template18may also include a default traffic class. If a flow is not matched with one of the flow types listed in network template18, the flow may be an unmatched flow that is associated with the default traffic class, and any remaining unallocated bandwidth may be used to admit such unmatched flows. If no unallocated bandwidth remains or if the remaining unallocated bandwidth is less than the expected bandwidth used by the unmatched data flow, then a router device may refrain from admitting the unmatched data flow. In some examples, network template18may specify whether a traffic class in the list of traffic classes is to be used as the default traffic class for unmatched flows. In some examples, network template18may include a set of flow groups to group multiple data flows into a single flow, such as for visualization purposes. For example, network template18may specify a flow group of data flows associated with a specified traffic class, data flows associated with a specified flow type, data flows from a first specified user to a second specified user, and the like. In some examples, network template18may specify a list of security domains. Network template18may, for each security domain in the list of security domains, specify one or more of: a name, a network, and a color. The name may be a name used to identify the network domain. The network may be a range of network addresses (e.g., IP network addresses such as 192.168.1.0/224) by which nodes of complex joint network4are matched to a security domain. The color may be the color of nodes associated with the security domain as displayed by a visualizer, such as a visualizer in network management system20. In some examples, the color may be expressed as any valid CSS color string, such as “ff0000”, “red”, or “rgb(255,0,0)”. In some examples, network template18may specify a list of mission elements, where mission elements are endpoint users or the names of an endpoint computing device, such as one of client devices12or an endpoint machine server. 
Network template18may, for each mission element in the list of mission elements, specify one or more of: a name, a looked up name, a user identifier (UID), a network address, and a mission utility. The name may be a name used to identify the mission element. The looked up name may be a name looked up via LDAP or DNS, such as a hostname or username associated with the mission element. The UID may be a network-unique (e.g., unique to complex joint network4) identifier for the mission element. The network address may be, for example, an IP address or other network address, and may be used for matching flows to the mission element. The mission utility may correspond to the network priority of the mission element, where a higher mission utility may indicate a higher priority, such that flows from the mission element and/or to a mission element may be prioritized based at least in part on the associated mission utility. In some examples, the mission utility may be expressed as a numerical value, such as integers. One example of a range of values for the mission utility may be from 0 to 70, although other ranges of values are also contemplated in this disclosure. In addition, network template18may, for each mission element, specify additional information associated with the mission element that is looked up via LDAP or DNS, such as users associated with the endpoint, the name, rank, responsibilities, and the like of the users associated with the endpoint, and/or the mission utility value. In some examples, network template18may also include a default mission element having a default mission utility. If a mission element is not matched with one of the mission elements listed in network template18, the mission element may be an unmatched mission element that is associated with the default mission element. In some examples, network template18may specify a list of link types, where links are network links in complex joint network4, such as links between router devices10, subnets8, SATCOM network16, and the like. Network template18may, for each link type in the list of link types, specify one or more of: a name, a maximum bandwidth, link name prefixes, and a medium of the link. The name may be a name used to identify the link. Examples of names may include "Unknown", "RF-CDL", "Optical", "RF-WiFi", "Ethernet", "RF-SRW", etc. The link name prefixes may be case-insensitive strings used to match router-reported links (e.g., links reported by router devices10) to link types. The medium of the link may include, for example, an optical medium, a Common Data Link (CDL) medium, Ethernet, WiFi, and the like. The maximum bandwidth may be the expected maximum bandwidth of the link, and may correspond to the medium of the link. In some examples, network template18may, for each link type, also specify additional information such as whether the link type is an air-to-air link, an air-to-ground link, a ground-to-ground link, and the like, whether the link type is a wired link or a wireless link, the currently measured size of the link as determined from real-time values updated from router devices10, and the like. In some examples, network template18may specify whether a link type in the list of link types is to be used as the default link type for unmatched links. In some examples, network template18may specify a list of nodes, where nodes are devices in complex joint network4, such as router devices10, client devices12, router device14, and the like. 
Network template18may, for each node in the list of nodes, specify one or more of: a name, a node ID, a paired node ID, an icon, and node name prefixes. The name may be a name used to identify the node. If network template18does not specify a name for a node, the name for the node may be inferred as the node ID. The node ID may identify a node and may be the ID received from a corresponding node, such as a router in complex joint network4. The paired node ID may be the node ID of a node on the other side of an INE, such as INE9from the corresponding node in complex joint network4. The icon may be the icon that represents the node when displayed by a visualizer. The node name prefixes may be case-insensitive strings used to match the beginning of router-reported nodes (e.g., nodes reported by router devices10) to node types. In some examples, network template18may also specify a default icon that represents any unmatched nodes when displayed by a visualizer. Network template18may also specify one or more autonomous and/or manual control and/or configuration settings for routers and devices in complex joint network4. Such settings may include Application Programming Interface (API) settings, plug-in settings for routers, and the like. Such settings may also include settings for performing beam switching, such as cubic beam switching or auto-based beam switching to known discovered paths. Such settings may include settings for enabling and disabling interfaces and links of router devices in complex joint network4, and may include settings for throttling the capacity of interfaces and links of router devices in complex joint network4. These settings may enable network management system20to perform smart optimization of complex joint network4. Network template18may also specify priority and autonomous network optimization settings, such as priority and/or mission utility settings, which may include QoS settings, flow admission control settings, and multiple router tables. Such settings, which may be specified in the optimization basis associated with traffic classes and/or flow types, may specify traffic-aware settings for using different router tables for routing data flows based on the traffic in complex joint network4to optimize for latency and/or mission-aware settings for using different routing tables for routing data flows based on the mission to optimize for trust. In some examples, network template18may also specify thresholds, such as a target CNP for complex joint network4, formula options for CNP, as well as constants and the like that specify trade-off settings, such as for race conditions, tie breakers, and the like. These settings may also enable network management system20to perform smart optimization of complex joint network4. In some examples, network template18may also specify settings to perform smart rebalancing of complex joint network4. 
An example network template18is as follows:

{
  "FlowTypes": [
    {"Port": 80, "Name": "WWW", "Bandwidth": 5001, "MissionUtility": 70, "Color": "grey"},
    {"Port": 9987, "Name": "voip", "Bandwidth": 2002, "MissionUtility": 70, "Color": "grey", "OptimizationBasis": "latency"},
    {"Port": 5004, "Name": "fjord", "Bandwidth": 15000000, "MissionUtility": 50, "Color": "darkorange", "OptimizationBasis": "latency"},
    {"Port": 6004, "Name": "hockey", "Bandwidth": 3900000, "MissionUtility": 20, "Color": "grey", "OptimizationBasis": "latency"},
    {"Port": 8765, "Name": "iperf", "Bandwidth": 10000, "MissionUtility": 60, "Color": "grey"},
    {"Port": 389, "Name": "ldap", "Bandwidth": 1000, "MissionUtility": 70, "Color": "grey"},
    {"Port": 8080, "Name": "Cyvis", "Bandwidth": 100, "MissionUtility": 50, "Color": "grey"},
    {"Port": 1234, "Name": "Chat_A_1", "Bandwidth": 1000, "MissionUtility": 40, "Color": "red"},
    {"Port": 1235, "Name": "Chat_A_2", "Bandwidth": 1000, "MissionUtility": 40, "Color": "orange"},
    {"Port": 1236, "Name": "Chat_A_3", "Bandwidth": 1000, "MissionUtility": 40, "Color": "yellow"},
    {"Port": 1237, "Name": "Chat_A_4", "Bandwidth": 1000, "MissionUtility": 40, "Color": "yellowgreen"},
    {"Port": 1238, "Name": "Chat_A_5", "Bandwidth": 1000, "MissionUtility": 40, "Color": "green"},
    {"Port": 1239, "Name": "Chat_A_6", "Bandwidth": 1000, "MissionUtility": 40, "Color": "lightblue"},
    {"Port": 1340, "Name": "Chat_A_7", "Bandwidth": 1000, "MissionUtility": 40, "Color": "blue"}
  ],
  "FlowTypeDefault": {"Bandwidth": 5000, "MissionUtility": 10, "Color": "lightgrey", "OptimizationBasis": "throughput"},
  "SecurityDomains": [
    {"Name": "AFRL Domain", "Network": "192.168.0.0/16", "Color": "hotpink"},
    {"Name": "Black Core", "Network": "10.0.0.0/8", "Color": "teal"}
  ],
  "TrafficClasses": [
    {"Name": "Chat", "Allocation": 10, "Flows": [9987, 8765, 389, 1234, 1235, 1236, 1237, 1238, 1239, 1340, 1241, 1242, 1243, 1244, 1245, 1246, 1248, 1247], "OptimizationBasis": "latency"},
    {"Name": "VoIP", "Allocation": 25, "Flows": [8080], "OptimizationBasis": "latency"},
    {"Name": "video", "Allocation": 35, "Flows": [8080], "OptimizationBasis": "throughput"},
    {"Name": "Other", "DefaultClass": "yes", "Allocation": 20, "Flows": [5004, 6004, 0], "OptimizationBasis": "throughput"}
  ],
  "MissionElements": [
    {"Name": "Middle", "AccountName": "middle", "MissionUtility": 15, "Address": "192.168.17.2"},
    {"Name": "Left", "AccountName": "left", "MissionUtility": 5, "Address": "192.168.11.2"},
    {"Name": "Right", "AccountName": "right", "MissionUtility": 10, "Address": "192.168.15.2"},
    {"Name": "GEP", "AccountName": "gep", "MissionUtility": 20, "Address": "192.168.19.2"},
    {"Name": "Rome", "AccountName": "Rome", "MissionUtility": 20, "Address": "192.168.89.100"}
  ],
  "MissionElementDefault": {"MissionUtility": 10},
  "LinkTypes": [],
  "NodeTypes": [
    {"Name": "GEP", "Icon": "DGE.png", "Prefixes": ["dge", "gep"]},
    {"Name": "DeployedRadar", "Icon": "Comms.png", "Prefixes": ["sb"]},
    {"Name": "Aerial Gateway", "Icon": "UAV.png", "Prefixes": ["middle"]},
    {"Name": "Rome", "Icon": "Tent.png", "Prefixes": ["Rome"]},
    {"Name": "TE", "Icon": "Hummer.png", "Prefixes": ["right", "left"]}
  ],
  "NodeTypeDefault": {"Icon": "Plane.png"},
  "TrafficGroups": [{}],
  "MissionElementsDefault": {"MissionUtility": 5}
}

In some examples, monitoring module320may determine information regarding data flows in complex joint network4based on information reported by nodes of complex joint network4to network management system20, such as router configuration data, the information contained in network template18, as well as information reported by other servers and systems, such as a Domain Name System, a Lightweight Directory Access Protocol (LDAP) server, and the like. 
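To tie the example template above back to the earlier description of mission utility, the following Python sketch shows one hypothetical way a router device or monitoring component could load such a template and compute a flow's mission utility as the sum of a flow-type utility and a mission-element utility, taking the greater of the source and destination matches on each axis and falling back to the template's defaults. The file name template.json and the helper names are assumptions made for illustration only.

import json

def load_template(path: str = "template.json") -> dict:
    # Load a network template such as the example above.
    with open(path) as f:
        return json.load(f)

def flow_type_utility(template: dict, src_port: int, dst_port: int) -> int:
    matches = [ft["MissionUtility"] for ft in template.get("FlowTypes", [])
               if ft["Port"] in (src_port, dst_port)]
    # Greater of the source/destination matches, else the default flow type's utility.
    return max(matches) if matches else template["FlowTypeDefault"]["MissionUtility"]

def mission_element_utility(template: dict, src_addr: str, dst_addr: str) -> int:
    matches = [me["MissionUtility"] for me in template.get("MissionElements", [])
               if me["Address"] in (src_addr, dst_addr)]
    return max(matches) if matches else template["MissionElementDefault"]["MissionUtility"]

def mission_utility(template: dict, src_addr: str, src_port: int,
                    dst_addr: str, dst_port: int) -> int:
    # Mission utility of a flow = flow-type utility + mission-element utility.
    return (flow_type_utility(template, src_port, dst_port)
            + mission_element_utility(template, src_addr, dst_addr))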
Monitoring module320may use communication module322to communicate with nodes of complex joint network4. For example, router devices (e.g., router devices10) in complex joint network4may report, for each flow sensed by the router devices, the source network address, the source port, the destination network address, the destination port, the protocol ID, and the differentiated services code point (DSCP). In the case where complex joint network4is an encrypted network, router devices on the plain-text side of the network (e.g., router device11behind INE9) may report, for each flow sensed by the router devices, a tag associated with the data flow that specifies one or more of: the plaintext side source network address, the source port, the plaintext side destination network address, the destination port, the protocol ID, and the differentiated services code point (DSCP). Meanwhile, router devices (e.g., router devices10) in the encrypted side of the network may report, for each flow sensed by the router devices, the encrypted side source network address, the encrypted side destination address, and the DSCP. Monitoring module320may, based on such information received from nodes of complex joint network4, perform one-to-one mappings between flows in the plaintext side of the network and flows in the encrypted side of the network. As described above, monitoring module320may receive tags from router devices in complex joint network4, such as from router device11, router devices10, and the like, for all active flows in complex joint network4. There may be two types of tags: dynamic tags and static tags. A static tag associated with a data flow may contain information that does not change over a mission, such as the flow type associated with the data flow and the bandwidth required for the data flow. A dynamic tag associated with a data flow may include information that may change during the course of a mission, such as mission elements terminating the data flow and the mission utility associated with the data flow. Router devices such as router device11may determine the static tags associated with data flows before the occurrence of mission operations based on one or more association rules, which may be collected in network template18. For example, a rule may determine the flow type associated with a data flow based on the source port of the data flow and the transport protocol of the data flow, and may determine the amount of bandwidth reserved for the data flow based on the flow type. One example of such a rule is: (1) "If (source port=260) AND (protocol ID=UDP) THEN (Flow Type=HD FMV)"; (2) "If (Flow Type=HD FMV) THEN (Bandwidth Reservation=5 Mbps)". Network management system20may interface with external servers, such as DNS servers, LDAP servers, mission databases, and the like to obtain dynamic tag information for flows. In the case of the example rule with respect to the example data flow described above, when network management system20receives a report of a sensed UDP flow with source port260, a source network address of S, and a destination network address of D, monitoring module320may communicate with a DNS server to perform a reverse DNS look-up on source network address S to determine that the "HD FMV" data flow has a source mission element of "UVDS" and may communicate with an LDAP server to perform an LDAP lookup to determine that the destination mission element for the data flow at destination network address D is User 1. 
When User 1 logs out of the node at destination network address D and is replaced by User 2 at the destination node, monitoring module320may determine, based on performing an LDAP lookup at the LDAP server, that the current user at address D is User 2, and may therefore update the dynamic tag information associated with the data flow to specify that the destination mission element is User 2. For example, because User 1 and User 2 may be associated with different mission element utilities, monitoring module320and/or router device11may update the dynamic tag with the mission element utility of User 2 as the destination mission element utility and/or may update the calculated mission utility of the data flow based on the mission element utility of User 2. In this way, monitoring module320may receive, for each active flow in complex joint network4, a static tag and a dynamic tag. Network management system20may store the created static tags and dynamic tags, such as in a database on storage devices348. Network management system20may, using settings from network template18and data collected by monitoring module320, compute real-time Normalized Cumulative Network Performance (CNP) values, where the CNP is defined as

\mathrm{CNP} = \frac{1}{\mathrm{CNP}_0} \sum_{i \in \mathrm{tasks}} r_i \, p_i(t_i)

where CNP_0 is the maximum achievable CNP value during the network event, r_i is the rank/priority of networking task i (a larger number is a higher priority), p_i(t_i) is the performance utility (PerfUtil) function for networking task i, and t_i is the completion time of networking task i. If the CNP is below 100% (i.e., if one or more flows were not admitted by a router device in complex joint network4), then network management system20may search the representative network (e.g., complex joint network4) for unused available bandwidth in secondary, tertiary, etc. links. Based on current allowed flows and the unused available bandwidth, network management system20may perform simulations of various sets of specific changes (e.g., traffic-aware routing, mission-aware routing, enabling and disabling links in the network, performing beam steering, etc.). For example, network management system20may simulate different ways of routing various data flows in complex joint network4through such unused available bandwidth in the links. Network management system20may recalculate CNP for each simulated set of changes, then determine specific changes to complex joint network4based on the best CNP achieved by the simulated set of changes. In some examples, network management system20may present, such as by outputting for display at a display device, an indication of the set of simulated changes (e.g., the links to be enabled and/or disabled, the specific beam steering to be performed on nodes, the set of traffic-aware and/or mission-aware routing changes, etc.), so that an administrator may determine whether to make such changes to complex joint network4. In some cases, network management system20may, based on settings in network template18, automatically implement the changes to complex joint network4with or without administrator input, such as by formulating and sending commands to nodes of complex joint network4to make such changes, such as sending commands to turn on or off links, commands to perform beam steering, commands to make changes to traffic-aware and/or mission-aware routing of the nodes, and the like, such as described with respect toFIGS.4-6. 
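A direct transcription of the CNP definition above might look like the following Python sketch, in which each networking task is represented by its rank, its performance-utility function, and its completion time; the helper name normalized_cnp and the example decay function are hypothetical and chosen only for illustration.

from typing import Callable, List, Tuple

# Each networking task i: (rank r_i, performance-utility function p_i, completion time t_i).
Task = Tuple[float, Callable[[float], float], float]

def normalized_cnp(tasks: List[Task], cnp_max: float) -> float:
    # Normalized Cumulative Network Performance: (1 / CNP_0) * sum_i r_i * p_i(t_i).
    return sum(r * p(t) for r, p, t in tasks) / cnp_max

# Example with a hypothetical performance utility that decays with completion time.
decay = lambda t: max(0.0, 1.0 - 0.01 * t)
tasks = [(3.0, decay, 10.0), (1.0, decay, 40.0)]
cnp = normalized_cnp(tasks, cnp_max=4.0)  # CNP_0 chosen as the maximum achievable value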
Visualizer module324may use the static tags and dynamic tags created using monitoring module320, as well as network template18, to provide a visualization of active data flows in complex joint network4. Specifically, visualizer module324may output a graphical user interface (GUI) that presents real-time operational information regarding complex joint network4determined by network management system20to provide network administrators with situational awareness of complex joint network4. Such a GUI may present an end-to-end view of data flows in complex joint network4along with information associated with the data flows in complex joint network4, such as information associated with active data flows in complex joint network4, information associated with links in complex joint network4, information associated with nodes in complex joint network4, information associated with users in complex joint network4, and the like. For example, visualizer module324may present, in the GUI, the tags associated with data flows, such as the static and dynamic tags as described above, and may automatically tag the end-to-end view of data flows presented in the GUI with endpoint node labels, mission utility metrics, link names, flow types, flow names, users, and the like. In some examples, the GUI that is outputted by visualizer module324may highlight and present data flows that may require attention by an operator of complex joint network4. For example, the GUI may highlight data flows that have been denied admission to a router device in complex joint network4and may also highlight competing data flows vying for the same network resources as the denied data flows. In some examples, the GUI that is outputted by visualizer module324may also enable an operator to query for data flows of interest based on one or more attributes and may present, in the GUI, the data flows that match the one or more attributes. In some examples, visualizer module324may output the GUI for display at a display device operably coupled to network management system20. In other examples, visualizer module324may output the GUI to a computing device that is connected to network management system20, such as a computing device in the field that is connected via a network to network management system20, and such a computing device may output the GUI for display at a display device operably coupled to the computing device. FIG.4illustrates bandwidth allocation of a network link in complex joint network4ofFIGS.1A and1B. The network link may be a network link through which a router device ofFIGS.1A and1B, such as one of router devices10, router device11, router device14, and the like, may transmit data flows received by the router device. As shown inFIG.4, the bandwidth of link402connected to the router device may be allocated based on traffic classes, where a guaranteed amount of bandwidth in link402may be reserved for each traffic class of data flows. For example, 35% of the bandwidth of link402is reserved for data flows in video traffic class404A, 25% of the bandwidth of link402is reserved for data flows in voice over IP (VoIP) traffic class404B, 20% of the bandwidth of link402is reserved for data flows in chat and world wide web (WWW) traffic class404C, and the remaining 20% of the bandwidth of link402is reserved for data flows in other traffic classes. 
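As a simple numeric illustration of the allocation shown inFIG.4, the guaranteed bandwidth reserved for each traffic class can be derived from the link capacity and the percentage allocations. The Python sketch below uses hypothetical values and a hypothetical helper name; it is not taken from this disclosure.

# Percentage allocations per traffic class, following the FIG. 4 example.
ALLOCATIONS = {"video": 35, "VoIP": 25, "chat/WWW": 20, "other": 20}

def reserved_bandwidth(link_capacity_bps: float, allocations: dict) -> dict:
    # Guaranteed bandwidth reserved for each traffic class on the link.
    return {name: link_capacity_bps * pct / 100.0 for name, pct in allocations.items()}

# For a hypothetical 10 Mbps link, video is guaranteed 3.5 Mbps, VoIP 2.5 Mbps, and so on.
print(reserved_bandwidth(10_000_000, ALLOCATIONS))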
In some examples, the bandwidth allocated for a traffic class is reserved for data flows of that traffic class and may not be used for data flows of other traffic classes, even for data flows of other traffic classes that are associated with a very high mission utility. Thus, for example, video traffic class404A may be reserved for only video traffic and may not be used for VoIP traffic. Data flows within a traffic class are prioritized by the mission utility associated with the data flows. For example, in video traffic class404A, the data flow Vid1 has a mission utility of 87, data flow Vid2 has a mission utility of 65, and data flow Vid3 has a mission utility of 23. Thus, Vid3 may be dropped in order for link402to carry another data flow associated with the video traffic class404A if the other data flow is associated with a mission utility that is higher than 23. Because the data flows carried by link402may change over time, if data flow Vid3 is dropped, data flow Vid3 may re-request readmittance to router device10A. If, later on, the mission utility associated with data flow Vid3 is higher than the mission utility of another data flow associated with video traffic class404A, data flow Vid3 may be readmitted to router device10A. FIG.5is a block diagram illustrating dynamic balancing of network paths in accordance with the techniques of this disclosure. Such dynamic balancing of network paths can be performed in conjunction with flow admission control, as described in this disclosure, to autonomously optimize complex joint network4. As shown inFIG.5, network links506A-506D may form multiple different paths between router device10A and router device10B to connect internal networks6A and6B. Network links506A and506B may be broadband links, such as links having 10 megabits per second (Mbps) of bandwidth, while network links506C and506D may be narrowband links, such as links having 100 kilobits per second (kbps) of bandwidth. A path between router device10A and router device10B can be formed by network link506A between router device10A and router device504in complex joint network4and network link506B between router device504and router device10B. In another example, a path between router device10A and router device10B can be formed by network link506A between router device10A and router device504and network link506D between router device504and router device10B. In another example, a path between router device10A and router device10B can be formed by network link506C between router device10A and router device504and network link506B between router device504and router device10B. In another example, a path between router device10A and router device10B can be formed by network link506C between router device10A and router device504and network link506D between router device504and router device10B. In the example ofFIG.5, video server502in internal network6A may stream video data to user 2 at internal network6B in the form of video data flow508A at a rate of 5 Mbps. User 1 at internal network6A may also be connected to user 3 at internal network6B in a VoIP call in the form of VoIP data flow508B at a rate of 100 kbps. 
Because links506A and506B are higher quality links than links506C and506D due to links506A and506B having greater available bandwidth than links506C and506D, router devices10A and504may route the 5 Mbps video data flow508A carrying the streaming video data through broadband network links506A and506B, and may also route the 100 kbps VoIP data flow508B carrying the VoIP call through broadband network links506A and506B. Because links506A and506B, having 10 Mbps of bandwidth, are carrying both the 5 Mbps video data flow508A to user 2 at internal network6B and the 100 kbps VoIP data flow508B between user 1 at internal network6A and user 3 at internal network6B, links506A and506B may only have 4.9 Mbps of available bandwidth to carry other data flows between internal networks6A and6B. If video server502subsequently attempts to stream a second video data flow508C of video data to user 4 at internal network6B at a rate of 5 Mbps, links506A and506B may not have available bandwidth to carry the second video data flow508C without router devices10A,10B, and504dropping the video data flow508A or the VoIP data flow508B currently being carried by links506A and506B. If the second video data flow508C to be streamed to user 4 at internal network6B has the same or a smaller mission utility than the video data flow508A streamed to user 2 at internal network6B and the VoIP data flow508B, such as determined using network template18, router devices10A,10B, and504may not be able to drop the video data flow508A or the VoIP data flow508B currently being carried by links506A and506B to accommodate the second video data flow508C because the mission utility of the second video data flow508C is not greater than the mission utility of the video data flow508A or the VoIP data flow508B. In accordance with aspects of this disclosure, router devices, such as router devices10A,10B, and504, in complex joint network4as well as network management system20may communicate with each other to probe and determine alternative paths between router devices10A,10B, and504, such as to determine multiple different paths between such router devices10A,10B, and504and to determine the quality of the different paths, such as the available bandwidth of each of the multiple different paths. In some examples, router devices10A,10B, and504may exchange information, such as in the form of data packets, that includes information associated with links that are connected to interfaces of router devices10A,10B, and504. In some examples, router devices10A,10B, and504may communicate with network management system20to determine information associated with links that are connected to interfaces of router devices10A,10B, and504. Router devices10A,10B, and504may use such information associated with links that are connected to interfaces of router devices10A,10B, and504to determine the existence of multiple paths between router devices10A,10B, and504. For example, router device10A may receive, from router device504, an indication that router device504is connected to router device10B via links506B and506D. Because router device10A is connected to router device504via links506A and506C, router device10A may be able to determine that router device10A can be connected to router device10B via multiple paths using a combination of links506A-506D. Similarly, router device10B may receive, from router device504, an indication that router device504is connected to router device10A via links506A and506C. 
Because router device10B is connected to router device504via links506B and506D, router device10B may be able to determine that router device10B can be connected to router device10A via multiple paths using a combination of links506A-506D. In this way, router device10A may determine that router device10A can be connected to router device10B via both 10 Mbps links506A and506B and 100 kbps links506C and506D. In accordance with aspects of the present disclosure, router devices, such as router devices10A,10B, and504, in complex joint network4that are connected via links to form multiple paths between the router devices may be able to move flows from one path of the multiple paths to another path of the multiple paths to optimize the amount of data flows that can be carried between the router devices. In some examples, if the multiple paths between router devices include a higher quality path (e.g., a higher bandwidth path, a path with smaller packet loss, etc.) and a lower quality path (e.g., a lower bandwidth path, a path with greater packet loss, etc.), network management system20may direct the router devices to move data flows between the higher quality path and the lower quality path to increase the utilization of paths between the router devices and to increase the number of data flows that can be transmitted between the router devices. For example, if a data flow associated with a relatively higher mission utility being transmitted between router devices via the higher quality path prevents a data flow associated with a relatively lower mission utility from being transmitted via any of the multiple paths between the router devices, network management system20may direct the router devices to move the data flow associated with the relatively higher mission utility from the higher quality path to the lower quality path to accommodate the data flow associated with the relatively lower mission utility in the higher quality path in order to admit both data flows in the multiple paths. In the example ofFIG.5, the path between router devices10A and10B formed using links506A and506B may be a higher quality path compared with the path formed using links506C and506D because the 10 Mbps path formed using links506A and506B has more available bandwidth compared with the 100 kbps path formed using links506C and506D. The higher quality path of links506A and506B may utilize 5.1 Mbps of the 10 Mbps bandwidth of links506A and506B to carry 5 Mbps video data flow508A and 100 kbps VoIP data flow508B, thereby leaving the higher quality path of links506A and506B with 4.9 Mbps of available bandwidth for carrying other data flows. If router device10A subsequently receives 5 Mbps video data flow508C associated with a mission utility that is not greater than the mission utility associated with VoIP data flow508B or the mission utility associated with video data flow508A, neither the higher quality path of links506A and506B nor the lower quality path of links506C and506D may be able to accommodate video data flow508C because the higher quality path of links506A and506B may only have 4.9 Mbps of available bandwidth while the lower quality path of links506C and506D may only have 100 kbps of available bandwidth. Thus, router device10A may fail at its attempt to reserve 5 Mbps of bandwidth in any of links506A-506D to accommodate video data flow508C. 
In order to accommodate and admit video data flow508C, network management system20may direct router device10A to move VoIP data flow508B from the higher quality path of links506A and506B to the lower quality path of links506C and506D, even though VoIP data flow508B is associated with a mission utility that is the same as or greater than the mission utility associated with video data flow508C. To move VoIP data flow508B, router device10A may drop VoIP data flow508B on links506A and506B and may instead admit VoIP data flow508B on links506C and506D. By moving VoIP data flow508B to the lower quality path of links506C and506D, router device10A increases the available bandwidth of the higher quality path of links506A and506B from 4.9 Mbps to 5 Mbps, thereby enabling the higher quality path of links506A and506B to carry the 5 Mbps video data flow508A as well as the 5 Mbps video data flow508C. Network management system20may, in response to router device10A moving VoIP data flow508B to the lower quality path of links506C and506D, be able to successfully reserve 5 Mbps of bandwidth in links506A and506B to accommodate video data flow508C. In this way, router device10A may be able to admit and transmit video data flow508A, VoIP data flow508B, and video data flow508C to router device10B via the multiple paths between router devices10A and10B. FIGS.6A and6Bare block diagrams illustrating redirection of data flows, in accordance with aspects of the present disclosure. As shown inFIG.6A, complex joint network4may include router devices10A-10C and router devices612A-612C that include Common Data Link (CDL) radios602A-602L that provide wireless broadband links606A-606E and narrowband radios604A-604F that provide wireless narrowband links608A-608F for wirelessly connecting router devices10A-10C and router devices612A-612C. In general, wireless broadband links606A-606E may have much greater bandwidth compared with narrowband links608A-608F. For example, wireless broadband link606A has a bandwidth of 10 Mbps, wireless broadband link606B has a bandwidth of 5 Mbps, wireless broadband link606C has a bandwidth of 10 Mbps, wireless broadband link606D has a bandwidth of 8 Mbps, and wireless broadband link606E has a bandwidth of 10 Mbps. Meanwhile, narrowband links608A-608F may each have a bandwidth of less than 1 Mbps. To transmit video data flow610that uses 4 Mbps of bandwidth from internal network6A to a user at internal network6B, router device10A may utilize wireless broadband link606B having 5 Mbps of bandwidth to transmit video data flow610to router device612B, and router device612B may utilize wireless broadband link606D to transmit video data flow610to router device10B, which may transmit video data flow610to the user at internal network6B. Meanwhile, router device10A may also transmit other data flows using wireless broadband links606A-606E and narrowband links608A-608F to internal network6B in parallel. If the destination of video data flow610changes from a user at internal network6B to a user at internal network6C, network management system20may determine that there is not a path between router device10A and router device10C connected to internal network6C that has sufficient bandwidth to transmit video data flow610. For example, network management system20may determine that to route video data flow610to router device10C, router device10A can only route video data flow610through router device612C. 
However, network management system20may determine that the only link between router device10A and router device612C is narrowband link608C, which does not have sufficient bandwidth to transmit video data flow610. In some examples, network management system20may, in response to determining a change in the destination of a data flow, redirect the data flow by directing router device10A to establish a broadband path having sufficient bandwidth to carry the data flow between router device10A and the destination of the data flow. As shown inFIG.6B, network management system20may, in response to determining a change in the destination of video data flow610from internal network6B to internal network6C, direct router device10A to move wireless broadband link606B from connecting router device10A and router device612B to instead connect router device10A and router device612C, thereby establishing a broadband path via wireless broadband link606B and wireless broadband link606E between router device10A and router device10C. In some examples, to move wireless broadband link606B from connecting router device10A and router device612B to instead connect router device10A and router device612C, network management system20may direct router device612B to turn off CDL radio602E to stop receiving data via wireless broadband link606B, and network management system20may direct router device612C to turn on CDL radio602I to begin receiving data via wireless broadband link606B. CDL radio602I may therefore be able to discover CDL radio602B and lock its beam to CDL radio602D to create wireless broadband link606B. In another example, network management system20may direct router device10A to make a beam switch at CDL radio602D to direct its signals to CDL radio602I to create wireless broadband link606B. Once wireless broadband link606B has been moved to connect router device10A and router device612C, network management system20may direct router device10A to redirect video data flow610via wireless broadband link606B to router device10C. FIGS.7A-7Jillustrate a graphical user interface that includes information regarding complex joint network4to provide mission oriented network visibility of complex joint network4, in accordance with the techniques of the present disclosure. As described throughout this disclosure, network management system20may receive, from nodes of complex joint network4, such as from router devices10, subnets8, router device17, and the like, information regarding flows sent and/or received by nodes of complex joint network4. Network management system20may receive such information and may, using network template18, determine various real-time information regarding such flows, as well as nodes, links, users, and the like. In addition, network management system20may also derive real-time information associated with the health of complex joint network4based on the received information. Network management system20may therefore output a GUI that presents real-time operational information regarding complex joint network4determined by network management system20to provide network administrators with situational awareness of complex joint network4. In some examples, network management system20may output the GUI for display at a display device operably coupled to network management system20. 
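The link move described above forFIG.6B amounts to choosing between an existing path and re-pointing a directional broadband link. The following Python sketch is a simplified illustration under assumed inputs; the function and parameter names are not taken from the disclosure.

    def plan_redirect(flow_rate_bps, paths_to_new_destination, retargetable_links):
        """paths_to_new_destination: list of (path_id, available_bps).
        retargetable_links: list of (link_id, bw_bps) that could be re-pointed."""
        # Prefer an existing path with enough headroom.
        for path_id, available_bps in paths_to_new_destination:
            if available_bps >= flow_rate_bps:
                return ("use_path", path_id)
        # Otherwise pick the smallest broadband link that can carry the flow and
        # re-point it (turn radios off/on and re-lock beams) toward the new relay.
        for link_id, bw_bps in sorted(retargetable_links, key=lambda link: link[1]):
            if bw_bps >= flow_rate_bps:
                return ("move_link", link_id)
        return ("reject", None)

    # Mirrors FIG. 6B: the only existing path offers 100 kbps, so the 5 Mbps
    # wireless broadband link 606B is re-pointed toward router device 612C.
    print(plan_redirect(4_000_000, [("608C", 100_000)], [("606B", 5_000_000)]))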
In other examples, network management system20may output the GUI to a computing device that is connected to network management system20, such as a computing device in the field that is connected via a network to network management system20, and such a computing device may output the GUI for display at a display device operably coupled to the computing device. For example, a computing device may use a web browser to connect to a web server that operates on network management system20to access a website that provides the GUI. As shown inFIG.7A, network management system20may create and output GUI700to present various information regarding complex joint network4. GUI700may include information pane701and network visualization pane703. Network visualization pane703may include a graphical representation of the nodes, flows, and the like of complex joint network4. For example, network visualization pane703may include graphical representations of nodes702A-702K and data flows704between nodes702A-702K. Data flows of different flow types may be presented in different colors, such as according to the color specified by the list of flow types in network template18, which is represented inFIG.7Aas different patterns of dashed and dotted lines. Information pane701may present various information regarding complex joint network4. In the example ofFIG.7A, information pane701may present information regarding the network health of complex joint network4. For example, information pane701may present information such as the network health, the network utility, the number of failed utility, the number of accepted utility, the number of failed flows, the number of accepted flows, and the maximum amount of bandwidth in complex joint network4. In some examples, users may interact with GUI700to customize the visualization of complex joint network4presented in visualization pane703. As shown inFIG.7B, information pane701may include widgets with which users may interact via user input to filter the flows presented in GUI700based on filtering criteria such as mission utility, flow bandwidth, flow type, source mission element, and destination mission element. In response to the user interacting with such widgets in information pane701, network management system20may filter the flows of complex joint network4based on the filtering criteria and update visualization pane703to present flows meeting the filtering criteria and to refrain from presenting flows that do not meet the filtering criteria. In some examples, GUI700may present information associated with nodes in complex joint network4. As shown inFIG.7C, information pane701may present a list of the nodes that are in complex joint network4. In some examples, GUI700may present detailed information regarding specific nodes in complex joint network4. As shown inFIG.7D, information pane701may present information regarding a specific node in complex joint network4. For example, information pane701may, for a node, present information such as alerts associated with the node, the node ID of the node, the node type, the core network address of the node, the users associated with the node, the network links connected to the node, and the flows admitted by the node. In some examples, GUI700may enable users to turn on and off the network links connected to the node. 
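The filtering criteria described above for information pane701can be expressed as a simple predicate over flow records. The following Python helper is illustrative only; the field names are assumptions rather than the actual data model of network management system20.

    def filter_flows(flows, min_utility=None, min_bw_bps=None, flow_type=None,
                     source_element=None, destination_element=None):
        """Return only the flows matching every criterion that is set."""
        def keep(flow):
            return ((min_utility is None or flow["mission_utility"] >= min_utility) and
                    (min_bw_bps is None or flow["bandwidth_bps"] >= min_bw_bps) and
                    (flow_type is None or flow["flow_type"] == flow_type) and
                    (source_element is None or flow["source_element"] == source_element) and
                    (destination_element is None or flow["destination_element"] == destination_element))
        return [flow for flow in flows if keep(flow)]

Flows excluded by the predicate would simply not be drawn in visualization pane703.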
For example, the information regarding the network links connected to the node as presented in information pane701may include widgets705(FIG.7D) associated with the network link with which a user may interact, such as by providing user input to select or unselect widgets705associated with the network links to turn on or off individual network links connected to the node. In some examples, GUI700may, for a link, include one or more widgets with which a user may interact in order to perform beam steering for the node. For example, the node may have a beam steering plug-in that communicates the positions of neighboring nodes found by the node along with the relative angles of the neighboring nodes. As shown inFIG.7E, GUI700may include beam steering widget706that, for a node, presents a graphical representation of the node and neighboring nodes, the relative angles of the neighboring nodes with respect to the node, and the radio connection, if any, between the node and a neighboring node. For example, beam steering widget706presents the current node as node708A and the neighboring nodes to node708A as nodes708B-708D, where nodes708A-708D may correspond to a subset of nodes702, and where beam steering widget706presents the relative angles between node708A and each of the nodes708B-708D. Further, beam steering widget706presents a graphical indicator709of node708A being connected via radio to node708B. A user may interact with beam steering widget706to direct node708A from being connected to node708B to connect to node708C or node708D. For example, a user may provide user input to interact with the graphical representation of node708B to unselect node708B, thereby directing node708A to cease communicating with node708B. The user may also provide user input to interact with the graphical representation of node708C to select node708C, thereby directing node708A to establish radio communications with node708C. Network management system20may receive the user input directing node708A to cease communicating with node708B and to establish radio communications with node708C and may, in response, communicate with a router in complex joint network4that corresponds to node708A to send instructions to the router to perform beam steering to cease communicating with the router in complex joint network4that corresponds to node708B and to establish radio communications with the router in complex joint network4that corresponds to node708C. In this way, network management system20may present GUI700that provides beam steering functionality that may be controlled by the user. In some examples, GUI700may present information associated with network links in complex joint network4. As shown inFIG.7F, information pane701may present a list of the network links that are in complex joint network4. In some examples, GUI700may present detailed information regarding specific network links in complex joint network4. As shown inFIG.7G, information pane701may present information regarding a specific network link in complex joint network4. For example, information pane701may, for a network link, present information such as the link ID, the security domain of the link, the number of flows being carried by the network link, the number of neighbors to the network link, and names of the flows being carried by the network link. In some examples, GUI700may present information associated with data flows in complex joint network4. As shown inFIG.7H, information pane701may present a list of the data flows that are in complex joint network4. 
In some examples, GUI700may present detailed information regarding specific data flows in complex joint network4. As shown inFIG.7I, information pane701may present information regarding a specific data flow in complex joint network4. For example, information pane701may, for a data flow, present information such as the flow type, the source mission element of the data flow, the destination mission element of the data flow, the source network address of the data flow, the source port of the data flow, the destination network address of the data flow, the destination port of the data flow, the transport protocol of the data flow, the Differentiated Services Code Point (DSCP) of the data flow, the security domain of the data flow, the mission utility of the data flow, the flow direction of the data flow, the network link over which the data flow travels, and the network segments over which the data flow travels. In some examples, information pane701may, for a specific flow, present flow redirector widget710with which a user can interact to cause network management system20to redirect a data flow to the destination network address inputted by the user into flow redirector widget710for the duration inputted by the user into flow redirector widget710. To redirect a data flow, network management system20may perform mission responsive network control. For example, given a data flow from node702C to a client device at node702A, a user may send a request to network management system20to redirect the data flow to a client device at node702I by providing user input, at flow redirector widget710, that corresponds to a destination network address associated with the client device at node702I. Network management system20may, in response to receiving the request to redirect the data flow to the client device at node702I, determine whether the network topology of complex joint network4has sufficient capacity to deliver the data flow from node702A to the client device at node702I. If network management system20determines that the network topology of complex joint network4does not have sufficient capacity to deliver the data flow from node702A to the client device at node702I, network management system20may configure nodes702to increase the capacity of paths between node702A and node702I in order to deliver the data flow from node702A to the client device at node702I. In some examples, a user may provide user input to interact with the graphical representation of the nodes, flows, and the like of complex joint network4in visualization pane703of GUI700to cause network management system20to reconfigure complex joint network4to increase the capacity of paths between node702A and node702I in order to deliver the data flow from node702A to the client device at node702I. For example, visualization pane703may indicate that node702C connects to node702A via radio transmissions, and that switching the radio transmissions of node702C to connect to node702I may provide sufficient capacity to deliver the data flow from node702A to the client device at node702I. The user may therefore provide user input to interact with the graphical representation of the nodes, flows, and the like of complex joint network4in visualization pane703to switch the radio transmissions of node702C to connect to node702I. 
For example, the user may provide a user input that corresponds to a drag to drag a graphical representation of the radio connection between node702C and node702A so that the graphical representation of the radio connection links node702C and node702I. Network management system20may, in response to receiving the user input that corresponds to switching the radio transmissions of node702C to connect to node702I, communicate with and send commands to nodes702in complex joint network4to switch the radio transmissions of node702C to connect to node702I. In some examples, network management system20may send a command to node702A to turn off its radio to stop receiving the radio transmissions from node702C and may send a command to node702I to turn on its radio to start receiving the radio transmissions from node702C. Node702C may therefore discover the radio at node702I and may lock a radio beam to the radio at node702I. In some examples, network management system20may send a command to node702C to direct node702C to perform a beam switch to direct the radio beam towards the radio at node702I. Once the radio connection is established between node702C and node702I, network management system20may detect the change in the network topology of complex joint network4and may update the graphical representation of the nodes, flows, and the like of complex joint network4in visualization pane703based on the changes. Network management system20may also send to node702C a command to send the data flow to the destination network address associated with the client device at node702I. In this way, network management system20may redirect data flows in complex joint network4. In some examples, GUI700may present information associated with users in complex joint network4. As shown inFIG.7J, information pane701may present a list of the users that are in complex joint network4. In some examples, GUI700may present detailed information regarding specific users in complex joint network4. For example, GUI700may, for a specific user, present user information in information pane701that presents information regarding a specific user, such as the mission utility associated with the user and the network address associated with the user. FIG.8is a flow diagram illustrating techniques for improving quality of service (e.g., flow admission control), in accordance with one or more techniques of this disclosure. The operations ofFIG.8are described within the context ofFIGS.1A and1B. As shown inFIG.8, one of the router devices of system2, such as router device10A, router device11, or router device14may receive a data flow via a complex joint network4(802). Router device10A, router device11, or router device14may determine, based on a network template18, a mission utility associated with the data flow and a traffic class associated with the data flow (804). Router device10A, router device11, or router device14may control one or more quality of service decisions, such as admission of the data flow, based at least in part on the mission utility associated with the data flow and the traffic class associated with the data flow (806). In some examples, to determine the traffic class associated with the data flow, router device10A, router device11, or router device14may determine, based on at least one of: a source port associated with the data flow and a destination port associated with the data flow, the traffic class associated with the data flow out of a plurality of traffic classes specified by the network template18. 
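The port-based traffic class determination described above can be sketched as a simple table lookup against the network template. The Python example below is illustrative only; the template contents and the function name are assumptions, and the actual format of network template18is not reproduced here.

    TRAFFIC_CLASSES_BY_PORT = {        # assumed excerpt of the network template
        5060: "voice",
        554: "video",
        22: "management",
    }

    def traffic_class_for(src_port, dst_port, default="best_effort"):
        # Either the source port or the destination port may identify the class.
        return (TRAFFIC_CLASSES_BY_PORT.get(src_port)
                or TRAFFIC_CLASSES_BY_PORT.get(dst_port)
                or default)

    print(traffic_class_for(49152, 5060))   # 'voice'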
In some examples, to control the admission of the data flow, router device10A, router device11, or router device14may determine, based on the network template18, a bandwidth of a network link allocated for the traffic class associated with the data flow and determine whether to admit the data flow based at least in part on the mission utility associated with the data flow and the bandwidth allocated for the traffic class. In some examples, to determine whether to admit the data flow based at least in part on the mission utility associated with the data flow and the bandwidth allocated for the traffic class, router device10A, router device11, or router device14may determine, based on a flow type associated with the data flow, an expected bandwidth usage of the data flow, determine whether dropping one or more data flows of the traffic class that are associated with a lower mission utility than the mission utility associated with the data flow would create available bandwidth in the bandwidth in the network link allocated for the traffic class that is greater than or equal to the expected bandwidth usage of the data flow, in response to determining that dropping the one or more data flows would create the available bandwidth that is greater than or equal to the expected bandwidth usage of the data flow, drop the one or more data flows, and admit the data flow for transmission using the available bandwidth in the bandwidth in the network link allocated for the traffic class. In some examples, to determine whether to admit the data flow based at least in part on the mission utility associated with the data flow and the bandwidth allocated for the traffic class, router device10A, router device11, or router device14may determine, based on a flow type associated with the data flow, an expected bandwidth usage of the data flow, determine whether moving one or more data flows of the traffic class that are associated with an equal or higher mission utility than the mission utility associated with the data flow to a second network link would create available bandwidth in the bandwidth in the network link allocated for the traffic class that is greater than or equal to the expected bandwidth usage of the data flow, in response to determining that moving the one or more data flows to the second link would create the available bandwidth in the network link that is greater than or equal to the expected bandwidth usage of the data flow, move the one or more data flows to the second network link, and admit the data flow for transmission using the available bandwidth in the bandwidth in the network link allocated for the traffic class. In some examples, to determine, based on the network template18, the mission utility associated with the data flow and the traffic class associated with the data flow, router device10A, router device11, or router device14may determine, based on at least one of: a source port associated with the data flow or a destination port associated with the data flow, a flow type associated with the data flow out of a plurality of flow types specified by the network template18, determine a flow type mission utility associated with the data flow based at least in part on the determined flow type associated with the data flow, and determine the mission utility associated with the data flow based at least in part on the flow type mission utility associated with the data flow. 
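The two admission strategies described above, dropping lower-utility flows of the same traffic class or moving equal-or-higher-utility flows to a second network link, can be sketched as follows. The Python below is a simplified illustration with assumed data structures; it is not the disclosed implementation, and for brevity it drops or moves flows greedily rather than choosing a minimal set.

    def admit_by_dropping(link, new_flow):
        """link: {'allocated_bps': int, 'flows': [{'rate': int, 'utility': int}, ...]}."""
        free_bps = link["allocated_bps"] - sum(f["rate"] for f in link["flows"])
        lower_utility = [f for f in link["flows"] if f["utility"] < new_flow["utility"]]
        if free_bps + sum(f["rate"] for f in lower_utility) >= new_flow["rate"]:
            for flow in lower_utility:
                link["flows"].remove(flow)           # drop lower-utility flows
            link["flows"].append(new_flow)
            return True
        return False

    def admit_by_moving(link, second_link, new_flow):
        free_bps = link["allocated_bps"] - sum(f["rate"] for f in link["flows"])
        second_free_bps = second_link["allocated_bps"] - sum(f["rate"] for f in second_link["flows"])
        movable = sorted((f for f in link["flows"] if f["utility"] >= new_flow["utility"]),
                         key=lambda f: f["rate"])
        for flow in movable:
            if free_bps >= new_flow["rate"]:
                break
            if flow["rate"] <= second_free_bps:
                link["flows"].remove(flow)           # move to the second network link
                second_link["flows"].append(flow)
                free_bps += flow["rate"]
                second_free_bps -= flow["rate"]
        if free_bps >= new_flow["rate"]:
            link["flows"].append(new_flow)
            return True
        return False

In either case, the new flow is admitted only when the freed bandwidth covers its expected bandwidth usage for the traffic class.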
In some examples, to determine the flow type associated with the data flow, router device10A, router device11, or router device14may determine, based on the source port associated with the data flow, a source flow type associated with the data flow out of the plurality of flow types specified by the network template18, determine a source flow type mission utility associated with the data flow based at least in part on the determined source flow type associated with the data flow, determine, based on the destination port associated with the data flow, a destination flow type associated with the data flow out of the plurality of flow types specified by the network template18, and determine a destination flow type mission utility associated with the data flow based at least in part on the determined destination flow type associated with the data flow. In some examples, to determine the flow type mission utility associated with the data flow, router device10A may determine the flow type mission utility associated with the data flow as the greater of the source flow type mission utility associated with the data flow and the destination flow type mission utility associated with the data flow. In some examples, router device10A, router device11, or router device14may determine, based on at least one of: a source network address associated with the data flow or the destination network address associated with the data flow, a mission element associated with the data flow out of a plurality of mission elements specified by the network template18, determine a mission element utility associated with the data flow based at least in part on the determined mission element associated with the data flow, and determine the mission utility associated with the data flow based at least in part on the flow type mission utility associated with the data flow and the mission element utility associated with the data flow. In some examples, to determine the mission element associated with the data flow, router device10A, router device11, or router device14may determine, based on the source network address associated with the data flow, a source mission element associated with the data flow out of the plurality of mission elements specified by the network template18, determine a source mission element utility associated with the data flow based at least in part on the determined source mission element associated with the data flow, determine, based on the destination network address associated with the data flow, a destination mission element associated with the data flow out of the plurality of mission elements specified by the network template18, and determine a destination mission element utility associated with the data flow based at least in part on the determined destination mission element associated with the data flow. In some examples, to determine the mission element utility associated with the data flow, router device10A, router device11, or router device14may determine the mission element utility associated with the data flow as the greater of the source mission element utility associated with the data flow and the destination mission element utility associated with the data flow. 
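The greater-of-source-and-destination rule described above can be sketched as two template lookups followed by a max. The Python below is illustrative only; the template entries, ports, and addresses are assumptions introduced for explanation and do not reproduce network template18.

    FLOW_TYPE_UTILITY_BY_PORT = {5060: 8, 554: 6}          # assumed excerpt of the template
    MISSION_ELEMENT_UTILITY_BY_ADDRESS = {"10.0.0.5": 9, "10.0.1.7": 4}

    def flow_type_mission_utility(src_port, dst_port):
        return max(FLOW_TYPE_UTILITY_BY_PORT.get(src_port, 0),
                   FLOW_TYPE_UTILITY_BY_PORT.get(dst_port, 0))

    def mission_element_utility(src_addr, dst_addr):
        return max(MISSION_ELEMENT_UTILITY_BY_ADDRESS.get(src_addr, 0),
                   MISSION_ELEMENT_UTILITY_BY_ADDRESS.get(dst_addr, 0))

    # The overall mission utility combines the two components (as a sum, per the
    # passage that follows).
    print(flow_type_mission_utility(49152, 5060) + mission_element_utility("10.0.0.5", "10.9.9.9"))  # 17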
In some examples, to determine the mission utility associated with the data flow based at least in part on the flow type mission utility associated with the data flow and the mission element utility associated with the data flow, router device10A, router device11, or router device14may determine the mission utility associated with the data flow as a sum of the flow type mission utility associated with the data flow and the mission element utility associated with the data flow. In some examples, router device10A, router device11, or router device14may receive, from network management system20, an instruction to perform beam steering to establish wireless communications with a first neighboring node. Router device10A, router device11, or router device14may, in response to receiving the instruction, perform beam steering to disconnect wireless communications with a second neighboring node and to establish wireless communications with the first neighboring node. In some examples, network management system20may output a graphical user interface (GUI)700that includes a beam steering widget, wherein the beam steering widget presents a view of a node that corresponds to router device10A, router device11, or router device14, a view of relative angles of neighboring nodes with respect to the node, and a view of a radio connection between the node and the second neighboring node. Network management system20may receive a first user input for directing the node to perform beam steering to establish wireless communications with the first neighboring node. Network management system20may, in response to receiving the first user input, send, to router device10A, router device11, or router device14, the instruction to establish wireless communications with the first neighboring node. In some examples, to determine the mission utility associated with the data flow and the traffic class associated with the data flow, router device10A or router device14may receive a tag associated with the data flow that specifies the mission utility associated with the data flow and the traffic class, where the complex joint network is a crypto-partitioned network and where router device10A or router device14is in an encrypted portion of the crypto-partitioned network. In some examples, network management system20may determine unused bandwidth in one or more links of the complex joint network4. Network management system20may perform sets of simulated changes to the complex joint network4to utilize the unused bandwidth. Network management system20may determine a set of simulated changes to the complex joint network4having the greatest Normalized Cumulative Network Performance (CNP) out of the sets of simulated changes to the complex joint network4. Network management system20may send a plurality of commands to nodes of the complex joint network4to make the set of simulated changes to the complex joint network4. Router device10A, router device11, or router device14may receive one or more commands out of the plurality of commands. 
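The simulated-change selection described above reduces to scoring candidate change sets and keeping the best one. The Python sketch below treats the Normalized Cumulative Network Performance computation and the network simulation as opaque callables supplied by the operator, since their definitions are not given in this passage; all names are illustrative assumptions.

    def pick_best_change_set(candidate_change_sets, simulate, cnp_score):
        """candidate_change_sets: list of command lists (enable/disable links, beam
        steering, flow redirection, routing updates). simulate() returns a predicted
        network state; cnp_score() returns its Normalized Cumulative Network Performance."""
        best_set, best_score = None, float("-inf")
        for change_set in candidate_change_sets:
            score = cnp_score(simulate(change_set))
            if score > best_score:
                best_set, best_score = change_set, score
        return best_set

The winning change set would then be issued to the affected router devices as individual commands, as listed in the passage that follows.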
Router device10A, router device11, or router device14may perform the one or more commands, wherein the one or more commands include one or more of: one or more commands to enable a first one or more links, one or more commands to disable a second one or more links, one or more commands to perform beam steering, one or more commands to redirect a particular data flow, one or more commands to update performance of mission-aware routing by router device10A, router device11, or router device14, and one or more commands to update performance of traffic-aware routing by router device10A, router device11, or router device14. In some examples, network management system20may output a graphical user interface (GUI)700that provides a view of nodes of the complex joint network4, a view of links between the nodes of the complex joint network4, and a view of data flows of the complex joint network4. Network management system20may receive one or more filtering parameters, wherein the one or more filtering parameters specify one or more of: a mission utility, a flow bandwidth, a flow type, a start mission element, or an end mission element. Network management system20may, in response to receiving the one or more filtering parameters, update the GUI700to provide a view of a subset of the data flows of the complex joint network4that matches the one or more filtering parameters. In some examples, network management system20may receive user input that corresponds to a specified data flow that is to be redirected to a specified destination. Network management system20may, in response to receiving the user input, send, to one or more nodes in the complex joint network4, one or more instructions to redirect the specified data flow to the specified destination. In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media which is non-transitory or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium. By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. 
For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements. The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware. Various examples of the disclosure have been described. Any combination of the described systems, operations, or functions is contemplated. These and other examples are within the scope of the following claims.
135,062
11863457
DETAILED DESCRIPTION Certain embodiments of systems, devices, components, modules, routines, data structures, and processes for time-sensitive data delivery in datacenters or other suitable distributed computing systems are described below. In the following description, specific details of components are included to provide a thorough understanding of certain embodiments of the disclosed technology. A person skilled in the relevant art will also understand that the technology can have additional embodiments. The technology can also be practiced without several of the details of the embodiments described below with reference toFIGS.1-6. As used herein, the term “distributed computing system” generally refers to an interconnected computer system having multiple network nodes that interconnect a plurality of servers or hosts to one another and/or to external networks (e.g., the Internet). The term “network node” generally refers to a physical network device. Example network nodes include routers, switches, hubs, bridges, load balancers, security gateways, or firewalls. A “host” generally refers to a physical computing device. In certain embodiments, a host can be configured to implement, for instance, one or more virtual machines, virtual switches, or other suitable virtualized components. For example, a host can include a server having a hypervisor configured to support one or more virtual machines, virtual switches, or other suitable types of virtual components. In other embodiments, a host can be configured to execute suitable applications directly on top of an operating system. A computer network can be conceptually divided into an overlay network implemented over an underlay network in certain implementations. An “overlay network” generally refers to an abstracted network implemented over and operating on top of an underlay network. The underlay network can include multiple physical network nodes interconnected with one another. An overlay network can include one or more virtual networks. A “virtual network” generally refers to an abstraction of a portion of the underlay network in the overlay network. A virtual network can include one or more virtual end points referred to as “tenant sites” individually used by a user or “tenant” to access the virtual network and associated computing, storage, or other suitable resources. A tenant site can host one or more tenant end points (“TEPs”), for example, virtual machines. The virtual networks can interconnect multiple TEPs on different hosts. Virtual network nodes in the overlay network can be connected to one another by virtual links individually corresponding to one or more network routes along one or more physical network nodes in the underlay network. In other implementations, a computer network can only include the underlay network. As used herein, a “packet” generally refers to a formatted unit of data carried by a packet-switched network. A packet typically can include user data along with control data. The control data can provide information for delivering the user data. For example, the control data can include source and destination network addresses/ports, error checking codes, sequencing information, hop counts, priority information, security information, or other suitable information regarding the user data. 
In accordance with embodiments of the disclosed technology, the control data can also include a delivery time field configured to contain data of a delivery time at which a packet or a payload of the packet containing time-sensitive data is allowed to be forwarded to or accessed by a final destination or endpoint, as described in more detail herein. “Time-sensitive data” generally refers to data whose importance and/or value diminishes or otherwise changes in some ways as a function of time. Typically, the control data can be contained in headers and/or trailers of a packet. The headers and trailers can include one or more data fields containing suitable information. An example data schema for control data is described in more detail below with reference toFIGS.4A and4B. FIG.1is a schematic diagram illustrating a distributed computing system100implementing network traffic routing and associated transmission rate limiting in accordance with embodiments of the disclosed technology. As shown inFIG.1, the distributed computing system100can include an underlay network108interconnecting a plurality of hosts106, a plurality of client devices102associated with corresponding users101, and a platform controller125operatively coupled to one another. Even though particular components of the distributed computing system100are shown inFIG.1, in other embodiments, the distributed computing system100can also include additional and/or different components or arrangements. For example, in certain embodiments, the distributed computing system100can also include network storage devices, additional hosts, and/or other suitable components (not shown) in other suitable configurations. As shown inFIG.1, the underlay network108can include one or more network nodes112that interconnect the multiple hosts106and the client devices102of the users101. In certain embodiments, the hosts106can be organized into racks, action zones, groups, sets, or other suitable divisions. For example, in the illustrated embodiment, the hosts106are grouped into three host sets identified individually as first, second, and third host sets107a-107c. Each of the host sets107a-107cis operatively coupled to a corresponding one of the network nodes112a-112c, respectively, which are commonly referred to as “top-of-rack” network nodes or “TORs.” The TORs112a-112ccan then be operatively coupled to additional network nodes112to form a computer network in a hierarchical, flat, mesh, or other suitable types of topology. The underlay network can allow communications among hosts106, the platform controller125, and the users101. In other embodiments, the multiple host sets107a-107cmay share a single network node112or can have other suitable arrangements. The hosts106can individually be configured to provide computing, storage, and/or other suitable cloud or other suitable types of computing services to the users101. For example, as described in more detail below with reference toFIG.2, one of the hosts106can initiate and maintain one or more virtual machines144(shown inFIG.2) or containers (not shown) upon requests from the users101. The users101can then utilize the provided virtual machines144or containers to perform database, computation, communications, and/or other suitable tasks. In certain embodiments, one of the hosts106can provide virtual machines144for multiple users101. For example, the host106acan host three virtual machines144individually corresponding to each of the users101a-101c. 
In other embodiments, multiple hosts106can host virtual machines144for the users101a-101c. The client devices102can each include a computing device that facilitates access by the users101to computing services provided by the hosts106via the underlay network108. In the illustrated embodiment, the client devices102individually include a desktop computer. In other embodiments, the client devices102can also include laptop computers, tablet computers, smartphones, or other suitable computing devices. Though three users101are shown inFIG.1for illustration purposes, in other embodiments, the distributed computing system100can facilitate any suitable numbers of users101to access cloud or other suitable types of computing services provided by the hosts106in the distributed computing system100. The platform controller125can be configured to manage operations of various components of the distributed computing system100. For example, the platform controller125can be configured to allocate virtual machines144(or containers and other suitable resources) in the distributed computing system100, monitor operations of the allocated virtual machines144, or terminate any allocated virtual machines144once operations are complete. In another example, the platform controller125can be configured to maintain and provide access to a platform system time. In a further example, the platform controller125can facilitate synchronization of local system time on the individual hosts106according to the Network Time Protocol or other suitable protocols. In the illustrated implementation, the platform controller125is shown as an independent hardware/software component of the distributed computing system100. In other embodiments, the platform controller125can also be a datacenter controller, a fabric controller, or other suitable types of controller or a component thereof implemented as a computing service on one or more of the hosts106. FIG.2is a schematic diagram illustrating certain hardware/software components of the distributed computing system100in accordance with embodiments of the disclosed technology. In particular,FIG.2illustrates an overlay network108′ that can be implemented on the underlay network108inFIG.1. Though a particular configuration of the overlay network108′ is shown inFIG.2, in other embodiments, the overlay network108′ can also be configured in other suitable ways. InFIG.2, only certain components of the underlay network108ofFIG.1are shown for clarity. InFIG.2and in other Figures herein, individual software components, objects, classes, modules, and routines may be a computer program, procedure, or process written as source code in C, C++, C#, Java, and/or other suitable programming languages. A component may include, without limitation, one or more modules, objects, classes, routines, properties, processes, threads, executables, libraries, or other components. Components may be in source or binary form. Components may include aspects of source code before compilation (e.g., classes, properties, procedures, routines), compiled binary units (e.g., libraries, executables), or artifacts instantiated and used at runtime (e.g., objects, processes, threads). Components within a system may take different forms within the system. 
As one example, a system comprising a first component, a second component and a third component can, without limitation, encompass a system that has the first component being a property in source code, the second component being a binary compiled library, and the third component being a thread created at runtime. The computer program, procedure, or process may be compiled into object, intermediate, or machine code and presented for execution by one or more processors of a personal computer, a network server, a laptop computer, a smartphone, and/or other suitable computing devices. Equally, components may include hardware circuitry. A person of ordinary skill in the art would recognize that hardware may be considered fossilized software, and software may be considered liquefied hardware. As just one example, software instructions in a component may be burned to a Programmable Logic Array circuit or may be designed as a hardware circuit with appropriate integrated circuits. Equally, hardware may be emulated by software. Various implementations of source, intermediate, and/or object code and associated data may be stored in a computer memory that includes read-only memory, random-access memory, magnetic disk storage media, optical storage media, flash memory devices, and/or other suitable computer readable storage media excluding propagated signals. As shown inFIG.2, the source host106aand the destination hosts106band106b′ (only the destination host106bis shown with detailed components) can each include a processor132, a memory134, a network interface card136, and a packet processor138operatively coupled to one another. In other embodiments, the hosts106can also include input/output devices configured to accept input from and provide output to an operator and/or an automated software controller (not shown), or other suitable types of hardware components. The processor132can include a microprocessor, caches, and/or other suitable logic devices. The memory134can include volatile and/or nonvolatile media (e.g., ROM, RAM, magnetic disk storage media, optical storage media, flash memory devices, and/or other suitable storage media) and/or other types of computer-readable storage media configured to store data received from, as well as instructions for, the processor132(e.g., instructions for performing the methods discussed below with reference toFIGS.5A-5D). Though only one processor132and one memory134are shown in the individual hosts106for illustration inFIG.2, in other embodiments, the individual hosts106can include two, six, eight, or any other suitable number of processors132and/or memories134. The source host106aand the destination host106bcan individually contain instructions in the memory134executable by the processors132to cause the individual processors132to provide a hypervisor140(identified individually as first and second hypervisors140aand140b) and an operating system141(identified individually as first and second operating systems141aand141b). Even though the hypervisor140and the operating system141are shown as separate components, in other embodiments, the hypervisor140can operate on top of the operating system141executing on the hosts106or a firmware component of the hosts106. The hypervisors140can individually be configured to generate, monitor, terminate, and/or otherwise manage one or more virtual machines144organized into tenant sites142. 
For example, as shown inFIG.2, the source host106acan provide a first hypervisor140athat manages first and second tenant sites142aand142b, respectively. The destination host106bcan provide a second hypervisor140bthat manages first and second tenant sites142a′ and142b′, respectively. The hypervisors140are individually shown inFIG.2as a software component. However, in other embodiments, the hypervisors140can be firmware and/or hardware components. The tenant sites142can each include multiple virtual machines144for a particular tenant (not shown). For example, the source host106aand the destination host106bcan both host the tenant sites142aand142a′ for a first tenant101a(FIG.1). The source host106aand the destination host106bcan both host the tenant sites142band142b′ for a second tenant101b(FIG.1). Each virtual machine144can be executing a corresponding operating system, middleware, and/or applications. Also shown inFIG.2, the distributed computing system100can include an overlay network108′ having one or more virtual networks146that interconnect the tenant sites142aand142bacross multiple hosts106. For example, a first virtual network146ainterconnects the first tenant sites142aand142a′ at the source host106aand the destination host106b. A second virtual network146binterconnects the second tenant sites142band142b′ at the source host106aand the destination host106b. Even though a single virtual network146is shown as corresponding to one tenant site142, in other embodiments, multiple virtual networks146(not shown) may be configured to correspond to a single tenant site142. The virtual machines144can be configured to execute one or more applications147to provide suitable cloud or other suitable types of computing services to the users101(FIG.1). For example, the source host106acan execute an application147that is configured to provide a computing service that monitors online trading and distributes price data to multiple users101subscribing to the computing service. The virtual machines144on the virtual networks146can also communicate with one another via the underlay network108(FIG.1) even though the virtual machines144are located on different hosts106. Communications of each of the virtual networks146can be isolated from other virtual networks146. In certain embodiments, communications can be allowed to cross from one virtual network146to another through a security gateway or otherwise in a controlled fashion. A virtual network address can correspond to one of the virtual machines144in a particular virtual network146. Thus, different virtual networks146can use one or more virtual network addresses that are the same. Example virtual network addresses can include IP addresses, MAC addresses, and/or other suitable addresses. To facilitate communications among the virtual machines144, virtual switches (not shown) can be configured to switch or filter packets114directed to different virtual machines144via the network interface card136and facilitated by the packet processor138. As shown inFIG.2, to facilitate communications with one another or with external devices, the individual hosts106can also include a network interface card (“NIC”)136for interfacing with a computer network (e.g., the underlay network108ofFIG.1). 
A NIC136can include a network adapter, a LAN adapter, a physical network interface, or other suitable hardware circuitry and/or firmware to enable communications between hosts106by transmitting/receiving data (e.g., as packets) via a network medium (e.g., fiber optic) according to Ethernet, Fibre Channel, Wi-Fi, or other suitable physical and/or data link layer standards. During operation, the NIC136can facilitate communications to/from suitable software components executing on the hosts106. Example software components can include the virtual switches141, the virtual machines144, applications147executing on the virtual machines144, the hypervisors140, or other suitable types of components. In certain implementations, a packet processor138can be interconnected to and/or integrated with the NIC136in order to facilitate network traffic operations for enforcing communications security, performing network virtualization, translating network addresses, maintaining/limiting a communication flow state, or performing other suitable functions. In certain implementations, the packet processor138can include a Field-Programmable Gate Array (“FPGA”) integrated with the NIC136. An FPGA can include an array of logic circuits and a hierarchy of reconfigurable interconnects that allow the logic circuits to be “wired together” like logic gates by a user after manufacturing. As such, a user101can configure logic blocks in FPGAs to perform complex combinational functions, or merely simple logic operations to synthesize equivalent functionality executable in hardware at much faster speeds than in software. In the illustrated embodiment, the packet processor138has one interface communicatively coupled to the NIC136and another interface coupled to a network switch (e.g., a Top-of-Rack or “TOR” switch). In other embodiments, the packet processor138can also include an Application Specific Integrated Circuit (“ASIC”), a microprocessor, or other suitable hardware circuitry. In any of the foregoing embodiments, the packet processor138can be programmed by the processor132(or suitable software components associated therewith) to route packets inside the packet processor138in order to achieve various aspects of time-sensitive data delivery, as described in more detail below with reference toFIGS.3A-5. In operation, the processor132and/or a user101(FIG.1) can configure logic circuits in the packet processor138to perform complex combinational functions or simple logic operations to synthesize equivalent functionality executable in hardware at much faster speeds than in software. For example, the packet processor138can be configured to process inbound/outbound packets for individual flows according to configured policies or rules contained in a flow table such as a match action table (“MAT”). The flow table can contain data representing processing actions corresponding to each flow for enabling private virtual networks with customer supplied address spaces, scalable load balancers, security groups and Access Control Lists (“ACLs”), virtual routing tables, bandwidth metering, Quality of Service (“QoS”), etc. As such, once the packet processor138identifies an inbound/outbound packet as belonging to a particular flow, the packet processor138can apply one or more corresponding policies in the flow table before forwarding the processed packet to the NIC136or TOR112. 
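The per-flow processing just described can be sketched as a table lookup keyed on a flow identifier, with an exception path when no entry matches. The Python below is illustrative only; the key layout and action names are assumptions and do not reproduce an actual MAT.

    FLOW_TABLE = {
        # (src_ip, dst_ip, src_port, dst_port, protocol) -> ordered list of actions
        ("10.1.1.0", "192.168.1.1", 49152, 8080, "tcp"):
            ["translate_address", "apply_acl", "meter_bandwidth"],
    }

    def process_packet(packet, flow_table=FLOW_TABLE):
        key = (packet["src_ip"], packet["dst_ip"],
               packet["src_port"], packet["dst_port"], packet["protocol"])
        actions = flow_table.get(key)
        if actions is None:
            # No matching flow: punt to the host processor via the NIC for exception processing.
            return ["forward_to_host_processor"]
        return actions + ["forward_to_nic_or_tor"]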
For example, as shown inFIG.2, the application147, the virtual machine144, and/or other suitable software components on the source host106acan generate an outbound packet114destined to, for instance, other applications147at the destination hosts106band106b′. The NIC136at the source host106acan forward the generated packet114to the packet processor138for processing according to certain policies in a flow table. Once processed, the packet processor138can forward the outbound packet114to the first TOR112a, which in turn forwards the packet to the second TOR112bvia the overlay/underlay network108and108′. The second TOR112bcan then forward the packet114to the packet processor138at the destination hosts106band106b′ to be processed according to other policies in another flow table at the destination hosts106band106b′. If the packet processor138cannot identify a packet as belonging to any flow, the packet processor138can forward the packet to the processor132via the NIC136for exception processing. In another example, when the first TOR112areceives an inbound packet115, for instance, from the destination host106bvia the second TOR112b, the first TOR112acan forward the packet115to the packet processor138to be processed according to a policy associated with a flow of the packet115. The packet processor138can then forward the processed packet115to the NIC136to be forwarded to, for instance, the application147or the virtual machine144. In the distributed computing system100, different hosts106can be dynamically provisioned to execute applications147based on workload, execution priority, resource availability, or other suitable criteria. As such, maintaining a constant physical and/or network communication distance between the different hosts106may be impractical. For example, an application147executing on the source host106amay have a communication distance with another application147executing on the destination host106bthat is different than a communication distance with one executing on the other destination host106b′ or other hosts106in the distributed computing system100. Consequently, the users101subscribing to the related computing services provided by the application147executing on the source host106amay experience communication latency variations. Thus, some users101may receive certain information, for instance, price data from the source host106abefore others, thereby unfairly disadvantaging the other users101. Several embodiments of the disclosed technology can address certain aspects of the foregoing difficulties by implementing a source-configured delivery time mandate for packets114containing the same time-sensitive data delivered from a source (e.g., the source host106a) to multiple destinations or endpoints (e.g., the destination host106band other hosts106in the distributed computing system100). In one implementation, packets114can be configured to include a delivery time field in a preamble of the packets114. The delivery time field can be configured to contain data representing a delivery time at which the packets114or payloads of the packets114are allowed to be forwarded from a host106to a final destination, such as a virtual machine, container, or other suitable types of endpoint hosted on the host106. As such, early access to the time-sensitive data by some users101may be prevented, as described in more detail below with reference toFIGS.3A-5. 
FIGS.3A-3Dare schematic diagrams illustrating certain example operations of time-sensitive data delivery in a distributed computing system in accordance with embodiments of the disclosed technology. As shown inFIG.3A, the destination hosts106band106b′ can each include a delivery agent139that is configured to enforce a source-configured delivery time mandate for packets114containing the same or similar time-sensitive data from the source host106a. The delivery agent139can be implemented in the packet processor138, the NIC136, the hypervisor140(shown inFIG.2), the operating system141(shown inFIG.2), in the TOR112a-112c(shown inFIG.1), or in other suitable manners. In the illustrated example, packets114can be configured to include a delivery time field186in, for instance, a preamble of the packets114. In other embodiments, the delivery time field can be included in a midamble, suffix, or other portions of the packets114. The delivery time field186can be configured to contain data representing a delivery time at which the packets114or payloads of the packets114are allowed to be forwarded from the destination hosts106band106b′ to a final destination or endpoint, such as a virtual machine144hosted on the destination hosts106band106b′, as identified in, for example, a destination IP address field, a media access control (“MAC”) address, or other suitable network address field of the packets114. For instance, as shown inFIG.3A, the packet114destined to the destination host106bcan include header fields that contain data representing a source address (e.g., “10.1.1.0”), a destination address of a virtual machine144(e.g., “192.168.1.1”), and a delivery time (e.g., “2020-11-30 15:29:01”). On the other hand, the packet114′ destined to the destination host106b′ can include header fields that contain data representing the same source address (e.g., “10.1.1.0”), a different destination address of another virtual machine144(e.g., “170.1.1.2”), and the same delivery time (e.g., “2020-11-30 15:29:01”). Upon receiving the packets114and114′, the delivery agents139at the destination hosts106band106b′ can individually inspect the data included in the delivery time field186of the packets114and114′ to identify the delivery time set by the source host106a, e.g., “2020-11-30 15:29:01.” Though the delivery times for both the destination hosts106band106b′ are shown as being the same inFIG.3A, in other embodiments, the delivery times can be different to accommodate clock drift or for other suitable reasons. The delivery agent139can also be configured to determine whether the identified delivery time indicated by the data in the delivery time field186has expired when compared to, for instance, a local system time, a platform system time, or other suitable standard time. When the local system time is used, the destination hosts106band106b′ can be configured to synchronize the local system time according to the Network Time Protocol, the Precision Time Protocol, or other suitable protocols. The time synchronization can be based on a remote reference clock (e.g., a clock at a Global Positioning System receiver) or other suitable types of reference clocks. In response to determining that the delivery time has expired, the delivery agents139at the destination hosts106band106b′ can be configured to immediately forward or otherwise allow access to the packets114and114′ or payloads of the packets114and114′ by the final destinations, such as the virtual machines144, as shown inFIG.3B.
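The delivery-agent decision just described (compare the stamped delivery time against a synchronized local clock, forward immediately if it has expired, otherwise hold) can be sketched as follows. The in-process timer and heap-based buffer are assumptions; a real agent might live in a NIC, FPGA, hypervisor, or TOR switch, and the clock is assumed to be synchronized by NTP/PTP or similar.

```python
# Sketch of a delivery agent: release immediately if the delivery time has passed, else buffer.
import heapq
import time

class DeliveryAgent:
    def __init__(self, clock=time.time):
        self.clock = clock          # assumed synchronized (e.g., via NTP or PTP)
        self.held = []              # min-heap of (delivery_time, tiebreaker, packet)

    def on_receive(self, packet: dict, forward):
        delivery_time = packet["header"]["delivery_time"]
        if delivery_time <= self.clock():
            forward(packet)         # delivery time expired: release to the endpoint now
            return "forwarded"
        heapq.heappush(self.held, (delivery_time, id(packet), packet))
        return "held"

    def release_due(self, forward):
        # Called periodically (or by a hardware timer) to release packets whose time has come.
        while self.held and self.held[0][0] <= self.clock():
            _, _, packet = heapq.heappop(self.held)
            forward(packet)

agent = DeliveryAgent()
status = agent.on_receive({"header": {"delivery_time": time.time() + 0.01}}, forward=print)
print(status)                       # "held"
time.sleep(0.02)
agent.release_due(forward=lambda p: print("released", p["header"]["delivery_time"]))
```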
In the illustrated example, the delivery time is “2020-11-30 15:29:01” while the local system times at the destination hosts106band106b′ are “2020-11-30 15:29:01” and “2020-11-30 15:30:02,” respectively. As such, the delivery time included in the packets114and114′ has expired when the packets114and114′ arrive at the destination hosts106band106b′. Accordingly, the delivery agents139can immediately allow access to the packets114and114′ by the virtual machines144identified, for instance, by the network addresses “192.168.1.1” and “170.1.1.2.” The destination hosts106band106b′ can also be configured to optionally transmit reports116and116′ to the source host106a, the final destination (e.g., the virtual machines144), a monitoring environment (e.g., the platform controller125inFIG.1), or other suitable entities indicating that the packets114and114′ have arrived “late,” i.e., after the delivery time has expired, and to request the source host106ato adjust the delivery time for future packets (not shown). On the other hand, in response to determining that the delivery time has not expired, the delivery agents139at the destination hosts106band106b′ can be configured to temporarily store the packets114and114′ or the payloads of the packets114and114′ in a buffer until the delivery time indicated in the delivery time field186expires. The buffer can include a physical and/or virtual storage in the NIC136, the packet processor138coupled to the NIC136, the hypervisor140(shown inFIG.2), the operating system141(shown inFIG.2) on the destination hosts106band106b′, or a combination thereof. For example, as shown inFIG.3C, the packet114has arrived at the destination host106bwhile the packet114′ is still in transit from the source host106ato the destination host106b′. Upon receiving the packet114, the delivery agent139at the destination host106bcan determine that the delivery time (e.g., “2020-11-30 15:29:01”) has not expired yet when compared to a local system time (e.g., “2020-11-30 15:28:01”). As such, the delivery agent139can be configured to temporarily store the packet114in the packet processor138without forwarding the packet114to the virtual machine144executing on the destination host106band identified by the destination address, e.g., “192.168.1.1.” As such, even though the packet114arrives at the destination host106bbefore the packet114′ containing the same time-sensitive information arrives at the destination host106b′, the packet114and the time-sensitive information contained in the packet114are held until the delivery time expires, as shown with dashed lines inFIG.3C. Though the temporary storage operation is described above in the context of the destination hosts106band106b′, in further implementations, the foregoing delivery time determination and temporary storage operations can also be performed by, for example, a network node such as the TOR112inFIG.1before the packets are delivered to the destination hosts106band106b′. In certain embodiments, the destination hosts106band106b′ can also be configured to determine a difference between the delivery time indicated in the delivery time field186of the packets114and114′ and the local or platform system time at which the packets114and114′ were received. The destination hosts106band106b′ can then compare the determined difference with a delay threshold.
When the difference equals or exceeds the delay threshold, the destination hosts106band106b′ can be configured to transmit notifications118and118′ to the source host106a, the delivery controller131, or other suitable entities indicating that a “long” delay between reception of the packets114and114′ at the destination hosts106band106b′ and forwarding to the final destinations is detected, as shown inFIG.3D. In other embodiments, the destination hosts106band106b′ can also be configured to report the determined difference to the source host106a, the delivery controller131, or other suitable entities in response to or irrespective of whether a long delay is detected. The source host106acan be configured to set the delivery time in various ways in order to achieve simultaneous or near simultaneous (e.g., within 0.1 millisecond) delivery of the packets114and114′ and associated payloads containing the same time-sensitive data to the final destinations. In one embodiment, the source host106acan be configured to calculate a value of the delivery time based on a current time at the source host106aand an estimated maximum latency of communicating with all the destination hosts106. For instance, the source host106acan periodically transmit test packets (e.g., pings, not shown) to the various destination hosts106and record latency values between transmitting the test packets and receiving a response in return. The source host106acan also record values such as round-trip time when establishing network connections with the destination hosts106or determine latency values to the destination hosts106in other suitable manners. Based on the recorded historical latency data, the source host106acan be configured to select a maximum latency corresponding to one or more of the destination hosts106and set the delivery time to be a current time plus the maximum latency and optionally a safety factor, as follows:
Delivery time=Current time+Maximum latency+Safety factor
The safety factor can be 0%, 10%, 20%, or other suitable proportions of the maximum latency or can be a fixed or adjustable time value (e.g., 1 millisecond) proportional to the sum of the current time and the maximum latency. In further examples, the source host106acan be configured to determine the delivery time with other suitable factors, weights, and/or in other suitable manners. In any of the foregoing examples, the delivery time can be calculated as a current time plus an offset that is defined by a system administrator or other suitable entities. In certain embodiments, the source host106acan also include a delivery controller131that is configured to adjust the calculation of the delivery time based on feedback from the destination hosts106. Though the delivery controller131is shown inFIG.3Aas being implemented as a component of the source host106a, in other implementations, the delivery controller131can be implemented as a computing service available to the source host106aor in other suitable forms. In the illustrated example, when a report116from a destination host106is received indicating that the packets114or114′ previously transmitted arrived “late,” i.e., arrived after the set delivery time has expired, the delivery controller131can be configured to increase the maximum latency and/or the optional safety factor in the formula above by a preset amount (e.g., 0.5 millisecond). The delivery controller131can be configured to keep monitoring for any additional report of late arrival of additional packets114and114′.
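A worked sketch of the quoted formula is shown below. The probing samples, the 10% safety fraction, and the millisecond units are made-up values chosen only to illustrate how a source might turn measured latencies into a delivery time stamp.

```python
# Worked sketch of: delivery_time = current_time + maximum_latency + safety_factor
import time

def compute_delivery_time(latency_samples_ms: dict[str, list[float]],
                          safety_fraction: float = 0.10) -> float:
    # Take the worst observed latency across all destinations (from periodic test packets).
    max_latency_ms = max(max(samples) for samples in latency_samples_ms.values())
    safety_ms = safety_fraction * max_latency_ms
    return time.time() + (max_latency_ms + safety_ms) / 1000.0

samples = {
    "192.168.1.1": [0.8, 1.1, 0.9],   # ms, hypothetical probe results
    "170.1.1.2":   [2.3, 2.0, 2.6],
}
dt = compute_delivery_time(samples)
print("stamp outgoing packets with delivery time", dt)
```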
In response to detecting additional reports of late arrival of packets114and114′, the delivery controller131can be configured to continue increasing the maximum latency and/or the safety factor in a step or escalating fashion until no more “late” arrival reports are received. In another example, when the delivery controller131receives a notification118or118′ indicating a long delay between arrival and forwarding of the packets114and114′ at one or more of the destination hosts106, the delivery controller131can be configured to decrease the maximum latency and/or safety factor by a preset amount (e.g., 0.5 millisecond). The delivery controller131can then be configured to monitor for any report116of late arrival of packets114and114′. When no such report116is received for a period of time (e.g., ten seconds), the delivery controller131can be configured to further decrease the maximum latency and/or safety factor until at least one such late arrival report116is received. In response to receiving the late arrival report116, the delivery controller131can be configured to restore the previously used maximum latency and/or safety factor that did not result in receiving any late arrival reports116. By setting the delivery time as described above, the delivery controller131can be configured to deliver packets114and114′ containing the time-sensitive information to multiple destinations (e.g., the virtual machines144) at the same time or within a tolerance of time. Though packets114and114′ may arrive at different destination hosts106at different times, the destination hosts106can temporarily store the packets114and114′ in a buffer until the delivery time indicated in the delivery time field of the packets114and114′ expires. As such, final destinations, such as virtual machines144, containers, or applications147(FIG.2) hosted on the various destination hosts106can receive the same or similar information from the source host106aat the same time or within a tolerance of time. Thus, strict physical/network communication distance control between the source host106aand the multiple destination hosts106may be avoided while providing simultaneous dissemination of the same information. Though the technique is described above as being implemented via storing the packets or portions thereof in a buffer, in other embodiments, identifiers of the packets may be stored in the buffer instead of the packets. A platform key (e.g., a master key at a host106) may then be used to derive the decryption key for decrypting the packets, such as by hashing the master key with the stored identifiers of the packets. In further embodiments, the destination hosts106can provide a decryption facility (not shown) to decrypt the packet. The decryption facility can be a trusted platform module, a decryption module in an operating system or hypervisor, a standalone application, or another suitable component. During operation, the final destination or endpoint, e.g., a virtual machine144, can present the packets with the delivery time to the decryption facility. In turn, the decryption facility can be configured to decrypt and provide the virtual machine144access to the information in the packets only after expiration of the delivery time. FIG.4Ais a schematic diagram illustrating a data schema180suitable for a packet header in accordance with embodiments of the disclosed technology. As shown inFIG.4A, the data schema180can include a MAC field181, an IP field182, a TCP field183, a TLS field184, an HTTP field185, and a payload189.
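The feedback loop described above can be summarized in a small controller sketch: widen the latency margin after "late arrival" reports, tighten it after "long delay" notifications, and fall back to the last margin that produced no late arrivals if tightening goes too far. The step size and the restore rule below are assumptions chosen to mirror the prose, not the patent's exact algorithm.

```python
# Sketch of a delivery controller adjusting the latency margin from destination feedback.
class DeliveryController:
    def __init__(self, max_latency_ms: float, step_ms: float = 0.5):
        self.max_latency_ms = max_latency_ms
        self.step_ms = step_ms
        self._last_good = max_latency_ms   # last margin that produced no late arrivals

    def on_late_arrival(self):
        # A destination saw the delivery time expire before the packet arrived: widen the margin.
        self.max_latency_ms += self.step_ms

    def on_long_delay(self):
        # A destination held the packet far longer than needed: try a tighter margin,
        # remembering the current value so it can be restored later if needed.
        self._last_good = self.max_latency_ms
        self.max_latency_ms = max(0.0, self.max_latency_ms - self.step_ms)

    def on_late_after_decrease(self):
        # Tightening went too far: restore the last margin with no late arrivals.
        self.max_latency_ms = self._last_good

ctrl = DeliveryController(max_latency_ms=2.6)
ctrl.on_long_delay()
ctrl.on_late_after_decrease()
print(ctrl.max_latency_ms)   # 2.6 restored
```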
The MAC field181, the IP field182, and the TCP field183can be configured to contain a MAC address, an IP address, and a port number of the NIC136(FIG.2) and/or the host106(FIG.2), respectively. In certain embodiments, the IP field182can also include a delivery time field186(shown inFIG.4B) configured to contain a delivery time. In other embodiments, the delivery time field186can also be an encapsulating layer header in the data schema. The TLS field184can be configured to contain a value indicating a type of data contained in the packet. Example values for the TLS field184can include APPLICATION_DATA, CHANGE_CIPHER_SPEC, ALERT, or HANDSHAKE. The HTTP field185can be configured to contain various parameters according to the HTTP protocol. For example, the parameters can include a content length of the data in the data field, cache control, etc. Example header fields of the IP field182are described in more detail with reference toFIG.4B. Even though the example data schema180includes the HTTP field185, in other embodiments, the data schema180can include Secure Shell, Secure Copy, Secure FTP, or other suitable header fields. FIG.4Bis a schematic diagram illustrating example header fields suitable for the IP field182inFIG.4Ain accordance with embodiments of the disclosed technology. As shown inFIG.4B, the header fields can include a source IP address field187, a destination IP address field188, and a delivery time field186containing example IP addresses and a delivery time, respectively. Though particular fields are shown inFIG.4Bas examples, in other embodiments, the IP header182can also include additional and/or different fields configured to contain other suitable parameters in addition to those shown inFIG.4B. FIGS.5A-5Dare flowcharts illustrating processes for implementing time-sensitive data delivery in accordance with embodiments of the disclosed technology. Though the processes are described below in light of the distributed computing system100ofFIGS.1-3D, in other embodiments, the processes can also be performed in other computing systems with similar or different components. As shown inFIG.5A, a process200can include estimating a maximum latency between a source host and multiple destination hosts at stage202. As described above, the maximum latency can be estimated based on historical latency data obtained via, for instance, transmitting test packets and monitoring for responses. The process200can then include calculating a delivery time at stage204. The delivery time can be calculated based on a current time of the source host and the estimated maximum latency as described above with reference toFIGS.3A-3D. The process200can also include setting the calculated value of the delivery time as a parameter in a delivery time field of packets to be transmitted to the multiple destination hosts at stage206. The process200can further include transmitting the packets with the set delivery time at stage208. As shown inFIG.5B, another process210can include inspecting data in the delivery time field upon receiving a packet from the source host at stage212. The process210can then include a decision stage214to determine whether the delivery time included in the delivery time field has expired. In response to determining that the delivery time has expired, the process210proceeds to forwarding the packet to the final destination immediately at stage216and optionally transmitting a “late” arrival report to the source host, the delivery controller, or other suitable entities at stage218. 
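For concreteness, the header fields discussed above for FIG.4B (a source address, a destination address, and a delivery time carried alongside them) could be serialized as in the sketch below. The fixed-width binary layout is purely an assumption for illustration; the document does not prescribe any particular encoding or field width.

```python
# Illustrative packing of (source address, destination address, delivery time) header fields.
import socket
import struct
import time

def pack_header(src_ip: str, dst_ip: str, delivery_time: float) -> bytes:
    # 4 bytes source IPv4, 4 bytes destination IPv4, 8 bytes delivery time (epoch seconds).
    return socket.inet_aton(src_ip) + socket.inet_aton(dst_ip) + struct.pack("!d", delivery_time)

def unpack_header(buf: bytes):
    src = socket.inet_ntoa(buf[0:4])
    dst = socket.inet_ntoa(buf[4:8])
    (delivery_time,) = struct.unpack("!d", buf[8:16])
    return src, dst, delivery_time

hdr = pack_header("10.1.1.0", "192.168.1.1", time.time() + 0.005)
print(unpack_header(hdr))
```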
In response to determining that the delivery time has not expired, the process210proceeds to holding the packet in a buffer at the destination host without forwarding the packet to the final destination at stage220. The process210can optionally include determining whether the difference between the time of arrival and the delivery time exceeds a delay threshold and reporting a “long” delay to the source host, the delivery controller, or other suitable entities in response to determining that the difference exceeds the delay threshold at stage222. As shown inFIG.5C, a process230can include receiving, at a source host, a “late” arrival report at stage232. As described above, the late arrival report indicates that the delivery time has expired before the packet is received at a destination host. The process230can also include increasing the maximum latency used to calculate the delivery time at stage234. Various ways of increasing the maximum latency are described above with reference toFIGS.3A-3D. The process230can then include a decision stage236to determine whether additional late arrival reports are received. In response to determining that an additional late arrival report is received, the process230reverts to increasing the maximum latency at stage234. Otherwise, the process230proceeds to maintaining the maximum latency at stage238. As shown inFIG.5D, a process240can include receiving a “long” delay notification from a destination host at stage242. The process240can also include reducing the maximum latency at stage244. Various ways of decreasing the maximum latency are described above with reference toFIGS.3A-3D. The process240can then include a decision stage246to determine whether any late arrival report is received. In response to determining that no late arrival report is received, the process240reverts to reducing the maximum latency at stage244. Otherwise, the process240proceeds to restoring the maximum latency last used that did not cause reception of any late arrival reports at stage248. FIG.6is a computing device300suitable for certain components of the distributed computing system100inFIG.1. For example, the computing device300can be suitable for the hosts106, the client devices102, or the platform controller125ofFIG.1. In a very basic configuration302, the computing device300can include one or more processors304and a system memory306. A memory bus308can be used for communicating between processor304and system memory306. Depending on the desired configuration, the processor304can be of any type including but not limited to a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The processor304can include one or more levels of caching, such as a level-one cache310and a level-two cache312, a processor core314, and registers316. An example processor core314can include an arithmetic logic unit (ALU), a floating-point unit (FPU), a digital signal processing core (DSP Core), or any combination thereof. An example memory controller318can also be used with processor304, or in some implementations memory controller318can be an internal part of processor304. Depending on the desired configuration, the system memory306can be of any type including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.) or any combination thereof. The system memory306can include an operating system320, one or more applications322, and program data324.
As shown inFIG.11, the operating system320can include a hypervisor140for managing one or more virtual machines144. This described basic configuration302is illustrated inFIG.8by those components within the inner dashed line. The computing device300can have additional features or functionality, and additional interfaces to facilitate communications between basic configuration302and any other devices and interfaces. For example, a bus/interface controller330can be used to facilitate communications between the basic configuration302and one or more data storage devices332via a storage interface bus334. The data storage devices332can be removable storage devices336, non-removable storage devices338, or a combination thereof. Examples of removable storage and non-removable storage devices include magnetic disk devices such as flexible disk drives and hard-disk drives (HDD), optical disk drives such as compact disk (CD) drives or digital versatile disk (DVD) drives, solid state drives (SSD), and tape drives to name a few. Example computer storage media can include volatile and nonvolatile, removable, and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. The term “computer readable storage media” or “computer readable storage device” excludes propagated signals and communication media. The system memory306, removable storage devices336, and non-removable storage devices338are examples of computer readable storage media. Computer readable storage media include, but not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other media which can be used to store the desired information and which can be accessed by computing device300. Any such computer readable storage media can be a part of computing device300. The term “computer readable storage medium” excludes propagated signals and communication media. The computing device300can also include an interface bus340for facilitating communication from various interface devices (e.g., output devices342, peripheral interfaces344, and communication devices346) to the basic configuration302via bus/interface controller330. Example output devices342include a graphics processing unit348and an audio processing unit350, which can be configured to communicate to various external devices such as a display or speakers via one or more A/V ports352. Example peripheral interfaces344include a serial interface controller354or a parallel interface controller356, which can be configured to communicate with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device, etc.) or other peripheral devices (e.g., printer, scanner, etc.) via one or more I/O ports358. An example communication device346includes a network controller360, which can be arranged to facilitate communications with one or more other computing devices362over a network communication link via one or more communication ports364. The network communication link can be one example of a communication media. Communication media can typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and can include any information delivery media. 
A “modulated data signal” can be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media can include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR) and other wireless media. The term computer readable media as used herein can include both storage media and communication media. The computing device300can be implemented as a portion of a small-form factor portable (or mobile) electronic device such as a cell phone, a personal data assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset device, an application specific device, or a hybrid device that includes any of the above functions. The computing device300can also be implemented as a personal computer including both laptop computer and non-laptop computer configurations. From the foregoing, it will be appreciated that specific embodiments of the disclosure have been described herein for purposes of illustration, but that various modifications may be made without deviating from the disclosure. In addition, many of the elements of one embodiment may be combined with other embodiments in addition to or in lieu of the elements of the other embodiments. Accordingly, the technology is not limited except as by the appended claims.
DETAILED DESCRIPTION In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present inventive subject matter. It will be apparent, however, that the present inventive subject matter may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present inventive subject matter. Embodiments are described herein according to the following outline:
1.0. General Overview
2.0. Structural Overview
2.1. Network Messages
2.2. Network Paths
2.3. Network Device
2.4. Ports
2.5. Traffic Management
2.6. Forwarding Logic
2.7. Performance Monitoring Subsystems
2.8. Path Selection and Management
2.9. Miscellaneous
3.0. Collecting State Information through Reflected Packets
3.1. Illustrative Network
3.2. Probing
3.3. Annotation
3.4. Determining When to Reflect a Packet
3.5. Reflecting the Packet
3.6. Handling a Reflected Packet at Intermediate Hops
3.7. Reflecting Packets Within Tunnels
3.8. Collection
3.9. Instructions Not to Reflect
3.10. Device Logic
3.11. Miscellaneous
4.0. Dynamic Weighted Cost Multipathing
4.1. General Flow
4.2. Multipath Forwarding Implementation Example
4.3. Adjusting Weights
4.4. Packet Reordering
4.5. Miscellaneous
5.0. Visibility Packets
5.1. Transforming Packets into Special Visibility Packets
5.2. Visibility Tags
5.3. Visibility Queue
5.4. Healing Engine
5.5. Example Process Flows
6.0. Programmable Visibility Engines
6.1. Example PVE Architecture
6.2. Example PVE Process Flow
6.3. PVE Functions
6.4. PVE Inputs
6.5. PVE Outputs
6.6. PVE Actions
6.7. Multi-Layer PVEs
6.8. Implementing WRED with PVEs
6.9. Implementing Heatmaps with PVEs
7.0. Example Embodiments
8.0. Implementation Mechanism-Hardware Overview
9.0. Extensions and Alternatives
1.0. GENERAL OVERVIEW Approaches, techniques, and mechanisms are disclosed for improving performance of a network based on state information. According to an embodiment, nodes within a network are configured to adapt to changing path states, due to congestion (e.g. from long-lived data flows and/or other issues), node failures, and other factors. In an embodiment, the foregoing is enabled by, among other aspects, detecting path state changes and reporting the changes back to a source using messages capable of traversing a routable network. In an embodiment, the foregoing may involve, for example, collecting information about node and/or path state from other nodes in the network using reflected packets. In an embodiment, a node may selectively convey path information and/or other state information to another node by annotating the information into packets it receives from the other node. A node may furthermore selectively reflect these annotated packets back to the other node, or these annotated packets may be reflected by yet other nodes that subsequently receive these packets. In various embodiments, this reflection may be performed by any node through which a packet is routed, regardless of whether the reflecting node is the final destination of the packet, and even if the reflecting node is in the middle of a tunnel. The information to be conveyed may be inserted into the original packet, and the original packet may then itself be reflected back to the source node.
Or, the reflecting node may transparently duplicate the original packet, insert the information into the duplicate packet, and reflect the duplicate packet back to the source node while the original packet continues on to its next hop, assuming the reflecting node is not the destination of the packet. The packet into which the reflecting node inserts the information, whether the original packet or a duplicate, is referred to herein as a “reflected packet.” Using these reflected packets, state and other information may be conveyed over routable networks with varying levels of hierarchy. Moreover, nodes within the network may take various actions, such as changing routes, adjusting traffic flows, and so forth, based on the information collected from reflected packets. According to an embodiment, a weighted cost multipathing selection technique is improved by dynamically adjusting the weights of the paths in response to feedback indicating the current state of the network topology. Such feedback may be collected, for instance, using probing and collection processes at some or all of the nodes within the network. The feedback indicates the current state of one or more paths, such as current congestion amounts, path faults, and so forth. As the path states change over time, the weights may also change. Both the gathering of feedback and dynamic adjustment may be automated using logic implemented by computing hardware at the nodes, thus allowing the techniques to scale to any arbitrary number of network nodes and paths. In an embodiment, collected state information may be returned to and consumed by a path management process at the source node, at any other node between a reflecting node and the source node, and/or at another node designated as a collection point. The path management process analyzes the state information and assigns new weights to any relevant path(s) based on the analysis. For instance, a multipath forwarding table may be updated such that the number of entries for a more congested path is decreased at the same time that the number of entries for a less congested path is increased. According to an embodiment, a switch or other network node is configured to transform certain packets or other data units that would have been dropped into “special visibility” packets (or other data units). Similarly, in an embodiment, any data unit that is impacted in an unexpected manner (e.g. inflated latency) may also be transformed into a special visibility packet. The transformation may, in some cases, include duplicating the original packet and transforming the duplicate packet into a special visibility packet instead of the original. Special visibility packets, or simply “visibility packets,” may be used for a number of different purposes, depending on the embodiment. For instance, visibility packets may be stored for some period of time in a repository, where they may be viewed and/or analyzed through external processes. As another example, certain types of special visibility packets may be utilized by network reconfiguration logic for determining when and/or how to correct problems associated with those types of special visibility packets. According to an embodiment, a computing construct referred to as a Programmable Visibility Engine (“PVE”) is provided. The PVE receives instructions to execute one or more functions from a defined set of functions supported by the PVE.
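A hedged sketch of the annotate-and-reflect idea is given below: a node copies a packet, adds its local state (for example, queue congestion), flips the direction so the copy heads back toward the source, and lets the original continue on. The dictionary representation, field names, and reflection criterion are illustrative assumptions only.

```python
# Sketch of selectively annotating and reflecting a packet back toward its source.
import copy

def maybe_reflect(packet: dict, local_state: dict, should_reflect):
    """Return (forward_packet, reflected_packet_or_None)."""
    if not should_reflect(packet, local_state):
        return packet, None
    reflected = copy.deepcopy(packet)
    reflected.setdefault("annotations", []).append(local_state)  # annotate node/path state
    reflected["reflected"] = True                                # mark so downstream nodes treat it specially
    # Reverse direction: the reflected copy is addressed back to the original source.
    reflected["dst"], reflected["src"] = packet["src"], local_state["node_id"]
    return packet, reflected

pkt = {"src": "B", "dst": "H", "flow": 42}
state = {"node_id": "D", "queue_delay_us": 830, "egress_port_util": 0.92}
forward, reflected = maybe_reflect(
    pkt, state, should_reflect=lambda p, s: s["egress_port_util"] > 0.9)
print(forward["dst"], reflected["dst"] if reflected else None)   # H B
```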
The PVE may be, for instance, a software-based engine executed by one or more general purpose processors within the node, or specialized hardware such as a special-purpose processor, FPGA, or ASIC (or a set of logic contained therein). By instructing the PVE, or a series of PVEs, to perform various functions, a customer may easily customize the capabilities of a switch or other device to support calculation and collection of arbitrary metrics, and performance of various actions in response to custom triggers. In an embodiment, a node may have a fixed number of PVEs. These PVEs may be tied to input data from predefined areas of memories, or dynamically linked by the user to input data from different areas of memory. In other embodiments, a user may dynamically instantiate a number of PVEs within a node, and link those PVEs to desired areas of memory. In other aspects, the inventive subject matter encompasses computer apparatuses and computer-readable media configured to carry out the foregoing techniques. 2.0. STRUCTURAL OVERVIEW FIG.1is an illustrative view of various aspects of an example networking system100, also referred to as a network, in which the techniques described herein may be practiced, according to an embodiment. Networking system100comprises a plurality of interconnected nodes110a-110n(collectively nodes110), each implemented by a different computing device. For example, a node110may be a single networking computing device, such as a router or switch, in which some or all of the processing components described herein are implemented using application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). As another example, a node110may include one or more memories storing instructions for implementing various components described herein, one or more hardware processors configured to execute the instructions stored in the one or more memories, and various data repositories in the one or more memories for storing data structures utilized and manipulated by the various components. Each node110is connected to one or more other nodes110in network100by one or more communication links, depicted as lines between nodes110. The communication links may be any suitable wired cabling or wireless links. Note that system100illustrates only one of many possible arrangements of nodes within a network. Other networks may include fewer or additional nodes110having any number of links between them. 2.1. Network Messages While each node110may or may not have a variety of other functions, in an embodiment, each node110is configured to send, receive, and/or relay data to one or more other nodes110via these links. In general, data is communicated as a series of discrete units or structures of data represented by signals transmitted over the communication links. Different nodes110within a network100may send, receive, and/or relay data units at different communication levels, or layers. For instance, a first node110may send a data unit at the network layer (e.g. a TCP segment) to a second node110over a path that includes an intermediate node110. This data unit will be broken into smaller data units (“subunits”) at various sublevels before it is transmitted from the first node110. For example, the data unit may be broken into packets, then cells, and eventually sent out as a collection of signal-encoded bits to the intermediate device.
Depending on the network type and/or the device type of the intermediate node110, the intermediate node110may rebuild the entire original data unit before routing the information to the second node110, or the intermediate node110may simply rebuild the subunits (e.g. packets or frames) and route those subunits to the second node110without ever composing the entire original data unit. When a node110receives a data unit, it typically examines addressing information within the data unit (and/or other information within the data unit) to determine how to process the data unit. The addressing information may be, for instance, an Internet Protocol (IP) address, MPLS label, or any other suitable information. If the addressing information indicates that the receiving node110is not the destination for the data unit, the node may look up the destination node110within the receiving node's routing information and route the data unit to another node110connected to the receiving node110based on forwarding instructions associated with the destination node110(or an address group to which the destination node belongs). The forwarding instructions may indicate, for instance, an outgoing port over which to send the message, a label to attach to the message, etc. In cases where multiple paths to the destination node110are possible, the forwarding instructions may include information indicating a suitable approach for selecting one of those paths, or a path deemed to be the best path may already be defined. Addressing information, flags, labels, and other metadata used for determining how to handle a data unit is typically embedded within a portion of the data unit known as the header. The header is typically at the beginning of the data unit, and is followed by the payload of the data unit, which is the information actually being sent in the data unit. A header is typically comprised of fields of different types, such as a destination address field, source address field, destination port field, source port field, and so forth. In some protocols, the number and the arrangement of fields may be fixed. Other protocols allow for arbitrary numbers of fields, with some or all of the fields being preceded by type information that explains to a node the meaning of the field. A traffic flow is a sequence of data units, such as packets, from a source computer to a destination. In an embodiment, the source of the traffic flow may mark each data unit in the sequence as a member of the flow using a label, tag, or other suitable identifier within the data unit. In another embodiment, the flow is identified by deriving an identifier from other fields in the data unit (e.g. a “five-tuple” combination of a source address, source port, destination address, destination port, and protocol). A flow is often intended to be sent in sequence, and network devices are therefore typically configured to send all data units within a given flow along a same path to ensure that the flow is received in sequence. For convenience, many of the techniques described in this disclosure are described with respect to routing IP packets in an L3 (level 3) network, in which context the described techniques have particular advantages. It will be recognized, however, that these techniques may also be applied to realize advantages in routing other types of data units conforming to other protocols and/or at other communication layers within a network.
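The "five-tuple" flow identification mentioned above can be sketched as follows: hash the five-tuple to a flow identifier and use it to pin every packet of the flow to the same next hop, which keeps the flow in order. The hash choice and the next-hop list are illustrative assumptions, not the document's required mechanism.

```python
# Sketch: derive a flow ID from the five-tuple and pin the flow to one next hop.
import hashlib

def flow_id(src_ip, dst_ip, src_port, dst_port, proto) -> int:
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big")

def pick_next_hop(pkt: dict, next_hops: list) -> str:
    fid = flow_id(pkt["src_ip"], pkt["dst_ip"], pkt["src_port"], pkt["dst_port"], pkt["proto"])
    return next_hops[fid % len(next_hops)]   # same flow -> same hop -> in-order delivery

pkt = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "src_port": 5000, "dst_port": 80, "proto": "tcp"}
print(pick_next_hop(pkt, ["Node D", "Node C"]))
```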
Thus, unless otherwise stated or apparent, the term “packet” as used herein should be understood to refer to any type of data structure communicated across a network, including packets as well as segments, cells, data frames, datagrams, and so forth. 2.2. Network Paths Any node in the depicted network100may communicate with any other node in the network100by sending messages through a series of nodes110and links, referred to as a path. For example, Node B (110b) may send packets to Node H (110h) via a path from Node B to Node D to Node E to Node H. There may be a large number of valid paths between two nodes. For example, another path from Node B to Node H is from Node B to Node D to Node G to Node H. In an embodiment, a node110does not actually need to specify a full path for a packet that it sends. Rather, the node110may simply be configured to calculate the best path for the packet out of the device (e.g. which egress port it should send the packet out on). When a node110receives a packet that is not addressed directly to the node110, based on header information associated with the packet, such as path and/or destination information, the node110relays the packet along to either the destination node110, or a “next hop” node110that the node110calculates is in a better position to relay the packet to the destination node110. In this manner, the actual path of a packet is the product of each node110along the path making routing decisions about how best to move the packet along to the destination node110identified by the packet. In an embodiment, a node110may be configured to exercise greater control over a path. The node110may, for instance, be configured to include data within the packet that indicates, by a label or identifier, some aspect of the path that should be selected for the packet. Other nodes110are configured to honor this information. Or, a node110may be configured to encapsulate a packet in a tunnel between two nodes. The packet is wrapped with a tunnel header that specifies a different destination than the destination of the packet. The packet is first directed to this tunnel destination, at which point the tunnel header is removed, and the packet continues on to the originally specified destination. Moreover, there may be more than one link between two nodes110. For instance, there is more than one link between Node B and Node D. Each different link between two nodes110may be considered a different path between those two nodes110. Some of the paths between two nodes110are clearly not optimal. For instance, a path from Node B to Node D to Node C to Node F to Node I to Node J to Node N to Node G to Node H is likely less optimal than any of the paths mentioned thus far. A node may thus be configured not to make routing decisions that would select such paths. On the other hand, many other paths may be equally optimal, depending on the state of the network100. To optimize use of network100, nodes110may be configured to distribute, or “load-balance,” traffic between a number of paths so as to reduce congestion at any one node or along any one path. This distribution may be equal, or weighted. Moreover, in accordance with some embodiments, the distribution may change over time in accordance with changes in the state of nodes110and/or paths. In some embodiments, some or all of nodes110may be configured to contribute to various processes for collecting state information associated with nodes110and/or paths.
Some or all of nodes110may be configured, for example, to selectively annotate packets with state information as they traverse the network100. Some or all of nodes110may also or instead be configured to selectively reflect certain annotated packets back down a path, in reverse of the direction they were sent, to provide upstream feedback regarding the states of nodes110and/or paths. Some or all of nodes110may also or instead be configured to collect state information from such annotated packets. Some or all of nodes110may also or instead be configured to change various aspects of network100based on collected information, such as changing traffic flow control policies, rerouting traffic, rebooting nodes110, and so forth. Specific examples of these processes are described subsequently. 2.3. Network Device FIG.2is an illustrative view of various aspects of an example network device200in which techniques described herein may be practiced, according to an embodiment. Network device200is a computing device comprising any combination of hardware and software configured to implement the various logical components described herein, including components210-290. Note that, in an embodiment, some or all of the nodes110in system100may each be a separate network device200. 2.4. Ports Network device200includes ports210/290. Ports210, including ports210a-n, are inbound (“ingress”) ports by which data units referred to herein as packets205are received over a network, such as network100. Ports290, including ports290a-n, are outbound (“egress”) ports by which at least some of the packets205are sent out to other destinations within the network, after having been processed by the network device200. Ports210/290are depicted as separate ports for illustrative purposes, but may actually correspond to the same physical hardware ports on the network device200. That is, a network device200may both receive packets205and send packets205over a single physical port, and the single physical port may thus function as both an ingress port210and egress port290. Nonetheless, for various functional purposes, certain logic of the network device200may view a single physical port as a separate ingress port210and egress port290. Moreover, for various functional purposes, certain logic of the network device200may subdivide a single ingress port210or egress port290into multiple ingress ports210or egress ports290, or aggregate multiple ingress ports210or multiple egress ports290into a single ingress port210or egress port290. Hence, in various embodiments, ports210and290should be understood as distinct logical constructs that are mapped to physical ports rather than simply as distinct physical constructs. 2.5. Traffic Management Since not all packets205received by the device200can be processed by the packet processor(s)250at the same time, a traffic manager221of device200may store packets205in temporary memory structures referred to as buffers222while the packets205are waiting to be processed. For example, the device's forwarding logic220may only be capable of processing a certain number of packets205, or portions of packets205, in a given clock cycle, meaning that other packets205, or portions of packets205, must either be ignored (i.e. dropped) or stored. At any given time, a large number of packets205may be stored in the buffers222of the device200, depending on network traffic conditions. A buffer222may be a portion of any type of memory, including volatile memory and/or non-volatile memory.
Device200includes a buffer manager configured to manage use of buffers222by device200. Among other processing tasks, the buffer manager may, for example, allocate and deallocate specific segments of memory for buffers222, create and delete buffers222within that memory, identify available buffer(s)222in which to store a newly received packet205, maintain a mapping of buffers222to packets205stored in those buffers222(e.g. by a packet sequence number assigned to each packet205as the packet205is received), mark a buffer222as available when a packet205stored in that buffer222is dropped or sent from the device200, determine when to drop a packet205instead of storing the packet205in a buffer222, and so forth. A packet205, and the buffer(s)222in which it is stored, is said to belong to a construct referred to as a queue224. A queue224may be a distinct, continuous portion of the memory in which buffers222are stored. Or, a queue224may instead be a set of linked memory locations (e.g. linked buffers222). In some embodiments, the number of buffers222assigned to a given queue224at a given time may be limited, either globally or on a per-queue basis, and this limit may change over time. The forwarding logic220of device200may process a packet205over one or more stages. A node may have many queues224, and each stage of processing may utilize one or more of the queues224to regulate which packet205is processed at which time. To this end, a queue224arranges its constituent packets205in a sequence, such that each packet205corresponds to a different node in an ordered series of nodes. The sequence in which the queue224arranges its constituent packets205generally corresponds to the sequence in which the packets205in the queue224will be processed. The traffic manager221is a component that manages the use of buffers222to store packets205(or copies thereof), assigns buffers222to queues224, and manages the flow of packets205through the queues224. The traffic manager221may, for instance, determine when to “dequeue” packets205from queues224and provide those packets205to specific packet processor(s) of forwarding logic220. The traffic manager221may further identify a specific queue224to assign a packet205to. 2.6. Forwarding Logic A device200comprises one or more packet processing components that collectively implement forwarding logic220by which the device200is configured to determine how to handle each packet the device200receives. Forwarding logic220, or portions thereof, may, in some instances, be hard-coded. For instance, specific hardware or software within the node may be configured to always react to certain types of data units in certain circumstances in a certain way. Forwarding logic220, or portions thereof, may also be configurable, in that the logic220changes over time in response to data collected from or instructions received from other nodes in the network in which the device200is located. For example, a device200will typically store in its memories one or more forwarding tables (or equivalent structures) that map certain data unit attributes or characteristics to actions to be taken with respect to data units having those attributes or characteristics, such as sending the data unit to a selected path, or processing the data unit using a specified internal component. 
For example, such attributes or characteristics may include a Quality-of-Service level specified by the data unit or associated with another characteristic of the data unit, a flow control group, an ingress port210through which the data unit was received, a tag or label in the packet's header, a source address, destination address, packet type, or any other suitable distinguishing property. In an embodiment, forwarding logic220may read port state data255. Port state data255may include, for instance, flow control state information describing various traffic flows and associated traffic flow control rules or policies, link status information indicating links that are up or down, port utilization information indicating how ports are being utilized (e.g. utilization percentages, utilization states, etc.). Forwarding logic220may be configured to implement the associated rules or policies associated with the flow(s) to which a given packet belongs. Forwarding logic220may process a data unit over multiple stages. At each stage, the data unit is placed in a buffer222, which is said to belong to a queue224. A device200may have many queues224, and each stage of processing may utilize one or more of the queues224. At any given processing stage, one or more packet processing components, such as a Field Programmable Gate Array (FPGA), Application-Specific Integrated Circuit (ASIC), or a general purpose processor executing software-based instructions, reads data units from associated queues224and determines how to handle the data units. In an embodiment, different queues224may exist for different destinations. For example, each port210and/or port290may have its own set of queues224. The queue224to which an incoming packet205is assigned may therefore be selected based on the port210through which it was received, while the queue224to which an outgoing packet is assigned may be selected based on forwarding information indicating which port290the packet should depart from. A different packet processor may be associated with each different set of one or more queues224. Hence, the current processing context of the packet205may be used to select which queue224a packet205should be assigned to. In an embodiment, there may also or instead be different queues224for different flows or sets of flows. That is, each identifiable traffic flow or group of traffic flows is assigned its own set of queues224to which its packets205are respectively assigned. In an embodiment, different queues224may correspond to different classes of traffic or quality-of-service (QoS) levels. Different queues224may also or instead exist for any other suitable distinguishing property of the packets205, such as source address, destination address, packet type, and so forth. For instance, a data unit may be forwarded to another queue224associated with another processing stage implemented by another set of processing components, sent out of the device200over an outbound port290, discarded, delayed for flow control reasons, and so forth. The collective actions of these processing components over these multiple stages is said to implement the forwarding logic of the device200. An example flow of a packet205through device200is as follows. The packet205may be received by a port210. The packet205is then processed by an initial packet processor (in some embodiments known as a packet pre-processor), and then delivered to a traffic manager221. Traffic manager221stores the packet205in a buffer222and assigns the packet205to a queue224. 
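The attribute-to-action mapping of a forwarding table, as described above, can be illustrated with a small lookup sketch. The attribute set (ingress port, QoS class, destination subnet), the action vocabulary, and the exact-match lookup are simplifying assumptions chosen for readability; a real device would typically use longest-prefix matching and hardware tables.

```python
# Illustrative sketch of a forwarding table mapping data-unit attributes to actions.
from dataclasses import dataclass

@dataclass(frozen=True)
class MatchKey:
    ingress_port: str
    qos_class: str
    dst_subnet: str

FORWARDING_TABLE = {
    MatchKey("210a", "gold",   "10.2.0.0/16"): ("enqueue", "egress_290b_q0"),
    MatchKey("210a", "silver", "10.2.0.0/16"): ("enqueue", "egress_290b_q3"),
    MatchKey("210b", "gold",   "10.3.0.0/16"): ("drop", None),
}

def forward(key: MatchKey):
    # Unknown keys are handed to an exception path (e.g., a CPU queue) in this sketch.
    return FORWARDING_TABLE.get(key, ("exception", "cpu_queue"))

print(forward(MatchKey("210a", "gold", "10.2.0.0/16")))   # ('enqueue', 'egress_290b_q0')
print(forward(MatchKey("210c", "gold", "10.9.0.0/16")))   # ('exception', 'cpu_queue')
```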
Traffic manager221manages the flow of the packet205through the queue224until the packet205is released to another packet processor. Depending on the processing, the traffic manager221may then assign the packet205to another queue so that it may be processed by yet another processor, or the packet processor may send the packet205out another port290. In the course of processing a packet205, a device200may replicate a packet205one or more times. For example, a packet205may be replicated for purposes such as multicasting, mirroring, debugging, and so forth. Thus, a single packet205may be replicated to multiple queues224. Hence, though certain techniques described herein may refer to the original packet205that was received by the device200, it will be understood that those techniques will equally apply to copies of the packet205that have been generated for various purposes. Dropping Data Units As data units are routed through different nodes in a network, the nodes may, on occasion, discard, fail to send, or fail to receive data units, thus resulting in the data units failing to reach their intended destination. The act of discarding a data unit, or failing to deliver a data unit, is typically referred to as “dropping” the data unit. Instances of dropping a data unit, referred to herein as “drops” or “packet loss,” may occur for a variety of reasons, such as resource limitations, errors, or deliberate policies. Many devices in networks with complex topologies, such as switches in modern data centers, provide limited visibility into drops and other issues that can occur inside the devices. Such devices can often drop messages, such as packets, cells, or other data units, without providing sufficient information to determine why the messages were dropped. For instance, it is common for certain types of nodes, such as switches, to be susceptible to “silent packet drops,” where data units are dropped without being reported by the switch at all. Another common problem is known as a “silent black hole,” where a node is unable to forward a data unit due to a lack of valid routing instructions at the node, such as errors or corruption in forwarding table entries. Another common problem is message drops or routing errors due to bugs in particular protocols. Beyond dropping data units, a variety of other low visibility issues may arise in a node, such as inflated latency. Inflated latency refers to instances where the delay in transmission of a data unit exceeds some user expectation or target threshold. 2.7. Performance Monitoring Subsystems According to an embodiment, a device200may comprise any of a variety of subsystems configured to facilitate various aspects of monitoring the performance of a network, such as an annotation subsystem230, reflection subsystem240, path state information subsystem250, and visibility subsystem270. Annotation subsystem230interfaces with forwarding logic220and/or traffic manager221to identify when to annotate packets with state information (e.g. using annotation criteria) and insert the state information into the identified packets. The annotated state information may include, for example, node state information235already stored at the device200due to the operation of other component(s) (not depicted) and/or node state information235generated by components within device200. Node state information235may also, in some embodiments, impact when the device200chooses to annotate a packet (e.g. triggered by a congestion level or amount of delay).
Suitable selective annotation techniques for annotation subsystem230are described elsewhere herein.

Reflection subsystem240interfaces with forwarding logic220and/or traffic manager221to identify when to reflect packets back along the path from whence the packets came (e.g. using reflection criteria), and interfaces with forwarding logic220to take appropriate actions to actually reflect packets identified for reflection. Node state information235may, in some embodiments, impact when the device200chooses to reflect a packet (e.g. triggered by a congestion level or amount of delay). Suitable selective reflection techniques for reflection subsystem240are described elsewhere herein.

Path state information subsystem250interfaces with forwarding logic220to identify when to collect information from packets that have been marked as reflected (i.e. by other nodes of the network in which device200resides), when to generate and store metrics based on annotated information therein, and optionally when to take one or more actions based thereon. Suitable information collection techniques for subsystem250are described elsewhere herein.

In an embodiment, the forwarding logic220may be configured such that certain packets that would have been dropped by the forwarding logic220or traffic manager221, and/or certain related packets, are instead processed by a visibility subsystem270that transforms the packets into special visibility packets. Conceptually, the packets to be transformed may be viewed as being forwarded to a visibility path instead of the normal path to which they otherwise would have been forwarded. The visibility subsystem270analyzes the visibility packets and optionally generates logs or reports based thereon. In this manner, the device200provides insight into drops or other events. The visibility subsystem270may further react to certain visibility packets, or trends based thereon, by changing the configuration of device200or by sending messages to other nodes in a network.

2.8. Path Selection and Management

A variety of path selection techniques exist for forwarding logic220to select a path for a packet. One of the most common of these techniques assigns weights to each path. The weights are intended to quantify some aspect of the path, such as the total number of hops in the path and/or the speed or length of the path. Generally, the technique involves selecting a "shortest path" based on routing metrics representing costs that are generally computed at least in part from these weights. The selected path typically (but not necessarily) corresponds to the path with the lowest cost. Though there are many varieties of algorithms for identifying path cost, one example type of algorithm is known as a "shortest path" algorithm. This algorithm may, for example, be employed to identify and calculate the costs for all paths within a network topology, based on individual weights assigned to the nodes and links (also known as "edges") within that topology.

A number of issues may arise when assigning a path for a destination. For instance, many techniques may not consider the state of a path when performing path assignment. That is, assignments are made with no device and/or network state input. Path selection may also occur without considering alternate paths, which may not happen to be the topologically shortest paths, but may nonetheless be better suited to handle traffic due to current network conditions.
Moreover, "shortest path" algorithms tend not to provide an intelligent mechanism for selecting a path when multiple paths are deemed "shortest."

Complex network topologies, such as those found in data centers having thousands or even millions of nodes, employ multiple paths among servers to deliver scalable, cost-effective network capacity. To more efficiently route traffic through a network, the forwarding logic220at some or all of the nodes in the network may include a load-balancing component configured to distribute traffic to the same destination across multiple paths. The simplest and most widely deployed approach for load balancing among these paths, Equal Cost Multipath (ECMP), divides flows among the shortest paths toward a destination. ECMP is designed to utilize an ideally uniform hashing of balanced flow sizes to achieve fairness and good load balancing between paths. However, ECMP assumes a balanced, regular, and fault-free topology, which is often an invalid assumption in practice and can lead to substantial performance degradation and, worse, variation in flow bandwidths even for same-size flows. This is particularly true where the topology is complex, such as in a data center.

Alternatively, a Weighted Cost Multipath (WCMP) approach is often used to balance traffic in such network topologies. WCMP is described in detail in J. Zhou, M. Tewari, M. Zhu, A. Kabbani, L. Poutievski, A. Singh, and A. Vandat, WCMP: weighted cost multipathing for improved fairness in data centers. New York, New York, USA: ACM, 2014, pp. 5-14, the entire contents of which are incorporated by reference for all purposes as if set forth herein. Generally, WCMP assigns weights to paths and distributes traffic to the paths roughly in proportion to their assigned weights. Note that these weights correspond to the relative frequency of assignment of packets to a path, and are therefore not to be confused with the weights that are used to calculate the cost of a path. The weights themselves may be determined in a variety of manners. For instance, Zhou et al. assigns each port a weight roughly proportional to the capacity of each port.

Unfortunately, a traditional WCMP approach is not optimal in certain contexts. For instance, among other weaknesses, traditional selection mechanisms, including hash-based selection mechanisms, do not consider path state when binding flows to paths, and are thus unable to react adequately to path congestion, path faults, and so forth.

In an embodiment, some or all of these problems are addressed by using dynamic weights in conjunction with the WCMP approach. A path management control subsystem260in device200is configured to analyze path state information, such as may be collected by subsystem250or forwarded from another node, and determine when network conditions warrant adjusting path weights. Alternatively, an external path management control subsystem may send instructions to device200to adjust path weights. In an embodiment, some or all of the foregoing techniques may be implemented using one or more path tables265that map destination addresses, subnets, or other components to paths through a network. In an embodiment with dynamic weights, a path management controller260adjusts weights by changing the number of entries assigned to a given path in a path table265.
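As a purely illustrative sketch of this entry-adjustment idea, the following shows one way a controller could rebuild the entries for a destination so that each egress port receives a number of entries proportional to its current weight. The data layout and the build_multipath_entries helper are assumptions made for the example, not the structure of path table265.

```python
# Hypothetical sketch: rebuild a flat list of weighted multipath entries
# whenever dynamic weights change, so that selection frequency tracks weight.
def build_multipath_entries(port_weights: dict[str, int]) -> list[str]:
    """Return a list of egress ports, one entry per unit of weight."""
    entries = []
    for port, weight in sorted(port_weights.items()):
        entries.extend([port] * max(weight, 0))
    return entries

# Example: path state feedback lowers the weight of port "eth2".
weights = {"eth1": 2, "eth2": 3, "eth3": 1}
table = build_multipath_entries(weights)   # "eth2" holds 3 of 6 entries
weights["eth2"] = 1                        # congestion reported on "eth2"
table = build_multipath_entries(weights)   # "eth2" now holds 1 of 4 entries
```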
In other embodiments, a state information collection subsystem250may be configured to adjust paths in a path table265to route traffic around congested links or nodes in a network, or towards favored links or nodes. In yet other embodiments, other suitable data structures may instead be utilized for path selection. Additional example details of dynamic path management techniques are described elsewhere herein.

2.9. Miscellaneous

Device200illustrates only one of many possible arrangements of components configured to provide the functionality described herein. Other arrangements may include fewer, additional, or different components, and the division of work between the components may vary depending on the arrangement. For example, in some embodiments, subsystems260and/or270may be omitted, along with any other components relied upon exclusively by the omitted component(s). As another example, in an embodiment, system100may include devices200with different combinations of subsystems230,240, and250. For instance, some devices200may include only annotation subsystem230, other devices200may further include a reflection subsystem240, other devices may include only a path state information collection subsystem250, and yet other devices may include none of these subsystems.

3.0. COLLECTING STATE INFORMATION THROUGH REFLECTED PACKETS

As described in other sections, information about the state of various nodes and/or paths within a network may be collected through a mechanism referred to herein as reflected packets. Generally, a packet is annotated with state information at one or more nodes along a path along which it is travelling, and then reflected back towards its source. Further illustrative details of various embodiments featuring reflected packets are now described.

FIG.3illustrates an example flow300for reflecting packets, according to an embodiment. The various elements of flow300may be performed in a variety of systems, including systems such as system100described above. In an embodiment, each of the processes described in connection with the functional blocks described below may be implemented using one or more integrated circuits, computer programs, other software elements, and/or digital logic in any of a general-purpose computer or a special-purpose computer, while performing data retrieval, transformation, and storage operations that involve interacting with and transforming the physical state of memory of the computer.

Block310comprises sending a packet, such as a packet205, from a source node. The packet is addressed to a destination node. The packet is sent out of a port of the source node that corresponds to one of a plurality of possible paths to the destination node. In an embodiment, the packet may be a designated probe packet that the source node generates specifically to collect information about the path. In other embodiments, the packet is a normal packet generated by the source node or relayed by the source node for reasons entirely separate from collecting information about the path.

Block315comprises the packet arriving at an intermediate hop along the path from the source node to the destination node. The intermediate hop may be the node at which the packet arrives immediately after being sent from the source node, or the packet may traverse any number of nodes in the path before arriving at the intermediate hop of block315.

Block320comprises the intermediate hop annotating the packet with state information, using processes as described elsewhere herein.
In an embodiment in which the packet is a probe packet, the intermediate hop may be configured to annotate any packet designated as a probe packet. In an embodiment where the packet is a normal packet, the intermediate hop may select to annotate the packet based on annotation criteria. The annotation criteria may be based on certain characteristics of the packet (e.g. as determined from the packet header) and/or based on the state of the intermediate hop itself (e.g. if the intermediate hop is congested). The annotation criteria may further include a random, pseudo-random, or sampling element, so that not all packets that have the same characteristics are annotated. As another example, an intermediate hop may be configured to annotate any packet that already contains annotated information. Further criteria for selecting when to annotate a packet are described elsewhere herein. In general, annotation is an optional aspect on a per-hop basis, such that not all intermediate hops will annotate each packet. However, in an embodiment, it is likely that a packet will be annotated at a node at which the packet is reflected.

From block320, flow300may proceed to block330, in which the packet is sent to a next hop. Flow300may then loop back to block315, and the packet may be further annotated as it travels along the path.

Alternatively, or additionally, flow300may proceed from block320to block340. At block340, the intermediate hop determines to reflect the packet back towards the source node. Criteria for determining when to reflect a packet may be similar in nature to annotation criteria, and are further described elsewhere herein.

In some instances, reflection may involve duplicating the packet in a block345. Either the original packet or the duplicate packet becomes the reflected packet, while the other of the two packets is sent to the next hop via block330, so as to continue along the path and eventually arrive at the destination node in block350. Optionally, the continuing packet may be marked in such a manner that it will not be reflected again as it continues along the path, and/or its annotated data may be removed. In other instances, such as if the packet is a probe packet, if the reflecting node is the destination node, and/or if the reflecting node determines that continued forwarding of the packet is no longer desirable, no duplication of the packet is needed.

The packet is generally reflected by, among other steps, changing its destination to be that of the source node, or a collection point associated with the source node. The destination specified by the payload header may be manipulated directly, or a new header may be added to the packet (e.g. a tunnel header) that specifies the source node or collection point as the destination of the packet. Further explanation of the reflection process is described elsewhere herein.

Block360comprises the reflected packet arriving at a preceding hop along the path. Optionally, the reflecting node may have marked the reflected packet as being a reflected packet, and the preceding hop may accordingly treat the reflected packet in a special manner, as described elsewhere herein. The preceding hop then sends the packet to the next preceding hop in block370, and the packet continues traversing along the original path in reverse until it arrives at the source node or a collection point in block375, as described elsewhere herein.
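The per-hop handling just described for flow300may be sketched, purely for illustration, as follows. The Packet class, the annotate_ok and reflect_ok criteria callbacks, and the send function are assumptions supplied by the caller; this is not the forwarding logic of any particular embodiment.

```python
# Illustrative sketch of an intermediate hop in flow 300: optionally annotate
# the packet, then either forward it onward or reflect it back toward its
# source, duplicating it when onward forwarding should also continue.
import copy
from dataclasses import dataclass, field

@dataclass
class Packet:
    src: str
    dst: str
    reflected: bool = False
    annotations: list = field(default_factory=list)

def handle_at_hop(pkt: Packet, node_id: str, node_state: dict,
                  annotate_ok, reflect_ok, send) -> None:
    """annotate_ok/reflect_ok are caller-supplied criteria functions;
    send(packet) hands a packet to the (assumed) forwarding logic."""
    if annotate_ok(pkt, node_state):
        pkt.annotations.append({"node": node_id, **node_state})
    if reflect_ok(pkt, node_state) and not pkt.reflected:
        onward = copy.deepcopy(pkt)          # the copy continues toward pkt.dst
        send(onward)
        pkt.reflected = True                 # mark the packet as reflected
        pkt.dst, pkt.src = pkt.src, node_id  # send it back toward the source
    send(pkt)
```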
For illustrative purposes, the term "preceding hop" is used to describe each node in the reverse path taken by the reflected packet. However, the term should not be interpreted to require that a node (other than the reflecting node or, as described elsewhere, the end of a tunnel) modify its forwarding logic to send the reflected packet to a "preceding" hop rather than to a "next" hop. Rather, since the reflecting node changes the destination address of the reflected packet, the reflected packet may be forwarded back to the source node using standard forwarding logic, and each "next preceding hop" is in reality simply the next hop in the path from the reflecting node to the source node.

Moreover, in some embodiments, the reverse path that a reflected packet takes need not exactly mirror the original path that the packet took from the source node to the reflecting node. That is, since each node may be configured to make its own routing decisions, packets travelling between two nodes may on occasion travel through different sets of nodes depending on the direction in which they are travelling and/or on a variety of other factors. Hence, the reflected packet may travel through different nodes than the original packet. In an embodiment, to reduce or eliminate this behavior, forwarding logic for reflected packets may be configured to try to replicate the reverse path using the annotated information within the packet and/or labels or identifiers within the packet. For instance, if a reflecting node is aware of one or more nodes that the packet traveled through, the reflecting node might first tunnel the packet to the most recently traversed one of these nodes to try to best replicate the reverse path. Similarly, that node may then tunnel the packet to the next known node in the list, and so forth.

At block380, the source node, or any other collection point, reacts to the state information. Any node along the reverse path may function as a collection point. Moreover, in an embodiment, there may be multiple collection points, as an intercepting node in the reverse path that functions as a collection point may also be configured to continue forwarding the reflected packet back towards the source node. In general, a collection point reacts to the state information by re-calculating certain performance metrics associated with paths or nodes within the network and/or reconfiguring one or more nodes in the network based on the state information or metrics. Specific examples of such actions are described elsewhere herein.

Flow300illustrates only one of many possible flows for collecting state information through reflected packets. Other flows may include fewer, additional, or different elements, in varying arrangements. For example, in some embodiments, blocks345and350may be omitted, along with any other elements relied upon exclusively by the omitted element(s).

As another example, in an embodiment, a flow may alternatively involve annotating reflected packets with state information from the nodes that the reflected packet traverses. For instance, in such embodiments, a packet may be annotated with only reduced information, or at a reduced frequency, so as to limit the transmission requirements for the annotated information as the packet travels along its original path.
When the decision is made to reflect the packet, the nodes in the reverse path may therefore be configured to supplement this information by annotating further information about the path state as the reflected packet travels in reverse. Or, the decision to annotate a packet may only be made once it is determined to reflect the packet. Then, each node in the reverse path, seeing that the packet is marked as reflected, would further annotate the packet.

3.1. Illustrative Network

FIG.5is a time diagram500illustrating the movement of a packet through a network over times t0-t5, as the packet is annotated and reflected, according to an embodiment. Times t0-t5do not necessarily correspond to equal intervals of time.

At t0, packet505departs a Node A (510) for a Node B (511). Node A may be the original sender of packet505, or packet505may have been relayed through Node A. Node B is an intermediate hop on a path between Node A and Node N (520), which is the destination address of packet505. For simplification, other nodes in the path are not depicted. These additional nodes may optionally include one or more undepicted nodes between Node A and Node B.

At t1, packet505has been annotated by Node B to include state information506. The annotation may include adding additional information to packet505and/or updating the information in packet505. Node B is now relaying packet505to Node C (512), which is another intermediate hop on the path to Node N. Again, there may optionally be one or more undepicted nodes between Node B and Node C.

At t2, packet505has been annotated by Node C to further include state information507. Node C is now relaying packet505to Node D (513), which is another intermediate hop on the path to Node N. Again, there may optionally be one or more undepicted nodes between Node C and Node D.

At t3, packet505is departing a Node F (514) for a Node G (515), both of which are other nodes along the path from Node A to Node N. As depicted, packet505still contains annotations506and507, but does not contain additional annotations. This may be, for example, because Nodes D, F, and any other intervening nodes either do not include an annotation subsystem or did not determine packet505to meet their respective annotation criteria.

At t4, packet505has been reflected by a Node H (516) back to Node G. Prior to reflection, Node H annotates packet505with state information508, by way of adding to and/or updating annotations506and507. Optionally, Node H may duplicate packet505and also forward the duplicate copy on to Node N. In an embodiment, this duplicated copy may or may not include annotations506-508.

At t5, packet505has been forwarded on through Nodes F-C, and is now departing Node B for Node A. Packet505continues to include annotations506-508, which may be analyzed by Node A for a variety of purposes explained in other sections.

FIG.5illustrates but one example of how a packet may move through a network that implements techniques described herein. Other packets may take different routes, be reflected at different nodes, be annotated in different manners, and/or be collected by different nodes. Furthermore, in other embodiments, a network may have other arrangements of nodes, necessitating other routes of potentially different lengths.

3.2. Probing

A source node within the network, such as a server or a device, may initiate generation of path state information by sending designated probe messages (e.g. packets sent solely for the purpose of collecting state information) along certain paths.
A probe message may be, for example, a packet that includes a special flag or other identifier in the packet header or payload. The payload may otherwise be empty, or the payload may contain instructions, metrics, path information, or other useful information. Some or all of the nodes along the path may recognize the packet as being a probe packet, as it is sent or as it is being returned, based on the flag or other identifier in the packet header, and, in response, annotate the probe message with state information. In an embodiment, annotation of a probe packet may further be conditioned on the packet and/or the node state meeting other annotation criteria. Alternatively, or additionally, regular packets (i.e. packets sent as part of communications for purposes other than collecting state information) sent by the source node may be selectively annotated with state information by some or all of the nodes in a path. A source node may include a special flag or identifier within a field in the packet's header, by which certain other nodes may identify the packet as a probe packet. Or, another node along the path may selectively treat a regular packet as a probe packet in response to various rules or triggers (e.g. randomly, based on the current congestion state of the node or path, based on the source address, and/or based on any other suitable factor), as described elsewhere. For convenience, a regular packet selected for this purpose may henceforth also be referred to as a probe packet. One concern with using a regular packet as a probe packet may be exceeding a packet's maximum possible size (e.g. the MTU) when annotating path state information. Among other ways of addressing this problem, a node may be configured to only annotate packets when the annotations will not exceed the maximum possible packet size. Eventually, the probe packet may arrive at a “reflecting node.” The reflecting node may be specified by the probe packet (e.g. the destination node of the packet). Or, in some embodiments, a node may selectively determine that the node is a reflecting node based on various rules or triggers (e.g. randomly, based on the current congestion state of the node or path, based on the source address, and/or based on any other suitable factor). The reflecting node reflects the collected state information back to the source node or another designated node, either by copying the probe packet and redirecting it back to the source node, or by generating a new packet with the relevant information and returning it to the source node. 3.3. Annotation As mentioned, some or all of the nodes in a path may annotate a packet that is recognized as a probe packet, or any other packet, with state information. A node configured to perform such annotation for a particular probe packet is referred to herein as an annotating node. In some embodiments, however, the node need not be an annotating node for all probe packets, or all packets annotated by the network, but rather may selectively annotate packets using logic conditioned upon any suitable factor(s). State information may take a variety of forms and be generated in a variety of manners depending on the embodiment. For example, network metrics generated by any of a variety of frameworks at the node may be used as state information. An example of such a framework is the In-band Network Telemetry (“INT”) framework described in C. Kim, P. Bhide, E. Doe, H. Holbrook, A. Ghanwani, D. Daly, M. Hira, and B. Davie, “Inband Network Telemetry (INT),” pp. 
1-28, September 2015, the entire contents of which are incorporated by reference as if set forth in their entirety herein. Examples of state information may further include, without limitation, information generated by the traffic manager221, such as queue size, drop counters, queue delay, etc., and/or port state information, such as RX/TX bytes, RX/TX utilization, flow control state, etc.

In an embodiment, some or all of the annotating nodes may report per-port loading state (for one or more ports), resulting in per-port loading states for multiple nodes in a path being reported in a single message. This may enable, for example, communication of device state to one or more endpoints in a rapid manner, allowing a more responsive control algorithm.

Alternatively, or additionally, a one-way total delay metric may be calculated at some or all of the annotating nodes. This metric may measure the total delay along the path up to the annotating node. Nodes may communicate one-way delay at full resolution (with a high degree of precision) or using a quantized metric. In an embodiment, a quantized metric may be a quantized variance from an expected average (in order to save bits). For example, suppose the delay is expected to be 50 microseconds for a given path and the observed delay is 55.6 microseconds. The quantized difference from the norm could be transmitted (i.e. 55.6-50=5.6, quantized to 5 microseconds, so 5 microseconds is communicated rather than the full delay value).

In an embodiment, the P4-INT metric "Egress Port TX Link Utilization" is an example of a suitable metric that may be utilized to convey path state on a per-hop basis. An example of a congestion metric that may be accumulated along a path is described, without limitation, in M. Alizadeh, T. Edsall, S. Dharmapurikar, R. Vaidyanathan, K. Chu, A. Fingerhut, V. T. Lam, F. Matus, R. Pan, N. Yadav, and G. Varghese, CONGA: distributed congestion-aware load balancing for datacenters, vol. 44, no. 4. ACM, 2015, pp. 503-514, the entire contents of which are incorporated by reference as if set forth in their entirety herein. In other embodiments, enhanced metrics may be provided by custom logic at the nodes themselves.

In one embodiment, the probe message is annotated to form a single message containing port loading state for many or all ports at each annotating node, thereby increasing the update rate of the path state information collection process. The port loading state may optionally be quantized.

In at least one embodiment, the state information may be state information collected through processes such as described in U.S. application Ser. No. 14/958,830 (filed Dec. 3, 2015) and Ser. No. 14/973,541 (filed Dec. 17, 2015), the entire contents of both of which are hereby incorporated by reference as if set forth in their entirety herein. In an embodiment, the state information may be user-defined statistics collected through the use of programmable visualization engines.

The annotated state information may be placed within one or more annotation fields within the header or the payload. When the probe packet is a regular packet, it may be preferable to annotate the header, so as not to pollute the payload. If annotated state information is already found within the packet, the state information from the currently annotating node may be concatenated to or summed with the existing state information, depending on the embodiment.
In the former case, for instance, each node may provide one or more current metrics, such as a congestion metric. In the latter case, for instance, each node may add the value of its congestion metric to that already in the packet, thus producing a total congestion metric for the path. In an embodiment, the annotated information may be annotated as an additional header that wraps the original packet. In another embodiment, the annotated information may be annotated by repurposing existing fields within the packet, such as reserved fields or unused fields. The path itself may be identified within the probe packet. In an embodiment, the packet includes a path ID assigned by the source node, which may be any unique value that the source node maps to the path. In an embodiment, the path may be specified using a load balancing key, which is a value that is used by load balancing functions at each hop in the network. 3.4. Determining when to Reflect a Packet A node may selectively determine when to reflect a packet, based on the packet itself, node state information, path state information, and/or other conditional logic (e.g. using sampling techniques). According to an embodiment, a node monitors various quantifiable attributes of the node and/or traffic flows being conveyed through the node to determine when certain specified reflection criteria are met. If the reflection criteria are met when the node processes a specific packet, the node reflects the packet. The criteria may be general (i.e. applicable to all packets) and/or specific to individual packets or flows. The reflection criteria may be based on statistics kept by the node and/or characteristics of the individual packets. The reflection criteria may further include some randomization or sampling function, as well as a tracking mechanism, so as to avoid reflecting all packets from a given source or in a given flow. For instance, the reflection criteria may be such that a node may only reflect a small sample of packets (e.g. 1%, 0.01%, etc.), even when all other reflection criteria are met. One example of a suitable reflection criteria is an egress queue congestion condition. A node may monitor a queue fill level and reflect a packet if the fill level exceeds a specified threshold. Another example of a suitable reflection criteria is a path imbalance condition. A node may monitor next-hop load distribution indicators to determine when a given next-hop is overloaded relative to other next-hops in its group. Another example of a suitable reflection criteria is a link utilization condition. A node may monitor a link bandwidth utilization metric to determine when the percentage of the link bandwidth that is currently used is above a specified threshold. These conditions may be utilized, in isolation or in conjunction with other conditions, to determine when a packet that would be routed through the relevant queue, next-hop, and/or link should be reflected. Reflection criteria may be hard-coded into a node, or adjusted programmatically using administrative instructions to the node. Although reflection criteria may take any suitable form, in a particular embodiment, reflection criteria are divided into reflection eligibility conditions and monitoring conditions. 
Forwarding logic or other suitable logic may be configured to determine when a packet is "reflection eligible." That is, the characteristics of the packet, such as the packet source, destination, label(s), size, forwarding type, traffic class, location in the path, and so forth, may be utilized to determine if the packet is the type of packet that can be reflected. For example, in an embodiment, reflection criteria might preclude multicast packets, or packets at their last hop, from being reflected. Such logic may further include historical conditions, such as whether another packet from the source and/or flow has been reflected within a recent time period.

Monitoring conditions may be utilized to determine when a packet is a "reflection candidate." For instance, the node may monitor device attributes at the node, such as buffer or queue fill level, to determine the state of a path for a given flow. When the buffer or queue fill level for the flow exceeds a certain threshold, the packets in the flow, or at least a random sample of packets from the flow, may be designated as reflection candidates. Or, the node may monitor an internal congestion state or an administrator-induced reporting state for the node. When the internal congestion state exceeds a certain value, or when the reporting state is set, each packet routed through the node may be considered a reflection candidate.

Packets that are both "reflection eligible" and "reflection candidates" may then be reflected. In some embodiments, packets are only tested for reflection candidacy if they are reflection eligible, while in other embodiments, packets are only tested for eligibility if they are reflection candidates. In yet other embodiments, any other suitable technique may be utilized to determine when reflection criteria are met.

3.5. Reflecting the Packet

Generally, reflecting a packet, whether a duplicate or the original, involves modifying the packet such that 1) the reflected packet is destined for the source of the original packet, 2) the packet is flagged as being a reflected packet, and 3) the packet includes annotated state information and/or any other information the reflecting node wishes to convey. This process may involve inserting and/or modifying relevant fields within packet header(s) to include the foregoing, though in some embodiments the payload of a packet may instead be modified to include a flag and/or state information. In some embodiments, no explicit flag is needed to indicate that a reflected packet is in fact a reflected packet. Rather, the existence of a special field for carrying the annotated state information serves as an implicit flag that the packet is reflected.

In an embodiment, to reduce resource utilization, the reflecting node may truncate the payload of a reflected packet to reduce the size of the reflected packet. In an embodiment, the reflecting node may elevate the service priority of the reflected packet to ensure that the reflected packet has higher processing priority than the original data packet, for faster transmission on the path back to the source.

3.6. Handling a Reflected Packet at Intermediate Hops

When a packet is reflected, the reflected packet may be marked in some manner to indicate that the packet is in fact a reflected packet. For instance, as described above, a pass-thru-reflect flag may be set within the packet.
Among other purposes, this marking may assist intermediate hops between the reflecting node and the source node in handling the reflected packet on its return journey. When an intermediate node detects a reflected packet (i.e. through the existence of an explicit or implicit flag), the intermediate node may handle the reflected packet differently than a regular packet. For instance, the intermediate node may bypass its own reflection logic, so as to avoid reflecting a reflected packet back to the reflecting node. As another example, the intermediate node may elevate the service priority of the reflected packet to ensure that the reflected packet has higher processing priority than the original data packet, for faster transmission on the path back to the source. As another example, the intermediate node may itself annotate the reflected packet to include state information from the intermediate node, so as to provide a more comprehensive picture of the (reverse) path state. As yet another example, the intermediate node may also or instead truncate the reflected packet.

3.7. Reflecting Packets within Tunnels

In the case of reflecting a packet at a reflecting node through which the packet is being tunneled, the reflecting process may be slightly modified. The packet is first reflected back to the source specified by the tunnel header (i.e. the start of the tunnel). The tunnel source then tunnels the reflected packet back to the source address specified by the source node of the original packet. Or, in the case of multiple encapsulation, the reflected packet is tunneled back to the source of another tunnel the packet must traverse before proceeding to the source node. For instance, the tunnel source may be configured to reflect the packet back to the location specified in the payload's source address, which will be either the source node itself, or the source of another tunnel.

Example Reflection of Tunneled Packet

FIGS.6A and6Billustrate the reflection of a tunneled packet610in a network600, according to an embodiment.FIG.7illustrates a flow700for reflecting such a tunneled packet, according to an embodiment.

In block705of flow700, a packet610departs a Node S0(601) and passes through a set of nodes602as the packet begins its route to destination Node D0(608). The contents of the packet610as the packet departs from Node S0are illustrated as packet structure610a. The contents include a packet header620and a payload630. Packet header620includes a source address, which is set to S0, and a destination address, which is set to D0. Packet header620may further include other fields (not depicted).

Packet610aeventually arrives at a Node H0(603), which determines that packet610should be sent via a tunnel604to Node H1(607). Accordingly, in block710, Node H0prepends a tunnel header640bto packet610a, resulting in tunneled packet610b. Tunnel header640bincludes a source address, which is set to the start (H0) of tunnel604, and a destination address, which is set to the end (H1) of tunnel604. Tunnel header640bmay further include other fields (not depicted).

The journey of packet610bthrough tunnel604involves passing through a set of nodes605until a Node G0(606) is eventually reached. For a variety of reasons, such as reflection criteria described elsewhere herein, in block715, Node G0may determine to reflect packet610b. For instance, congestion may be detected at Node G0. Node G0may thus begin to manipulate the packet610b, or a copy thereof, to generate a reflected packet610c.
Simultaneously, in some embodiments, packet610bmay continue on through one or more nodes to the end of tunnel604, at Node H1, which strips tunnel header640band then forwards packet610aon through another one or more nodes to destination Node D0.

Referring now toFIG.6B, the reflected packet610cmay have a new tunnel header640c, with the source (H0) of tunnel604becoming the destination of the tunnel header and the current node (G0) becoming the source of the tunnel header. An annotated state information field may optionally be added to the header640c, as is a reflection flag or indicator, to signify that packet610chas been reflected. The packet header620remains unchanged, while the payload630also remains unchanged, though in certain embodiments payload630may be truncated or stripped.

Generating the reflected packet structure610cmay involve any suitable steps, depending on the embodiment. For instance, in block720, Node G0may read the tunnel header640band save the tunnel source address found therein. In block725, Node G0may then strip the tunnel header640b. In block730, Node G0may add the new tunnel header640c, with the tunnel source address as the tunnel destination address. In another embodiment, rather than stripping the tunnel header, the reflecting node may replace fields within the existing tunnel header.

In block740, the reflected packet610cis then forwarded over the set of nodes605back to Node H0. In block745, Node H0reads the tunnel header640cand detects the reflection indicator, signifying that packet610cis a reflected packet. In block750, Node H0saves the annotated state information field from the tunnel header640c. In block755, Node H0strips the rest of the tunnel header640c, leaving behind the original packet header620and payload630(if payload630remains in packet610c). In block760, a new tunnel header640dis added to packet header620and payload630(if found). This new tunnel header640didentifies the current node (H0) as the source address, and the source node S0, as found in header620, as the destination address. The saved annotated state information is also added to tunnel header640d, along with a reflection indicator. The resulting structure is reflected packet610d, which is then, in block765, forwarded over the set of nodes602back to Node S0.

In block770, Node S0then processes the reflected packet610d, and more particularly the annotated state information found in tunnel header640d.

The packet610, and movement thereof, as illustrated inFIGS.6A,6B, and7, are provided for example purposes only. Other packets may be tunneled and/or reflected in different manners, and other networks600may comprise other arrangements of nodes.

3.8. Collection

A probe packet may be reflected back to the source node and/or to a designated node, such as a network controller device. The reflected packet may also be intercepted by an intermediate node between the reflecting node and the node to which the reflected probe packet is directed. Any one of these nodes (source node, designated node, or intermediate node) may be considered a "collecting" node for the purposes described herein. In an embodiment, the collecting node forwards the state information to a Path Management Control (PMC) subsystem, which may be an internal or external CPU subsystem, an ASIC or FPGA, an external host, or any other component suitable for implementing path management logic such as described herein.
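Purely as an illustration of this hand-off, the following sketch shows a collecting node passing annotated state from a reflected packet to a simple PMC component that maintains a smoothed per-path congestion metric. The field names, the exponentially weighted moving average, and the threshold are assumptions made for the example, not elements of the embodiments described above.

```python
# Hypothetical sketch: a collecting node reports annotated congestion values
# from a reflected packet to a PMC component, which smooths them per path.
class PathManagementControl:
    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha
        self.path_congestion = {}   # path_id -> smoothed congestion metric

    def report(self, path_id: str, congestion: float) -> float:
        old = self.path_congestion.get(path_id, congestion)
        new = (1 - self.alpha) * old + self.alpha * congestion
        self.path_congestion[path_id] = new
        return new

def collect(reflected_pkt: dict, pmc: PathManagementControl,
            congestion_threshold: float = 0.8) -> bool:
    """Return True if the smoothed metric suggests corrective action,
    e.g. re-weighting paths or issuing rate/flow control."""
    # Sum the per-hop congestion annotations carried by the reflected packet.
    total = sum(a["congestion"] for a in reflected_pkt["annotations"])
    smoothed = pmc.report(reflected_pkt["path_id"], total)
    return smoothed > congestion_threshold
```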
The collection node then processes the collected path state information, either immediately upon receipt, or in batches with other recently collected state information at periodic or other intervals. The collection node may simply record the collected path state information in a given packet in association with the path along which the probe packet was sent. Or, the collection node may generate its own metrics for a path based on the returned information. For instance, the collection node may compute metrics based both on the newly returned path state information and on historical path state information. When the collection node recognizes the packet as being a reflected packet, the collection node can use the information conveyed therein to determine whether any of a number of actions are warranted. The action may be taken by the collection node itself, and/or the collection node may send an instruction to the original source node to take the action if the collection node is different from the source node. For instance, if the state information indicates that congestion levels along a path are above a threshold, the collection node may determine to reduce the rate at which packets are sent in a flow associated with the reflected packet. The rates of other flows having attributes that are the same as or similar to the reflected packet may also be reduced, in certain embodiments. As another example, the collection node may instead determine to stop or issue flow control to one or more entities. As yet another example, the collection node may determine to reroute new packets for the flow or similar flows along a new path. In an embodiment, the collection node may be an intermediate node configured to recognize reflected packets destined for certain addresses and respond in a manner based on the information in the reflected packet. For instance, an administrative node may be configured to intercept reflected packets and send administrative instructions to one or more nodes in the network based on the information observed. Or an intermediate node may be configured to instigate rate control or flow control measures itself. Such behavior may be useful, for example, if the intermediate node supports capabilities that the source node might not support, or if the intermediate node is capable of responding to changing status information more quickly than the source node. 3.9. Instructions not to Reflect In an embodiment, a packet may optionally be marked with a special flag (e.g. in the header) that instructs downstream nodes to not reflect the packet, or to lower the probability of reflection. This flag may be utilized for a number of purposes. For instance, a source node (or intermediate node) may wish to proactively avoid receiving reflected packets, and thus insert this flag. In an embodiment, this flag may be utilized to avoid reflecting a single packet twice. That is, a single packet may be reflected as it is passing through an initial congestion point (Node A), and also subsequently reflected as it continues on through a secondary congestion point (Node B), triggering multiple reflections back to the same source. Such behavior may not necessarily be desirable. To prevent such behavior, the first node to reflect the packet may insert a special flag into the original packet (as opposed to the reflected packet) that instructs subsequent nodes not to reflect the packet. 
Similar techniques may be utilized temporarily or permanently to mark all packets within a flow as being ineligible for reflection after a certain number of packets from the flow have been reflected within a period of time. Conversely, in some embodiments, a packet is assumed to be ineligible for reflection unless it contains a special flag marking the packet as reflection-eligible. A source node may insert such a flag, or an intermediate node that is configured to intercept reflected packets may insert such a flag. The flag may be removed by an intermediate node to avoid reflecting a single packet twice, or to avoid reflecting too many packets from a flow within a period of time. 3.10. Device Logic FIG.4illustrates an example flow400for forwarding logic of an apparatus in a network with reflected packets, according to an embodiment. The various elements of flow400may be performed in a variety of apparatuses, including devices such as device200described above. In an embodiment, each of the processes described in connection with the functional blocks described below may be implemented using one or more integrated circuits, computer programs, other software elements, and/or digital logic in any of a general-purpose computer or a special-purpose computer, while performing data retrieval, transformation, and storage operations that involve interacting with and transforming the physical state of memory of the computer. Block410comprises receiving a packet at a device, such as a packet205. The packet may be received and then processed by the forwarding logic of the device. Block415comprises determining whether the packet is reflected. A reflected packet will generally comprise some flag or indicator that indicates that the packet is reflected, as described in other sections. Assuming the packet is not reflected, flow400proceeds to block420. Block420comprises determining whether annotation criteria are met. As described elsewhere herein, the annotation criteria may include threshold eligibility criteria based on such factors as the inclusion of a probe flag or previous annotations in the packet, factors based on packet characteristics, and/or factors based on current node state information. The annotation criteria may further include a random, pseudo-random, or sampling element to ensure that only a small portion of packets are annotated for a given flow, path, or other attribute. In an embodiment, block420may optionally comprise determining to annotate packets with reverse path information when reflected back along the source path, so as to collect path state information for a reflected packet that may lack such information. If annotation criteria are met, flow400proceeds to block430. Block430comprises annotating the packet, as described elsewhere herein. Once the packet is annotated, or if annotation criteria are not met in block420, flow400proceeds to block435. Block435comprises determining whether reflection criteria are met. As described elsewhere herein, the reflection criteria may include threshold eligibility criteria based on such factors as the current node being designated as a reflection node by the packet, factors based on certain packet characteristics, and/or factors based on current node state information. The reflection criteria may further include a random, pseudo-random, or sampling element to ensure that only a small portion of packets are reflected for a given flow, path, or other attribute. 
In an embodiment, the reflection criteria are such that packets are reflected less frequently than annotated. If reflection criteria are met, flow proceeds to block440. Block440comprises determining whether, in addition to reflecting the packet, the node should also continue forwarding the packet to its intended destination. If forwarding of the packet is to continue, then in block445the packet is duplicated before proceeding to block450. Otherwise flow simply proceeds to block450. Block450comprises making the source address of the packet (or its duplicate) the destination of the packet, and making the address of the current node the source of the packet (or its duplicate), either by manipulating the packet header directly, or encapsulating the packet within a new header. The packet (or its duplicate) is now considered to be a reflected packet. Flow proceeds to block455, where the reflected packet is sent back to the source of the packet (i.e. the new destination of the reflected packet). If a duplicate packet is generated in block445, or if reflection criteria were not met in block435, flow400proceeds to block460. Block460comprises determining whether the current node is the packet's destination. If so, then the packet is processed at the node in block465. Otherwise, the packet is forwarded along to the next hop on a path to the destination address of the packet in block470. Returning to block415, if the packet is reflected, then in block475, it is determined whether the current node is a “sink node” or “collection node” for the packet, using techniques such as described elsewhere herein. If not, flow proceeds to block470, thereby bypassing the annotation and reflection logic of blocks420-440. In an alternative embodiment, the annotation logic may not necessarily be bypassed. In yet other embodiments, to ensure timely delivery of the reflected packet, the reflected packet is processed and sent by the node in an expedited manner relative to other packets being processed by the node. If it is determined that the current node is a collection node in block475, then flow proceeds to block480, which comprises collecting state information from the reflected packet, as described elsewhere herein. The collection process may optionally comprise, for example, calculating aggregate metrics for the path and/or nodes traversed by the reflected packet, as indicated in annotations within the reflected packet's header. Flow then proceeds to block485, which comprises taking one or more actions based on the state information, if warranted. Examples of such actions are described in other sections. Flow400may be repeated any number of times for any number of packets, and multiple packets may be processed concurrently depending on the available hardware resources. Flow400illustrates only one of many possible flows for the forwarding logic of an apparatus. Other flows may include fewer, additional, or different elements, in varying arrangements. For example, the forwarding logic has been simplified to address only decisions related to annotation, reflection, and collection mechanisms. It will be recognized that a device's forwarding logic includes a number of other elements utilized for other purposes, and these elements may result in logical decisions that precede and obviate certain steps of flow400, and/or that occur after some or all of the steps in flow400. 
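For illustration only, the decision structure of flow400may be condensed into software form roughly as follows. The device object and its helper methods (is_reflected, is_sink_for, annotate, reflect_to_source, and so forth) are assumptions standing in for device-specific logic; this sketch is not the forwarding logic of any particular embodiment.

```python
# Condensed, hypothetical rendering of flow 400 for a single received packet.
def forward(device, pkt):
    if device.is_reflected(pkt):                      # block 415
        if device.is_sink_for(pkt):                   # block 475
            device.react(device.collect_state(pkt))   # blocks 480-485
        else:
            device.send_next_hop(pkt)                 # block 470
        return
    if device.annotation_criteria_met(pkt):           # block 420
        device.annotate(pkt)                          # block 430
    if device.reflection_criteria_met(pkt):           # block 435
        # Blocks 440-445: optionally duplicate so forwarding can continue.
        dup = device.duplicate(pkt) if device.keep_forwarding(pkt) else None
        device.reflect_to_source(pkt)                 # blocks 450-455
        if dup is None:
            return
        pkt = dup                                     # the duplicate continues
    if device.is_destination(pkt):                    # block 460
        device.consume(pkt)                           # block 465
    else:
        device.send_next_hop(pkt)                     # block 470
```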
Moreover, in an embodiment, different nodes may be configured to support different features, and thus feature forwarding logic that omits certain steps, such as blocks420,430,435,440,480,485, and so forth. 3.11. Miscellaneous Although packet reflection techniques may be utilized for conveying information within any context, it will be noted that in at least one embodiment, packet reflection is one mechanism by which a node within a system configured to dynamically modify path weights may return path state information to a source node. Examples of such systems are described in other sections of this disclosure. 4.0. DYNAMIC WEIGHTED COST MULTIPATHING In general, weighted cost multipathing involves assigning a weight to each possible path for a destination (the destination being either a single destination node or a group of nodes such as a subnet). The technique used to select a path utilizes these weights to ensure that the probability of a data unit being assigned to a given path is approximately proportional to the weight of that path relative to the weights of the other paths to the destination. For instance, a path with a weight of two might be selected twice for every time a path with a weight of one is selected. Typically, the selection technique also involves identifying the path using a function of selected information within the data units, such as of address information. One example of a suitable function is a hash function that uses a modulo operation to calculate the remainder when the address fields (either summed or concatenated) are divided by the sum of the weights. Each possible path is assigned a number of entries (hereinafter “multipath entries”) within a table or list of paths, in proportion with its weight. The remainder is used to identify the index of the path to be selected. Dynamic WCMP, meanwhile, involves adjusting these weights dynamically based on metrics for the paths. In some embodiments, the metrics may be obtained using state information collected from reflected packets. In other embodiments, metrics may be obtained using state information collected via any other suitable means. 4.1. General Flow FIG.8illustrates an example flow800for implementing dynamic weighted cost multipathing, according to an embodiment. The various elements of flow800may be performed in a variety of systems, including systems such as system100and/or200described above. In an embodiment, each of the processes described in connection with the functional blocks described below may be implemented using one or more integrated circuits, computer programs, other software elements, and/or digital logic in any of a general-purpose computer or a special-purpose computer, while performing data retrieval, transformation, and storage operations that involve interacting with and transforming the physical state of memory of the computer. Block805comprises identifying paths to a destination within a network. The destination may be a specific address or a group of addresses. Various mechanisms may exist for defining and representing a group of addresses, such as sets, ranges, and so forth. 
In an embodiment, a group of addresses is defined as a “subnet,” which typically includes all addresses that begin with a certain prefix, such as the group of all addresses that begin with “192.168.1” or the group of addresses that begin with “10.0.” A subnet may be defined in a number of manners, such as by the combination of an address and a subnet mask that is applied to the address to yield a range or other grouping of addresses. Commonly, in switches and other network devices, a group of addresses is represented using a “prefix” having a format known as CIDR notation. Generally, the format includes an address (e.g. an IPv4 or IPv6 address), followed by a slash, and then followed by a number signifying how many leading bits in the address, when represented in binary form, must be the same for each device in the group. Depending on the embodiment, paths may be identified by specific sequences of nodes that constitute the path, labels, identifiers, or egress ports. A node need not necessarily know each node in a path, as may be the case for example where an egress port is used to identify a path. In some such embodiments, packets that are sent out of the node through the same port may be said to follow the same path, even if the packets may actually be routed differently downstream. In other words, in such embodiments, the node's logic for selecting a path is concerned solely with the port selected by the node, and not the complete path that the packet will eventually take. Block810comprises assigning weights to each of the paths to the destination. The weights may be determined using any suitable functions, including functions based on factors such as bandwidth, QoS levels, port or queue congestion levels, path latency or congestion levels (as determined using collected path state information), and so forth. A device may assign its own weights, or the weights may be specified via instructions from an external device. Block815comprises determining to send a particular packet to the destination. To make this determination, a destination address identified for the packet (e.g. specified by a destination field in the packet's header) is compared to a number of different destinations to which the device has mapped routing decisions (e.g. using a routing table). This comparison process, often involving a process known as prefix matching, identifies a specific destination to which the packet should be sent. For instance, if the destination address specified by the packet is 192.168.0.107, a prefix matching process might determine that the destination for the packet should be the prefix 192.168.0.1/24, and thus the device would utilize routing decision(s) mapped to that prefix to handle the packet. Block820comprises selecting a particular one of the paths identified for the destination using a load-balancing mechanism based on the weights. Ideally, the load-balancing mechanism is configured such that, on average, packets will be assigned to each of the identified paths at a frequency that is proportional to or otherwise based on the weights associated with those paths. For instance, if the weight of a Path A is 4 and the weight of a Path B is 5, it would be expected that, on average, for every four packets that are sent along Path A, five packets would be sent along Path B. Of course, it may be difficult for a load-balancing mechanism to ensure that this ideal is always met for all traffic patterns, particularly when employing measures to avoid packet reordering. 
Hence, the load-balancing mechanism need not be configured to ensure that this ideal is always met. One example of a suitable load-balancing mechanism is WCMP, as described elsewhere herein. Block825comprises sending the packet along to the destination via the selected path. Blocks815-825may be repeated for any number of packets. Generally, blocks815-825are performed concurrently with blocks805,810,830,835. Block830comprises identifying metrics associated with the paths to the destination. The metrics may be identified in any suitable manner, including, but not limited to, the reflection mechanism described in other sections. Block835comprises dynamically adjusting weights of the paths based on the metrics. The adjustment occurs as the device continues processing packets, per blocks815-825. Hence, at least some portion of traffic that would have been assigned to a certain path may be reassigned to a different path in response to changing network conditions, as indicated by the different metrics. Flow800illustrates only one of many possible flows for implementing dynamic weighted cost multipathing. Other flows may include fewer, additional, or different elements, in varying arrangements. 4.2. Multipath Forwarding Implementation Example According to an embodiment, a device may implement multipath forwarding to a given destination by creating and mapping “multipath groups,” which represent an array of “equal cost” egress ports, for the destination. Each egress port corresponds to one of the multiple paths available to reach the destination. The device calculates hash values based on the packet headers of packets bound for the destination, and uses these hash values to determine which egress port to use for which packets. Hashing on specific fields in the packet header, or a key generated based thereon, ensures that all packets in the same flow follow the same network path (as long as the path weights remain the same), avoiding packet re-ordering. To implement weighted hashing, weights are assigned to each egress port in a multipath group. An array of egress ports with weights is referred to as a WCMP group. Each WCMP group distributes flows among a set of egress ports in proportion to the weights of each port. The weight assigned to an egress port is in turn proportional to the anticipated capacity of the path(s) associated with that egress port. According to an embodiment, a device may implement WCMP groups using a path table in which each port mapped to the destination has a number of entries in proportion to its weight. Such a path table is referred to as a multipath table. The device uses an identifier found in or derived from the packet (e.g. the afore-mentioned hash value) to locate the index of an entry within the path table to which the packet is considered to be mapped. The port (or path) assigned to that entry is used to send the packet out of the device. Example Multipath Table and Logic FIG.9is a block diagram of a system900comprising an example multipath table930and associated logic, according to an embodiment. System900may, in some embodiments, be compatible with system200, in that path table930may be an example of a path table265, while logic921-923may be components of forwarding logic220. In other embodiments, system900may be implemented in systems other than system200. Multipath table930includes entries for two groups, including WCMP group940. Each group includes a number of entries (rows), each having a different index931. 
The index931need not necessarily be stored, but rather each index931may simply correspond to a different address in memory corresponding to the entry. Each entry is further associated with a port932. Optionally, additional data such as a last sent time may be stored in table930as well. Each group is associated with a different group identifier911identified in table910. Group identifier911is depicted as a prefix for illustrative purposes, but may be any suitable identifier. Table910defines a starting index912and number of entries913for each group. Hence, in accordance with the depicted example, the first four entries in table930store an ECMP group for traffic destined to prefix 1.1.2.0/24. The next 12 entries in the table store a WCMP group940for weighted distribution of traffic destined to prefix 1.1.1.0/24. FIG.10illustrates a flow1000for processing a packet in a system such as system900. Block1005comprises receiving a packet, which includes a packet header905. In block1010, the packet is resolved to a multipath group identifier in table910. For instance, the destination address907of the packet may be matched against the Longest Prefix Match (LPM) entries. The entry selected is the highest priority entry whose prefix911matches the destination address907. The selection of the entry can be said to select the multipath group to which the packet belongs. Although the example embodiment illustrates table910as identifying groups by prefix, it will be recognized that table910may simply identify each group by some identifier, and that the process of resolving a packet to a group identifier may be implemented by prefix matching or other suitable process without the involvement of table910. The packet header is used to derive a key906in block1015(e.g. a “five-tuple” key derived from various packet fields). In block1020, the key906is entered into hash function921to compute a hash value. In block1025, system900consults the table910to determine the number of multipath entries913in the selected multipath group, as indicated by the selected entry in table910. In block1030, system900performs a mod operation922between the hash value and the number of multipath entries913in the selected multipath group. In block1040, system900consults the table910to determine the starting index912for multipath entries in path table930for the selected multipath group, again as indicated by the selected entry in table910. In block1045, system900performs an addition operation923between the output of the mod operation922and the identified starting index912. In block1050, system900looks up the entry in multipath table930whose index matches the output of addition operation923. In block1055, the egress port of this entry is read from the multipath table930. This port may then be used to send the packet. Optionally, in block1060, a last sent timestamp associated with the entry in the multipath table930may be updated to reflect the current time. For example, as illustrated, a packet with destination 1.1.1.1 matches the LPM table entry pointing to the WCMP group with base index of 4 in the multipath table. The switch determines the offset into the multipath table for a particular packet by hashing over header fields e.g., IP addresses, UDP/TCP ports, as inputs. The hash modulo the number of entries for the group added to the group's base index determines the table entry with the egress port for the incoming packet ((15 mod 12)+4=7). 
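By way of illustration only, the following Python sketch approximates the lookup just described. The table contents, the CRC32 hash, and the helper name select_egress_port are illustrative assumptions rather than a required implementation; the arrangement of ports within the WCMP group is likewise only one possible layout.

import zlib

# Hypothetical group table (analogous to table 910): group -> base index and
# number of entries in the multipath table.
GROUP_TABLE = {
    "1.1.2.0/24": {"base_index": 0, "num_entries": 4},   # ECMP group
    "1.1.1.0/24": {"base_index": 4, "num_entries": 12},  # WCMP group
}

# Hypothetical multipath table (analogous to table 930): index -> egress port.
# Ports 1, 2, 3, 4 of the WCMP group appear 2, 2, 3, and 5 times respectively,
# in proportion to their weights.
MULTIPATH_TABLE = (
    [1, 2, 3, 4] +                        # indexes 0-3: ECMP group
    [1, 1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 4]  # indexes 4-15: WCMP group
)

def select_egress_port(five_tuple, prefix):
    """Select an egress port for a packet whose destination resolved to `prefix`."""
    group = GROUP_TABLE[prefix]
    # Derive a key from header fields and hash it; CRC32 is merely a stand-in
    # for whatever hash the hardware actually implements.
    key = "|".join(str(field) for field in five_tuple)
    hash_value = zlib.crc32(key.encode())
    # Offset within the group, plus the group's base index, selects the entry.
    offset = hash_value % group["num_entries"]
    return MULTIPATH_TABLE[group["base_index"] + offset]

# The arithmetic of the example above: a hash of 15 for a packet mapped to the
# WCMP group yields index (15 mod 12) + 4 = 7, which holds egress port 2 under
# this particular layout.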
Replicating entries in proportion to the assigned weights for each possible multipath group can, in many common devices, easily exceed the number of path table entries available, which typically numbers in the small thousands. To overcome this hardware limitation on table entries, one may map the "ideal" WCMP port weights onto a smaller set of integer weights, with the optimization goal of balancing consumed multipath table930resources against the impact on flow fairness. For example, as illustrated, the egress port numbers 1, 2, 3, 4 in the WCMP group have weights 2, 2, 3, 5 respectively (weight ratio 1:1:1.5:2.5) and use 12 entries in the multipath table930to provide ideal fairness. If one were to change these weights to 1, 1, 2, 3 respectively, one would reduce the number of table entries required from 12 to 7 with only small changes to the relative ratios between the weights. This reduction is useful in implementing weighted hashing, as it significantly lowers memory cost requirements. FIGS.9and10illustrate but one example of mechanisms for implementing dynamic weighted cost multipathing. Other embodiments may include fewer or additional elements in varying arrangements. Other types of data structures may be utilized instead of or in addition to those depicted, and of course the contents of those data structures may vary depending on the architecture of the system in which they are utilized. 4.3. Adjusting Weights From the collected path state information, a path management subsystem, such as path management controller260, determines each path's ranking relative to each other path for a given source/destination combination. For instance, the path management subsystem may rank paths by a collected metric or computed metric, including, without limitation, path or node bandwidth, throughput, latency, congestion, or combinations thereof. The path management subsystem then determines an updated weighted path distribution for the given source. The weights may be assigned based on the rankings in any number of ways. For instance, each slot in the rankings may have a pre-defined associated weight, or the weight may be at least partially a function of the metric upon which the paths are ranked. The path management subsystem then updates the network configuration based on the updated weights. For example, if the path management subsystem is within the source node, the path management subsystem may update the multipath forwarding tables at the source node (or send a message to another component configured to do so). Or, if path information is computed for source/destination combinations where another node is the source, the path management subsystem may instead or additionally send instructions to a component at the other node to update its multipath forwarding table. As a consequence of the foregoing, some fraction of entries for paths within a multipath list or table may be reassigned to other paths, resulting in some fraction of traffic flows being reassigned accordingly. A path that the collected state information indicates is no longer valid (e.g. as a result of a path fault) may be removed altogether, with its entries reassigned to paths that remain valid. The path management subsystem may repeat the above process for any number of source/destination combinations. For instance, probe packets may be collected from any number of reflecting nodes in a network, with respect to any number of source nodes, corresponding to any number of paths through the network.
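One way such a reweighting step might be sketched in Python is shown below. The inverse-latency heuristic, the entry budget, and the function names are assumptions chosen for illustration; as noted above, weights may just as well be derived from rankings, bandwidth, congestion, or any other suitable formula.

from math import gcd
from functools import reduce

def weights_from_latency(latency_by_port, max_total_entries=16):
    """Derive small integer weights roughly inversely proportional to each
    path's measured latency, then shrink them so the replicated multipath
    entries fit within the available table space."""
    # Lower latency -> larger raw weight (illustrative heuristic only).
    raw = {port: 1.0 / max(lat, 1e-9) for port, lat in latency_by_port.items()}
    scale = max_total_entries / sum(raw.values())
    weights = {port: max(1, round(w * scale)) for port, w in raw.items()}
    # Divide by the greatest common divisor so entries are not wasted
    # (e.g. weights 2, 2, 4 become 1, 1, 2).
    g = reduce(gcd, weights.values())
    return {port: w // g for port, w in weights.items()}

def rebuild_multipath_entries(weights):
    """Expand integer weights into a replicated entry list (a WCMP group)."""
    entries = []
    for port in sorted(weights):
        entries.extend([port] * weights[port])
    return entries

# Example: the path departing port 3 reports higher latency, so it receives a
# smaller weight and therefore fewer entries than the other ports.
entries = rebuild_multipath_entries(
    weights_from_latency({1: 10.0, 2: 10.0, 3: 40.0, 4: 12.0}))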
In an embodiment, a path management subsystem may utilize information in a reflected probe packet to refresh metrics for other paths in addition to the path along which the probe packet traveled, such as may happen when the probe packet includes state information for nodes along other paths (e.g. as a result of overlap or information sharing techniques). In the latter case, it may be helpful for the path management subsystem to collect state information for individual nodes and links instead of paths as a whole, and then compute metrics for the paths based on the individual nodes and links within the path. To adjust weights in systems that use replicated entries in a multipath table to implement multipath forwarding, one need simply reassign certain indexes931in the multipath table930to different ports932. For instance, suppose that metrics indicate congestion along a path to destination907that departs from port 3. System900may be configured to react to this congestion by reassigning any of the indexes 8, 9, or 10 to any other of ports 1, 2, or 4, thus changing the weight of port 3 relative to the other ports. 4.4. Packet Reordering In some embodiments, dynamic updates to WCMP path weights can result in packet reordering for flows that are active when the weights are updated (e.g. when multipath entries are reassigned). For instance, suppose a flow A comprises 10 packets. Packets 1-8 from flow A are routed to a path P1specified in the 10th entry in a multipath table. However, before packets 9 and 10 are routed, the 10th entry is updated to specify a path P2as a result of reweighting paths in the multipath table. Packets 9 and 10 are thus routed through P2. If P2is significantly faster than P1, packets 9 and 10 may arrive at their destination node before some or all of packets 1-8, which may cause problems at the destination node. Such packet reordering can lead to poor performance for a number of transport protocols (e.g. TCP). In an embodiment, packet reordering may be avoided by monitoring each multipath entry and observing the last time the entry has been visited (e.g. the last time the node routed a packet that hashed to the index number of the entry). If the entry has not been visited within an acceptable time duration (e.g. a time duration chosen to prevent reordering), and/or meets other prescribed reordering conditions, then the entry can be updated. An update to an entry is held back until the reordering conditions are met for that entry. In an embodiment, if reordering conditions are not met within a particular window of time, the update is dropped, as it may no longer be a beneficial update due to network state changes. FIG.11illustrates a flow1100for adjusting path weights in a system configured to avoid packet reordering. The various elements of flow1100may be performed in a variety of systems, including systems such as systems200and/or900described above. In an embodiment, each of the processes described in connection with the functional blocks described below may be implemented using one or more integrated circuits, computer programs, other software elements, and/or digital logic in any of a general-purpose computer or a special-purpose computer, while performing data retrieval, transformation, and storage operations that involve interacting with and transforming the physical state of memory of the computer. Block1110comprises determining to adjust weights for a multipath group.
As described elsewhere, such a determination may be made for a variety of reasons, including in response to changes in node and/or path state indicated by information collected through reflected packets. In an embodiment, a determination of whether to adjust weights for a multipath group is performed periodically, in response to receiving a reflected packet, and/or in response to other triggers. Block1120comprises identifying a multipath entry in a multipath table, such as table930, whose associated path should be changed to reflect the new weights. The strategy used to select an entry may vary, depending on the embodiment. For example, an entry may be selected so as to keep all of the entries assigned to a path consecutive. Or, the entry with the oldest last sent time may be selected for reassignment. Or, an entry may be randomly selected. Block1130comprises determining whether the last sent time of the selected entry is older than some threshold. Such a threshold may be chosen to minimize the likelihood of packet reordering. The threshold may be global across a network, specific to a device, specific to a set of ingress or egress ports, specific to a class or flow of traffic, specific to a destination, and so forth. The threshold may further change based on observed traffic patterns. If the last sent time is older than the threshold, then in block1140, the entry is updated to a different path. Otherwise, in block1150, it is determined whether the path change requested in block1120is still valid (e.g. not stale on account of having waited too long to make the change). The amount of time to wait may, like the threshold, vary depending on the context. In an embodiment, a request is considered invalid if a different change has subsequently been requested (e.g. based on new state information obtained since the change request was made). If the request is still valid, then in block1170, the system may wait for some period of time and try block1130again. Otherwise, the entry is not changed. Blocks1120-1170may be performed, potentially concurrently, for each of multiple entries to reassign, should the weights indicate that multiple entries need to be reassigned. Flow1100illustrates only one of many possible flows for adjusting weights. Other flows may include fewer, additional, or different elements, in varying arrangements. For example, in some embodiments, block1150may be omitted, along with any other elements relied upon exclusively by the omitted element(s). In an embodiment, blocks1130and1150-1170may be omitted.
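A minimal sketch of the last-sent-time check of blocks1130-1170 is given below. The threshold values, the entry representation (a dict with 'port' and 'last_sent' fields), and the function name are assumptions for illustration only; actual values would be chosen per device, traffic class, or destination as described above.

import time

REORDER_THRESHOLD_S = 0.005   # assumed idle time before an entry may be moved
REQUEST_TTL_S = 0.5           # assumed window after which a pending change is stale

def try_reassign_entry(entry, new_port, requested_at, now=None):
    """Attempt to repoint one multipath entry at a new egress port without
    reordering an active flow."""
    now = time.monotonic() if now is None else now
    if now - entry["last_sent"] >= REORDER_THRESHOLD_S:
        entry["port"] = new_port          # safe to update (block1140)
        return True
    if now - requested_at > REQUEST_TTL_S:
        return False                      # request has gone stale; drop it (block1150)
    return None                           # wait and retry later (block1170)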
4.5. Miscellaneous While the techniques described herein are advantageous in the context of the WCMP approach to routing decisions, it will also be recognized that they may be applied to dynamically weight routing options in a variety of other contexts. For instance, there are many possible techniques by which a node may decide how to route a packet based on weights attached to paths, nodes, links, ports, and/or other elements in a network topology. Information collected using the described techniques may be utilized to dynamically adjust those weights accordingly. 5.0. VISIBILITY PACKETS The techniques described in this section aim to, among other aspects, improve debugging capabilities in switches and other types of network devices to provide greater visibility and handling of packet drops and/or other issues. According to an embodiment, a switch or other network node is configured not to drop a packet in certain situations when the node might otherwise have dropped the packet. Packets, cells, or other data units that become corrupted and/or invalid (e.g. due to table look-up failures) are transformed into "special visibility" packets (or other data units). In some embodiments, the node may even be configured to never drop a data unit; that is, any data unit that conventionally would have been dropped instead becomes a special visibility packet. In other embodiments, only data units that meet certain criteria are transformed into special visibility packets. According to an embodiment, any data unit that is impacted in an unexpected manner (e.g. inflated latency) may also be transformed into a special visibility packet. The transformation may, in some cases, include duplicating the original packet and transforming the duplicate packet into a special visibility packet instead of the original. Special visibility packets may be used for a number of different purposes. For instance, they may be stored for some period of time in a repository, where they may be viewed and/or analyzed through external processes. As another example, certain types of special visibility packets may be sent to or consumed by custom hardware and/or software-based logic (deemed a "healing engine") configured to send instructions to one or more nodes within the network to correct problems associated with those types of special visibility packets. In an embodiment, information from visibility packets may be utilized to adjust weights of a path for dynamic WCMP techniques. For instance, if a large number of packets are dropped by an egress port corresponding to a certain path, the weight of the path may be lowered. 5.1. Transforming Packets into Special Visibility Packets In an embodiment, the forwarding logic of a node may be configured such that certain packets, such as packets that are experiencing certain issues or that would have been dropped, are additionally or instead processed by special visibility logic that transforms the packets into special visibility packets. Conceptually, the packets to be transformed may be viewed as being forwarded to a visibility path instead of or in addition to the normal path to which they otherwise would have been forwarded. For instance, the forwarding logic may implement special visibility transformation logic by default when no other forwarding rule applies, and/or if a packet ever needs to be dropped because of resource constraints, errors, or special policies. Or, the forwarding logic may be configured to identify packets undergoing a special visibility issue, such as having an unexpected amount of latency, and apply the transformation logic to such packets. In general, the special visibility logic transforms a packet by first associating a visibility tag with the packet. Once tagged as a special visibility packet, the packet is placed in a visibility queue, which is any suitable memory or storage structure for storing the special visibility packet for analysis, as described in subsequent sections. For example, the tagged packet may be removed from processing (e.g. removed from its current buffer) and transferred to traffic management logic. The traffic management logic then accesses the special visibility packet, observes the visibility tag, and links the packet to a special visibility queue.
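The following is a simplified software model of that transformation, provided purely for illustration. Packets are modeled as dicts, the visibility queue as a deque, and the VisibilityTag fields and function name are invented here; a real device would implement the equivalent in forwarding hardware and traffic management logic.

from collections import deque
from dataclasses import dataclass, field

visibility_queue = deque()   # stands in for the special visibility queue

@dataclass
class VisibilityTag:
    node_id: str
    stage: str
    reason: str               # e.g. "table_lookup_failure", "high_latency"
    extra: dict = field(default_factory=dict)

def transform_to_visibility_packet(packet, node_id, stage, reason, duplicate=False):
    """Tag a packet (or a copy of it) and link it to the visibility queue,
    rather than silently dropping it."""
    subject = dict(packet) if duplicate else packet
    subject["visibility_tag"] = VisibilityTag(node_id, stage, reason)
    visibility_queue.append(subject)
    return subject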
In an embodiment, only a portion of the packet is actually tagged, with the rest of the packet being discarded. For instance, if a switch is operating at a cell or frame level, a certain cell or frame may be detected as the "start of packet" (SOP), and include information such as the packet header. This cell or frame, and optionally a number of additional following cells or frames, may form the special visibility packet, and other cells or frames of the packet (e.g. cells or frames containing the payload and/or less important header information) may be discarded. In some embodiments, a packet undergoing certain types of issues may be duplicated before being transformed, so that the original packet continues to undergo normal processing (e.g. in cases where an issue is observed, but the issue does not preclude normal processing of the packet), and the duplicate becomes the special visibility packet. 5.2. Visibility Tags A visibility tag may be any suitable data in or associated with a packet that is recognized as indicating that the packet is a special visibility packet. Beyond marking the packet as a special visibility packet, the visibility tag may include other information, including, without limitation, information indicating the location of the drop or other issue (e.g. a node identifier, a specific processing stage, and/or other relevant information) and the type of drop or other issue that occurred. A visibility tag may, for instance, be communicated as a sideband set of information that travels with the packet to the visibility queue (and/or some other collection agent). Or, a visibility tag may be stored inside the packet (e.g. within a field of the packet header, or by way of replacing the packet payload) and communicated in this way to an external element that consumes the tag. Any packet or portion of the packet (e.g. cell or subset of cells) that has an associated visibility tag is considered to be a visibility packet. 5.3. Visibility Queue In an embodiment, one or more special queues, termed visibility queues, are provided to store packets containing visibility tags. A visibility queue may be represented as a queue, FIFO, stack, or any other suitable memory structure. Visibility packets may be linked to the visibility queue only (i.e. single path) when generated on account of packet corruption. Or, visibility packets may be duplicated to the visibility queue (i.e. copied or mirrored) such that the original packet follows its normal path, as well as traverses the visibility path. Visibility queue data may be provided to various consuming entities within the node and/or network through a variety of mechanisms, depending on the embodiment. For example, a central processing unit within the node may be configured to read the visibility queue. As another example, traffic management logic may be configured to send some or all of the visibility packets directly to a central processing unit within the node as they are received, or in batches on a periodic basis. As yet another example, traffic management logic may similarly be configured to send some or all of the visibility packets to an outgoing interface, such as an Ethernet port, external CPU, sideband interface, and so forth. Visibility packets may be sent to a data collector, which may be one or multiple nodes (e.g. a cluster of servers) used for data mining.
As yet another example, traffic management logic may similarly be configured to transmit some or all of the visibility packets to a healing engine, based on the visibility tag, for on-the-fly correction of specific error types. 5.4. Healing Engine In an embodiment, certain error types may be correctable by taking action if certain criteria are satisfied. Hence, a healing engine within or outside of a node may be configured to access the visibility packets in the visibility queue. For instance, the healing engine may periodically read the visibility queue directly. Or, as another example, a node's forwarding logic may be configured to send the visibility packets (or at least those with certain types of visibility tags) to an external node configured to operate as a healing engine. A healing engine inspects the visibility tags and/or the contents of those visibility packets it accesses. The healing engine may further optionally inspect associated data and input from other parts of the node that tagged the packet (e.g. port up-down status). Based on rules applied to the visibility packet, or to a group of packets received over time, the healing engine is configured to perform a healing action. For example, a forwarding table entry lookup failure for a packet may have triggered a corresponding visibility tag to be set for the packet, indicating that the forwarding table entry lookup failure occurred. The healing engine observes the visibility tag, either in the visibility queue or upon receipt from traffic management logic. The healing engine inspects the packet and determines that the forwarding table entry lookup failure may be fixed using a prescribed corrective action, such as adding an entry to the forwarding table. The healing engine then automatically performs this action, or instructs the node to perform this action. The corrective set of actions for a tag is based on rules designated as being associated with the tag by either a user or the device itself. In at least one embodiment, the rules may be specified using instructions to a programmable visibility engine. However, other suitable mechanisms for specifying such rules may instead be used. 5.5. Example Process Flows FIG.12illustrates an example flow1200for transforming dropped packets into visibility packets, according to an embodiment.FIG.13illustrates an example flow1300for generating visibility packets for delayed packets, according to an embodiment. The various elements of flows1200and1300may be performed in a variety of systems, including systems such as system100described above. In an embodiment, each of the processes described in connection with the functional blocks described below may be implemented using one or more integrated circuits, computer programs, other software elements, and/or digital logic in any of a general-purpose computer or a special-purpose computer, while performing data retrieval, transformation, and storage operations that involve interacting with and transforming the physical state of memory of the computer. Depending on the embodiment, a device may be configured to perform flow1200or1300at least partially concurrently with other flows described herein, or a device may be configured only to perform flow1200and/or1300. Block1210comprises receiving a packet, such as a packet205, at a device, such as device200. Block1220comprises placing the packet in a processing queue while the packet awaits processing by the forwarding logic of the device.
The queue may be selected based on a variety of characteristics of the packet, such as the ingress port through which it was received, the destination address of the packet, a type or class of the packet, a flow of the packet, and so forth. The packet may, in some embodiments, have already been processed in one or more other queues by one or more other stages of processing. Block1230comprises determining to drop the packet. Such a determination may be made for a variety of reasons, such as described elsewhere herein. For instance, there may be a table lookup failure whereby the forwarding logic of the device cannot find a valid path for the packet's destination address in the device's forwarding table. Or, the packet itself may be corrupt, the packet may be delayed for more than a threshold amount of time, or there may simply be no available queues or buffers for handling or storing the packet. The determination to drop the packet may be an implicit determination. That is, rather than explicitly determining to drop the packet, the forwarding logic may revert to performing blocks1240-1280by default when certain events, such as those mentioned above, occur. For instance, blocks1240-1280may correspond to a default or “catch-all” path in a forwarding table, that applies to any packets that the forwarding logic cannot resolve to other paths. Block1240comprises tagging the packet with a visibility tag in response to the determination to drop the packet. The tagging of the packet effectively transforms the packet into a visibility packet. Block1240may be performed for any packet that is to be dropped, or only for packets that meet other additional criteria. For example, block1240may only be performed for packets associated with certain flows, destinations, sources, packet types, service classes, and so forth. Additionally, or instead, qualifying packets may be selected only at a certain frequency (e.g. once a second, one out of every twenty dropped packets, etc.), which optionally may vary based on characteristics of the packet. Hence, block1240may be preceded by one or more steps of determining whether these additional criteria are met. Criteria may be fixed for the device, specified programmatically, and/or adjusted by logic internal to the device. The forwarding logic may tag the packet with a visibility tag in any number of ways, depending on the embodiment. For example, the forwarding logic may annotate the header of the packet, replace some or all of the payload with the tag, or generate sideband information that is associated with an identifier of the packet or its corresponding buffer. The visibility tag may include a flag, label, or other identifier that is recognized as signifying that the packet is a visibility packet and should thus be handled by a visibility subsystem. The tag may optionally include other information to help diagnose problem(s) that may have led to the drop, such as an identifier of the processing queue to which the packet was assigned, an identifier of the network device, an error or drop type, related statistics, and so forth. In an embodiment, not all of the packet need be tagged. For example, where different subunits of the packet may be processed independently (e.g. where the packet is subdivided into cells or frames) a start-of-packet subunit of the packet may be tagged. Other portions of the packet may be unaffected. Block1250comprises optionally truncating the packet. 
This may involve, for example, truncating the packet to a certain size, or removing certain designated portions (such as any portion of the payload that does not correspond to the tag). Or, where separate subunits of the packet are processed individually (e.g. cells or frames), this may involve discarding subunits of the packet other than the start-of-packet subunits and optionally one or more following subunits. Block1260comprises forwarding the tagged packet to a visibility subsystem. The visibility subsystem may take different forms in different embodiments. For example, in an embodiment, the visibility subsystem is internal to the network device that transformed the packet into a visibility packet. The packet is “forwarded” to the subsystem by being placed in (or linked to) a visibility queue, from which it is eventually read by the visibility subsystem. As another example, the visibility subsystem may be on a network device, designated as a “data collector,” that is external to the device that transformed the packet into a visibility packet. After waiting in a visibility queue, the packet may be forwarded to the subsystem by encapsulating the packet within another header that targets the address at which the visibility subsystem is found. In yet other embodiments, there may be multiple visibility subsystems. For example, after performing some preliminary analysis, a device's internal visibility processing logic may forward all visibility packets that it has generated, or a sample of those visibility packets, to an external device for additional analysis. The visibility subsystem may perform a variety of actions with visibility packets. Two non-limiting examples of such actions are illustrated in blocks1270and1280. Block1270comprises storing the visibility packet in a repository. The repository may serve, for example, as a log which may be inspected by a network administrator to diagnose network problems. The repository may keep all visibility packets, or only those that meet certain filtering conditions specified by the network administrator. Visibility packets may be kept in the repository for a certain period of time, and/or aged out as necessary to make room for new visibility packets. Block1280comprises performing one or more healing actions based on the tagged packet. Block1280presupposes that the visibility subsystem is a healing engine, or that the repository of block1270is monitored and analyzed by a healing engine. Actions may be taken solely on the basis of the tagged packet, or based on trends or metrics related to a number of similarly generated visibility packets. A healing action may involve reconfiguring any aspect of the network in which flow1200is performed. For instance, the healing action may involve updating a forwarding table, adjusting a path weight, restarting a system, changing a policy or priority level for a flow or class of traffic, and so forth. In some cases—for example, if the healing engine is external to the device at which the visibility packet is generated—performing the healing action may involve sending an instruction to another device to update its configuration. Specific actions may be defined by various customizable rules stored at the healing engine. Turning now toFIG.13, flow1300begins with blocks1310and1320, which are the same as blocks1210and1220, respectively. Block1330comprises determining that the packet is experiencing increased latency. 
The determination may be made based on timestamps associated with the packet itself, or may be inferred more generally based on metrics associated with the device. That is, if a certain port to which the packet is to be forwarded is experiencing high levels of congestion, an increase in latency may be inferred for the packet. Optionally, block1330may comprise determining whether additional criteria for transforming the packet into a visibility packet are met. For example, in addition to requiring increased latency, the forwarding logic of the device may check to see whether the packet has certain other specified characteristics, such as being associated with certain flows, destinations, sources, packet types, service classes, and so forth. Additionally, or instead, qualifying packets may be selected for transformation only at a certain frequency (e.g. once a second, one out of every twenty qualifying packets, etc.), which optionally may vary based on characteristics of the packet. In yet other embodiments, block1330may more generally be viewed as determining whether visibility transformation criteria, such as described above, apply. The existence of high latency may be viewed as but one of several criteria to be evaluated. The criteria may include evaluating for other events instead of or in addition to the packet experiencing increased latency. Criteria may be fixed for the device, specified programmatically, and/or adjusted by logic internal to the device. Block1340comprises duplicating the packet. In an embodiment, the entire packet need not be duplicated, but rather only a certain portion of the packet may be duplicated (e.g. the first n bytes of the packet, the packet header, the start-of-packet, etc.). Block1350comprises tagging the packet or duplicate packet with a visibility tag in response to the determination of block1330, in the manner described with respect to block1240. Since the packet and duplicate packet are the same, in an embodiment, it does not matter which packet is tagged. However, in embodiments where only a portion of the packet is duplicated, the duplicate packet is tagged. Block1360comprises forwarding the non-tagged packet to its specified destination. That is, unlike in flow1200where the packet is dropped, the packet of flow1300(or its duplicate) continues to be forwarded to its destination address. Meanwhile, block1370comprises forwarding the tagged packet to a visibility subsystem, as described with respect to block1260. Blocks1380and1390then correspond to blocks1270and1280, respectively. Flows1200and1300illustrate only two of many possible flows for the forwarding logic of an apparatus. Other flows may include fewer, additional, or different elements, in varying arrangements. For example, blocks1250,1270, and/or1280may be optional for flow1200, while blocks1380and1390may be optional for flow1300. As another example, a visibility subsystem may perform yet other actions than those identified in blocks1270,1280,1380, and1390. As another example, the forwarding logic has been simplified to address only decisions related to visibility tagging. It will be recognized that a device's forwarding logic includes a number of other elements utilized for other purposes, and these elements may result in logical decisions that precede and obviate certain steps of flows1200and1300, and/or that occur after some or all of the steps in flow1200or1300.
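A minimal sketch of a consuming side corresponding to blocks1270/1280 and1380/1390 is shown below. The rule table, the placeholder actions install_route and lower_path_weight, and the field names are hypothetical and chosen only for illustration; the sketch assumes each visibility tag carries a "reason" attribute, as in the tag sketch given earlier.

repository = []   # stands in for the repository of blocks 1270/1380

def install_route(destination):
    # Placeholder: a real device would add a forwarding table entry here.
    print(f"installing route toward {destination}")

def lower_path_weight(egress_port):
    # Placeholder: a real device would reduce the port's WCMP weight here.
    print(f"lowering weight of egress port {egress_port}")

HEALING_RULES = {
    # reason recorded in the visibility tag -> corrective action (illustrative only)
    "table_lookup_failure": lambda pkt: install_route(pkt["dst"]),
    "high_latency":         lambda pkt: lower_path_weight(pkt["egress_port"]),
}

def consume_visibility_packet(pkt):
    """Log the visibility packet, then apply any healing rule registered for
    the reason carried by its tag."""
    repository.append(pkt)
    reason = getattr(pkt.get("visibility_tag"), "reason", None)
    action = HEALING_RULES.get(reason)
    if action is not None:
        action(pkt)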
6.0. PROGRAMMABLE VISIBILITY ENGINES What limited visibility is provided by switches and similar devices in complex networks is often rigid in implementation, requiring customers to request enhancements from the vendors of such devices. It is often difficult for the vendors to add requested capabilities to a device until the next release of the device, and moreover the capabilities may be of limited application and/or something that the customer wishes to keep proprietary for use only in their networks. The techniques described in this section, among other aspects, provide customers with flexibility to define metrics, create statistics that are specific to their applications, and/or program network devices to perform certain actions under prescribed conditions. A computing construct referred to as a Programmable Visibility Engine ("PVE") is provided. The PVE receives instructions to execute one or more functions from a defined set of functions supported by the PVE. The PVE may be, for instance, a software-based engine executed by one or more general purpose processors within the node, or specialized hardware such as a special-purpose processor, FPGA, or ASIC (or a set of logic contained therein). By instructing the PVE, or a series of PVEs, to perform various functions, a customer may easily customize the capabilities of a switch or other device to support calculation and collection of arbitrary metrics, and performance of various actions in response to custom triggers. In an embodiment, a node may have a fixed number of PVEs. These PVEs may be tied to input data from predefined areas of memory, or dynamically linked by the user to input data from different areas of memory. In other embodiments, a user may dynamically instantiate a number of PVEs within a node, and link those PVEs to desired areas of memory. In an embodiment, a visibility subsystem, such as visibility subsystem270, may be or comprise a set of one or more programmable visibility engines. 6.1. Example PVE Architecture FIG.14is a block diagram1400illustrating an example architecture for a PVE1420, according to an embodiment. PVE1420may be implemented using one or more ASICs, FPGAs, or logic therein. PVE1420is configured to implement a defined set of functions1422a-1422n, collectively functions1422. Examples of functions1422are described in subsequent sections. PVE1420receives function selector input1412, which specifies a specific subset of the functions1422that should be active at a given time. PVE1420is configured to repeatedly execute the selected functions1422over a number of execution cycles. The number of execution cycles may, in an embodiment, be limited to a number supplied by a counter (not shown). PVE1420executes each selected function1422once per execution cycle. PVE1420may receive function selector input1412as signals from another component, or may read the function selector input1412from a bound memory address at the start of each execution cycle (or at any other suitable time). Function selector input1412may change over time. The selected functions1422are executed on one or more bound input values1414. The bound input value(s) may be supplied by signals from another component, or PVE1420may read the one or more input values1414from a bound memory address. Each function1422may perform different calculations using the one or more input values1414, or some functions1422may perform the same calculations.
Some functions1422need not necessarily use all of the supplied input values1414, or even any of the input values1414. PVE1420is configured to output data generated by execution of functions1422to at least one data store1440. An address map1430includes mappings1432of specific functions1422to specific locations1442in the data store. Depending on the embodiment, a function1422may read and/or write to its mapped memory location1442. Although memory locations1442are illustrated as a sequence of locations, each memory location1442may actually be any location within one or more data stores1440, without regard to the locations1442mapped to other functions1422. Moreover, in an embodiment, multiple functions1422may be mapped to the same memory location1442. The address map1430may, in an embodiment, be altered dynamically by a user and/or by automated logic within the network device. In an embodiment, some or all of the functions1422may be linked to one or more triggered actions1450. A triggered action1450is a specific set of processing logic, beyond simply writing to a data store1440, that is to be performed when the result of a selected function1422is within some range or set of values. For example, if the result of a comparison function is 1, a linked action may be performed, while the linked action may not be performed if the result is 0. Or, a first linked action may be performed if the result of a function is in a first range, a second linked action may be performed if the result is in a second range, and no action may be performed otherwise. The processing logic may be performed by the PVE1420directly, or PVE1420may be configured to send one or more instructions to another processing component to execute the linked action. In another embodiment, a separate component may be configured to periodically read values at locations1442and determine whether to perform linked actions1450based thereon. Diagram1400illustrates only one of many possible arrangements of a system comprising a PVE. Other arrangements may include fewer, additional, or different components, and the division of work between the components may vary depending on the arrangement. For example, in some embodiments, at least some functions1422may not be mapped to memory locations1442, and instead only trigger actions1450based on their respective calculations. In another embodiment, triggered actions1450are not linked to functions1422or even necessarily performed at all. 6.2. Example PVE Process Flow FIG.17illustrates an example flow1700for utilizing a PVE, such as PVE1420, according to an embodiment. The various elements of flow1700may be performed in a variety of systems, including in network devices such as device200described above. In an embodiment, each of the processes described in connection with the functional blocks described below may be implemented using one or more integrated circuits, computer programs, other software elements, and/or digital logic in any of a general-purpose computer or a special-purpose computer, while performing data retrieval, transformation, and storage operations that involve interacting with and transforming the physical state of memory of the computer. Block1710comprises identifying one or more inputs bound to the PVE. The inputs may be signals from another component, bound addresses in memory, and/or combinations thereof. Block1720comprises identifying one or more selected functions for the PVE to execute.
The functions may be identified, for example, using function selection input such as a list of functions to be executed or a bitmap. The function selection input may, in some embodiments, be part of the bound input identified in block1710. Block1730comprises receiving one or more input values from the one or more bound inputs. Receiving the value(s) may comprise, for instance, reading the values from memory or receiving signals from another component. Blocks1740-1780are performed for each function that was selected in block1720. Blocks1740-1780may be performed serially, in parallel, or partially serially and partially in parallel, depending on the architecture of the PVE. Block1740comprises executing a next selected function. Depending on the function, none, some, or all of the input value(s) may be input into one or more calculations to produce one or more result values. Block1750comprises identifying one or more memory addresses mapped to the executed function. Depending on the embodiment, the addresses may be specified by a memory address map and/or hard-coded. Block1760comprises writing the one or more result values to the one or more mapped addresses. Block1770comprises executing any actions that are linked to the function based on the one or more result values. For example, if the result value is above a certain threshold, an action linked to the function may be triggered. Block1780comprises determining whether any additional selected functions remain to be performed. If so, flow returns to block1740. Otherwise, flow returns to block1720for the next execution cycle. Flow1700illustrates only one of many possible flows for utilizing a PVE. Other flows may include fewer, additional, or different elements, in varying arrangements. For example, in some embodiments, blocks1750,1760,1770, and/or1780may be omitted for some or all functions, as well as any elements relied thereupon. 6.3. PVE Functions The exact set of functions implemented by a PVE varies depending on the embodiment. Example functions supported by a PVE may include, without limitation, some or all of the following:
an accumulate-by-value function that updates a data store by summing it with an input value (which may be positive or negative);
a count function that updates a data store to indicate the number of times the count function has been called;
a count function that updates a data store to indicate the number of times the count function has been called and then triggers a linked action;
a compare function that compares an input value to some input threshold and either updates a data store to indicate true or false, or triggers an action based on the comparison;
a count-and-compare function that updates a data store to indicate the number of times the function has been called and then triggers a linked action when the value of the data store surpasses an inputted threshold;
an accumulate-and-compare function that updates a data store by summing it with an input value and then triggers a linked action when the value of the data store surpasses an inputted threshold;
a probabilistic (random) function that causes performance of an action when a randomly selected number surpasses some inputted probability threshold;
an Exponentially Weighted Moving Average ("EWMA") function that accepts an input value V and uses it to update a weighted moving average A in a data store as A! = A + alpha(V - A), where alpha may be a predefined value or an input value between 0 and 1, and A! is the new value that replaces A in the data store;
other statistical functions; or
combinations of the foregoing.
A PVE may, at any given time, perform some, none, or all of the functions that it supports, depending on programmable function selection instructions stored in association with the PVE (either hard-coded, or specified by the user). In an embodiment, the PVE repeatedly executes a set of functions specified by the instructions over multiple iterations, occurring at periodic or other intervals (e.g. every clock cycle, every other clock cycle, etc.). In an embodiment, the instructions may be modified at any given time, which of course changes the functions performed in subsequent intervals. The PVE may execute some or all of the functions in the set in parallel. Alternatively, some or all of the functions may be executed in series. For instance, a subset of the functions may be executed in one clock cycle, followed by another subset in another clock cycle, until all of the functions specified by the instructions have been performed. In an embodiment, the programmable function selection instructions that specify which functions to perform may take the form of a bitmap of size N, where N is the number of functions implemented by the PVE. In other words, there is a bit for each function. If the bit corresponding to a function is set to 1, the function is executed in each iteration. Otherwise the function is not executed. Of course, the programmable instructions may instead take any other suitable form. In an embodiment, a count is specified for the PVE. The PVE is executed only for a number of iterations equal to the count, with the count being decremented in each iteration. Upon the count reaching 0, the PVE stops executing until some other process (e.g. a periodic reset process, or an action performed by another PVE) resets the count. A predefined value in the count may be used to indicate that the PVE is to be executed indefinitely. 6.4. PVE Inputs Each PVE function may be bound to a specific data source, which may be one or more areas of memory from which it reads data, or one or more outputs from one or more other components of the node. For instance, a PVE function may be bound to various count columns in a table that tracks the amount of buffers currently used within a node for each of a plurality of different queues or resources within the node. Or, a PVE function may be bound to a data store in which another PVE outputs values. Different PVE functions may be bound to the same data source. In an embodiment, in fact, all PVE functions may be bound to the same column(s) of data in a table. In an embodiment, an array of single-value or multi-value inputs is bound to a PVE. The PVE is configured to operate on each member of the array either in parallel or in series. For instance, the node may track statistics for each of a plurality of queues. The PVE may be executed with respect to each queue's statistics in parallel. Optionally, different members of the array may be associated with different sets of programmable instructions (e.g. different function bitmaps), such that different sets of functions are executed for at least some of the members of the array. Thus, from one perspective, the function selection instructions indicating which function(s) to perform are a portion of the input fed to the PVE when executing the PVE. In an embodiment, data values may be passed through message processing logic prior to being input into a specific PVE function.
Each function may have its own associated message processing logic. The message processing logic, in essence, prepares the values of the data source for consumption by the function. For instance, if the data source includes extra information not needed for a given function, the message processing logic may filter the data to only include relevant values. Or, if the input is not arranged in a format expected by the function, the message processing logic may be configured to restructure the data source's input. 6.5. PVE Outputs The result of the function(s) performed by the PVE may be output to one or more areas of memory allocated to the PVE, referred to as data stores. Each function may be bound to a specific location or set of locations within the data store(s) of the PVE. These locations may be specified, for instance, in a special function-to-memory mapping associated with the PVE. A PVE function may both read and write to its bound location(s) within the data store. In an embodiment, only a certain number of memory accesses are permitted by the PVE during a given clock cycle. If the functions selected for execution would require more memory accesses than permitted, the PVE may utilize a function prioritization scheme to determine which functions actually get to access the data store. For instance, each function may be assigned a predefined, or user configurable, prioritization level. The functions are ranked, with the highest priority functions given first access to the data store. Once the limit on memory access is reached, the other functions requiring memory access are not executed, or are executed on a delayed basis. 6.6. PVE Actions In an embodiment, beyond outputting data, a user may associate a PVE with one or more defined actions. In an embodiment, the output(s) of a PVE may trigger performance of different actions. For instance, in a simple embodiment, if a non-zero value is output by a function that the user associates with an action, the action is performed. More complicated rules for determining when to perform an action exist, such as comparing the value(s) output by the PVE to various thresholds and executing actions associated with those thresholds. In an embodiment, a PVE function does not output a value at all, but rather performs different actions (or no actions) in accordance with conditional logic in the function executed by the PVE. Any suitable action may be linked to a function. Examples of actions include, without limitation: dropping a packet, issuing flow control, marking a packet for rate control, sampling a packet and sending it to a special processor for analysis, duplicating (or mirroring) a packet and sending it to a data collector component, or sending information to a healing engine.
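The toy software model below gathers the concepts described in sections 6.1 through 6.6: a fixed set of functions, a bitmap that selects which of them run in a cycle, a data store addressed through a function-to-address map, and optional triggered actions. The class name, its method, and the lambda-based functions are invented purely for illustration; an actual PVE would typically be realized in digital logic rather than Python.

class ProgrammableVisibilityEngine:
    """A minimal software sketch of a PVE."""

    def __init__(self, functions, address_map, data_store, actions=None):
        self.functions = functions      # list of callables f(input_value, old_value)
        self.address_map = address_map  # function index -> data store address
        self.data_store = data_store    # dict: address -> value
        self.actions = actions or {}    # function index -> (predicate, action)

    def run_cycle(self, function_bitmap, input_value):
        for i, fn in enumerate(self.functions):
            if not (function_bitmap >> i) & 1:
                continue                # function not selected this cycle
            addr = self.address_map[i]
            result = fn(input_value, self.data_store.get(addr, 0))
            self.data_store[addr] = result
            predicate, action = self.actions.get(i, (None, None))
            if predicate and predicate(result):
                action(result)          # triggered action linked to this function

# Two of the functions named in the list above, as an illustration:
accumulate = lambda value, old: old + value
ewma_0_25  = lambda value, old: old + 0.25 * (value - old)   # alpha = 0.25

pve = ProgrammableVisibilityEngine(
    functions=[accumulate, ewma_0_25],
    address_map={0: 0, 1: 1},
    data_store={},
)
pve.run_cycle(function_bitmap=0b11, input_value=40)   # run both functions this cycle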
6.7. Multi-Layer PVEs A PVE may be chained, or layered, together with one or more additional PVEs, such that the output of one PVE serves as the input of another PVE. In this manner, a user may utilize function composition (e.g. f(g(x))) to define rich metrics in arbitrary manners. For example, a second PVE may operate as an aggregator of outputs generated by functionality from a first PVE, thus enabling functions such as averaging, sums, and so forth. In an embodiment, feedback layering of PVEs is supported, such that the outputs of one PVE provide feedback to other PVEs. In this manner, PVEs may behave as control algorithms. For example, the output of one PVE may determine how or even whether another PVE executes a particular one of its functions. As another example, feedback layering may allow for implementation of complex control algorithms that enable the node to respond to unexpected conditions and self-heal. Example Multi-Layered PVEs FIG.15is a block diagram1500illustrating an example of layered PVEs, according to an embodiment. Two PVEs are illustrated: PVE1520A and PVE1520B. PVE1520A implements logic for performing a set of functions1522a-n, collectively functions1522. PVE1520B implements logic for performing a set of functions1524a-n, collectively functions1524, which may be the same set of functions, or a different set of functions. For illustrative purposes, each PVE1520is associated with two output data stores,1540A and1540B, comprising entries1542or1544, respectively. The total number of depicted entries in each data store1540is the same as the number of functions1522and1524. However, in other embodiments, a PVE may have any number of associated data stores1540, each with any number of entries1542/1544. PVEs1520A and1520B are also associated with address maps1530A and1530B, respectively. Like address map1430, each address map1530indicates, for each of the functions1522/1524of the associated PVE1520, which data addresses1542/1544in the data stores1540are mapped to the function. PVE1520A is bound to input from the input data source1510depicted on the far left. The data source1510may in fact be any suitable data source, such as tables within the node, output data stores from other PVEs (possibly including those of PVE1520B), or output from another component of the node. Three arrows lead from this data source to various functions of PVE1520A, illustrating that its data is being fed into three different functions (1522a,1522c, and1522d) executed by PVE1520A, while the rest of the functions1522are not being executed. These three functions may have been selected, for example, by instructions associated with the PVE1520A or the data source1510currently being processed, such as a function bitmap or interpreted code. The selection may or may not be different for different data entries in data source1510and/or for different iterations of executing PVE1520A, depending on the embodiment. PVE1520B is bound to input from PVE1520A's data stores1540A/1540B. That is, the output of PVE1520A becomes the input of PVE1520B. The exact set of functions executed by PVE1520B is not illustrated, though of course any combination of one or more of the functions1524may be executed with respect to the data output by PVE1520A. Though not depicted, PVE1520A and/or PVE1520B may optionally trigger the performance of actions specified by a user. Diagram1500illustrates only one of many possible arrangements of a system comprising layered PVEs. Other arrangements may include fewer, additional, or different components, and the division of work between the components may vary depending on the arrangement. For example, in other embodiments, any number of PVEs may be chained together. Moreover, different PVEs may write to different data stores. 6.8. Implementing WRED with PVEs One common congestion management algorithm implemented within computer networks is Weighted Random Early Detection (WRED). According to an embodiment, this algorithm may be implemented using a series of PVEs arranged in a similar manner to that depicted inFIG.15.
For example, the data source for PVE1520A may be an array comprising, for each queue of a group of queues (Q1to QN), a congestion value, threshold information, a function bitmap, and optionally a count of a number of times PVE1520A should be called. The array is processed by PVE1520A serially, in parallel, or partially serially and partially in parallel. The array is further processed repeatedly over time, as the values within the array change. FIG.16is a block diagram of an input data source1610suitable for implementing WRED using layered PVEs1620A and1620B, according to an embodiment. PVEs1620A and1620B may be, for example, PVEs1520A and1520B, respectively. Input data source1610is depicted as a table comprising an entry for each queue1611, though of course the input data may in fact take a variety of other formats, including multiple tables, signals sent over time, and so forth. For each queue, the input data source1610includes a resource value1612, such as an estimated queue size, and one or more threshold values1613used to determine whether the queue is in various states. According to an embodiment, the input values1612and thresholds1613may be derived from those found in a bifurcated counting table, such as described in U.S. application Ser. No. 14/958,830 (filed Dec. 3, 2015) and Ser. No. 14/973,541 (filed Dec. 17, 2015), the entire contents of both of which are hereby incorporated by reference as if set forth in their entirety herein. However, any other suitable values may be utilized. Data source1610further comprises, for each queue, a function bitmap1614for PVE1620A and a function bitmap1616for PVE1620B. The function bitmaps1614/1616select which functions of PVEs1620are to be executed. Data source1610further comprises counts1615and1617for each queue1611. In an embodiment, counts1615and1617may be decremented each time the entry for the associated queue1611is processed by the corresponding PVE1620. When counts1615/1617reach 0, the corresponding PVEs1620are no longer executed, until such a time as an external process (or potentially another PVE) resets the counts1615/1617. In this manner, the layered PVEs1620may be utilized to perform diagnostic testing, statistics collection, healing measures, or other actions on a specific queue1611for a limited amount of time, and then idled until needed again. Note that the function bitmaps1614/1616and counts1615/1617for each queue1611may be the same, or different. Depending on the embodiment, the PVEs1620may process the entry for each queue1611serially, or as a group of up to n queues. According to an embodiment, the functions selected for PVE1620A are used, among other aspects, to compute the exponentially weighted moving average (EWMA) queue size. For instance, PVE1620A may be instructed to perform an EWMA function on each queue1611, with the current size value of the queue being the input value1612from the table. PVE1620A writes the EWMA to the data store entries that are assigned to the EWMA function. PVE1620B is also bound to input data from the data source, including its own function bitmap1616and optional count1617. PVE1620B is also bound to the outputs from PVE1620A. PVE1620B is instructed to execute a comparison function, comparing the EWMA of each queue1611(as output by PVE1620A) to the threshold information of each queue. PVE1620B outputs an operating region based on the comparison, which may take one of three different values. Different actions are associated with the output regions. 
A first region indicates that no drop is to occur, since no congestion is present. A second region indicates that random drops are to occur. That is, a probabilistic drop is performed to prevent the queue1611from becoming saturated. A third region indicates a tail drop. All packets are to be dropped because the queue1611is saturated. PVE1620B sends an instruction to perform the relevant action to a traffic manager within the node, and the relevant information may also be written to a state table1670for the queue1611. 6.9. Implementing Heatmaps with PVEs Another example use of PVEs is generating a congestion heat map, such as described in U.S. application Ser. No. 14/973,541 (filed Dec. 17, 2015). In this case, a two-layer PVE is used to identify top buffer consumers based on destination and, for a subset of destinations, top consumers based on source. A first PVE, PVE1, identifies top consumers based on destination for a given resource. PVE1does this by processing, at intervals, the statistics collected for a particular resource (e.g. egress partition buffers). State updates for egress ports that have consumed the most resources are output to a second PVE, PVE2. That is, a compare function is utilized to determine which ports have the highest values (e.g. over a threshold), and states are written only for those ports. PVE1outputs each of the relevant egress port congestion states as 2-bit values into a PVE1data store. PVE2identifies top consumers based on a source for a given resource, grouped by destination. PVE2stores each egress port's congestion state as a 2-bit value into a PVE2data store to an address that is determined based on the update's source port. Of course, PVEs are highly flexible and may be used to implement a variety of calculations and algorithms. The examples given above are intended solely to illustrate some of the many applications of PVEs, and the uses of PVEs are not limited to these examples. 7.0. EXAMPLE EMBODIMENTS Examples of some embodiments are represented, without limitation, in the following clauses: According to an embodiment, a system comprises a network of nodes, each of the nodes being a network device configured to send, receive, and forward packets over the network, the nodes including: load-balancing nodes configured to load balance network traffic over network paths through which the load-balancing nodes send packets, the load-balancing based on weights that the load-balancing nodes dynamically adjust in accordance with metrics associated with the network paths; annotating nodes configured to annotate selected packets with state information as the selected packets traverse through the annotating nodes; collection nodes, configured to collect annotated packets and record and/or generate the metrics associated with the network paths based on the state information in the reflected packets. In an embodiment, the load-balancing nodes are configured to load-balance based on the weights using Weighted Cost MultiPathing (“WCMP”). In an embodiment, the system further comprises: reflecting nodes configured to reflect certain of the selected packets back to the load-balancing nodes from which the selected packets were respectively sent and/or forwarded, or to collection nodes associated with the load-balancing nodes. 
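To make the WRED behavior described earlier in this section concrete, the following sketch computes an EWMA queue size, classifies it into the three operating regions, and maps each region to an action. The EWMA weight, thresholds, and drop probability are illustrative assumptions rather than values taken from the disclosure.

```python
# Illustrative WRED sketch: EWMA of the queue size is compared against
# per-queue thresholds and classified into one of three operating regions.
import random

def update_ewma(prev_ewma, current_size, weight=0.1):
    # Exponentially weighted moving average of the queue size.
    return (1 - weight) * prev_ewma + weight * current_size

def wred_region(ewma, min_threshold, max_threshold):
    if ewma < min_threshold:
        return "no_drop"        # first region: no congestion present
    if ewma < max_threshold:
        return "random_drop"    # second region: probabilistic drops
    return "tail_drop"          # third region: queue saturated

def wred_action(region, drop_probability=0.1):
    if region == "no_drop":
        return "forward"
    if region == "random_drop":
        return "drop" if random.random() < drop_probability else "forward"
    return "drop"               # tail drop: all arriving packets dropped

ewma = update_ewma(prev_ewma=40.0, current_size=80)
action = wred_action(wred_region(ewma, min_threshold=30, max_threshold=60))
```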
In an embodiment, a given node in the network may function as any one or more of the load-balancing nodes, annotating nodes, reflecting nodes, and/or collection nodes depending on whether the given node is sending, receiving, or forwarding network traffic, wherein at least some of the nodes in the network function as both load-balancing nodes and collection nodes, and wherein at least some of the nodes in the network function as both annotating nodes and reflecting nodes. In an embodiment, the load-balancing nodes are configured to inject probe packets into the network for the purpose of obtaining updated state information for the network paths, wherein the annotating nodes are configured to select the probe packets for annotation, and wherein the reflecting nodes are configured to reflect the probe packets. In an embodiment, the reflected packets are copies of packets, the reflecting nodes forwarding at least some of the packets from which the reflected packets are copied on to intended destinations of those packets; and the annotating nodes are configured to select packets to annotate based upon measures of delay or congestion associated with the annotating nodes and/or the packets, and/or wherein the reflecting nodes are configured to select packets to reflect based upon measures of delay or congestion associated with the reflecting nodes and/or the packets. According to an embodiment, a method comprises: identifying paths from a network device to a destination within a network; assigning weights to each of the paths; determining to send particular packets from the network device to the destination; selecting, from the identified paths, particular paths along which to send the particular packets from the network device using load-balancing based at least partially upon the weights; dynamically adjusting the weights based on metrics associated with the paths. In an embodiment, the method further comprises at least one of receiving the metrics from one or more other nodes in the network or calculating the metrics based on feedback received from one or more other nodes in the network. In an embodiment, the method further comprises: receiving at least some of the particular packets reflected back from one or more reflecting nodes along one or more of the paths; identifying the metrics based upon data annotated to the reflected particular packets. In an embodiment, the reflected packets are particular packets that have been generated by the network device for the purpose of probing the network. In an embodiment, the reflected packets are selected packets from the particular packets that the network device annotated with a special identifier or flag before sending the selected packets to the destination. In an embodiment, the metrics include one or more of: measures of path delays derived from the data annotated to the reflected particular packets, or measures of congestion associated with nodes in the paths derived from the data annotated to the reflected particular packets. In an embodiment, the frequency with which a first path of the paths is selected relative to a second path of the paths is based in part on a size of a first weight of the first path relative to a size of a second weight of the second path. 
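The following sketch illustrates, in a hedged and simplified form, how weights might be dynamically adjusted from path metrics such as delays derived from reflected packets. The inverse-delay policy and normalization are assumptions of this example and are not a policy recited in the clauses above.

```python
# Illustrative sketch of adjusting load-balancing weights from path delays.

def adjust_weights(path_delays, floor=1e-6):
    """path_delays: dict path_id -> measured delay. Lower-delay paths
    receive proportionally larger normalized weights."""
    inverse = {path: 1.0 / max(delay, floor) for path, delay in path_delays.items()}
    total = sum(inverse.values())
    return {path: value / total for path, value in inverse.items()}

weights = adjust_weights({"path1": 2.0, "path2": 6.0})   # path1 -> 0.75, path2 -> 0.25
```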
In an embodiment, the method further comprises: assigning each of the paths to a different set of buckets, the set of buckets assigned to a given path being proportional, in number, to a given weight assigned to the given path; reassigning buckets to different paths as the weights are adjusted to keep the set of buckets assigned to a given path proportional, in number, to a given weight assigned to the given path; wherein selecting the particular paths comprises, for a given packet of the particular packets: determining a key for the given packet based on contents of the given packet; determining a bucket that is mapped to the key; determining a specific path assigned to the bucket; selecting to send the given packet along the specific path. In an embodiment, the buckets are each separate entries in a multipath forwarding table. In an embodiment, the method further comprises: for each bucket of the buckets, storing a last sent time at which the network device last handled a packet whose key mapped to the bucket; waiting to reassign a given bucket that has been designated for reassignment on account of the adjusted weights until the last sent time stored for the bucket is older than a threshold age. In an embodiment, if the last sent time of the given bucket does not become older than the threshold age within a certain amount of time after the given bucket has been designated for reassignment, the reassignment of the bucket is canceled. In an embodiment, determining to send the particular packets to the destination comprises determining that one or more destination addresses of the particular packets are reachable via another network device at the destination. According to an embodiment, an apparatus comprises: a path identification component configured to identify paths from the apparatus to a destination within a network; a weight assignment component configured to assign weights to each of the paths, and further configured to dynamically adjust the weights based on metrics associated with the paths; a forwarding component configured to determine to send particular packets from the apparatus to the destination; and a load balancing component configured to select, from the identified paths, particular paths along which to send the particular packets from the apparatus, based at least partially upon the weights. In an embodiment, the apparatus further comprises a metric collection component configured to receive the metrics from one or more other nodes in the network and/or calculate the metrics based on feedback received from one or more other nodes in the network. In an embodiment, the apparatus is further configured to receive at least some of the particular packets reflected back from one or more reflecting nodes along one or more of the paths; and to identify the metrics based upon data annotated to the reflected particular packets. In an embodiment, the metrics include one or more of: measures of path delays derived from the data annotated to the reflected particular packets, or measures of congestion associated with nodes in the paths derived from the data annotated to the reflected particular packets. 
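As a purely illustrative sketch of the bucket mechanism described above, the following example assigns each path a number of buckets proportional to its weight, derives a key from packet contents, and maps that key to a bucket and therefore a path. The hash choice and bucket count are assumptions of this example.

```python
# Illustrative sketch of weight-proportional bucket assignment and
# key-to-bucket path selection.
import hashlib

def build_buckets(weights, total_buckets=16):
    # weights: dict path_id -> weight; returns a list in which each path
    # appears roughly in proportion to its weight.
    total_weight = sum(weights.values())
    buckets = []
    for path, weight in weights.items():
        buckets.extend([path] * max(1, int(total_buckets * weight / total_weight)))
    return buckets

def select_path(buckets, flow_fields):
    # Derive a stable key from packet contents, then map the key to a bucket.
    key = int(hashlib.md5(flow_fields.encode()).hexdigest(), 16)
    return buckets[key % len(buckets)]

buckets = build_buckets({"path_a": 3, "path_b": 1})
chosen = select_path(buckets, "10.0.0.1->10.0.1.1:443/tcp")
```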
In an embodiment, the weight assignment component is further configured to: assign each of the paths to a different set of buckets, the set of buckets assigned to a given path being proportional, in number, to a given weight assigned to the given path; reassign buckets to different paths as the weights are adjusted to keep the set of buckets assigned to a given path proportional, in number, to a given weight assigned to the given path; wherein selecting the particular paths comprises, for a given packet of the particular packets: determining a key for the given packet based on contents of the given packet; determining a bucket that is mapped to the key; determining a specific path assigned to the bucket; selecting to send the given packet along the specific path. In an embodiment, the weight assignment component is further configured to: for each bucket of the buckets, store a last sent time at which the network device last handled a packet whose key mapped to the bucket; wait to reassign a given bucket that has been designated for reassignment on account of the adjusted weights until the last sent time stored for the bucket is older than a threshold age. According to an embodiment, a system comprises a network of nodes, each of the nodes being a network device configured to send, receive, and forward packets over the network, the nodes including: sending nodes configured to send and/or forward packets over network paths within the network; annotating nodes configured to annotate selected packets with state information as the selected packets traverse through the annotating nodes; reflecting nodes configured to reflect certain of the selected packets back to the sending nodes from which the selected packets were respectively sent and/or forwarded, or to collection nodes associated with the sending nodes; the collection nodes, configured to collect reflected packets and record and/or generate metrics based on the state information annotated to the reflected packets; action nodes, configured to reconfigure one or more settings affecting traffic flow on the network based on the metrics. In an embodiment, a given node in the network may function as any one or more of the sending nodes, annotating nodes, reflecting nodes, collection nodes, and/or action nodes, depending on whether the given node is sending, receiving, or forwarding network traffic, wherein at least some of the nodes function as both sending nodes and collection nodes, and wherein at least some of the nodes function as both annotating nodes and reflecting nodes. In an embodiment, the reflected packets are copies of packets, the reflecting nodes forwarding at least some of the packets from which the reflected packets are copied on to intended destinations of those packets; and the annotating nodes are configured to select packets to annotate based upon measures of delay or congestion associated with the annotating nodes and/or the packets, and/or wherein the reflecting nodes are configured to select packets to reflect based upon measures of delay or congestion associated with the reflecting nodes and/or the packets. In an embodiment, the state information includes one or more of a measure of delay along a path in the network, a measure of congestion at a node, a switch identifier, a timestamp, a buffer or queue fill level, or a buffer use count. In an embodiment, reconfiguring the one or more settings includes adjusting a rate associated with a particular traffic flow or adjusting a cost associated with a node or a link between nodes. 
In an embodiment, intermediate nodes between the reflecting nodes and the collecting nodes are configured to prioritize the reflected packets. In an embodiment, a given reflecting node is configured to reflect a tunneled packet in a tunnel, the tunnel being from a tunnel source node to a tunnel destination node, the given reflecting node not being the tunnel destination node, the given reflecting node configured to tunnel the reflected tunneled packet back to the tunnel source node, the tunnel source node being configured to forward the reflected tunneled packet to a given collection node. According to an embodiment, a method comprises: receiving packets at a first network device; for a first set of the packets, each packet in the first set meeting annotation criteria, annotating the packets in the first set with state information associated with the first network device; for a second set of the packets, each packet in the second set meeting reflection criteria, each packet in the second set having been annotated with state information associated with the first network device and/or one or more other network devices in a path through which the packet has traveled, reflecting the packets in the second set back to one or more collection points along paths through which the packets in the second set have respectively travelled; for a third set of the packets, including at least some of the packets in the first set, forwarding the packets in the third set to respective destinations identified by the packets in the third set. In an embodiment, the third set also includes at least some of the packets in the second set. In an embodiment, the annotation criteria and/or the reflection criteria include one or more of: whether a packet to be annotated is marked as a probe packet or a reflected packet, whether the packet to be annotated belongs to a particular traffic flow or queue, whether a measure of delay associated with the packet to be annotated exceeds a certain threshold, whether a measure of congestion at the first network device exceeds a certain threshold, and/or an annotation frequency. In an embodiment, annotating a given packet of the packets comprises one or more of: inserting a measure of delay or a measure of congestion associated with the first network device into a header of the given packet; or updating a measure of delay in the header of the given packet by adding a measure of delay associated with the first network device to a measure of delay previously annotated to the packet. In an embodiment, the state information includes one or more of a measure of delay at the first network device, a measure of congestion at the first network device, a switch identifier, a timestamp, a buffer or queue fill level, or a buffer use count. In an embodiment, reflecting a given packet comprises copying the given packet and sending the copy of the given packet back along a path from which the given packet came, the given packet being forwarded onward to a destination identified by the given packet. In an embodiment, reflecting a given packet comprises removing at least a portion of a payload of the given packet or of a copy of the given packet. In an embodiment, for a given packet in the second set, the collection point to which the given packet is reflected is a second network device through which the given packet traveled on its way to the first network device. 
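The following sketch illustrates, under simplifying assumptions, per-packet annotation and reflection decisions of the kind listed in the criteria above (probe marking, delay, congestion). The packet fields, thresholds, and the send_to_collector and forward stubs are assumptions of this example, not the disclosed device's interfaces.

```python
# Illustrative sketch of annotation and reflection decisions for a packet.

def send_to_collector(pkt):      # stand-in for reflecting toward a collection point
    print("reflected:", pkt)

def forward(pkt):                # stand-in for normal forwarding
    print("forwarded:", pkt)

def should_annotate(pkt, local_delay, congestion, delay_th=5.0, cong_th=0.8):
    return pkt.get("probe") or local_delay > delay_th or congestion > cong_th

def should_reflect(pkt, congestion, cong_th=0.8):
    # Never reflect a packet that is itself already a reflection.
    return not pkt.get("reflected") and (pkt.get("probe") or congestion > cong_th)

def process(pkt, node_id, local_delay, congestion):
    if should_annotate(pkt, local_delay, congestion):
        # Accumulate delay along the path and record this node's identity.
        pkt["path_delay"] = pkt.get("path_delay", 0.0) + local_delay
        pkt.setdefault("visited", []).append(node_id)
    if should_reflect(pkt, congestion):
        reflected = dict(pkt, reflected=True, payload=None)   # strip the payload
        send_to_collector(reflected)
    forward(pkt)

process({"probe": True, "payload": b"data"}, node_id="switch-3",
        local_delay=7.2, congestion=0.4)
```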
In an embodiment, the method further comprises: for a fourth set of packets, each packet in the fourth set marked as a reflected packet, performing one or more of: expediting forwarding of the packets in the fourth set, bypassing reflection logic on the packets in the fourth set to ensure that the packets in the fourth set are not reflected again, annotating the packets in the fourth set with state information, or taking one or more actions based at least partially upon state information annotated to the packets in the fourth set. In an embodiment, the method further comprises reflecting a given packet that is in a tunnel by tunneling the given packet back to a source device at which the tunnel began. According to an embodiment, an apparatus comprises: one or more communication interfaces configured to send, receive, and forward packets; annotation logic configured to, for a first set of the packets, each packet in the first set meeting annotation criteria, annotate the packets in the first set with state information associated with the first network device; reflection logic configured to, for a second set of the packets, each packet in the second set meeting reflection criteria, each packet in the second set having been annotated with state information associated with the first network device and/or one or more other network devices in a path through which the packet has traveled, reflect the packets in the second set back to one or more collection points along paths through which the packets in the second set have respectively travelled; forwarding logic configured to, for a third set of the packets, including at least some of the packets in the first set, forward the packets in the third set to respective destinations identified by the packets in the third set. In an embodiment, the annotation criteria and/or the reflection criteria include one or more of: whether a packet to be annotated is marked as a probe packet or a reflected packet, whether the packet to be annotated belongs to a particular traffic flow or queue, whether a measure of delay associated with the packet to be annotated exceeds a certain threshold, whether a measure of congestion at the first network device exceeds a certain threshold, and/or an annotation frequency. In an embodiment, annotating a given packet of the packets comprises one or more of: inserting a measure of delay or a measure of congestion associated with the first network device into a header of the given packet; or updating a measure of delay in the header of the given packet by adding a measure of delay associated with the first network device to a measure of delay previously annotated to the packet. In an embodiment, reflecting a given packet comprises copying the given packet and sending the copy of the given packet back along a path from which the given packet came, the given packet being forwarded onward to a destination identified by the given packet. In an embodiment, reflecting a given packet comprises removing at least a portion of a payload of the given packet or of a copy of the given packet. In an embodiment, for a given packet in the second set, the collection point to which the given packet is reflected is a second network device through which the given packet traveled on its way to the first network device. 
In an embodiment, the apparatus further comprises reflection handling logic configured to, for a fourth set of packets, each packet in the fourth set marked as a reflected packet, perform one or more of: expediting forwarding of the packets in the fourth set, bypassing reflection logic on the packets in the fourth set to ensure that the packets in the fourth set are not reflected again, annotating the packets in the fourth set with state information, or taking one or more actions based at least partially upon state information annotated to the packets in the fourth set. In an embodiment, the reflection logic is configured to reflect a given packet that is in a tunnel by tunneling the given packet back to a source device at which the tunnel began. In an embodiment, the state information includes one or more of a measure of delay at the first network device, a measure of congestion at the first network device, a switch identifier, a timestamp, a buffer or queue fill level, or a buffer use count. According to an embodiment, an apparatus comprises: one or more communication interfaces configured to receive packets from one or more devices over a network; queue management logic configured to queue the packets in one or more processing queues while the packets await processing by forwarding logic; the forwarding logic, configured to: process first packets of the packets and, based thereon, forward the first packets to destinations identified by the first packets; determine that a particular packet of the packets is to be dropped from a particular processing queue without being forwarded to a particular destination identified by the particular packet; in response to the determining that the particular packet is to be dropped, tag the particular packet with a visibility tag; forward the particular packet, with the visibility tag, to a visibility subsystem instead of the particular destination. In an embodiment, tagging the particular packet comprises embedding the visibility tag in a header of the particular packet or replacing a payload of the particular packet. In an embodiment, tagging the particular packet comprises associating the particular packet with sideband information that forms the visibility tag. In an embodiment, the visibility tag includes at least an identifier of the network device or an identifier of the particular processing queue. In an embodiment, tagging the particular packet comprises tagging one or more cells at the start of the particular packet, the forwarding logic further configured to discard one or more cells at the end of the particular packet before forwarding the particular packet to the visibility subsystem. In an embodiment, the visibility subsystem is a data collector executing external to the network device. In an embodiment, the visibility subsystem is a visibility packet processor within the network device, wherein forwarding the particular packet comprises moving the particular packet to a visibility queue associated with the visibility packet processor. In an embodiment, the visibility subsystem is configured to store the particular packet in a repository of visibility packets. In an embodiment, the visibility subsystem is a healing engine, the healing engine configured to: input a plurality of packets tagged with the visibility tag; based on the plurality of tagged packets, reconfigure the network device. In an embodiment, reconfiguring the network device comprises updating a forwarding table of the network device. 
In an embodiment, determining that the particular packet of the packets is to be dropped comprises one or more of: determining that the particular packet is corrupt, determining that a forwarding table look-up failure occurred with respect to a destination specified by the particular packet, determining that a resource constraint prevents the network device from using a particular resource to forward the particular packet, determining that the particular packet is experiencing a certain amount of latency, or determining that a policy prevents the network device from forwarding the particular packet. According to an embodiment, an apparatus comprises: one or more communication interfaces configured to receive packets from one or more devices over a network; queue management logic configured to queue the packets in one or more processing queues while the packets await processing by forwarding logic; the forwarding logic, configured to: process first packets of the packets and, based thereon, forward the first packets to destinations identified by the first packets; determine that a particular packet of the packets, in a particular processing queue, is undergoing inflated latency, the particular packet addressed to a particular destination; in response to the determining that the particular packet is experiencing inflated latency, duplicate the particular packet; tag the particular packet or the duplicate particular packet with a visibility tag; forward the tagged packet, with the visibility tag, to a visibility subsystem instead of the particular destination; forward the other of the particular packet or the duplicate particular packet to the particular destination. In an embodiment, the visibility subsystem is a healing engine, the healing engine configured to: input a plurality of packets tagged with the visibility tag; based on the plurality of tagged packets, reconfigure the network device. In an embodiment, tagging comprises tagging one or more cells at the start of the tagged packet, the forwarding logic further configured to discard one or more cells at the end of the tagged packet before forwarding the tagged packet to the visibility subsystem. According to an embodiment, a method comprises: receiving, at a network device, packets from one or more devices over a network; queueing the packets in one or more processing queues while the packets await processing by forwarding logic of the network device; based on the processing by the forwarding logic, forwarding first packets of the packets to destinations identified by the first packets; determining that a particular packet of the packets is to be dropped from a particular processing queue without being forwarded to a particular destination identified by the particular packet; in response to the determining that the particular packet is to be dropped, tagging the particular packet with a visibility tag; forwarding the particular packet, with the visibility tag, to a visibility subsystem instead of the particular destination. In an embodiment, the visibility tag includes at least an identifier of the network device or an identifier of the particular processing queue. In an embodiment, tagging the particular packet comprises tagging one or more cells at the start of the particular packet, the method further comprising discarding one or more cells at the end of the particular packet before forwarding the particular packet to the visibility subsystem. 
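As a purely illustrative sketch of the drop-visibility behavior described in the clauses above, the following example tags a packet selected for dropping with device and queue identifiers and diverts it to a visibility subsystem (for example, a visibility queue or external collector) rather than silently discarding it. The dictionary packet format and field names are assumptions of this example.

```python
# Illustrative sketch of tagging a to-be-dropped packet with a visibility tag
# and forwarding it to a visibility subsystem instead of its destination.

def handle_packet(pkt, device_id, queue_id, visibility_queue, drop_reason=None):
    if drop_reason is None:
        return ("forward", pkt)                  # normal forwarding path
    pkt["visibility_tag"] = {
        "device": device_id,                     # identifier of the network device
        "queue": queue_id,                       # identifier of the processing queue
        "reason": drop_reason,                   # e.g. "corrupt", "no_route", "policy"
    }
    visibility_queue.append(pkt)                 # consumed by a collector or healing engine
    return ("dropped_with_visibility", pkt)

visibility_q = []
handle_packet({"dst": "10.0.0.9"}, device_id="switch-1", queue_id=3,
              visibility_queue=visibility_q, drop_reason="no_route")
```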
In an embodiment, the visibility subsystem is a data collector executing external to the network device. In an embodiment, the visibility subsystem is a healing engine, the method further comprising: the healing engine inputting a plurality of packets tagged with the visibility tag; based on the plurality of tagged packets, the healing engine reconfiguring the network device. In an embodiment, reconfiguring the network device comprises updating a forwarding table of the network device. In an embodiment, determining that the particular packet of the packets is to be dropped comprises one or more of: determining that the particular packet is corrupt, determining that a forwarding table look-up failure occurred with respect to a destination specified by the particular packet, determining that a resource constraint prevents the network device from using a particular resource to forward the particular packet, determining that the particular packet is experiencing a certain amount of latency, or determining that a policy prevents the network device from forwarding the particular packet. According to an embodiment, a method comprises: receiving, at a network device, packets from one or more devices over a network; queueing the packets in one or more processing queues while the packets await processing by forwarding logic of the network device; based on the processing by the forwarding logic, forwarding first packets of the packets to destinations identified by the first packets; determining that a particular packet of the packets, in a particular processing queue, is undergoing inflated latency, the particular packet addressed to a particular destination; in response to the determining that the particular packet is experiencing inflated latency, duplicating the particular packet; tagging the particular packet or the duplicate particular packet with a visibility tag; forwarding the tagged packet, with the visibility tag, to a visibility subsystem instead of the particular destination; forwarding the other of the particular packet or the duplicate particular packet to the particular destination. In an embodiment, the visibility subsystem is a healing engine, the method further comprising: the healing engine inputting a plurality of packets tagged with the visibility tag; based on the plurality of tagged packets, the healing engine reconfiguring the network device. In an embodiment, tagging comprises tagging one or more cells at the start of the tagged packet, the method further comprising discarding one or more cells at the end of the tagged packet before forwarding the tagged packet to the visibility subsystem. 
According to an embodiment, an apparatus comprises: a programmable visibility engine bound to one or more input data sources, the programmable visibility engine comprising logic implementing a defined set of functions, the one or more input data sources specifying function selection data that selects which one or more functions in the defined set to execute, the programmable visibility engine configured to execute the selected one or more functions on one or more input values specified by the one or more input data sources; one or more data stores storing data output by the programmable visibility engine; an address map that maps memory locations in the one or more data stores to functions in the defined set of functions, the programmable visibility engine configured to write a result value of a given function of the defined set of functions to a given memory location, of the memory locations, that has been mapped to the given function. In an embodiment, the apparatus further comprises: one or more communication interfaces configured to receive packets over one or more networks; one or more memories storing queues of the packets in which the packets await processing by forwarding logic; wherein the one or more input data sources pass values calculated based on statistics related to the queues. In an embodiment, at least a first function of the defined set of functions instructs the forwarding logic to perform an action with respect to at least one packet based on a value output by the first function to the one or more data stores. In an embodiment, the action is dropping the packet, issuing a flow control instruction, marking the packet for rate control, sampling the packet and sending the packet to a special processor component for analysis, duplicating the packet and sending the duplicate packet to a data collector, or sending information about the packet to a healing engine. In an embodiment, at least a first function of the defined set of functions is further configured to trigger performance of an action by a processing component of the apparatus based on a value output by the first function. In an embodiment, the programmable visibility engine repeatedly executes the one or more functions selected by the function selection data in iterations, the function selection data changing between at least a first iteration and a second iteration. In an embodiment, the programmable visibility engine is a first programmable visibility engine of multiple programmable visibility engines in the apparatus, wherein a second programmable visibility engine is bound to first data output by the first programmable visibility engine as an input data source for the second programmable visibility engine. In an embodiment, the first data output includes function selection data for the second programmable visibility engine. In an embodiment, the second programmable visibility engine implements a different set of functions than the first programmable visibility engine. In an embodiment, the input data source for the first programmable visibility engine includes a memory location to which the second programmable visibility engine writes data. In an embodiment, the second programmable visibility engine inputs different function selection data than the first programmable visibility engine. 
In an embodiment, the defined set of functions includes two or more of: an accumulate-by-value function that updates a data store by summing the data store with an input value; a count function that updates a data store to indicate the number of times the count function has been called; a compare function that compares an input value to an input threshold and updates a data store to indicate true or false based on the comparison; a probabilistic function that causes performance of an action when a randomly selected number surpasses an inputted probability threshold; or an Exponentially Weighted Moving Average function that accepts an input value and uses the input value to update a weighted moving average in a data store. In an embodiment, the apparatus is a network switch. In an embodiment, the programmable visibility engine is implemented by one or more Field Programmable Gate Arrays or Application-Specific Integrated Circuits. In an embodiment, write operations to the data store from the programmable visibility engine are limited to a certain number per interval of time, wherein the functions in the defined set of functions are associated with prioritization data indicating priorities for selecting which of the selected one or more functions are to perform write operations in a given interval of time. According to an embodiment, a method comprises: binding a data input source to a programmable visibility engine configured to implement a defined set of functions; receiving one or more input values from the data input source; receiving function selection data, the function selection data selecting which one or more of the functions of the defined set of functions to execute on the one or more input values; executing the selected one or more functions on the one or more input values; identifying memory addresses mapped to the defined set of functions; writing results of the selected one or more functions to specific memory addresses mapped to the selected one or more functions. In an embodiment, the method further comprises: receiving packets over one or more networks; storing queues of the packets while the packets await processing by forwarding logic; wherein the one or more input data sources pass values calculated based on statistics related to the queues. In an embodiment, at least a first function of the defined set of functions instructs the forwarding logic to perform an action with respect to at least one packet based on a value output by the first function to the one or more data stores. In an embodiment, the action is dropping the packet, issuing a flow control instruction, marking the packet for rate control, sampling the packet and sending the packet to a special processor component for analysis, duplicating the packet and sending the duplicate packet to a data collector, or sending information about the packet to a healing engine. In an embodiment, at least a first function of the defined set of functions is configured to trigger performance of an action by a processing component based on a value output by the first function. In an embodiment, the method further comprises repeatedly executing the one or more functions selected by the function selection data in iterations, the function selection data changing between at least a first iteration and a second iteration. 
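The following sketch illustrates a defined function set of the kind listed above (accumulate-by-value, count, compare, probabilistic, and EWMA). Each function updates the data-store entry bound to it; the signatures and the dictionary data store are assumptions of this example rather than the disclosed hardware interface.

```python
# Illustrative sketch of a PVE-style defined function set operating on a
# dictionary that stands in for the data store.
import random

def accumulate(store, addr, value):
    store[addr] = store.get(addr, 0) + value            # running sum

def count(store, addr):
    store[addr] = store.get(addr, 0) + 1                 # number of calls

def compare(store, addr, value, threshold):
    store[addr] = value > threshold                      # True or False

def probabilistic(store, addr, probability):
    store[addr] = random.random() < probability          # True may trigger an action

def ewma(store, addr, value, weight=0.1):
    previous = store.get(addr, value)
    store[addr] = (1 - weight) * previous + weight * value

store = {}
accumulate(store, 0, 5)
count(store, 1)
compare(store, 2, value=42, threshold=30)
ewma(store, 3, value=42.0)
```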
In an embodiment, the programmable visibility engine is a first of multiple programmable visibility engines, wherein a second programmable visibility engine is bound to first data output by the first programmable visibility engine as an input data source for the second programmable visibility engine. In an embodiment, the defined set of functions includes two or more of: an accumulate-by-value function that updates a data store by summing the data store with an input value; a count function that updates a data store to indicate the number of times the count function has been called; a compare function that compares an input value to an input threshold and updates a data store to indicate true or false based on the comparison; a probabilistic function that causes performance of an action when a randomly selected number surpasses an inputted probability threshold; or an Exponentially Weighted Moving Average function that accepts an input value and uses the input value to update a weighted moving average in a data store. In an embodiment, the method is performed by a network switch. In an embodiment, the programmable visibility engine is implemented by one or more Field Programmable Gate Arrays or Application-Specific Integrated Circuits. Other examples of these and other embodiments are found throughout this disclosure. 8.0. Implementation Mechanism—Hardware Overview According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices. The special-purpose computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices, or any other device that incorporates hard-wired and/or program logic to implement the techniques. The special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques. Though the foregoing techniques are described with respect to a hardware implementation, which provides a number of advantages in certain embodiments, it will also be recognized that, in another embodiment, the foregoing techniques may still provide certain advantages when performed partially or wholly in software. Accordingly, in such an embodiment, a suitable implementing apparatus comprises a general-purpose hardware processor and is configured to perform any of the foregoing methods by executing program instructions in firmware, memory, other storage, or a combination thereof. FIG.18is a block diagram that illustrates a computer system1800that may be utilized in implementing the above-described techniques, according to an embodiment. Computer system1800may be, for example, a desktop computing device, laptop computing device, tablet, smartphone, server appliance, computing mainframe, multimedia device, handheld device, networking apparatus, or any other suitable device. Computer system1800may include one or more ASICs, FPGAs, or other specialized circuitry1803for implementing program logic as described herein. 
For example, circuitry1803may include fixed and/or configurable hardware logic blocks for implementing some or all of the described techniques, input/output (I/O) blocks, hardware registers or other embedded memory resources such as random access memory (RAM) for storing various data, and so forth. The logic blocks may include, for example, arrangements of logic gates, flip-flops, multiplexers, and so forth, configured to generate output signals based on logic operations performed on input signals. Additionally, and/or instead, computer system1800may include one or more hardware processors1804configured to execute software-based instructions. Computer system1800may also include one or more busses1802or other communication mechanisms for communicating information. Busses1802may include various internal and/or external components, including, without limitation, internal processor or memory busses, a Serial ATA bus, a PCI Express bus, a Universal Serial Bus, a HyperTransport bus, an Infiniband bus, and/or any other suitable wired or wireless communication channel. Computer system1800also includes one or more memories1806, such as a RAM, hardware registers, or other dynamic or volatile storage device for storing data units to be processed by the one or more ASICs, FPGAs, or other specialized circuitry1803. Memory1806may also or instead be used for storing information and instructions to be executed by processor1804. Memory1806may be directly connected or embedded within circuitry1803or a processor1804. Or, memory1806may be coupled to and accessed via bus1802. Memory1806also may be used for storing temporary variables, data units describing rules or policies, or other intermediate information during execution of program logic or instructions. Computer system1800further includes one or more read only memories (ROM)1808or other static storage devices coupled to bus1802for storing static information and instructions for processor1804. One or more storage devices1810, such as a solid-state drive (SSD), magnetic disk, optical disk, or other suitable non-volatile storage device, may optionally be provided and coupled to bus1802for storing information and instructions. A computer system1800may also include, in an embodiment, one or more communication interfaces1818coupled to bus1802. A communication interface1818provides a data communication coupling, typically two-way, to a network link1820that is connected to a local network1822. For example, a communication interface1818may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the one or more communication interfaces1818may include a local area network (LAN) card to provide a data communication connection to a compatible LAN. As yet another example, the one or more communication interfaces1818may include a wireless network interface controller, such as an 802.11-based controller, Bluetooth controller, Long Term Evolution (LTE) modem, and/or other types of wireless interfaces. In any such implementation, communication interface1818sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information. Network link1820typically provides data communication through one or more networks to other data devices. 
For example, network link1820may provide a connection through local network1822to a host computer1824or to data equipment operated by a Service Provider1826. Service Provider1826, which may for example be an Internet Service Provider (ISP), in turn provides data communication services through a wide area network, such as the world wide packet data communication network now commonly referred to as the “Internet”1828. Local network1822and Internet1828both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link1820and through communication interface1818, which carry the digital data to and from computer system1800, are example forms of transmission media. In an embodiment, computer system1800can send messages and receive data through the network(s), network link1820, and communication interface1818. In some embodiments, this data may be data units that the computer system1800has been asked to process and, if necessary, redirect to other computer systems via a suitable network link1820. In other embodiments, this data may be instructions for implementing various processes related to the described techniques. For instance, in the Internet example, a server1830might transmit a requested code for an application program through Internet1828, ISP1826, local network1822and communication interface1818. The received code may be executed by processor1804as it is received, and/or stored in storage device1810, or other non-volatile storage for later execution. As another example, information received via a network link1820may be interpreted and/or processed by a software component of the computer system1800, such as a web browser, application, or server, which in turn issues instructions based thereon to a processor1804, possibly via an operating system and/or other intermediate layers of software components. Computer system1800may optionally be coupled via bus1802to one or more displays1812for presenting information to a computer user. For instance, computer system1800may be connected via a High-Definition Multimedia Interface (HDMI) cable or other suitable cabling to a Liquid Crystal Display (LCD) monitor, and/or via a wireless connection such as a peer-to-peer Wi-Fi Direct connection to a Light-Emitting Diode (LED) television. Other examples of suitable types of displays1812may include, without limitation, plasma display devices, projectors, cathode ray tube (CRT) monitors, electronic paper, virtual reality headsets, braille terminals, and/or any other suitable device for outputting information to a computer user. In an embodiment, any suitable type of output device, such as, for instance, an audio speaker or printer, may be utilized instead of a display1812. One or more input devices1814are optionally coupled to bus1802for communicating information and command selections to processor1804. One example of an input device1814is a keyboard, including alphanumeric and other keys. Another type of user input device1814is cursor control1816, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor1804and for controlling cursor movement on display1812. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. 
Yet other examples of suitable input devices1814include a touch-screen panel affixed to a display1812, cameras, microphones, accelerometers, motion detectors, and/or other sensors. In an embodiment, a network-based input device1814may be utilized. In such an embodiment, user input and/or other information or commands may be relayed via routers and/or switches on a Local Area Network (LAN) or other suitable shared network, or via a peer-to-peer network, from the input device1814to a network link1820on the computer system1800. As discussed, computer system1800may implement techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs1803, firmware and/or program logic, which in combination with the computer system causes or programs computer system1800to be a special-purpose machine. According to one embodiment, however, the techniques herein are performed by computer system1800in response to processor1804executing one or more sequences of one or more instructions contained in main memory1806. Such instructions may be read into main memory1806from another storage medium, such as storage device1810. Execution of the sequences of instructions contained in main memory1806causes processor1804to perform the process steps described herein. The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device1810. Volatile media includes dynamic memory, such as main memory1806. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, or any other memory chip or cartridge. Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus1802. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor1804for execution. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and use a modem to send the instructions over a network, such as a cable network or cellular network, as modulated signals. A modem local to computer system1800can receive the data on the network and demodulate the signal to decode the transmitted instructions. Appropriate circuitry can then place the data on bus1802. Bus1802carries the data to main memory1806, from which processor1804retrieves and executes the instructions. The instructions received by main memory1806may optionally be stored on storage device1810either before or after execution by processor1804. 9.0. 
Extensions and Alternatives As used herein, the terms “first,” “second,” “certain,” and “particular” are used as naming conventions to distinguish queries, plans, representations, steps, objects, devices, or other items from each other, so that these items may be referenced after they have been introduced. Unless otherwise specified herein, the use of these terms does not imply an ordering, timing, or any other characteristic of the referenced items. In the drawings, the various components are depicted as being communicatively coupled to various other components by arrows. These arrows illustrate only certain examples of information flows between the components. Neither the direction of the arrows nor the lack of arrow lines between certain components should be interpreted as indicating the existence or absence of communication between the certain components themselves. Indeed, each component may feature a suitable communication interface by which the component may become communicatively coupled to other components as needed to accomplish any of the functions described herein. In the foregoing specification, embodiments of the inventive subject matter have been described with reference to numerous specific details that may vary from implementation to implementation. Thus, the sole and exclusive indicator of what is the inventive subject matter, and is intended by the applicants to be the inventive subject matter, is the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. In this regard, although specific claim dependencies are set out in the claims of this application, it is to be noted that the features of the dependent claims of this application may be combined as appropriate with the features of other dependent claims and with the features of the independent claims of this application, and not merely according to the specific dependencies recited in the set of claims. Moreover, although separate embodiments are discussed herein, any combination of embodiments and/or partial embodiments discussed herein may be combined to form further embodiments. Any definitions expressly set forth herein for terms contained in such claims shall govern the meaning of such terms as used in the claims. Hence, no limitation, element, property, feature, advantage or attribute that is not expressly recited in a claim should limit the scope of such claim in any way. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
DESCRIPTION OF EMBODIMENTS In the data communications field, a packet may arrive at a destination only after being forwarded by a plurality of forwarding apparatuses. The forwarding apparatus may be a router. The router may forward an IP packet. The forwarding apparatus may be a network switch. The network switch may forward an Ethernet frame. FIG.1is a networking architecture diagram according to this application. Referring toFIG.1, the networking architecture diagram includes seven routers: a router1to a router7. Each router may include a plurality of physical interface cards (PIC). The PIC may also be referred to as a network interface card (NIC). Each physical interface card may include a plurality of ports. For example, a physical interface card may include eight Gigabit Ethernet (GE) ports. A bandwidth of each GE port is 1 Gigabit per second.FIG.1shows two outbound ports (a first outbound port and a second outbound port) in the router1, and two outbound ports (a third outbound port and a fourth outbound port) in the router2. The router1is connected to the router2using the first outbound port. The router1is connected to the router3using the second outbound port. The router2is connected to the router4using the third outbound port. The router2is connected to the router5using the fourth outbound port. After receiving a packet, the router1determines an outbound port used to forward the packet, for example, the first outbound port, and forwards the packet from the first outbound port. After receiving the packet forwarded by the router1, the router2determines an outbound port used to forward the packet, for example, the third outbound port, and forwards the packet from the third outbound port. FIG.2is a schematic structural diagram of the router2inFIG.1in embodiments. Other routers (for example, the router1) inFIG.1may also use the structure shown inFIG.2. Referring toFIG.2, the router2includes a control board1210, a switching board1220, an interface board1230, and an interface board1240. The control board1210includes a central processing unit1211. The control board1210may be configured to execute a routing protocol. The routing protocol may be the Border Gateway Protocol (BGP) or the Interior Gateway Protocol (IGP). The control board1210may generate a routing table by executing a routing protocol, and send the routing table to the interface boards1230and1240. It should be noted that, the router2inFIG.1may also use a structure different from the structure shown inFIG.2. For example, the router2inFIG.1may include only a control board and an interface board, and does not include a switching board. Certainly, the router2inFIG.1may include more than two interface boards. When the router2includes only one interface board and does not include a switching board, after an IP packet received by an inbound port of the interface board is processed by the interface board, the IP packet may be sent from an outbound port of the interface board. When the router2includes a plurality of interface boards and includes a switching board, after an IP packet received by an inbound port of an interface board of the router2is processed by the switching board, the IP packet may be sent from an outbound port of another interface board of the router2. This application does not limit specific structures of the router2and other routers inFIG.1. The interface board1230may forward the IP packet by searching the routing table. 
Further, the interface board1230includes a central processing unit1231, a network processor1232, a physical interface card1233, and a memory1234. It should be noted that,FIG.2does not show all components that can be included in the interface board1230. In a specific implementation, the interface board1230may further include other components. For example, to enable the interface board1230to have a function of queue scheduling and management, the interface board1230may further include a traffic manager. In addition, to enable a packet from the interface board1230to be switched from the switching board1220to the interface board1240, the interface board1230may further include an ingress fabric interface chip (iFIC). For a specific implementation of the interface board1230including the traffic manager and the iFIC, refer toFIG.3and corresponding descriptions. The central processing unit1231may receive the routing table sent by the central processing unit1211, and store the routing table in the memory1234. The physical interface card1233may be configured to receive an IP packet sent by the router1. The network processor1232may search the routing table in the memory1234for a routing entry matching the IP packet received by the physical interface card1233, and send the IP packet to the switching board1220according to the matched routing entry. The switching board1220may be configured to switch an IP packet from one interface board to another interface board. For example, the switching board1220may switch the IP packet from the interface board1230to the interface board1240. Specifically, the switching board1220may switch the IP packet from the interface board1230to the interface board1240in a manner of cell switching. For example, the network processor1232may obtain a destination IP address in the IP packet. The network processor1232may search, according to a longest prefix match algorithm, the routing table for the routing entry matching the IP packet, and determine an outbound port according to the routing entry matching the IP packet. The routing entry matching the IP packet includes an identifier of the outbound port. Before the IP packet sent by the network processor1232to the switching board1220arrives at the switching board1220, the interface board1230may perform queue scheduling and management on the IP packet. Specifically, the interface board1230may perform queue scheduling and management on the IP packet using a traffic manager301inFIG.3. The interface board1240may forward the IP packet by searching the routing table. The interface board1240includes a central processing unit1241, a network processor1242, a physical interface card1243, and a memory1244.FIG.2does not show all components that can be included in the interface board1240. In a specific implementation, the interface board1240may further include other components. For example, to enable the interface board1240to have a function of queue scheduling and management, the interface board1240may further include a traffic manager. In addition, to enable the interface board1240to correctly receive a packet from the interface board1230through the switching board1220, the interface board1240may further include an egress fabric interface chip (eFIC). For a specific implementation of the interface board1240including the traffic manager and the eFIC, refer toFIG.4and corresponding descriptions. The central processing unit1241may receive the routing table sent by the central processing unit1211, and store the routing table in the memory1244. 
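For illustration, the longest-prefix-match lookup described above can be sketched in software. This is a minimal model only, with a hypothetical routing table and hypothetical port names; it does not reflect the actual implementation of the network processor1232.

```python
import ipaddress

# Hypothetical routing table: each entry maps an IP prefix to an outbound port.
ROUTING_TABLE = [
    (ipaddress.ip_network("10.0.0.0/8"), "first outbound port"),
    (ipaddress.ip_network("10.1.0.0/16"), "second outbound port"),
    (ipaddress.ip_network("0.0.0.0/0"), "default port"),
]

def longest_prefix_match(dst_ip: str):
    """Return the outbound port of the most specific prefix matching dst_ip."""
    addr = ipaddress.ip_address(dst_ip)
    best = None
    for network, port in ROUTING_TABLE:
        if addr in network:
            if best is None or network.prefixlen > best[0].prefixlen:
                best = (network, port)
    return best[1] if best else None

# A packet destined to 10.1.2.3 matches both 10.0.0.0/8 and 10.1.0.0/16;
# the /16 entry wins because it is the longest matching prefix.
print(longest_prefix_match("10.1.2.3"))   # second outbound port
print(longest_prefix_match("10.9.9.9"))   # first outbound port
```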
The network processor1242may be configured to receive an IP packet from the switching board1220. The IP packet from the switching board1220may be an IP packet sent by the router1and received by the physical interface card1233. The network processor1242may search the routing table in the memory1244for a routing entry matching the IP packet from the switching board1220, and send the IP packet to the physical interface card1243according to the matched routing entry. The physical interface card1243may be configured to send the IP packet to the router4. Before the IP packet sent by the network processor1242to the physical interface card1243arrives at the physical interface card1243, the interface board1240may perform queue scheduling and management on the IP packet. Specifically, the interface board1240may perform queue scheduling and management on the IP packet using a traffic manager402inFIG.4. A plurality of packets need to be transmitted in a network, and a time of sending each packet may be different. To reduce disorder of packets transmitted in the network, a router includes a memory. The memory may be a first in first out memory. The router may use the memory to perform queue scheduling and management on a to-be-forwarded packet. In addition, the router may receive a large quantity of packets within a short time, and the large quantity of packets may cause a congestion degree of a first in first out queue in the memory of the router to be relatively high. To reduce the congestion degree of the first in first out queue, the router may perform drop management on a packet to be enqueued to the first in first out queue. FIG.3is a schematic structural diagram of the interface board1230shown inFIG.2in a possible implementation. Referring toFIG.3, the interface board1230includes the network processor1232, the traffic manager301, a memory302, and an iFIC303. It should be noted that,FIG.3shows only some components included in the interface board1230. In a specific implementation, the interface board1230shown inFIG.3may further include a component in the interface board1230shown inFIG.2. The interface board shown inFIG.3can perform queue scheduling and management on upstream traffic. The upstream traffic may be traffic that is received by the interface board1230through the physical interface card1233and is to be sent to the switching board1220. Specifically, after a packet received through the physical interface card1233is processed by the network processor1232and the traffic manager301, the packet is sent to the iFIC303. After receiving the packet sent by the traffic manager301, the iFIC303may generate a plurality of cells according to the packet, and send the plurality of cells to the switching board1220. The packet queue may be a first in first out queue. The memory302may be a first in first out memory. It should be noted that, functions of the memory1234and the memory302are different. The memory1234is configured to store a routing table. The network processor searches the routing table by accessing the memory1234. The memory302is configured to store the first in first out queue. The traffic manager301manages the first in first out queue by accessing the memory302. Therefore, the memory1234and the memory302may be relatively independent memories. Further, the memory302is configured to store and maintain a packet queue. The packet queue includes a plurality of packets. 
The traffic manager301can perform enqueue management on a packet that is to enter the packet queue, and perform dequeue management on a packet that is to leave the packet queue. Further, the traffic manager301can store and maintain a packet descriptor queue. The packet descriptor queue includes a plurality of packet descriptors. The plurality of packets included in the packet queue correspond to the plurality of packet descriptors included in the packet descriptor queue on a one-to-one basis. Each packet descriptor is used to indicate information related to a corresponding packet. For example, the packet descriptor may include a storage location of the packet corresponding to the packet descriptor in the memory302. In addition, the packet descriptor may further include a time of entering the router2by the packet corresponding to the packet descriptor. Specifically, the time of entering the router2by the packet corresponding to the packet descriptor may be a time of receiving the packet corresponding to the packet descriptor by the physical interface card1233. The traffic manager301can perform enqueue management on the packet from the network processor1232. For example, the traffic manager301may determine, according to a WRED algorithm, whether to drop the packet from the network processor1232. The WRED algorithm defines a maximum queue threshold and a minimum queue threshold. Alternatively, the traffic manager301may determine, according to the WRED algorithm, whether to perform ECN marking on the packet from the network processor1232. For ECN marking, refer to descriptions in request for comments (RFC) 3168 published by the Internet Engineering Task Force (IETF), where content of a related part in the document is incorporated in this specification by reference in its entirety. Herein for brevity, details are not further described. Certainly, the traffic manager301may also determine, according to another algorithm, whether to drop the packet from the network processor1232. If the traffic manager301determines not to drop the packet from the network processor1232, the traffic manager301may store the packet in the packet queue of the memory302. Further, the traffic manager301may store the packet in a tail of the packet queue of the memory302. In addition, the traffic manager301generates, according to the storage location of the packet in the memory302, a packet descriptor corresponding to the packet, and stores the packet descriptor in the packet descriptor queue. Further, the traffic manager301may store the packet descriptor in a tail of the packet descriptor queue. The packet descriptor queue may be stored in the traffic manager301. Further, the packet descriptor queue may be stored in a queue manager in the traffic manager. For details, refer toFIG.6and descriptions about an embodiment inFIG.6. The traffic manager301can perform dequeue management on the packet queue stored in the memory302. For example, when the traffic manager301determines, according to weighted fair queuing (WFQ), that a packet in the packet queue stored in the memory302needs to be sent, the traffic manager301may send a scheduling signal to the memory302according to a head of the packet descriptor queue. Certainly, the traffic manager301may also determine, according to another queue scheduling algorithm, that the packet in the packet queue stored in the memory302needs to be sent. The scheduling signal includes a storage location of the packet in a head of the packet queue. 
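The enqueue and dequeue management just described, including the WRED-based drop decision and the packet descriptor queue, can be condensed into the following software sketch. It is a simplified model with hypothetical names; the linear drop-probability ramp, the threshold values, and the descriptor fields are assumptions made for illustration only.

```python
import random
from collections import deque

class TrafficManagerQueue:
    """Simplified enqueue/dequeue management for one first in first out packet queue."""

    def __init__(self, min_th, max_th, max_drop_prob=0.1):
        self.packets = deque()       # stands in for the packet queue in the memory
        self.descriptors = deque()   # packet descriptor queue kept by the traffic manager
        self.min_th, self.max_th, self.max_drop_prob = min_th, max_th, max_drop_prob

    def _wred_drop(self):
        """WRED-style decision: never drop below min_th, always drop at or above
        max_th, otherwise drop with a probability that grows with the queue length."""
        qlen = len(self.packets)
        if qlen < self.min_th:
            return False
        if qlen >= self.max_th:
            return True
        prob = self.max_drop_prob * (qlen - self.min_th) / (self.max_th - self.min_th)
        return random.random() < prob

    def enqueue(self, packet, arrival_time):
        if self._wred_drop():
            return False                          # packet dropped, not enqueued
        self.packets.append(packet)               # tail of the packet queue
        # Stands in for the descriptor's storage location and time of entering the router.
        self.descriptors.append({"packet_ref": packet, "arrival_time": arrival_time})
        return True

    def dequeue(self):
        """The scheduler decided to send: take the packet at the head of the queue
        and delete the corresponding packet descriptor."""
        if not self.packets:
            return None
        self.descriptors.popleft()
        return self.packets.popleft()
```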
The scheduling signal is used to instruct the memory302to provide the packet located in the head of the packet queue to the traffic manager301. The memory302provides the packet located in the head of the packet queue to the traffic manager301and deletes the sent packet from the packet queue. The traffic manager301obtains, from the memory302, the packet located in the head of the packet queue, and sends the packet to the iFIC303. After the traffic manager301sends the packet to the iFIC303, the traffic manager301deletes, from the packet descriptor queue, a packet descriptor corresponding to the sent packet. FIG.4is a schematic structural diagram of the interface board1240shown inFIG.2in a possible implementation. Referring toFIG.4, the interface board1240includes the network processor1242, the traffic manager402, a memory403, the physical interface card1243, and an eFIC401. It should be noted that,FIG.4shows only some components included in the interface board1240. In a specific implementation, the interface board1240shown inFIG.4may further include a component in the interface board1240shown inFIG.2. The interface board shown inFIG.4can perform queue scheduling and management on downstream traffic. The downstream traffic may be traffic that is received by the interface board1240through the switching board1220and is to be sent to the physical interface card1243. After receiving the downstream traffic, the physical interface card1243may send the downstream traffic to the router4through the third outbound port. After receiving a plurality of cells from the switching board1220, the eFIC401can generate a packet according to the plurality of cells, and send the packet to the network processor1242. The traffic manager402may perform drop management on the packet received by the network processor1242. The traffic manager402may perform enqueue management on the packet received by the network processor1242, and the network processor1242places the received packet in a packet queue of the memory403(for example, in a tail of the packet queue) according to scheduling of the traffic manager and a scheduling algorithm. The traffic manager402may perform dequeue management on the packet queue stored in the memory403. The packet queue may be a first in first out queue. The memory403may be a first in first out memory. The network processor1242and the memory403may be integrated in a chip. Alternatively, the network processor1242and the memory403may correspond to different chips. After the traffic manager402obtains the packet in the packet queue stored in the memory403, the traffic manager402may send the obtained packet to the physical interface card1243. The physical interface card1243may send the packet to the router4through the third outbound port. For a specific implementation of performing queue scheduling and management by the interface board shown inFIG.4, refer to descriptions about the embodiment corresponding toFIG.3. Details are not further described herein. FIG.5is a schematic flowchart of a packet processing method according to this disclosure. Referring toFIG.5, the method includes S501, S502, S503, and S504. The method shown inFIG.5is performed by a forwarding apparatus. For example, the forwarding apparatus may be a traffic manager. The traffic manager may be a component in a network apparatus. For example, the network apparatus may be a router, a network switch, a firewall, or a load balancer.
Specifically, the method shown inFIG.5may be performed by the traffic manager402shown inFIG.4. The traffic manager402may be located on the interface board1230or the interface board1240. S501. A forwarding apparatus receives a first packet. For example, the forwarding apparatus may be a network apparatus. The network apparatus may include a plurality of receive ports. Each receive port may be an Ethernet port. Referring toFIG.2, the forwarding apparatus may be the router2shown inFIG.2. The physical interface card1243may include a plurality of Ethernet ports. The plurality of receive ports may be a plurality of Ethernet ports included in the physical interface card1243. The forwarding apparatus may receive the first packet through a receive port in the plurality of receive ports. The first packet may be sent or forwarded by an upstream apparatus of the forwarding apparatus. For example, the forwarding apparatus and the upstream apparatus may be located in a same autonomous system (AS). The forwarding apparatus and the upstream apparatus are two peers. A BGP session is set up between the forwarding apparatus and the upstream apparatus. The forwarding apparatus uses the BGP session to receive the first packet from the upstream apparatus. For example, a traffic manager located in the network apparatus may be connected to a plurality of receive ports and a plurality of transmit ports of the network apparatus. For example, each transmit port may be a GE port. Each receive port may be a GE port. The traffic manager includes a transceiver. The traffic manager may receive a packet from a receive port of the network apparatus through the transceiver. After receiving the packet, the traffic manager may store the packet in a first memory. The traffic manager may send the packet stored in the first memory to a transmit port of the network apparatus through the transceiver. Referring toFIG.4, the forwarding apparatus may be the traffic manager402shown inFIG.4. The traffic manager402may include a plurality of transceivers (not shown in the figure). The traffic manager402may receive a packet from the network processor1242through a transceiver coupled with the network processor1242. After receiving the packet from the network processor1242, the traffic manager402may store the packet in the memory403. In addition, the traffic manager402may send the packet stored in the memory403to the physical interface card1243through a transceiver coupled with the physical interface card1243. The memory403may be a first in first out memory. The first packet belongs to a first packet flow. The forwarding apparatus includes a first transmit port and a first memory coupled with the first transmit port, the first memory is configured to store the packets that are in the first packet flow and received by the forwarding apparatus, and the first transmit port is configured to send the packets that are in the first packet flow and stored in the first memory. For example, the first packet may be an IP packet. The first packet flow may be a plurality of IP packets having a same quintuple. The quintuple includes a source IP address, a destination IP address, a source port, a destination port, and a protocol. The source IP address, the destination IP address, and the protocol are fields in a layer-3 header (for example, an IP header). The source port and the destination port are fields in a layer-4 header (for example, a transmission control protocol (TCP) header or a user datagram protocol (UDP) header).
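The quintuple-based flow classification described above can be sketched as follows; the field names and sample addresses are hypothetical, and the sketch assumes the layer-3 and layer-4 headers have already been parsed.

```python
from collections import namedtuple

# The quintuple that identifies an IP packet flow.
FlowKey = namedtuple("FlowKey", ["src_ip", "dst_ip", "src_port", "dst_port", "protocol"])

def flow_key(packet: dict) -> FlowKey:
    """Build the quintuple from already-parsed layer-3/layer-4 header fields."""
    return FlowKey(packet["src_ip"], packet["dst_ip"],
                   packet["src_port"], packet["dst_port"], packet["protocol"])

# Two packets with the same quintuple belong to the same packet flow.
p1 = {"src_ip": "192.0.2.1", "dst_ip": "198.51.100.2",
      "src_port": 12345, "dst_port": 443, "protocol": "TCP"}
p2 = dict(p1)
print(flow_key(p1) == flow_key(p2))   # True: same flow
```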
The first packet may also be a MPLS packet or an Ethernet frame. S502. The forwarding apparatus determines at least two types of information in four types of information related to the first packet. The four types of information are a first type of information, a second type of information, a third type of information, and a fourth type of information respectively. For example, the at least two types of information related to the first packet may include only the first type of information and the second type of information. Alternatively, the at least two types of information related to the first packet may include only the first type of information and the third type of information. Alternatively, the at least two types of information related to the first packet may include only the first type of information and the fourth type of information. Alternatively, the at least two types of information related to the first packet may include only the second type of information and the third type of information. Alternatively, the at least two types of information related to the first packet may include only the first type of information, the second type of information, and the third type of information. Alternatively, the at least two types of information related to the first packet may include only the first type of information, the second type of information, the third type of information, and the fourth type of information. For example, when the at least two types of information related to the first packet include only the first type of information and the second type of information, the forwarding apparatus may determine only the first type of information and the second type of information, and the forwarding apparatus may not perform an action of determining the third type of information. The forwarding apparatus may not perform an action of determining the fourth type of information either. Certainly, when the at least two types of information related to the first packet include only the first type of information and the second type of information, the forwarding apparatus needs to determine the first type of information and the second type of information. In addition, the forwarding apparatus may determine other information. For example, the forwarding apparatus may determine the third type of information. An engineer may use a command line to configure a type of information that needs to be determined by the forwarding apparatus such that the forwarding apparatus has a function of determining at least two types of information. The first type of information indicates a duration of staying in the first memory by the first packet flow when the first packet is received. For example, the first memory stores a first in first out queue. When receiving the packet in the first packet flow, the forwarding apparatus needs to enqueue the packet to the first in first out queue. When the packet is enqueued to the first in first out queue, the packet is located in a tail of the first in first out queue. When the packet is located in a head of the first in first out queue, the forwarding apparatus may dequeue the packet. After the packet is dequeued, the forwarding apparatus may send the first packet through the first transmit port. A duration from enqueuing the packet to the first in first out queue to dequeuing the packet from the first in first out queue is a duration of staying in the first memory by the packet. 
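The first type of information described above (the duration of staying in the first memory) could be tracked roughly as in the following sketch, which assumes per-packet enqueue and dequeue timestamps are available and takes the flow-level value as the average of the per-packet durations; the class and method names are hypothetical.

```python
import time

class ResidenceTracker:
    """Tracks how long packets of one packet flow stay in the first memory."""

    def __init__(self):
        self._enqueue_times = {}   # packet id -> enqueue time t1
        self._durations = []       # per-packet stay durations (t2 - t1)

    def on_enqueue(self, packet_id):
        self._enqueue_times[packet_id] = time.monotonic()        # record t1

    def on_dequeue(self, packet_id):
        t1 = self._enqueue_times.pop(packet_id, None)
        if t1 is not None:
            self._durations.append(time.monotonic() - t1)        # stay duration t2 - t1

    def flow_stay_duration(self):
        """Average stay duration of the flow, used as the first type of information."""
        if not self._durations:
            return 0.0
        return sum(self._durations) / len(self._durations)

# Usage: record t1 when packet 42 is enqueued and t2 when it is dequeued.
tracker = ResidenceTracker()
tracker.on_enqueue(42)
tracker.on_dequeue(42)
print(tracker.flow_stay_duration())
```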
The duration of staying in the first memory by the first packet flow may be an average value of durations of staying in the first memory by the plurality of packets in the first packet flow. For example, the forwarding apparatus may be a traffic manager. When receiving a packet1in the first packet flow, the traffic manager may first enqueue the packet1to the first in first out queue, and store an enqueuing time t1of the packet1. The traffic manager may store a dequeuing time t2of the packet1when dequeuing the packet from the first in first out queue. The traffic manager may determine a duration of staying in the first memory by the packet1according to a difference between t2and t1. The traffic manager may use the difference between the t2and the t1as the duration of staying in the first memory by the first packet flow when the packet1is received. In addition, the traffic manager may receive a plurality of packets in the first packet flow, perform enqueue management and dequeue management on the plurality of packets, and obtain an enqueuing time and a dequeuing time of each packet. The traffic manager may obtain a duration of staying in the first memory by each packet in the plurality of packets by referring to the procedure for processing the packet1. The traffic manager may use the average value of the plurality of durations as the duration of staying in the first memory by the first packet flow when the first packet is received. The second type of information indicates usage of the first memory when the first packet is received. For example, the first memory may be only used to store the packet in the first packet flow. Alternatively, the first memory may be not only used to store the packet of the first packet flow, but also used to store packets of other packet flows. Before the first packet flow is forwarded by the forwarding apparatus through the first transmit port, the first memory needs to store the first packet flow. It may be understood that, when the usage of the first memory is relatively low, a congestion degree of the first packet flow may be considered as relatively low. In this case, a drop algorithm of a relatively low drop degree may be used to perform drop management on the first packet flow. For example, when the usage of the first memory is 5%, available storage space of the first memory is relatively large. A WRED algorithm of a relatively low drop degree may be used to perform drop management on the first packet flow. Thereby, most packets of the first packet flow that are received by the forwarding apparatus may be stored in the first memory. When the usage of the first memory is relatively high, it may be considered that a congestion degree of the first packet flow is relatively high. In this case, a drop algorithm of a relatively high drop degree may be used to perform drop management on the first packet flow. For example, when the usage of the first memory is 95%, the available storage space of the first memory is relatively small. A WRED algorithm of a relatively high drop degree may be used to perform drop management on the first packet flow. Thereby, only a few packets of the first packet flow that are received by the forwarding apparatus may be stored in the first memory. For example, the forwarding apparatus may be a traffic manager. The traffic manager may store a size of storage space of the first memory. 
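The second type of information described above (usage of the first memory) reduces to a single quotient. The following is a minimal sketch under the assumption that the memory reports either its used or its available storage space; the byte values are illustrative.

```python
def memory_usage(total_bytes, used_bytes=None, available_bytes=None):
    """Usage of the first memory when a packet is received.

    Either the used size or the available size may be reported by the memory;
    whichever is missing is derived from the total size.
    """
    if used_bytes is None:
        used_bytes = total_bytes - available_bytes
    return used_bytes / total_bytes

# 5% usage: plenty of room, so a low-drop-degree WRED profile may be applied.
print(memory_usage(total_bytes=1_000_000, used_bytes=50_000))       # 0.05
# 95% usage: little room, so a high-drop-degree WRED profile may be applied.
print(memory_usage(total_bytes=1_000_000, available_bytes=50_000))  # 0.95
```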
The size of the storage space of the first memory is equal to a sum of a size of the available storage space of the first memory and a size of used storage space of the first memory. When the first memory does not store any packet, the size of the available storage space of the first memory is equal to the size of the storage space of the first memory. When the first memory stores a packet, the size of the available storage space of the first memory is less than the size of the storage space of the first memory. The first memory may send the size of the used storage space of the first memory to the traffic manager. The traffic manager may use a quotient obtained by dividing the size of the used storage space of the first memory by the size of the storage space of the first memory, as the usage of the first memory when the first packet is received. Alternatively, the first memory may send the size of the available storage space of the first memory to the traffic manager. The traffic manager may use a difference between the size of the storage space of the first memory and the size of the available storage space of the first memory, as the size of the used storage space of the first memory. Further, the traffic manager may use a quotient obtained by dividing the size of the used storage space of the first memory by the size of the storage space of the first memory, as the usage of the first memory when the first packet is received. The third type of information indicates whether the first packet flow is a victim of a congestion control mechanism when the first packet is received. A class of service of the first packet flow is a first class of service. When the forwarding apparatus receives a backpressure signal corresponding to the first class of service through the transmit port of the forwarding apparatus, the first packet flow is a victim of the congestion control mechanism. When the forwarding apparatus does not receive a backpressure signal corresponding to the first class of service through the transmit port of the forwarding apparatus, the first packet flow is not a victim of the congestion control mechanism. For example, the first packet flow may be an Ethernet frame flow. A frame header of an Ethernet frame may include a class of service. The class of service is a field of three bits. The value of the field is a value from 0 to 7. The foregoing eight values are also referred to as CS0to CS7. The first class of service may be one of CS0to CS7. For example, the first class of service may be CS0. For example, the forwarding apparatus may be a traffic manager. The transmit port of the forwarding apparatus may be a transceiver of the traffic manager. The forwarding apparatus may include at least one transmit port. The at least one transmit port is configured to send the first packet flow. Further, the traffic manager may send, through the at least one transmit port, the first packet flow stored in the first memory. For example, the congestion control mechanism may be a backpressure mechanism. For example, the following describes the backpressure mechanism using an example in which the forwarding apparatus is a traffic manager, the transmit port of the forwarding apparatus is a transceiver of the traffic manager, and the receive port of the forwarding apparatus is the transceiver of the traffic manager. The transceiver of the traffic manager receives the backpressure signal. The backpressure signal may be triggered by downstream congestion of the traffic manager. 
For example, the transmit port of the network apparatus of the traffic manager is congested. Alternatively, a next-hop network apparatus of the network apparatus of the traffic manager is congested. The backpressure signal may comply with priority-based flow control (PFC) defined by the Institute of Electrical and Electronics Engineers (IEEE) standard 802.1Qbb. Alternatively, the backpressure signal may comply with a pause frame defined in standard 802.3x published by the IEEE. The backpressure signal may be used to indicate that a packet flow of the first class of service is congested. For example, a packet flow of the class of service CS0is congested. It should be noted that, the traffic manager may be configured to forward a packet flow whose class of service is the first class of service. The traffic manager may also be configured to forward a plurality of packet flows whose class of service is the first class of service. In addition, the traffic manager may be further configured to forward a packet flow of another class of service. For example, the traffic manager is further configured to forward a packet flow whose class of service is CS2. After the traffic manager receives the backpressure signal, the traffic manager may reduce a rate of sending packets of the first class of service to the downstream, to mitigate the downstream congestion. The traffic manager may maintain a backpressure waterline. The traffic manager determines, according to the backpressure waterline, whether to send a backpressure signal to the upstream of the traffic manager. For example, the upstream of the traffic manager may be a receive port of the network apparatus of the traffic manager. Alternatively, the upstream of the traffic manager may be a previous-hop network apparatus of the network apparatus of the traffic manager. When the traffic manager determines that a length of a packet queue stored in the memory is equal to or greater than the backpressure waterline, the traffic manager determines that a backpressure signal needs to be sent to the upstream. When the traffic manager needs to send a backpressure signal to the upstream, the traffic manager may send the backpressure signal to the upstream through the transceiver of the traffic manager. When the traffic manager receives a backpressure signal from the downstream, the traffic manager needs to reduce a rate of sending the packet flow of the first class of service to the downstream, to mitigate the downstream congestion. When the traffic manager performs enqueue management on the received packet according to the WRED algorithm, the traffic manager needs to determine, according to a relationship between the length of the packet queue and the minimum queue threshold and the maximum queue threshold corresponding to the WRED algorithm, whether the received packet needs to be dropped, or whether ECN marking needs to be performed on the received packet. When determining that the received packet needs to be dropped, the traffic manager avoids enqueuing the received packet to the packet queue. Alternatively, when determining that ECN marking needs to be performed on the received packet, the traffic manager enqueues, to the packet queue, the packet on which ECN marking is performed.
In a case in which the rate of sending the packet flow of the first class of service by the traffic manager to the downstream needs to be reduced, compared with a case in which the rate of sending the packet flow of the first class of service by the traffic manager to the downstream does not change, after a shorter duration, a length of the packet queue that is stored in the memory and includes the packet flow of the first class of service may reach the minimum queue threshold or the maximum queue threshold. Therefore, compared with the case in which the rate of sending the packet flow of the first class of service by the traffic manager to the downstream does not change, a drop probability of the packet of the first class of service received by the traffic manager increases, or a probability of ECN marking of the packet of the first class of service received by the traffic manager increases. Therefore, the packet flow of the first class of service is a victim of the congestion control mechanism. It is assumed that a class of service of a packet flow1is CS1, and that a class of service of a packet flow2is CS2. The traffic manager is configured to forward the packet flow1and the packet flow2. The traffic manager performs enqueue management and dequeue management on the packet flow1. Further, the traffic manager uses a WRED1algorithm to perform enqueue management on the packet flow1. A minimum queue threshold corresponding to the WRED1algorithm is MIN1, and a maximum queue threshold corresponding to the WRED1algorithm is MAX1. The traffic manager may determine, according to the WRED1algorithm, whether a packet in the packet flow1needs to be dropped. When determining that the packet in the packet flow1needs to be dropped, the traffic manager drops the packet. When determining that the packet in the packet flow1does not need to be dropped, the traffic manager enqueues the packet to a first in first out queue. Optionally, the traffic manager may determine, according to the WRED1algorithm, whether ECN marking needs to be performed on the packet in the packet flow1. When determining that ECN marking needs to be performed, the traffic manager performs ECN marking on the packet. When determining that ECN marking does not need to be performed, the traffic manager avoids performing ECN marking on the packet. The traffic manager receives a backpressure signal1sent by the downstream, where the backpressure signal1is used to indicate that the packet flow whose class of service is CS1is congested. After receiving the backpressure signal1, the traffic manager reduces a rate of sending the packet flow1to the downstream. Consequently, a duration required for a length of the first in first out queue including the packet flow1to reach the MIN1or the MAX1becomes shorter. For example, if the traffic manager does not receive the backpressure signal1, the duration required for the length of the first in first out queue including the packet flow1to reach the MIN1is 2 seconds. If the traffic manager receives the backpressure signal1, the duration required for the length of the first in first out queue including the packet flow1to reach the MIN1is 1 second. In other words, compared with the case in which no backpressure signal is received, when the traffic manager receives the backpressure signal, a drop probability of the packet in the packet flow1is increased, or a probability of ECN marking of the packet in the packet flow1is increased. Therefore, the packet flow1is a victim of the congestion control mechanism. 
The packet flow2is not a victim of the congestion control mechanism. For example, the forwarding apparatus may be a traffic manager. The backpressure signal may be a pause frame. The pause frame carries an identifier of a class of service of a congested packet flow. After receiving the pause frame, the traffic manager may parse the pause frame, and therefore determine the class of service of the congested packet flow. The class of service of the congested packet flow may be the first class of service or may be another class of service. In addition, the first packet received by the traffic manager may be sent by the network processor. The network processor may parse the first packet and therefore determine the class of service of the first packet. The network processor may send the class of service of the first packet, that is, the first class of service, to the traffic manager. The traffic manager may determine, according to the class of service of the first packet and the class of service of the congested packet flow, whether the first packet flow is a victim of the congestion control mechanism. When the class of service of the first packet is the same as that of the congested packet flow, the traffic manager may determine that the first packet flow is a victim of the congestion control mechanism. When the class of service of the first packet is different from that of the congested packet flow, the traffic manager may determine that the first packet flow is not a victim of the congestion control mechanism. The fourth type of information indicates a drop priority of the first packet. The drop priority of the first packet is determined based on a field used to indicate a scheduling priority in the first packet. Alternatively, the drop priority of the first packet is determined based on protocol drop sensitivity of the first packet. For example, the field used to indicate the scheduling priority in the first packet is an EXP field in an MPLS label, a PCP field in a VLAN tag, or a DSCP field in an IP header. For example, the protocol drop sensitivity indicates a degree of reducing a rate of sending the protocol packet by a source of the protocol packet because the protocol packet is dropped or is ECN-marked in a transmission process. For example, a source of a TCP packet is a TCP sender. According to TCP, after the TCP packet is dropped in a transmission process, the TCP sender may reduce a rate of sending the TCP packet. According to the UDP, after a UDP packet is dropped in a transmission process, a source of the UDP packet does not reduce a rate of sending the UDP packet. Therefore, TCP drop sensitivity is higher than UDP drop sensitivity. For example, if a protocol of the packet flow1is a TCP packet flow, and a protocol of the packet flow2is a UDP packet flow, drop sensitivity of the packet flow1is higher than drop sensitivity of the packet flow2. For example, the forwarding apparatus may be a traffic manager. The traffic manager stores a mapping table1between scheduling priorities and drop priorities. Specifically, the mapping table1includes a plurality of entries, and each entry includes a value of a scheduling priority and a value of a drop priority corresponding to the scheduling priority. After receiving the first packet, the traffic manager may parse the first packet to obtain the field used to indicate the scheduling priority in the first packet. 
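The third and fourth types of information described above amount to a class-of-service comparison and a table lookup. The sketch below uses hypothetical table contents (the scheduling-priority values and drop-priority numbers are placeholders) and hypothetical function names; it does not reproduce the traffic manager's actual mapping table1or mapping table2.

```python
# Hypothetical mapping table 1: scheduling priority value (e.g. a DSCP/EXP/PCP value)
# -> drop priority. The concrete numbers are placeholders.
SCHED_PRIO_TO_DROP_PRIO = {46: 1, 26: 2, 0: 4}
# Hypothetical mapping table 2: protocol -> drop priority, reflecting the idea that
# drop sensitivity differs between protocols (values are placeholders).
PROTOCOL_TO_DROP_PRIO = {"TCP": 1, "UDP": 3}

def is_victim(packet_cos, backpressured_cos):
    """Third type: the flow is a victim if a backpressure signal was received
    for the packet's class of service (for example, a PFC frame for CS0)."""
    return packet_cos in backpressured_cos

def drop_priority(sched_prio=None, protocol=None):
    """Fourth type: derived from the scheduling-priority field or, alternatively,
    from the protocol of the packet."""
    if sched_prio is not None:
        return SCHED_PRIO_TO_DROP_PRIO[sched_prio]
    return PROTOCOL_TO_DROP_PRIO[protocol]

print(is_victim("CS0", backpressured_cos={"CS0"}))  # True: the flow is a victim
print(drop_priority(protocol="TCP"))                # 1 (placeholder value)
```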
The traffic manager may search, using the value of the field used to indicate the scheduling priority in the first packet as a search keyword, the mapping table1for an entry1matching the field used to indicate the scheduling priority in the first packet. A value of a scheduling priority included in the entry1is equal to the value of the field used to indicate the scheduling priority in the first packet. The traffic manager may use a drop priority corresponding to a value of a drop priority in the entry1as the drop priority of the first packet. For example, the forwarding apparatus may be a traffic manager. The traffic manager stores a mapping table2between protocols and drop priorities. Specifically, the mapping table2includes a plurality of entries, and each entry stores a type of a protocol and a value of a drop priority corresponding to the protocol. After receiving the first packet, the traffic manager may parse the first packet to obtain a type of a protocol of the first packet. The traffic manager may search, using the type of the protocol of the first packet as a search keyword, the mapping table2for an entry2matching the type of the protocol of the first packet. A type of a protocol included in the entry2is the same as the type of the protocol of the first packet. The traffic manager may use a drop priority corresponding to a value of a drop priority in the entry2as the drop priority of the first packet. S503. The forwarding apparatus determines, based on the at least two types of information related to the first packet, whether ECN marking needs to be performed on the first packet. For example, the forwarding apparatus may maintain a mapping table. Further, the traffic manager of the forwarding apparatus may maintain the mapping table. The mapping table may include a plurality of entries. An entry of the mapping table may store the at least two types of information related to the first packet, an identifier of the WRED algorithm, and the maximum queue threshold and the minimum queue threshold corresponding to the WRED algorithm. When the forwarding apparatus receives the first packet, the forwarding apparatus may search, using the at least two types of information related to the first packet as a search keyword, the mapping table for an entry matching the at least two types of information related to the first packet. The forwarding apparatus may determine, according to a maximum queue threshold and a minimum threshold in the matched entry, whether ECN marking needs to be performed on the first packet. For example, when the forwarding apparatus detects that the forwarding apparatus receives the first packet, the length of the first in first out queue in the first memory is L. When L is less than the minimum queue threshold in the matched entry, the forwarding apparatus determines that ECN marking does not need to be performed on the first packet. When L is greater than the maximum queue threshold in the matched entry, the forwarding apparatus determines that ECN marking needs to be performed on the first packet. When L is greater than the minimum queue threshold in the matched entry and less than the maximum queue threshold in the matched entry, the forwarding apparatus determines that ECN marking may be performed on the first packet. The forwarding apparatus may further determine, according to other factors, whether ECN marking needs to be performed on the first packet. Before performing S503, the forwarding apparatus may determine whether the first packet is ECN-capable. 
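Combining the ECN-capability check and the threshold comparison described above, a simplified handling routine might look as follows. The ECT and CE codepoint values follow RFC 3168, which is cited earlier in this description; the linear ramp in the region between the two thresholds is an assumption for illustration only.

```python
import random

ECT_CODEPOINTS = {"10", "01"}   # ECT(0)/ECT(1): the packet is ECN-capable (RFC 3168)
CE = "11"                       # congestion experienced codepoint (RFC 3168)

def handle_packet(ecn_field, queue_len, min_th, max_th, max_prob=0.1):
    """Return (ecn_field_to_write, dropped) for an arriving packet."""
    congested = False
    if queue_len >= max_th:
        congested = True
    elif queue_len >= min_th:
        # Probabilistic region between the thresholds (linear ramp is an assumption).
        congested = random.random() < max_prob * (queue_len - min_th) / (max_th - min_th)
    if not congested:
        return ecn_field, False      # enqueue the packet unchanged
    if ecn_field in ECT_CODEPOINTS:
        return CE, False             # ECN-capable: mark instead of dropping
    return ecn_field, True           # not ECN-capable ('00'): drop

print(handle_packet("10", queue_len=150, min_th=50, max_th=100))   # ('11', False)
print(handle_packet("00", queue_len=150, min_th=50, max_th=100))   # ('00', True)
```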
If determining that the first packet is already ECN-capable, the forwarding apparatus may perform S503. For example, when the first packet includes an ECN-capable transport codepoint (ECT codepoint) ‘10’, the forwarding apparatus may determine that the first packet is already ECN-capable. When the first packet includes an ECT codepoint ‘01’, the forwarding apparatus may determine that the first packet is already ECN-capable. When a value of an ECN field included in the first packet is equal to ‘00’, the forwarding apparatus may determine that the first packet is not ECN-capable. Optionally, the forwarding apparatus may not perform an action of determining whether ECN marking needs to be performed on the first packet, but determines, based on the at least two types of information related to the first packet, whether the first packet needs to be dropped. Further, the forwarding apparatus may determine, according to the maximum queue threshold and the minimum queue threshold in the matched entry, whether the first packet needs to be dropped. For example, when the forwarding apparatus detects that the forwarding apparatus receives the first packet, the length of the first in first out queue in the first memory is L. When L is less than the minimum queue threshold in the matched entry, the forwarding apparatus determines that the first packet does not need to be dropped. When L is greater than the maximum queue threshold in the matched entry, the forwarding apparatus determines that the first packet needs to be dropped. When L is greater than the minimum queue threshold in the matched entry and less than the maximum queue threshold in the matched entry, the forwarding apparatus determines that the first packet may be dropped. The forwarding apparatus may further determine, according to other factors, whether the first packet needs to be dropped. Before the forwarding apparatus determines whether the first packet needs to be dropped, the forwarding apparatus may determine whether the first packet is already ECN-capable. If determining that the first packet is not ECN-capable, the forwarding apparatus may perform an action of determining whether the first packet needs to be dropped. For example, when the value of the ECN field included in the first packet is equal to ‘00’, the forwarding apparatus may determine that the first packet is not ECN-capable. Further, the forwarding apparatus may perform the action of determining whether the first packet needs to be dropped. Optionally, the forwarding apparatus may not perform the action of determining whether ECN marking needs to be performed on the first packet, but determines, based on the at least two types of information related to the first packet, whether the first packet needs to be dropped. For a specific implementation, refer to the foregoing processing step of determining whether the first packet needs to be dropped after determining that the first packet is not ECN-capable. Details are not further described herein. Table 1 is a possible schematic diagram of the mapping table. Referring to Table 1, the mapping table may include an identifier of an entry, the first type of information, the second type of information, the third type of information, the fourth type of information, the identifier of the WRED algorithm, the maximum queue threshold, and the minimum queue threshold. The mapping table shown in Table 1 includes four entries. 
The first type of information, the second type of information, the third type of information, and the fourth type of information belong to a match field. A search keyword is compared with the match field such that an entry matching the search keyword is found from the mapping table. In a specific implementation, the mapping table may use a structure different from Table 1. For example, the mapping table may include a plurality of associated sub tables. For example, the mapping table may include Table 2 and Table 3. The mapping table may include more or fewer entries. In addition, the mapping table may include more or fewer columns. For example, the match field may include only two types of information (for example, the first type of information and the fourth type of information). The match field may include only three types of information (for example, the second type of information, the third type of information, and the fourth type of information). The forwarding apparatus needs to determine a quantity of types of information related to the first packet, and the quantity of types of information included in the match field is related to the determined quantity. For example, in S502, when the forwarding apparatus determines that the quantity of types of information related to the first packet is 2, the match field may include only two types of information (for example, the second type of information and the third type of information). For another example, in S502, when the forwarding apparatus determines that the quantity of types of information related to the first packet is 3, the match field may include only three types of information (for example, the first type of information, the second type of information, and the third type of information). In the mapping table shown in Table 1, the first type of information is indicated by a range. For example, a value of the first type of information in an entry1is 3 to 5 seconds. In a specific implementation, alternatively, a value of the first type of information may not be a range. For example, the value of the first type of information is 1 second. In the mapping table shown in Table 1, the second type of information is indicated by a range. For example, a value of the second type of information in an entry2is 70% to 80%. In a specific implementation, alternatively, a value of the second type of information may not be a range. For example, the value of the second type of information is 75%. In addition, in different entries, values of the first type of information may be equal or may overlap partially. For example, the value of the first type of information in an entry4and the value of the first type of information in an entry3overlap partially. In different entries, values of the second type of information may be equal or may overlap partially. For example, the value of the second type of information in the entry3and the value of the second type of information in the entry4overlap partially.

TABLE 1
Identifier of an entry | First type of information | Second type of information | Third type of information | Fourth type of information | Identifier of a WRED algorithm | Maximum queue threshold | Minimum queue threshold
Entry 1 | 1 second | 60% to 70% | Victim | Drop priority 1 | WRED1 | MAX1 | MIN1
Entry 2 | 2 seconds | 70% to 80% | Not a victim | Drop priority 2 | WRED2 | MAX2 | MIN2
Entry 3 | 3 to 5 seconds | 20% to 25% | Victim | Drop priority 3 | WRED3 | MAX3 | MIN3
Entry 4 | 4 to 10 seconds | 20% to 30% | Not a victim | Drop priority 4 | WRED4 | MAX4 | MIN4

S504.
When determining that ECN marking needs to be performed on the first packet, the forwarding apparatus performs ECN marking on the first packet. In a specific implementation, when the forwarding apparatus determines that ECN marking needs to be performed on the first packet, the forwarding apparatus may change the value of the ECN field included in the first packet from ‘10’ to ‘11’. Alternatively, the forwarding apparatus may change the value of the ECN field included in the first packet from ‘01’ to ‘11’. After the forwarding apparatus performs ECN marking on the first packet, the forwarding apparatus may store, in the first memory, the first packet on which ECN marking is performed. Specifically, the first packet on which ECN marking is performed may be enqueued to the first in first out queue stored in the first memory. Alternatively, the forwarding apparatus may not perform ECN marking on the first packet, but stores the first packet in the first memory when determining that the first packet does not need to be dropped. Further, if the forwarding apparatus determines that the first packet needs to be dropped, the forwarding apparatus may drop the first packet. If the forwarding apparatus determines that the first packet does not need to be dropped, the forwarding apparatus may store the first packet in the first memory. When the forwarding apparatus stores the first packet in the first memory, the forwarding apparatus may enqueue the first packet to the first in first out queue stored in the first memory. With reference toFIG.4andFIG.6, the following describes the method shown inFIG.5using an example.FIG.6is a schematic structural diagram of a traffic manager according to this application. For example, a traffic manager600shown inFIG.6may be configured to implement the traffic manager402inFIG.4or the traffic manager301inFIG.3. Referring toFIG.6, the traffic manager600includes a scheduler601, a queue manager602, a WRED circuit603, and an active queue management (AQM) circuit604. The AQM circuit604is coupled with the WRED circuit603. The WRED circuit603is coupled with the queue manager602. The queue manager602is coupled with the scheduler601. The queue manager602is coupled with the AQM circuit604. In addition, the AQM circuit604is coupled with the network processor1242inFIG.4. The AQM circuit604is coupled with the memory403. The network processor1242can receive an IP packet. The network processor1242can send the IP packet to the AQM circuit604. The AQM circuit604can determine information related to the IP packet. The following uses an example to describe a specific implementation of determining information related to the IP packet by the AQM circuit604. For example, the network processor1242can parse the IP packet to obtain a field used to indicate a scheduling priority in the IP packet. For example, the network processor1242can obtain a DSCP field from an IP header of the IP packet. The network processor1242can send the DSCP field to the AQM circuit604. The AQM circuit604may maintain a mapping table between DSCP fields and drop priorities. The AQM circuit604may determine a drop priority of the IP packet according to the mapping table between DSCP fields and drop priorities that is stored in the AQM circuit604. For example, the forwarding apparatus may forward a plurality of packet flows. 
The network processor1242can store an identifier of a receive port used to receive each packet flow in the plurality of packet flows, and an identifier of a transmit port used to send each packet flow in the plurality of packet flows, and feature information (for example, a quintuple) of each packet flow in the plurality of packet flows. In addition, the network processor1242can detect whether a receive port that triggers transmission of a backpressure signal to the receive port of the forwarding apparatus due to congestion exists in the forwarding apparatus. After the network processor1242receives the IP packet, the network processor1242may parse the IP packet to obtain feature information of the IP packet. The network processor1242may determine, according to the feature information of the IP packet, a receive port, a transmit port, and a class of service of a packet flow to which the IP packet belongs. The network processor1242can send the foregoing information of the IP packet to the traffic manager. After receiving the backpressure signal, the traffic manager may parse the backpressure signal to determine a class of service of a congested packet flow. The traffic manager may compare the class of service of the IP packet and the class of service of the congested packet flow to determine whether the packet flow to which the IP packet belongs is a victim of the congestion control mechanism. When the class of service of the IP packet is the same as that of the congested packet flow, the traffic manager determines that the packet flow to which the IP packet belongs is a victim of the congestion control mechanism. When the class of service of the IP packet is different from that of the congested packet flow, the traffic manager determines that the packet flow to which the IP packet belongs is not a victim of the congestion control mechanism. For example, the AQM circuit604may prestore a size of memory space of the memory403. In addition, the AQM circuit604may detect the size of the storage space occupied in the memory403at a time. Further, the AQM circuit604may determine usage of the memory403at a time. For example, the memory403may maintain one or more packet queues. Each packet queue corresponds to a packet flow. The queue manager602may count a quantity of bytes of a packet dequeued from the packet queue stored in the memory403in a duration. Further, the queue manager602may determine a transmission rate of the packet flow in a duration according to the quantity of the bytes of the dequeued packet and the duration. The queue manager602may send the transmission rate of the packet flow to the AQM circuit604. Further, the AQM circuit604may determine the transmission rate of the packet flow. After the AQM circuit604determines the information related to the IP packet, the AQM circuit604can determine an identifier of a WRED algorithm according to the information related to the IP packet. In a specific implementation, the AQM circuit604may store a mapping table between information related to IP packets and identifiers of WRED algorithms. In the following description, it is assumed that the information related to the IP packet includes only usage of the memory and a duration of staying in the memory by the packet flow. Table 2 is a possible implementation of the mapping table. Referring to Table 2, the mapping table includes an identifier of an entry, the usage of the memory, the duration of staying in the memory by the packet flow, and the identifier of the WRED algorithm. 
After receiving the IP packet, the AQM circuit604may determine that the usage of the memory403is 25% when the IP packet is received. In addition, the AQM circuit604may determine that the duration of staying in the memory by the packet flow to which the IP packet belongs is 2.5 seconds when the IP packet is received. The AQM circuit604uses 25% (the usage of the memory) and 2.5 seconds (the duration of staying in the memory by the packet flow) as search keywords to search the table for a matched entry. The AQM circuit604determines that an entry1is the matched entry. The AQM circuit604determines, according to the entry1, that the identifier of the WRED algorithm is WRED1.

TABLE 2
Identifier of an entry | Usage of the memory | Duration of staying in the memory by the packet flow | Identifier of a WRED algorithm
Entry 1 | 20% to 30% | 2 to 3 seconds | WRED1
Entry 2 | 40% to 50% | 5 to 10 seconds | WRED2

The WRED circuit603can receive the IP packet from the AQM circuit604, and the identifier of the WRED algorithm corresponding to the IP packet. The WRED circuit603can execute the WRED algorithm. Further, the WRED circuit603may store a maximum queue threshold and a minimum queue threshold of the first WRED algorithm. The WRED circuit603can obtain a length of the packet queue from the queue manager602. The WRED circuit603can determine, according to the identifier of the WRED algorithm, whether ECN marking needs to be performed on the IP packet, or whether the IP packet needs to be dropped. Alternatively, when the WRED circuit603determines, according to the identifier of the WRED algorithm, that ECN marking does not need to be performed on the IP packet, the WRED circuit603can determine, according to the identifier of the WRED algorithm, whether to drop the IP packet. For example, the WRED circuit603may store Table 3. Referring to Table 3, each entry in Table 3 includes an identifier of a WRED algorithm, a maximum queue threshold, and a minimum queue threshold. When the WRED circuit603receives the WRED1sent by the AQM circuit604, the WRED circuit603uses the WRED1as a search keyword to search Table 3 for an entry matching the WRED1. The WRED circuit603determines a parameter of the WRED1according to the entry matching the WRED1. The parameter of the WRED1includes a maximum queue threshold and a minimum queue threshold. The maximum queue threshold is equal to MAX1. The minimum queue threshold is equal to MIN1. Further, the WRED circuit603obtains the length of the packet queue from the queue manager602. The WRED circuit603determines, according to the obtained information, whether ECN marking needs to be performed on the IP packet, or whether the IP packet needs to be dropped.

TABLE 3
Identifier of a WRED algorithm | Maximum queue threshold | Minimum queue threshold
WRED1 | MAX1 | MIN1
WRED2 | MAX2 | MIN2

In addition, when the WRED circuit603determines that ECN marking needs to be performed on the IP packet, the WRED circuit603can perform ECN marking on the IP packet. In addition, the WRED circuit603can enqueue the IP packet on which ECN marking is performed to a first in first out queue stored in the memory403. When the WRED circuit603determines that the IP packet needs to be dropped, the WRED circuit603can drop the IP packet, to prevent the IP packet from being enqueued to the first in first out queue stored in the memory403. The queue manager602is configured to store and maintain a packet descriptor queue.
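The two-stage lookup described above, in which Table 2 selects a WRED profile from the packet-related information and Table 3 supplies that profile's thresholds, might be modeled as follows. The concrete numbers standing in for MAX1/MIN1 and MAX2/MIN2 are placeholders, and the function names are hypothetical.

```python
# Stage 1 (AQM circuit): map (memory usage, stay duration) ranges to a WRED profile,
# mirroring Table 2.
TABLE_2 = [
    ((0.20, 0.30), (2.0, 3.0), "WRED1"),
    ((0.40, 0.50), (5.0, 10.0), "WRED2"),
]
# Stage 2 (WRED circuit): per-profile (maximum, minimum) queue thresholds, mirroring
# Table 3. The packet counts below are assumed placeholder values.
TABLE_3 = {"WRED1": (100, 50), "WRED2": (200, 80)}

def select_wred(usage, stay_duration):
    for (u_lo, u_hi), (d_lo, d_hi), wred_id in TABLE_2:
        if u_lo <= usage <= u_hi and d_lo <= stay_duration <= d_hi:
            return wred_id
    return None

def decide(usage, stay_duration, queue_len):
    wred_id = select_wred(usage, stay_duration)    # e.g. 25%, 2.5 s -> "WRED1"
    if wred_id is None:
        return "enqueue"                           # no matching entry: assumed default
    max_th, min_th = TABLE_3[wred_id]
    if queue_len < min_th:
        return "enqueue"
    if queue_len > max_th:
        return "drop or ECN-mark"
    return "probabilistic drop or ECN mark"

print(decide(usage=0.25, stay_duration=2.5, queue_len=60))   # probabilistic region under WRED1
```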
Further, when the WRED circuit603determines to enqueue the received packet, the queue manager602adds, to a tail of the packet descriptor queue, a packet descriptor corresponding to the packet enqueued to the packet queue. The queue manager602may notify the scheduler601that a new packet descriptor is added to the tail of the packet descriptor queue. The scheduler601may determine, according to information notified by the queue manager602, a policy for scheduling the packet in the packet queue. After the queue manager602receives a scheduling command sent by the scheduler601, the queue manager602may send a read request to the memory302according to a packet descriptor located in a head of the packet descriptor queue, to obtain the packet corresponding to the packet descriptor located in the head of the packet descriptor queue, that is, the packet located in the head of the packet queue. The queue manager602may send the packet to the iFIC303. Because the queue manager602has scheduled the packet located in the head of the packet queue out of the packet queue, the queue manager602deletes the packet descriptor located in the head of the packet descriptor queue. In addition, the queue manager602determines, according to the maintained packet descriptor queue, a quantity of packet descriptors included in the packet descriptor queue, determines a length of the first in first out queue according to the quantity of the packet descriptors, and notifies the length of the first in first out queue to the WRED circuit603. Optionally, in the method shown inFIG.5, the method may further include the following steps. The forwarding apparatus receives a second packet. For example, the forwarding apparatus may be a network apparatus. The network apparatus may include a plurality of receive ports. Each receive port may be an Ethernet port. The traffic manager located in the network apparatus may be connected to a plurality of receive ports and a plurality of transmit ports of the network apparatus. For example, each transmit port may be a GE port. Each receive port may be a GE port. Referring toFIG.2, the network apparatus may be the router2shown inFIG.2. The physical interface card1243may include a plurality of Ethernet ports. The plurality of receive ports of the network apparatus may be a plurality of Ethernet ports included in the physical interface card1243. The traffic manager includes a transceiver. The traffic manager may receive a packet from a receive port of the network apparatus through the transceiver. After receiving the packet, the traffic manager may store the packet in a memory. The traffic manager may send the packet stored in the memory to a transmit port of the network apparatus through the transceiver. The transceiver that receives the second packet in the traffic manager and the transceiver that receives the first packet in the traffic manager may be different transceivers. Referring toFIG.4, the forwarding apparatus may be the traffic manager402shown inFIG.4. The traffic manager402may include a plurality of transceivers (not shown in the figure). The traffic manager402may receive a packet from the network processor1242through a transceiver coupled with the network processor1242. After receiving the packet from the network processor1242, the traffic manager402may store the packet in the memory403. In addition, the traffic manager402may send the packet stored in the memory403to the physical interface card1243through a transceiver coupled with the physical interface card1243. 
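The packet descriptor handling described above (append a descriptor at the tail on enqueue, read out and delete the head descriptor on a scheduling command, and report the queue length as the number of descriptors held) can be modelled with the following illustrative Python sketch; the class and method names are assumptions:

from collections import deque

class PacketDescriptorQueue:
    """Illustrative model of a packet descriptor queue maintained by a queue manager."""

    def __init__(self):
        self._descriptors = deque()   # head = left, tail = right
        self._packet_memory = {}      # stands in for the packet memory (e.g. memory 403)
        self._next_id = 0

    def enqueue(self, packet: bytes) -> None:
        # Store the packet and add its descriptor to the tail of the descriptor queue.
        descriptor = self._next_id
        self._next_id += 1
        self._packet_memory[descriptor] = packet
        self._descriptors.append(descriptor)

    def schedule(self) -> bytes:
        # On a scheduling command, read the packet referenced by the head descriptor,
        # then delete that descriptor because the packet has left the packet queue.
        descriptor = self._descriptors.popleft()
        return self._packet_memory.pop(descriptor)

    def length(self) -> int:
        # Queue length notified to the WRED/AQM logic: number of descriptors held.
        return len(self._descriptors)

# Usage
q = PacketDescriptorQueue()
q.enqueue(b"packet-1")
q.enqueue(b"packet-2")
print(q.length())      # 2
print(q.schedule())    # b"packet-1" (first in, first out)
print(q.length())      # 1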
The memory403may be a first in first out memory. The second packet belongs to a second packet flow, the forwarding apparatus includes a second transmit port and a second memory coupled with the second transmit port, the second memory is configured to store the packets that are in the second packet flow and received by the forwarding apparatus, and the second transmit port is configured to send the packets that are in the second packet flow and stored in the second memory. For example, the forwarding apparatus may include a plurality of transmit ports and a plurality of memories. Each transmit port is coupled with a memory. The plurality of memories are configured to store a plurality of packet flows. Each packet flow stored in the memory is a packet queue. Specifically, the packet queue is a first in first out queue. When sending a packet, each transmit port needs to access a packet queue stored in the coupled memory. For example, the first memory and the second memory may be the same memory or may be different memories. The first transmit port and the second transmit port may be the same transmit port or may be different transmit ports. For specific implementations of the second packet and the second packet flow, refer to the descriptions about the first packet and the first packet flow in the foregoing embodiment. It should be noted that, the second packet is different from the first packet. The second packet flow is different from the first packet flow. The forwarding apparatus determines at least two types of information related to the second packet. Specifically, the at least two types of information related to the second packet are information in the following four types of information. The four types of information are a fifth type of information, a sixth type of information, a seventh type of information, and an eighth type of information. The fifth type of information indicates a duration of staying in the second memory by the second packet flow when the second packet is received. The sixth type of information indicates usage of the second memory when the second packet is received. The seventh type of information indicates whether the second packet flow is a victim of the congestion control mechanism when the second packet is received. A class of service of the second packet flow is a second class of service. When the forwarding apparatus receives a backpressure signal corresponding to the second class of service through the transmit port of the forwarding apparatus, the second packet flow is a victim of the congestion control mechanism. When the forwarding apparatus does not receive a backpressure signal corresponding to the second class of service through the transmit port of the forwarding apparatus, the second packet flow is not a victim of the congestion control mechanism. For example, the forwarding apparatus may be a traffic manager, and the transmit port of the forwarding apparatus may be a transceiver of the traffic manager. The forwarding apparatus may include at least one transmit port. The at least one transmit port is configured to send the second packet flow. For example, the traffic manager may send, through the at least one transmit port, the second packet flow stored in the second memory. The transmit port configured to send the first packet flow and the transmit port configured to send the second packet flow in the traffic manager may be the same transmit port or may be different transmit ports.
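The coupling among transmit ports, memories, and first in first out packet queues described above can be pictured with a small data-structure sketch. The arrangement below (one memory object per transmit port, one FIFO queue per packet flow) is only one arrangement consistent with the text, and the names are assumptions:

from collections import deque, defaultdict

class TransmitPortMemory:
    """One memory coupled with one transmit port; holds one FIFO queue per packet flow."""

    def __init__(self, port_id: int):
        self.port_id = port_id
        self.queues = defaultdict(deque)   # flow identifier -> first in first out queue

    def store(self, flow_id: str, packet: bytes) -> None:
        # Packets of a flow received by the forwarding apparatus are stored in this memory.
        self.queues[flow_id].append(packet)

    def send_one(self, flow_id: str) -> bytes:
        # The coupled transmit port sends packets of the flow in FIFO order.
        return self.queues[flow_id].popleft()

# Two transmit ports, each coupled with its own memory; the first and second
# packet flows may share a port/memory or use different ones.
first_memory = TransmitPortMemory(port_id=1)
second_memory = TransmitPortMemory(port_id=2)
first_memory.store("first_flow", b"first packet")
second_memory.store("second_flow", b"second packet")
print(first_memory.send_one("first_flow"))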
The second class of service is different from the first class of service. When the traffic manager receives the backpressure signal corresponding to the second class of service, the traffic manager may determine that the packet flow corresponding to the second class is congested. Further, the traffic manager may reduce a transmission rate of the packet flow corresponding to the second class of service. Therefore, the traffic manager may reduce a rate of sending the second packet flow to the downstream through the transmit port. If the traffic manager uses the WRED algorithm to perform drop management or ECN marking management on the second packet, a length of the packet queue including the second packet flow reaches the minimum queue threshold or the maximum queue threshold of the WRED algorithm in a shorter duration. Therefore, a drop probability of the second packet is increased, or a probability of ECN marking of the second packet is increased. Therefore, when the traffic manager receives the backpressure signal corresponding to the second class of service, the second packet flow is a victim of the congestion control mechanism, and the first packet flow is not a victim of the congestion control mechanism. For a meaning and a specific implementation of the victim of the congestion control mechanism, refer to the foregoing descriptions about the third type of information. Details are not further described herein. The eighth type of information indicates a drop priority of the second packet. The drop priority of the second packet is determined based on a field used to indicate a scheduling priority in the second packet, or is determined based on protocol drop sensitivity of the second packet. For the fifth type of information, the sixth type of information, the seventh type of information, and the eighth type of information, refer to the descriptions about the first type of information, the second type of information, the third type of information, and the fourth type of information respectively in the foregoing embodiment. It should be noted that, the first type of information, the second type of information, the third type of information, and the fourth type of information are information related to the first packet. The fifth type of information, the sixth type of information, the seventh type of information, and the eighth type of information are information related to the second packet. Different names are used in this application so that a reader can distinguish. It should be noted that, the forwarding apparatus may determine only two types of information related to the second packet. For example, the forwarding apparatus may determine only the sixth type of information and the eighth type of information. Certainly, the forwarding apparatus may also determine more types of information related to the second packet. In addition, for specific implementations of determining the fifth type of information, the sixth type of information, the seventh type of information, and the eighth type of information by the forwarding apparatus, refer to the descriptions about the specific implementations of determining the first type of information, the second type of information, the third type of information, and the fourth type of information by the forwarding apparatus in the foregoing embodiment. Details are not further described herein. 
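The victim determination described for the seventh type of information reduces to a membership test: a packet flow is a victim of the congestion control mechanism exactly when a backpressure signal corresponding to its class of service has been received. The Python sketch below is an illustrative model of that state; the function names and the set-based representation are assumptions:

class BackpressureState:
    """Tracks which classes of service are currently backpressured (congested downstream)."""

    def __init__(self):
        self._congested_classes = set()

    def on_backpressure_signal(self, class_of_service: int) -> None:
        # A received backpressure signal names the class of service of a congested flow.
        self._congested_classes.add(class_of_service)

    def on_backpressure_cleared(self, class_of_service: int) -> None:
        self._congested_classes.discard(class_of_service)

    def is_victim(self, class_of_service: int) -> bool:
        # A packet flow is a victim of the congestion control mechanism when a
        # backpressure signal corresponding to its class of service has been received.
        return class_of_service in self._congested_classes

# Usage: the second packet flow (class of service 2) becomes a victim after a
# backpressure signal for class 2 arrives; the first packet flow (class 1) does not.
state = BackpressureState()
state.on_backpressure_signal(2)
print(state.is_victim(1), state.is_victim(2))   # False True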
After determining the information related to the second packet, the forwarding apparatus may compare the at least two types of information related to the second packet with the at least two types of information related to the first packet, to determine whether the drop probability of the first packet or the drop probability of the second packet is higher. Alternatively, after determining the information related to the second packet, the forwarding apparatus may compare the at least two types of information related to the second packet with the at least two types of information related to the first packet, to determine whether the probability of ECN marking of the first packet or the probability of ECN marking of the second packet is higher. Further, the comparison performed by the forwarding apparatus may be implemented in a plurality of manners, and obtaining a result through comparison may also be implemented in a plurality of manners. The following uses an example for description. For example, when the duration of staying in the first memory by the first packet flow is longer than the duration of staying in the second memory by the second packet flow, and at least one of the following three conditions is satisfied, the forwarding apparatus determines that the probability of ECN marking of the first packet is higher than the probability of ECN marking of the second packet, or the forwarding apparatus determines that the drop probability of the first packet is higher than the drop probability of the second packet: the usage of the first memory is equal to the usage of the second memory, the first packet flow and the second packet flow are victims of the congestion control mechanism, and the drop priority of the first packet is equal to the drop priority of the second packet. In the foregoing technical solution, the duration of staying in the first memory by the first packet flow is longer than the duration of staying in the second memory by the second packet flow. Therefore, it may be considered that the congestion degree of the first packet flow is higher than the congestion degree of the second packet flow. To reduce the congestion degree of the packet flow, the drop probability of the first packet and the drop probability of the second packet may be set. Specifically, the drop probabilities may be so set that the drop probability of the first packet is higher than the drop probability of the second packet. Alternatively, to reduce the congestion degree of the packet flow, the probability of ECN marking of the first packet and the probability of ECN marking of the second packet may be set. Specifically, the probabilities of ECN marking may be so set that the probability of ECN marking of the first packet is higher than the probability of ECN marking of the second packet. In the foregoing technical solution, different settings are performed for different packet flows. This helps reduce the congestion degree of the packet flow.
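The staying-duration rule in the preceding paragraph can be captured in a few lines. In the sketch below, each packet carries the four types of information as plain fields, and the function returns True when the first packet should receive the higher drop probability (or, equivalently, the higher probability of ECN marking). Field and function names are illustrative assumptions:

from dataclasses import dataclass

@dataclass
class PacketInfo:
    staying_duration_s: float   # duration of staying in the memory by the packet flow
    memory_usage: float         # usage of the memory (0.0 to 1.0)
    is_victim: bool             # victim of the congestion control mechanism
    drop_priority: int          # higher value = more readily dropped

def first_has_higher_probability(first: PacketInfo, second: PacketInfo) -> bool:
    """True if the first packet gets the higher drop / ECN-marking probability
    under the staying-duration rule described in the text."""
    if first.staying_duration_s <= second.staying_duration_s:
        return False
    # At least one of the three remaining conditions must also hold.
    return (
        first.memory_usage == second.memory_usage
        or (first.is_victim and second.is_victim)
        or first.drop_priority == second.drop_priority
    )

# Usage: the first flow has stayed longer, with otherwise equal conditions.
p1 = PacketInfo(3.0, 0.25, False, 1)
p2 = PacketInfo(1.0, 0.25, False, 1)
print(first_has_higher_probability(p1, p2))   # True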
For example, when the usage of the first memory is higher than the usage of the second memory, and at least one of the following three conditions is satisfied, the forwarding apparatus determines that the probability of ECN marking of the first packet is higher than the probability of ECN marking of the second packet, or the forwarding apparatus determines that the drop probability of the first packet is higher than the drop probability of the second packet: the duration of staying in the first memory by the first packet flow is equal to the duration of staying in the second memory by the second packet flow, the first packet flow and the second packet flow are victims of the congestion control mechanism, and the drop priority of the first packet is equal to the drop priority of the second packet. In the foregoing technical solution, the usage of the first memory is higher than the usage of the second memory. Therefore, it may be considered that the congestion degree of the first packet flow is higher than the congestion degree of the second packet flow. To reduce the congestion degree of the packet flow, the drop probability of the first packet and the drop probability of the second packet may be set. Specifically, the drop probabilities may be so set that the drop probability of the first packet is higher than the drop probability of the second packet. Alternatively, to reduce the congestion degree of the packet flow, the probability of ECN marking of the first packet and the probability of ECN marking of the second packet may be set. Specifically, the probabilities of ECN marking may be so set that the probability of ECN marking of the first packet is higher than the probability of ECN marking of the second packet. In the foregoing technical solution, different settings are performed for different packet flows. This helps reduce the congestion degree of the packet flow. For example, when the first packet flow is a victim of the congestion control mechanism and the second packet flow is not a victim of the congestion control mechanism, and at least one of the following three conditions is satisfied, the forwarding apparatus determines that the probability of ECN marking of the first packet is lower than the probability of ECN marking of the second packet, or the forwarding apparatus determines that the drop probability of the first packet is lower than the drop probability of the second packet: the duration of staying in the first memory by the first packet flow is equal to the duration of staying in the second memory by the second packet flow, the usage of the first memory is equal to the usage of the second memory, and the drop priority of the first packet is equal to the drop priority of the second packet. In the foregoing technical solution, the first packet flow is a victim of the congestion control mechanism and the second packet flow is not a victim of the congestion control mechanism. Therefore, it may be considered that the congestion degree of the first packet flow is higher than the congestion degree of the second packet flow. Further, the drop probability of the packet of the first packet flow according to the WRED algorithm is higher than the drop probability of the packet of the second packet flow according to the WRED algorithm. Alternatively, the probability of ECN marking of the packet of the first packet flow according to the WRED algorithm is higher than the probability of ECN marking of the packet of the second packet flow according to the WRED algorithm.
In order that the first packet flow and the second packet flow can be enqueued to the first in first out queue in a balanced manner, the drop probability of the first packet and the drop probability of the second packet may be set. Specifically, the drop probabilities may be so set that the drop probability of the first packet is lower than the drop probability of the second packet. Alternatively, in order that ECN marking can be performed on the first packet flow and the second packet flow in a balanced manner, the probability of ECN marking of the first packet and the probability of ECN marking of the second packet may be set. Specifically, the probabilities of ECN marking may be so set that the probability of ECN marking of the first packet is lower than the probability of ECN marking of the second packet. The foregoing technical solution can meet a service requirement of the first packet flow and a service requirement of the second packet flow, and implement more balanced service processing. For example, the traffic manager performs drop management on the packet flow1and the packet flow2according to WRED1. The minimum queue threshold corresponding to the WRED1is MIN1, and the corresponding maximum queue threshold is MAX1. When the traffic manager receives the packet1in the packet flow1and the packet2in the packet flow2, the traffic manager receives the backpressure signal1corresponding to the class of service1. The class of service of the packet flow1is the class of service1. After receiving the backpressure signal1, the traffic manager reduces the rate of sending the packet flow1. Consequently, the length of the packet queue including the packet flow1reaches MIN1corresponding to the WRED1in a shorter duration. Therefore, the drop probability of the packet flow1is increased. As a result, fewer packets of the packet flow1can be enqueued to the first in first out queue. To mitigate the case in which fewer packets of the packet flow1can be enqueued to the first in first out queue, the WRED algorithm for performing drop management on the packet flow1by the traffic manager may be adjusted from WRED1to WRED2. The minimum queue threshold corresponding to the WRED2is MIN2, and the corresponding maximum queue threshold is MAX2. MIN2is greater than MIN1, and MAX2is greater than MAX1. After the WRED algorithm is adjusted, the drop probability of the packet1is reduced. Further, the drop probability of the packet1can be lower than the drop probability of the packet2. Further, both the packet flow1and the packet flow2can be enqueued to the first in first out queue in a balanced manner. Alternatively, in the foregoing solution, the traffic manager may perform ECN marking management on the packet flow1and the packet flow2according to WRED1. After receiving the backpressure signal1, the traffic manager reduces the rate of sending the packet flow1. Consequently, the length of the packet queue including the packet flow1reaches MIN1corresponding to the WRED1in a shorter duration. Therefore, the possibility of ECN marking of the packet flow1is increased. To mitigate the case in which more ECN-marked packets of the packet flow1are enqueued to the first in first out queue, the WRED algorithm for performing ECN marking management on the packet flow1by the traffic manager may be adjusted from WRED1to WRED2. The minimum queue threshold corresponding to the WRED2is MIN2, and the corresponding maximum queue threshold is MAX2. MIN2is greater than MIN1, and MAX2is greater than MAX1.
After the WRED algorithm is adjusted, the probability of ECN marking of the packet1is reduced. Further, the possibility of ECN marking of the packet1can be lower than the possibility of ECN marking of the packet2. Further, ECN marking can be performed on the packet flow1and the packet flow2in a balanced manner. For example, when the drop priority of the first packet is lower than the drop priority of the second packet, and at least one of the following three conditions is satisfied, the forwarding apparatus determines that the probability of ECN marking of the first packet is lower than the probability of ECN marking of the second packet, or the forwarding apparatus determines that the drop probability of the first packet is lower than the drop probability of the second packet: a transmission rate of sending the first packet flow by the forwarding apparatus through the first transmit port is equal to a transmission rate of sending the second packet flow by the forwarding apparatus through the second transmit port, the usage of the first memory is equal to the usage of the second memory, and the first packet flow and the second packet flow are victims of the congestion control mechanism. In the foregoing technical solution, the drop priority of the first packet is lower than the drop priority of the second packet. Therefore, it may be considered that the first packet should be dropped, or ECN-marked, less readily than the second packet. In order that the first packet flow and the second packet flow can be enqueued to the first in first out queue in a balanced manner, the drop probability of the first packet and the drop probability of the second packet may be set. Specifically, the drop probabilities may be so set that the drop probability of the first packet is lower than the drop probability of the second packet. Alternatively, in order that ECN marking can be performed on the first packet flow and the second packet flow in a balanced manner, the probability of ECN marking of the first packet and the probability of ECN marking of the second packet may be set. Specifically, the probabilities of ECN marking may be so set that the probability of ECN marking of the first packet is lower than the probability of ECN marking of the second packet. The foregoing technical solution can meet the service requirement of the first packet flow and the service requirement of the second packet flow, and implement more balanced service processing. On a basis that the probability of ECN marking of the first packet is lower than the probability of ECN marking of the second packet, the forwarding apparatus performs ECN marking on the second packet, and avoids performing ECN marking on the first packet. After the forwarding apparatus performs ECN marking on the second packet, the forwarding apparatus may store, in the second memory, the second packet on which ECN marking is performed. Further, the second packet on which ECN marking is performed may be enqueued to the first in first out queue stored in the second memory.
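The WRED1-to-WRED2 adjustment in the example above amounts to moving a backpressured (victim) flow onto a profile with larger queue thresholds, which lowers both its drop probability and its probability of ECN marking at any given queue length. The sketch below illustrates that effect only; the threshold values, the linear ramp, and the helper names are assumptions:

from dataclasses import dataclass

@dataclass
class WredProfile:
    name: str
    min_threshold: int   # MINx
    max_threshold: int   # MAXx

WRED1 = WredProfile("WRED1", min_threshold=100, max_threshold=400)
WRED2 = WredProfile("WRED2", min_threshold=200, max_threshold=800)   # MIN2 > MIN1, MAX2 > MAX1

def drop_or_mark_probability(profile: WredProfile, queue_length: int) -> float:
    # Assumed linear WRED ramp between the minimum and maximum queue thresholds.
    if queue_length < profile.min_threshold:
        return 0.0
    if queue_length >= profile.max_threshold:
        return 1.0
    return (queue_length - profile.min_threshold) / (profile.max_threshold - profile.min_threshold)

def profile_for_flow(is_victim: bool) -> WredProfile:
    # A flow whose class of service is backpressured (a victim) is moved to the
    # more permissive profile so that it is not doubly penalized.
    return WRED2 if is_victim else WRED1

# Usage: at the same queue length, the victim flow (packet flow 1) sees a lower
# drop / ECN-marking probability than it would under WRED1.
queue_length = 300
print(drop_or_mark_probability(WRED1, queue_length))                   # about 0.67
print(drop_or_mark_probability(profile_for_flow(True), queue_length))  # about 0.17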
It should be noted that, the first packet flow in this application is a plurality of packets having a same feature. At least one field in a packet header may be used to indicate a feature of a packet. For example, a plurality of IP packets having a same destination IP address may form the first packet flow. According to the foregoing example, if destination IP addresses of two IP packets are different, the two IP packets belong to different packet flows, for example, belong to the first packet flow and the second packet flow respectively. For another example, a plurality of IP packets having a same destination IP address and a same source IP address may form the first packet flow. For another example, a plurality of IP packets having a same quintuple may form the first packet flow. In addition, an inbound port configured to receive a packet may also be used to indicate a feature of a packet. For example, if a plurality of packets are received by a same inbound port on the physical interface card1233, the plurality of packets belong to the first packet flow. If a plurality of packets are received by different inbound ports, the plurality of packets do not belong to a same packet flow. The packet in this application may be an IP packet or another packet. For example, the packet in this application may be an Ethernet frame. The second packet flow in this application is a plurality of packets having a same feature. At least one field in a packet header may be used to indicate a feature of a packet. For a specific implementation of the field in the packet header, refer to the foregoing descriptions about the first packet flow. It should be noted that, the feature of the packet in the first packet flow is different from the feature of the packet in the second packet flow. It may be understood that, the memory302may store and maintain only one packet queue, for example, the packet queue formed by the first packet flow. The memory302may also store and maintain a plurality of packet queues simultaneously, for example, the packet queue formed by the first packet flow and the packet queue formed by the second packet flow. In addition, scheduling priorities of the plurality of packet queues may be the same or may be different. When a scheduling priority of the packet queue formed by the first packet flow is higher than a scheduling priority of the packet queue formed by the second packet flow, the packet in the packet queue formed by the first packet flow is scheduled out of the memory302earlier than the packet in the packet queue formed by the second packet flow. Correspondingly, the queue manager602may store and maintain only one packet descriptor queue, for example, a packet descriptor queue corresponding to the packet queue formed by the first packet flow. The queue manager602may also store and maintain a plurality of packet descriptor queues simultaneously, for example, the packet descriptor queue corresponding to the packet queue formed by the first packet flow, and a packet descriptor queue corresponding to the packet queue formed by the second packet flow. Correspondingly, the WRED circuit603may perform drop management on a packet of only one packet flow, for example, perform drop management on only the packet of the first packet flow. The WRED circuit603may also perform drop management on packets of a plurality of packet flows simultaneously, for example, perform drop management on the packet of the first packet flow and the packet of the second packet flow.
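Flow membership as described above is a function of selected packet features, such as the destination IP address alone, the source and destination addresses together, or the full quintuple. The sketch below shows one illustrative way to derive a flow key from those fields; the choice of modes and the function name are assumptions:

from typing import NamedTuple

class Quintuple(NamedTuple):
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str   # e.g. "TCP" or "UDP"

def flow_key(pkt: Quintuple, mode: str = "quintuple"):
    """Packets that share the same key belong to the same packet flow."""
    if mode == "dst":          # flows distinguished by destination IP address only
        return (pkt.dst_ip,)
    if mode == "src_dst":      # flows distinguished by source and destination IP address
        return (pkt.src_ip, pkt.dst_ip)
    return pkt                 # full quintuple

a = Quintuple("10.0.0.1", "10.0.0.9", 1234, 80, "TCP")
b = Quintuple("10.0.0.2", "10.0.0.9", 5678, 80, "TCP")
print(flow_key(a, "dst") == flow_key(b, "dst"))   # True: same flow by destination address
print(flow_key(a) == flow_key(b))                 # False: different quintuples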
Optionally, in the foregoing technical solution, when the forwarding apparatus determines that ECN marking does not need to be performed on the first packet, the method further includes the forwarding apparatus determines, based on the at least two types of information related to the first packet, whether the first packet needs to be dropped, and when the forwarding apparatus determines that the first packet does not need to be dropped, the forwarding apparatus stores the first packet in the first memory. Optionally, in the foregoing technical solution, that the forwarding apparatus determines, based on the at least two types of information related to the first packet, whether ECN marking needs to be performed on the first packet includes the forwarding apparatus determines a WRED algorithm based on the at least two types of information related to the first packet, and the forwarding apparatus determines, according to the WRED algorithm, whether ECN marking needs to be performed on the first packet. Alternatively, in the foregoing technical solution, that the forwarding apparatus determines, based on the at least two types of information related to the first packet, whether the first packet needs to be dropped includes the forwarding apparatus determines a WRED algorithm based on the at least two types of information related to the first packet, and the forwarding apparatus determines, according to the WRED algorithm, whether the first packet needs to be dropped. Optionally, in the foregoing technical solution, the field used to indicate the scheduling priority in the first packet is an EXP field in an MPLS label, a PCP field in a VLAN tag, or a DSCP field in an IP header. For the EXP field, refer to the descriptions in the RFC 3032 published by the IETF. For the PCP field, refer to the descriptions in the IEEE802.1P published by the IEEE. Scheduling priorities are used to indicate a sequence of scheduling packets. A packet with a high scheduling priority is scheduled earlier than a packet with a low scheduling priority. For example, for a plurality of packets stored in the memory, a packet with a high scheduling priority is scheduled out of the memory earlier than a packet with a low scheduling priority. After a packet is scheduled out of the memory, the packet is sent by a transmit port. Optionally, in the foregoing technical solution, when the scheduling priority of the first packet is higher than the scheduling priority of the second packet, the drop priority of the first packet is lower than the drop priority of the second packet. A drop priority is used to indicate a drop probability of a packet. A drop probability of a packet with a high priority is higher than a drop probability of a packet with a low priority. Optionally, in the foregoing technical solution, when the scheduling priority of the first packet is lower than the scheduling priority of the second packet, the drop priority of the first packet is higher than the drop priority of the second packet. Optionally, in the foregoing technical solution, when the protocol drop sensitivity of the first packet is higher than the protocol drop sensitivity of the second packet, the drop priority of the first packet is lower than the drop priority of the second packet. For example, the first packet may be a TCP packet, and the second packet may be a UDP packet. 
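A drop priority of the kind characterized above can be derived either from the scheduling-priority field (an EXP, PCP, or DSCP value) or from how sensitive the protocol is to loss, for example a TCP packet versus a UDP packet. The mapping below is purely illustrative; the numeric values and function names are assumptions and are not prescribed by the embodiment:

def drop_priority_from_scheduling(scheduling_priority: int) -> int:
    """Higher scheduling priority -> lower drop priority (dropped less readily).
    The scheduling priority could come from a DSCP, EXP, or PCP field."""
    max_priority = 7                      # e.g. a 3-bit EXP/PCP range of 0..7
    return max_priority - scheduling_priority

def drop_priority_from_protocol(protocol: str) -> int:
    """Higher protocol drop sensitivity -> lower drop priority.
    TCP reacts to loss by slowing down, so it is treated as more drop sensitive."""
    return 0 if protocol == "TCP" else 1  # 0 = dropped less readily

# Usage: a high-scheduling-priority packet and a TCP packet both end up with a
# lower drop priority than a low-scheduling-priority or UDP packet.
print(drop_priority_from_scheduling(7), drop_priority_from_scheduling(1))       # 0 6
print(drop_priority_from_protocol("TCP"), drop_priority_from_protocol("UDP"))   # 0 1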
That the drop priority of the first packet is lower than the drop priority of the second packet may cause the drop probability of the first packet to be lower than the drop probability of the second packet. The drop probability of the first packet is relatively low. Therefore, a probability that a TCP sender sending the first packet flow reduces a transmission rate of the first packet flow due to dropping of the first packet is reduced. This helps the TCP sender keep the transmission rate of the first packet flow stable. In the foregoing technical solution, on a basis that the probability of ECN marking of the first packet is lower than the probability of ECN marking of the second packet, the forwarding apparatus performs ECN marking on the second packet, and avoids performing ECN marking on the first packet. Alternatively, on a basis that the drop probability of the first packet is lower than the drop probability of the second packet, the forwarding apparatus stores the first packet in the first memory, and avoids storing the second packet in the second memory. Optionally, after the forwarding apparatus determines the at least two types of information related to the first packet, and before the forwarding apparatus determines, based on the at least two types of information related to the first packet, whether the first packet needs to be dropped, the method further includes: the forwarding apparatus determines, based on the at least two types of information related to the first packet, that ECN marking does not need to be performed on the first packet. FIG.7is a schematic structural diagram of a forwarding apparatus according to this application. The forwarding apparatus700may be configured to perform S501, S502, S503, and S504. Referring toFIG.7, the forwarding apparatus700includes a receiving unit701, a first determining unit702, a second determining unit703, and a processing unit704. The processing unit704may be specifically a marking unit or a dropping unit. The receiving unit701is configured to receive a first packet, where the first packet belongs to a first packet flow, the forwarding apparatus includes a first transmit port and a first memory coupled with the first transmit port, the first memory is configured to store the packets that are in the first packet flow and received by the forwarding apparatus, and the first transmit port is configured to send the packets that are in the first packet flow and stored in the first memory. The first determining unit702is configured to determine at least two types of information in four types of information related to the first packet.
Four types of information related to the first packet are respectively a duration of staying in the first memory by the first packet flow when the first packet is received, usage of the first memory when the first packet is received, whether the first packet flow is a victim of a congestion control mechanism when the first packet is received, where a class of service of the first packet flow is a first class of service, and when the forwarding apparatus receives a backpressure signal corresponding to the first class of service through the transmit port of the forwarding apparatus, the first packet flow is a victim of the congestion control mechanism, or when the forwarding apparatus does not receive a backpressure signal corresponding to the first class of service through the transmit port of the forwarding apparatus, the first packet flow is not a victim of the congestion control mechanism, and a drop priority of the first packet, where the drop priority of the first packet is determined based on a field used to indicate a scheduling priority in the first packet, or is determined based on protocol drop sensitivity of the first packet. For the four types of information related to the first packet, refer to the descriptions about the first type of information, the second type of information, the third type of information, and the fourth type of information. Details are not further described herein. The second determining unit703is configured to determine, based on the at least two types of information related to the first packet, whether ECN marking needs to be performed on the first packet. Alternatively, the second determining unit703is configured to determine, based on the at least two types of information related to the first packet, whether the first packet needs to be dropped. The marking unit is configured to perform ECN marking on the first packet when it is determined that ECN marking needs to be performed on the first packet. The dropping unit is configured to store the first packet in the first memory when it is determined that the first packet does not need to be dropped. Further, the receiving unit701may be configured to perform S501. The first determining unit702may be configured to perform S502. The second determining unit703may be configured to perform S503. The marking unit may be configured to perform S504. Optionally, in the foregoing technical solution, the receiving unit is further configured to receive a second packet, where the second packet belongs to a second packet flow, the forwarding apparatus includes a second transmit port and a second memory coupled with the second transmit port, the second memory is configured to store the packets that are in the second packet flow and received by the forwarding apparatus, and the second transmit port is configured to send the packets that are in the second packet flow and stored in the second memory, and the first determining unit is further configured to determine at least two types of information in four types of information related to the second packet.
Four types of information related to the second packet are respectively a duration of staying in the second memory by the second packet flow when the second packet is received, usage of the second memory when the second packet is received, whether the second packet flow is a victim of the congestion control mechanism when the second packet is received, where a class of service of the second packet flow is a second class of service, and when the forwarding apparatus receives a backpressure signal corresponding to the second class of service through the transmit port of the forwarding apparatus, the second packet flow is a victim of the congestion control mechanism, or when the forwarding apparatus does not receive a backpressure signal corresponding to the second class of service through the transmit port of the forwarding apparatus, the second packet flow is not a victim of the congestion control mechanism, and a drop priority of the second packet, where the drop priority of the second packet is determined based on a field used to indicate a scheduling priority in the second packet, or is determined based on protocol drop sensitivity of the second packet. The second determining unit is further configured to, when the duration of staying in the first memory by the first packet flow is longer than the duration of staying in the second memory by the second packet flow, and at least one of the following three conditions is satisfied, determine that a probability of ECN marking of the first packet is higher than a probability of ECN marking of the second packet the usage of the first memory is equal to the usage of the second memory, the first packet flow and the second packet flow are victims of the congestion control mechanism, and the drop priority of the first packet is equal to the drop priority of the second packet, or when the usage of the first memory is higher than the usage of the second memory, and at least one of the following three conditions is satisfied, determine that a probability of ECN marking of the first packet is higher than a probability of ECN marking of the second packet the duration of staying in the first memory by the first packet flow is equal to the duration of staying in the second memory by the second packet flow, the first packet flow and the second packet flow are victims of the congestion control mechanism, and the drop priority of the first packet is equal to the drop priority of the second packet, or when the first packet flow is a victim of the congestion control mechanism and the second packet flow is not a victim of the congestion control mechanism, and at least one of the following three conditions is satisfied, determine that a probability of ECN marking of the first packet is lower than a probability of ECN marking of the second packet the duration of staying in the first memory by the first packet flow is equal to the duration of staying in the second memory by the second packet flow, the usage of the first memory is equal to the usage of the second memory, and the drop priority of the first packet is equal to the drop priority of the second packet, or when the drop priority of the first packet is lower than the drop priority of the second packet, and at least one of the following three conditions is satisfied, determine that a probability of ECN marking of the first packet is lower than a probability of ECN marking of the second packet the duration of staying in the first memory by the first packet flow is equal to
the duration of staying in the second memory by the second packet flow, the usage of the first memory is equal to the usage of the second memory, and the first packet flow and the second packet flow are victims of the congestion control mechanism, and the processing unit is further configured to on a basis that the probability of ECN marking of the first packet is lower than the probability of ECN marking of the second packet, perform ECN marking on the second packet, and avoid performing ECN marking on the first packet. Optionally, when the processing unit704is specifically a dropping unit, the second determining unit is further configured to, when the duration of staying in the first memory by the first packet flow is longer than the duration of staying in the second memory by the second packet flow, and at least one of the following three conditions is satisfied, determine that a drop probability of the first packet is higher than a drop probability of the second packet the usage of the first memory is equal to the usage of the second memory, the first packet flow and the second packet flow are victims of the congestion control mechanism, and the drop priority of the first packet is equal to the drop priority of the second packet, or when the usage of the first memory is higher than the usage of the second memory, and at least one of the following three conditions is satisfied, determine that a drop probability of the first packet is higher than a drop probability of the second packet the duration of staying in the first memory by the first packet flow is equal to the duration of staying in the second memory by the second packet flow, the first packet flow and the second packet flow are victims of the congestion control mechanism, and the drop priority of the first packet is equal to the drop priority of the second packet, or when the first packet flow is a victim of the congestion control mechanism and the second packet flow is not a victim of the congestion control mechanism, and at least one of the following three conditions is satisfied, determine that a drop probability of the first packet is lower than a drop probability of the second packet the duration of staying in the first memory by the first packet flow is equal to the duration of staying in the second memory by the second packet flow, the usage of the first memory is equal to the usage of the second memory, and the drop priority of the first packet is equal to the drop priority of the second packet, or when the drop priority of the first packet is lower than the drop priority of the second packet, and at least one of the following three conditions is satisfied, determine that a drop probability of the first packet is lower than a drop probability of the second packet a transmission rate of sending the first packet flow by the forwarding apparatus through the first transmit port is equal to a transmission rate of sending the second packet flow by the forwarding apparatus through the second transmit port, the usage of the first memory is equal to the usage of the second memory, and the first packet flow and the second packet flow are victims of the congestion control mechanism, and the processing unit704is further configured to on a basis that the drop probability of the first packet is lower than the drop probability of the second packet, store the first packet in the first memory, and avoid storing the second packet in the second memory. 
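The division of the forwarding apparatus700into a receiving unit, two determining units, and a processing unit can be mirrored in a small object model. The sketch below is only an illustrative skeleton of that decomposition; the method bodies are placeholders and every name is an assumption:

class ReceivingUnit:
    def receive(self):
        # Receive the first (or second) packet from a receive port / transceiver.
        return {"flow": "first_flow", "payload": b"..."}

class FirstDeterminingUnit:
    def determine_info(self, packet):
        # Determine at least two of the four types of information related to the packet
        # (staying duration, memory usage, victim status, drop priority).
        return {"memory_usage": 0.25, "drop_priority": 1}

class SecondDeterminingUnit:
    def needs_ecn_marking(self, info) -> bool:
        # Decide, based on the determined information (e.g. via a selected WRED
        # algorithm), whether ECN marking needs to be performed.
        return info["memory_usage"] > 0.2

class ProcessingUnit:
    def mark(self, packet):
        packet["ecn"] = True          # marking unit behaviour
        return packet
    def store(self, packet, memory):
        memory.append(packet)         # dropping unit behaviour: admit instead of drop

# Wiring the units together, in the spirit of FIG. 7.
memory = []
rx, det1, det2, proc = ReceivingUnit(), FirstDeterminingUnit(), SecondDeterminingUnit(), ProcessingUnit()
pkt = rx.receive()
info = det1.determine_info(pkt)
if det2.needs_ecn_marking(info):
    pkt = proc.mark(pkt)
proc.store(pkt, memory)
print(memory)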
Optionally, in the foregoing technical solution, when the second determining unit determines that ECN marking does not need to be performed on the first packet, the second determining unit is further configured to determine, based on the at least two types of information related to the first packet, whether the first packet needs to be dropped, and the processing unit is further configured to store the first packet in the first memory when the second determining unit determines that the first packet does not need to be dropped. Optionally, in the foregoing technical solution, the first determining unit is configured to determine a WRED algorithm based on the at least two types of information related to the first packet, and determine, according to the WRED algorithm, whether ECN marking needs to be performed on the first packet. Optionally, in the foregoing technical solution, the field used to indicate the scheduling priority in the first packet is an EXP field in a MPLS label, a PCP field in a VLAN tag, or a DSCP field in an IP header. Optionally, in the foregoing technical solution, when the scheduling priority of the first packet is higher than the scheduling priority of the second packet, the drop priority of the first packet is lower than the drop priority of the second packet. Optionally, in the foregoing technical solution, when the processing unit704is specifically a dropping unit, after the first determining unit702determines the at least two types of information related to the first packet, and before the second determining unit703determines, based on the at least two types of information related to the first packet, whether the first packet needs to be dropped, the second determining unit703is further configured to determine, based on the at least two types of information related to the first packet, that ECN marking does not need to be performed on the first packet. For specific implementations of the receiving unit701, the first determining unit702, the second determining unit703, and the processing unit704, refer to the descriptions in the embodiment shown inFIG.5. Details are not further described herein. In addition, the forwarding apparatus700may further include the traffic manager600shown inFIG.6. That is, the traffic manager600may further implement a function of the forwarding apparatus700. Further, the receiving unit701may be implemented by the AQM circuit604. The first determining unit702may be implemented by the AQM circuit604. The second determining unit703may be implemented by the AQM circuit604and the WRED circuit603. The processing unit704may be implemented by the WRED circuit603. For specific implementations of the receiving unit701, the first determining unit702, the second determining unit703, and the processing unit704, refer to the descriptions in the embodiment shown inFIG.6. Details are not further described herein. FIG.8is a schematic structural diagram of a forwarding apparatus according to this application. A forwarding apparatus800may be configured to perform the method shown inFIG.5. Referring toFIG.8, the forwarding apparatus800includes an input interface801, an output interface802, a processor803, a memory804, and a bus805. The input interface801, the output interface802, the processor803, and the memory804can communicate with each other using the bus805. The input interface801is configured to receive a packet. The output interface802is configured to send a packet. The memory804is configured to store a computer program. 
In addition, the memory804may be configured to store a to-be-sent packet. The processor803may perform the method shown inFIG.5by accessing the computer program in the memory804. For a specific implementation of performing the method shown inFIG.5by the processor803by accessing the computer program in the memory804, refer to the descriptions about the embodiment shown inFIG.5. Details are not further described herein. In addition, the forwarding apparatus800may further include the traffic manager600shown inFIG.6. That is, the traffic manager600may further implement a function of the forwarding apparatus800. Further, the input interface801may be implemented by the network processor1242. The output interface802may be implemented by the physical interface card1243. The processor803may be implemented by the AQM circuit604and the WRED circuit603. The memory804may be implemented by the memory403. For specific implementations of the input interface801, the output interface802, the processor803, and the memory804, refer to the descriptions in the embodiment shown inFIG.6. Details are not further described herein. This application further provides a computer readable storage medium. The computer readable storage medium is configured to store the computer program. When the computer program is executed, a computer may be enabled to perform the method shown inFIG.5. For details, refer to descriptions about the embodiment shown inFIG.5. Details are not further described herein. In a possible design, the computer readable storage medium may be a non-volatile readable storage medium. It should be understood that sequence numbers of the foregoing processes do not mean execution sequences in various embodiments of this application. The execution sequences of the processes should be determined based on functions and internal logic of the processes, and should not be construed as any limitation on the implementation processes of the embodiments of this application. A person of ordinary skill in the art may be aware that, the modules and method steps in the examples described with reference to the embodiments disclosed in this specification may be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on particular applications and design constraint conditions of the technical solutions. A person skilled in the art may use a different method for each specific application to implement described functions. It may be clearly understood by a person skilled in the art that, for the purpose of convenient and brief description, for a detailed working process of the foregoing system, apparatus, and module, reference may be made to a corresponding process in the foregoing method embodiments, and details are not further described herein. All or some of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof. When software is used to implement the embodiments, the embodiments may be implemented completely or partially in a form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the procedure or functions according to the embodiments of this application are all or partially generated. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. 
The computer instruction may be stored in a computer-readable storage medium or may be transmitted from a computer-readable storage medium to another computer-readable storage medium. For example, the computer instruction may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless manner. The wired manner may be a coaxial cable, an optical fiber, or a digital subscriber line (DSL). The wireless manner may be infrared, wireless, or microwave. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device integrating one or more usable media, such as a server or a data center. The usable medium may be a magnetic medium, an optical medium, a semiconductor medium, or the like. The embodiments in this specification are all described in a progressive manner. For same or similar parts in the embodiments, mutual reference may be made, and each embodiment focuses on a difference from another embodiment. Especially, apparatus and system embodiments are basically similar to the method embodiments, and therefore are described briefly. For related parts, refer to descriptions about the parts in the method embodiments. The magnetic medium may be a floppy disk, a hard disk, or a magnetic tape. The optical medium may be a digital versatile disc (DVD). The semiconductor medium may be a solid state disk (SSD).
114,560
11863460
DETAILED DESCRIPTION I. General Considerations This disclosure is set forth in the context of representative embodiments that are not intended to be limiting in any way. As used in this application the singular forms “a,” “an,” and “the” include the plural forms unless the context clearly dictates otherwise. Additionally, the term “includes” means “comprises.” Further, the term “coupled” encompasses mechanical, electrical, magnetic, optical, as well as other practical ways of coupling or linking items together, and does not exclude the presence of intermediate elements between the coupled items. Furthermore, as used herein, the term “and/or” means any one item or combination of items in the phrase. The systems, methods, and apparatus described herein should not be construed as being limiting in any way. Instead, this disclosure is directed toward all novel and non-obvious features and aspects of the various disclosed embodiments, alone and in various combinations and subcombinations with one another. The disclosed systems, methods, and apparatus are not limited to any specific aspect or feature or combinations thereof, nor do the disclosed things and methods require that any one or more specific advantages be present or problems be solved. Furthermore, any features or aspects of the disclosed embodiments can be used in various combinations and subcombinations with one another. Although the operations of some of the disclosed methods are described in a particular, sequential order for convenient presentation, it should be understood that this manner of description encompasses rearrangement, unless a particular ordering is required by specific language set forth below. For example, operations described sequentially may in some cases be rearranged or performed concurrently. Moreover, for the sake of simplicity, the attached figures may not show the various ways in which the disclosed things and methods can be used in conjunction with other things and methods. Additionally, the description sometimes uses terms like “produce,” “generate,” “display,” “receive,” “evaluate,” “vulnerability,” “weakness,” “scan,” and “perform” to describe the disclosed methods. These terms are high-level abstractions of the actual operations that are performed. The actual operations that correspond to these terms will vary depending on the particular implementation and are readily discernible by one of ordinary skill in the art. Theories of operation, scientific principles, or other theoretical descriptions presented herein in reference to the apparatus or methods of this disclosure have been provided for the purposes of better understanding and are not intended to be limiting in scope. The apparatus and methods in the appended claims are not limited to those apparatus and methods that function in the manner described by such theories of operation. Any of the disclosed methods can be implemented as computer-executable instructions stored on one or more computer-readable media (e.g., non-transitory computer-readable storage media, such as one or more optical media discs, volatile memory components (such as DRAM or SRAM), or nonvolatile memory components (such as hard drives and solid state drives (SSDs))) and executed on a computer (e.g., any commercially available computer, including smart phones or other mobile devices that include computing hardware). 
Any of the computer-executable instructions for implementing the disclosed techniques, as well as any data created and used during implementation of the disclosed embodiments, can be stored on one or more computer-readable media (e.g., non-transitory computer-readable storage media). The computer-executable instructions can be part of, for example, a dedicated software application, or a software application that is accessed or downloaded via a web browser or other software application (such as a remote computing application). Such software can be executed, for example, on a single local computer (e.g., as an agent executing on any suitable commercially available computer) or in a network environment (e.g., via the Internet, a wide-area network, a local-area network, a client-server network (such as a cloud computing network), or other such network) using one or more network computers. For clarity, only certain selected aspects of the software-based implementations are described. Other details that are well known in the art are omitted. For example, it should be understood that the disclosed technology is not limited to any specific computer language or program. For instance, the disclosed technology can be implemented by software written in C, C++, Java, or any other suitable programming language. Likewise, the disclosed technology is not limited to any particular computer or type of hardware. Certain details of suitable computers and hardware are well-known and need not be set forth in detail in this disclosure. Furthermore, any of the software-based embodiments (comprising, for example, computer-executable instructions for causing a computer to perform any of the disclosed methods) can be uploaded, downloaded, or remotely accessed through a suitable communication means. Such suitable communication means include, for example, the Internet, the World Wide Web, an intranet, software applications, cable (including fiber optic cable), magnetic communications, electromagnetic communications (including RF, microwave, and infrared communications), electronic communications, or other such communication means. II. Exemplary Computing Network Environment FIG.1illustrates an exemplary computing environment100in which some examples of the disclosed technology can be implemented. A number of agents110,111, and112are illustrated inFIG.1. One of the agents110is further detailed as shown, and includes a local agent process120that can manage and communicate with a number of plugins130-135(e.g., a file integrity monitoring (FIM) plugin130, a command output capture rule (COCR) plugin131, an Open Vulnerability Assessment Language (OVAL) plugin132, a Windows event log (WEL) plugin133, a Registry plugin134, and a support plugin135) that are configured to extend the functionality of the agent. Further details and examples of agents are discussed further below. As will be readily understood to one of ordinary skill in the relevant art, the agent technology disclosed in this paragraph is not limited to the functionality of agent plugins130-135, but can be adapted to specific deployments by adding other plugins or removing the depicted plugins. Each of the agents110-112communicates with the rest of the system depicted in the computing environment100via an agent platform server150. As shown, the agent platform server150includes an agent bridge160for sending messages to and from agents (e.g., agents110-112). 
The agent bridge160can send messages over a computer network to agents executing on other computers, using inter-process and/or inter-thread communication to agents executing on the same computer as the communication bridge, or by using other suitable communication means. The illustrated agent platform server150also includes a message broker170with multiple message queues175-178for temporarily storing messages received from and sent to, for example, the agent bridge160, an agent manager180, an affinity service185, and agent data consumers190. In some examples, the message broker170has a single message queue175. The agent platform server150coordinates operation of the agents by sending and receiving messages using the message broker170. Some agent platform server implementations can contain more than one message broker170organized as a network of message brokers. Additionally, some implementations can include additional instances of the agent bridge160or the agent manager180. Various combinations of message brokers, agent bridges, and agent managers can be used to support high-availability and redundant capabilities. The exemplary computing environment100includes a number of agent data consumers190, including, but not limited to, a compliance server191, a log server192, a policy server193, a change management server194, and a file integrity monitoring server195, an agent reconciliation server196, an agent provisioning server197, and an agent management server198. In some examples, different combinations of agent data consumers190can be deployed in the environment100according to the desired compliance and security applications to be performed. These combinations are not limited to a single machine. The agent bridge160, message broker170, agent manager180, or any combination of the agent data consumers can execute on separate computers, or separate virtual machines on a single or multiple computers. For example, the compliance server191can host a Compliance and Configuration Control (CCC) tool used to detect, analyze, and report on change activity in an IT infrastructure. The CCC tool can assess or receive configurations of the one or more nodes at one or more locations and determine whether the nodes comply with internal and external policies (e.g., government, regulatory, or third-party standards, such as Sarbanes-Oxley, HIPAA, ISO 27001, NIST 800, NERC, PCI, PCI-DSS, Basel II, Bill 198, CIS, DISA, FDCC, FFIEC, GCSx, GLBA, GPG 13, IBTRM, or other IT infrastructure compliance standards). The CCC tool can identify and validate changes to ensure these configurations remain in known and trusted states. In particular implementations, the CCC tool operates by capturing a baseline of server file systems, desktop file system, directory servers, databases, virtual systems, middleware applications, and/or network device configurations in a known good state. Ongoing integrity checks then compare the current states against these baselines to detect changes. The CCC tool collects information used to reconcile changes detected by the agents110-112, ensuring they are authorized and intended changes. The CCC tool can crosscheck detected changes with defined IT compliance policies (e.g., using policy-based filtering), with documented change tickets in a change control management (“CCM”) system, with a list of approved changes, with automatically generated lists created by patch management and software provisioning tools, and/or against other desired and approved changes. 
This allows the CCC tool to automatically recognize desired changes and expose undesired changes. The CCC tool can also generate one or more reports concerning the monitored nodes showing a wide variety of information (e.g., compliance information, configuration information, usage information, etc.). The compliance-related reports generated by the CCC tool can, in some instances, comprise a score for a node that indicates the relative compliance status of the node as a numerical value in a range of possible values (e.g., a score of 1 to 100 or other such numeric or alphabetical range). The CCC tool can also apply a set of one or more tests to the nodes to evaluate the compliance status of one or more nodes. In such embodiments, the compliance-related reports generated by the CCC tool can include the number of devices that passed a particular test as well as the number of devices that failed the test. Further, the CCC tool can store detected change event data in an event log or transmit the event data as soon as it is detected or shortly after it is detected. Event logs typically comprise a list of activities and configuration changes at nodes of the IT network. An exemplary CCC tool that is suitable for use with the disclosed technology is the Tripwire® Enterprise tool available from Tripwire, Inc. The examples described below are sometimes shown or discussed as being used in connection with the Tripwire Enterprise tool. This particular usage should not be construed as limiting, however, as the disclosed technology can be adapted by those skilled in the art to help monitor and manage IT nodes using other compliance and configuration control tools as well. The compliance server 191 can also include a security information and event management (SIEM) tool that is used to centralize the storage and interpretation of events, logs, or compliance reports observed and generated in an IT management infrastructure. The event, log, and compliance report information is typically produced by other software running in the IT network. For example, CCC tools generate events that are typically kept in event logs or stored in compliance reports, as discussed above. The SIEM can be used to provide a consistent central interface that an IT administrator can use to more efficiently monitor and manage activity and configuration changes in an IT network. As needed, the IT administrator can access and use the CCC tool, which may provide deeper information than that provided by the SIEM. A SIEM tool can also integrate with external remediation, ticketing, and/or workflow tools to assist with the process of incident resolution. Furthermore, certain SIEMs include functionality for generating reports that help satisfy regulatory requirements (e.g., Sarbanes-Oxley, PCI-DSS, GLBA, or any other such requirement or standard such as any of those listed above). For these reasons, SIEM tools are becoming more widely adopted by IT administrators who desire to use a single, centralized interface for monitoring and managing their increasingly complex IT infrastructures. Logging tools can operate similarly to SIEM tools. Accordingly, for any of the embodiments disclosed below, a logging tool may take the place of a SIEM tool. For ease of readability, however, reference will typically be made to just a SIEM tool. An exemplary tool for logging and SIEM that is suitable for use with the disclosed technology is the Tripwire® Log Center tool available from Tripwire, Inc. III.
Example Agent Implementation FIG.2is a block diagram200further detailing the exemplary agent110introduced above regardingFIG.1. As shown inFIG.2, the agent110includes one or more local agent processes120that interact with a number of different components (e.g., components220,225,230,235,240,250,260, and270) to perform various agent functionalities. It should be readily understood to one of ordinary skill in the art that other examples of agents can include or omit some of the components illustrated inFIG.2. In some examples of the disclosed technology, the agent110provides a common platform for executing pluggable platform and/or native code in a manner that does not require a concurrently active connection to either the agent bridge160or agent data consumers190. By allowing unconnected operation, the agent110is better able to tolerate intermittent network connections, delays, and/or errors in the agent platform server150, agent data consumers190, or interconnecting networks. The agent110includes functionality for automatically adjusting the rate at which data on the host system is acquired based on, for example, currently-available host system resources including cache resources, host system workload, or other host system resources. In some examples, cached data can be resequenced based on priority changes and observed behavior of the host system. In some examples, the agent can automatically adjust and prioritize transmission of cached data to the agent bridge160, based on, for example, the amount of time the agent has been connected to the network, a network reconnection event, and/or using a pseudorandom number to determine when to send cached data to the agent bridge. In some examples, the adjusted rate is based on the amount of lag between messages in a spool (e.g., spooler lag can be defined by an agent as the amount of time between the oldest and newest unsent messages in a spool). In some examples, certain messages can be prioritized over others (e.g., messages carrying Security Content Automation Protocol (SCAP) data can be prioritized so that they are sent with higher priority than other types of messages). In some examples of the disclosed technology, the agent110is implemented in a microkernel-based operating system platform, while in other examples, the agent is implemented using a more traditional monolithic kernel. The agent can include an embedded scheduler (e.g., executed by the local agent process120or another process) that determines when to execute agent tasks, even when the agent is not connected to a bridge or server. In some examples, the agent110is a container-based agent that implements Federal Information Processing Standard (FIPS) cryptographic services for communicating and/or storing data. In some examples, information regarding FIPS containers, names, or other relevant FIPS fields are removed from data (e.g., before transmitting or storing FIPS data) to increase the difficulty of unauthorized decryption of FIPS communications and stored data. In some examples, the agent110includes autonomous configuration capabilities. For example, the agent110can determine software versions and installed hardware associated with its host system or with installed plugins and based on the determined software and hardware, negotiate a more detailed configuration with any of the agent data consumers190. In some examples, the agent110includes support for on-demand push down of plugin modules. 
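For illustration only, the spooler-lag-based rate adjustment described above might be sketched in C++ roughly as follows; the interval bounds and the 600-second tuning constant are assumptions, not values taken from the disclosure.

    #include <algorithm>
    #include <chrono>
    #include <iostream>

    using namespace std::chrono_literals;

    // Spooler lag, as described above: the time between the oldest and the newest
    // unsent message in the spool. A large lag means the agent is falling behind.
    std::chrono::seconds SpoolerLag(std::chrono::system_clock::time_point oldest_unsent,
                                    std::chrono::system_clock::time_point newest_unsent) {
        return std::chrono::duration_cast<std::chrono::seconds>(newest_unsent - oldest_unsent);
    }

    // Illustrative policy only (not taken from the disclosure): send faster as the
    // lag grows, and back off toward a maximum interval as the spool empties.
    std::chrono::milliseconds NextSendInterval(std::chrono::seconds lag) {
        const auto min_interval = 50ms;       // assumed floor when far behind
        const auto max_interval = 5000ms;     // assumed ceiling when caught up
        const auto lag_for_full_speed = 600s; // assumed tuning constant
        double behind = std::min(1.0, static_cast<double>(lag.count()) /
                                          static_cast<double>(lag_for_full_speed.count()));
        auto range = max_interval - min_interval;
        return max_interval -
               std::chrono::milliseconds(static_cast<long long>(range.count() * behind));
    }

    int main() {
        auto now = std::chrono::system_clock::now();
        auto lag = SpoolerLag(now - 120s, now); // oldest unsent message is two minutes old
        std::cout << "next send interval (ms): " << NextSendInterval(lag).count() << "\n";
    }

Here, a growing lag (the agent falling behind) shortens the send interval so the spool drains faster, and a small lag relaxes the interval toward its ceiling.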
In some examples, the agent 110 includes the capability to automatically switch to different pre-designated endpoints by switching to particular ports and/or bridges. In some examples, the compliance server 191 communicates a desired spool depth to agents, which in turn adjust the rate at which data is sent to the server. In some examples, when a spool associated with an agent becomes completely full, the agent can insert a mark in the spool and then, once space in the spool becomes available, peel off logs when data transmission resumes. As shown in FIG. 2, the agent 110 includes an asynchronous service module 220 for controlling and coordinating asynchronous services, for example, processing of asynchronous messages received from and sent to the agent bridge. The asynchronous service module 220 can employ a number of asynchronous input/output (I/O) threads 255 for performing these tasks. An agent information module 225 is used to send messages with information about the agent and its associated plugins, including identification information (e.g., one or more UUIDs), catalogs of available messages the agent is capable of consuming or producing, and other agent information. A message dispatcher 230 sends messages between an agent bridge (e.g., via a bridge connector) and agent plugins. In some examples, the message dispatcher 230 can send commands to an agent spooler. A message builder 235 is used to build messages sent by the message dispatcher, including envelopes for such messages. A plugin manager 240 includes a number of plugin connectors 245-247 for connecting the agent to its plugins. A thread manager 250 is used to manage agent threads (e.g., bridge writer threads, plugin manager threads, asynchronous I/O threads, or other agent threads). A bridge connector 260 is used to connect to one or more agent bridges and send messages from, for example, the message builder. A multi-file spooler 270 includes multiple spool files 275-277 that can store data from the plugin manager before the data is sent to, for example, one or more of the agent bridges. In some examples of the disclosed technology, agents are designed to provide multi-platform functionality, thus allowing developers to develop agents for, e.g., both Windows and POSIX platforms concurrently. In some examples, agents and their corresponding plugins are written in C++ using multi-platform libraries and coding methodologies. In some examples, using languages such as C++ allows for a smaller agent memory footprint than agents implemented using other languages, e.g., Java. In some examples, one or more agents (e.g., agents 110-112), agent bridges (e.g., agent bridge 160), and/or agent data consumers 190 (e.g., compliance server 191) can be co-located on the same computer system. In other examples, each of the agents, agent bridges, and compliance servers is installed on a separate computing system that is connected using a network or other communication means, or is installed within separate virtual machines connected on a single computing system. In some examples of the disclosed technology, the agent is executed as a non-root/non-administrator user. This provides additional security by restricting access, but in some deployments, it may be desirable to allow limited administrator access to the agent and/or a subset of agent plugins to, for example, allow access to administrator resources (e.g., to access the Windows Event Log (WEL)).
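The routing role played by the message dispatcher 230 between the bridge connector and the plugin connectors can be pictured with a minimal, illustrative C++ sketch; the Message struct and the handler registration shown here are simplifications (the disclosure describes Protobuf-based messaging in some examples).

    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>

    // Hypothetical message type used only for this sketch.
    struct Message {
        std::string type;     // e.g., "WelData", "TailConfigResponse"
        std::string payload;
    };

    // Minimal dispatcher in the spirit of message dispatcher 230: bridge-bound
    // messages are handed to registered plugin handlers by message type, and
    // plugin-produced messages are forwarded to a bridge/spooler callback.
    class MessageDispatcher {
    public:
        using Handler = std::function<void(const Message&)>;

        void RegisterPluginHandler(const std::string& type, Handler handler) {
            plugin_handlers_[type] = std::move(handler);
        }
        void SetBridgeWriter(Handler writer) { bridge_writer_ = std::move(writer); }

        // Message arriving from the agent bridge, destined for a plugin.
        void DispatchFromBridge(const Message& m) {
            auto it = plugin_handlers_.find(m.type);
            if (it != plugin_handlers_.end()) it->second(m);
            else std::cerr << "no plugin consumes " << m.type << "\n";
        }
        // Message produced by a plugin, destined for the spooler/bridge.
        void DispatchFromPlugin(const Message& m) {
            if (bridge_writer_) bridge_writer_(m);
        }

    private:
        std::map<std::string, Handler> plugin_handlers_;
        Handler bridge_writer_;
    };

    int main() {
        MessageDispatcher d;
        d.SetBridgeWriter([](const Message& m) { std::cout << "spool/bridge <- " << m.type << "\n"; });
        d.RegisterPluginHandler("WelConfigRequest",
                                [&d](const Message&) { d.DispatchFromPlugin({"WelConfigResponse", ""}); });
        d.DispatchFromBridge({"WelConfigRequest", ""});
    }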
The agents can communicate to the bridge using, for example, a proxy provided that supports the SOCKS5 protocol, although other protocols can be employed. In some examples, it is desirable to utilize authentication features provided by the network protocol to limit access to, for example, the bridge and/or compliance server to authenticated agents. In some examples, the SOCKS5 proxy used can be previously installed by a system administrator, and be used to support other communications unrelated to agent traffic. One desirable aspect of not including a proxy server within an agent is that the attack surface of the agent is reduced, as there is no open SOCKS5 port for attackers to attempt to attack. In some examples, the spooler270is supplemented by a parallel Last-In First-Out buffer (LIFO) for certain types of messages. For example, because consumers of SCAP information often prioritize the most recent data available over older data, the agent can use a LIFO as a second spool for data coming from, e.g., an OVAL plugin, such that the newest messages are transmitted to the server first. FIG.3is a block diagram300that further illustrates variations and details regarding the architecture of the exemplary agent110discussed above regardingFIGS.1and2. A. Agent Identification In some examples of the disclosed technology, agents can use a unique identifier (e.g., a UUID (Universally Unique Identifier)), to identify themselves. The agent self-generates its unique identifier. The unique identifier is used to identify messages arriving to the bridge, as well as allowing for the routing of messages from server-side components to the agent. The unique identifier is independent of any network addresses (e.g., IPv4 or IPv6 addresses or other network addresses). In some examples, the unique identifier is associated with a set of MAC addresses associated with network interfaces discovered on the agent's host system. When one or more network addresses (e.g., an IP address) on a system change, the agent can make note of this fact, but this does not substantially change operation of the agent. The agent can send IP addresses and associated names to the server for informational purposes, but identification of the agent by the server is primarily, if not exclusively, based on the unique identifier. An agent's identifier is not changed when the IP addresses of the agent's host system changes. B. Local Agent Process Each agent is controlled by a local agent process (e.g., local agent process120). The local agent process can control, for example, the bridge connector260, message dispatcher230, plugin manager240, and spooler270shown inFIG.3. The message dispatcher230controls communication flow between the plugin manager240, the bridge connector260, and the spooler270. The message dispatcher230communicates with a message builder235for forming the message, and can include a number of bridge handlers236and plugin handlers237that are configured to operate with particular bridges and plugins, respectively. The bridge connector260is used to connect to and send messages to and from the agent bridge160. As shown inFIG.3, the bridge connector260includes an asynchronous heartbeat timer265, and an asynchronous connection timer266. The asynchronous heartbeat timer265is used to determine intervals between sending “heartbeat” messages to the agent bridge160, as will be further detailed below. 
The asynchronous connection timer 266 is used to reset a connection attempt to the bridge if the connection is not achieved by the end of a specified timeout period. Also shown is an agent configuration manager 310, which is responsible for reading configuration data (e.g., configuration data stored in one or more configuration files 315) to determine how to configure the agent. Parameters that can be configured with the configuration data include, but are not limited to, timer and heartbeat time intervals, spooler configuration, plugin configuration, and security and encryption parameters. In some examples, the agent configuration manager 310 is responsible for searching for and invoking plugins by sending commands indicating enabled and/or disabled plugins to the plugin manager 240. C. Agent Message Catalog The agent 110 can also be configured to publish a catalog of messages that it consumes and produces. The agent 110 does not need to publish the plugins that are consuming and/or producing information. In this way, the actual plugins being used are not shown to the consumers. If a plugin becomes disabled, then the messages associated with that plugin are removed from the catalog. If a plugin is configured for on-demand loading, then messages associated with that plugin will be left in the catalog when the plugin is not running. The agent message catalog can be communicated using the agent heartbeat as a list of capabilities. Also shown is a security manager 320, which is responsible for configuring security and encryption of messages sent to/from the agent 110, as well as storage and management of encrypted data. The security manager 320 stores related data (e.g., security configuration data and encryption keys) in one or more cryptography files 325. Because the data to be sent is stored in the spool file, the agent manager 180 can send data while a plugin is disabled (or enabled). The agent can be configured dynamically using, for example, Domain Name System (DNS) Service records (SRV records) or a configuration file. In some examples, using DNS SRV records for configuration is preferred when data for a particular DNS domain is sent to a single compliance server. The configuration file setup technique may be preferred when different machines in the same domain will connect to different compliance servers. In some examples, a provisioning service can be used that informs agents about their initial configuration information, configures the agents with specific combinations of plugins, or provides an upgrade of agent or plugin executable code. In some examples, a reconciliation service can be used to match previous agent identifiers and operating system information with current identifiers and current operating system information. This reconciliation service ensures continuity in the data and logging information stored in the agent data consumers 190. Agents can be installed on individual target machines using, e.g., the host operating system's native packaging tools. For example, Windows targets can install agents using the Microsoft MSI installer, while Linux targets can use Red Hat Package Manager (RPM). Using native system tools allows for easy deployment and upgrade of agents using mechanisms such as Active Directory. The core agent component and associated plugins can each be versioned independently. On startup, the agent collects data including a list of IP addresses assigned to the host and domain names associated with the list of IP addresses, and performs a lookup for associated DNS SRV records. D.
Agent Plugins The functionality of agents (e.g., agents110-112) can be expanded with the use of agent plugins. For example,FIG.2illustrates a number of plugins that are connected to the local agent process through the use of a plugin manager. Plugins can be written in any suitable computer language (e.g., C++) using multi-platform libraries and coding methodologies, which allows for the sharing of the vast majority of agent code across different host platforms. In some examples, other languages besides C++ can be used to implement plugins, provided that support is available for the messaging layer used to connect the agent to its agent bridge. In some examples, Google Protobuf is used as the messaging layer. As shown inFIG.3, the agent110includes a plugin manager240that controls execution of plugins, and routes messages between plugins (e.g., via plugin connectors245-247), the message dispatcher230, and the spooler270. The plugin manager also includes a number of capability maps (e.g., capability map341) that list the types of messages and services that are produced and consumed by the connected plugins. The plugin maps can be used to build catalog entries, thereby advertising services available from the agent without exposing additional details regarding the plugins, using, e.g., ConfigRequest messages. The loading of plugins can be controlled using a rule set, which specifies which plugins to load and connect to the agent, and the order in which to load plugins. In some examples, some plugins can be chained together, thereby providing a degree of modularization. The plugin manager thread257is used to send data to the plugins indicating the current lag of the spooler (or “delta”) to the plugin manager240for communication to individual plugins via the plugin connector245. Each plugin can have a contract (e.g., an automatically negotiated contract) with the spooler270to respond in a certain period of time, and then return to sleep. In some examples, agents can load plugins on an on-demand basis. In some examples, the agent provides a quarantine functionality to limit resources and/or data that can be accessed by one or more installed plugins. In some examples, plugins can include hooks to allow the plugin to be used across multiple operating system platforms (e.g., by handling both Windows and POSIX hooks). Plugin Startup Plugin startup can be initiated based on a number of events, for example, when an agent (e.g., agent110) is initialized or after a plugin dies. In some examples, an agent initializes a plugin by issuing a command including arguments sufficient to describe the desired plugin operation (e.g., by passing command line arguments to initialize a plugin process and specifying a path to a configuration file, a path to a log file, read pipe identifier, and/or a write pipe). In some examples, the agent110and its associated plugins (e.g., plugins130-135) can communicate with each other using other interprocess communications mechanisms provided by the operating system including, but not limited to, shared memory, anonymous pipes, UNIX pipes, streams, or sockets. In some examples, some or all of the plugin startup is delegated to the plugin manager thread257, which can also route communications between the plugin manager240and the spooler. The configuration file can include information describing messages that the plugin will receive and send as well as expected behaviors for interacting with the agent that initiated the plugin. 
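A minimal, POSIX-only sketch of the startup handoff just described appears below. The flag names, the "./fim_plugin" path, and the error handling are assumptions for illustration; the disclosure only requires that a configuration path, a log path, and pipe identifiers be passed to the new plugin process.

    #include <unistd.h>
    #include <sys/types.h>
    #include <cstdio>
    #include <string>

    // Illustrative sketch: the agent creates a pipe pair, forks, and execs a
    // (hypothetical) plugin binary, passing the configuration path, log path, and
    // pipe descriptors as command-line arguments.
    pid_t StartPlugin(const std::string& plugin_path,
                      const std::string& config_path,
                      const std::string& log_path) {
        int to_plugin[2];    // agent writes, plugin reads
        int from_plugin[2];  // plugin writes, agent reads
        if (pipe(to_plugin) != 0 || pipe(from_plugin) != 0) return -1;

        pid_t pid = fork();
        if (pid == 0) {  // child: becomes the plugin process
            std::string read_fd = std::to_string(to_plugin[0]);
            std::string write_fd = std::to_string(from_plugin[1]);
            execl(plugin_path.c_str(), plugin_path.c_str(),
                  "--config", config_path.c_str(),
                  "--log", log_path.c_str(),
                  "--read-fd", read_fd.c_str(),
                  "--write-fd", write_fd.c_str(),
                  static_cast<char*>(nullptr));
            _exit(127);  // exec failed
        }
        // Parent: close the pipe ends the agent does not use.
        close(to_plugin[0]);
        close(from_plugin[1]);
        return pid;
    }

    int main() {
        pid_t pid = StartPlugin("./fim_plugin", "/etc/agent/fim.conf", "/var/log/agent/fim.log");
        std::printf("started plugin pid=%d\n", static_cast<int>(pid));
    }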
Passing arguments at plugin initialization provides the plugin enough information to be fully functional once execution begins. In some examples, messages sent between agents 110-112, the agent platform server 150, and/or the agent data consumers 190 are compressed. Further, plugins can compress data before transferring it to their agent, and messages stored in the spooler 270 can be compressed. Compressing transmitted messages and/or stored messages can reduce network load and/or reduce the spooler 270 storage capacity requirements. Huffman encoding, Lempel-Ziv encoding, or other suitable compression techniques can be used to compress the messages. In some examples, executable code for one or more of the plugins can be located in a specified directory on the agent host computer and then discovered and loaded by automatically traversing the specified directory and/or sub-directories for the plugin executables. In some examples, messages are passed between an agent and its plugins using message envelopes. Messages sent from a plugin can be numbered and addressed by the associated plugin process. The messages can be addressed using a four-part scheme:
sender_uuid: the agent identifier from a handshake request
message_type: a string value that equates to a specific message type
sequence_major_number: a sequence major number provided by the agent
sequence_minor_number: a number that can start at an arbitrary value and be increased by the plugin for sending subsequent messages
Together, the sender_uuid, message_type, sequence_major_number, and sequence_minor_number can be used to form envelope addresses. In some examples, a plugin can be added to the agent to handle envelope requirements. By using a major and a minor sequence number, creating file system updates for every message can be avoided, while still providing enough information such that messages can be uniquely identified in a message spool. This enables agents to find and perform message ResendRequest operations on the spooled messages. Upon invocation of a plugin, the agent assigns a new sequence_major_number, thereby making all the messages that the plugin creates, and that the agent writes to the spool, unique even after restart of a plugin or its corresponding agent. An example of major and minor numbers used to address plugin messages is shown in Table 1 below.

TABLE 1
Message              Major Number    Minor Number
TailConfigResponse   5               1
TailFileData         5               1 . . . 101
(Plugin Restart)
TailFileData         6               1 . . . 99
TailConfigResponse   6               1
TailFileData         6               100 . . .

In some examples of the disclosed technology, a number of different message types from an agent can be received by its plugins, including handshake requests, status requests, and shutdown requests, as discussed further below. Handshake Request and Handshake Response Upon startup, a plugin receives a handshake sent from its corresponding agent. The handshake can include data such as the agent's corresponding identifier (e.g., UUID), a sequence major number, and a data directory identifier. Responsive to receiving the handshake from its agent, the plugin responds with a handshake response message. The handshake response message includes: a plugin identifier (e.g., the name of the plugin), a description of the plugin's capabilities, and a description of messages that will be consumed by the plugin and produced by the plugin. For example, the plugin capabilities list can include a list of message types that can be accepted by the plugin, and the list of plugin messages produced can include configuration and data response message types.
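The four-part addressing scheme and the major/minor sequencing reflected in Table 1 can be sketched as follows; the class names and the starting major number are illustrative assumptions.

    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>

    // Four-part envelope address, following the scheme described above. The field
    // names match the text; the struct layout itself is illustrative.
    struct EnvelopeAddress {
        std::string sender_uuid;         // agent identifier from the handshake request
        std::string message_type;        // e.g., "TailFileData"
        uint64_t sequence_major_number;  // assigned by the agent per plugin session
        uint64_t sequence_minor_number;  // incremented by the plugin per message type
    };

    // Minimal sequencer reproducing the Table 1 pattern: the agent advances the
    // major number whenever a plugin is (re)started, and the plugin keeps an
    // independent minor counter per message type within that session.
    class PluginSequencer {
    public:
        explicit PluginSequencer(std::string agent_uuid) : agent_uuid_(std::move(agent_uuid)) {}

        void OnPluginStart() {            // called by the agent on plugin (re)start
            ++major_;
            minors_.clear();
        }
        EnvelopeAddress Next(const std::string& message_type) {
            return {agent_uuid_, message_type, major_, ++minors_[message_type]};
        }

    private:
        std::string agent_uuid_;
        uint64_t major_ = 4;              // arbitrary starting value for the example
        std::map<std::string, uint64_t> minors_;
    };

    int main() {
        PluginSequencer seq("3f2a-placeholder-uuid");  // placeholder agent UUID
        seq.OnPluginStart();                           // session with major number 5
        auto a = seq.Next("TailConfigResponse");       // (5, 1)
        auto b = seq.Next("TailFileData");             // (5, 1)
        seq.OnPluginStart();                           // plugin restart -> major number 6
        auto c = seq.Next("TailFileData");             // (6, 1)
        std::cout << a.sequence_major_number << "-" << a.sequence_minor_number << " "
                  << b.sequence_major_number << "-" << b.sequence_minor_number << " "
                  << c.sequence_major_number << "-" << c.sequence_minor_number << "\n";
    }

As in Table 1, restarting the plugin advances the major number and restarts the per-message-type minor counters, so every spooled message remains uniquely addressed.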
Status Requests Agents (e.g., agent 110) can periodically send status messages to one or more of their associated plugins. The plugin in turn responds with a StatusResponse message within a predetermined time period, to notify the agent that the plugin is operating correctly. The status request can include an indication of the number of seconds between the time (e.g., the wall clock time) of the last message written to the spool and the time of the last message read and sent to the bridge. This provides the plugin with an indication of the state of the agent. For example, if the agent has sent all the messages in its spool, then the message delta is relatively small and sending another small message from the plugin is not expected to burden the agent. Conversely, if the agent has not sent a message to the bridge for some time (e.g., because the agent is disconnected from its bridge, or the agent is behind in sending data to the bridge), then the plugin can choose to buffer more data, thereby creating a larger message before sending to the agent. Sending fewer messages of larger size (in some examples, up to about 1 Megabyte) incurs less network transmission overhead than sending more messages of a smaller size. In some examples, plugin StatusResponse messages include the plugin name and a description of its current configuration. This information is collected and stored by the agent manager 180 (e.g., by receiving StatusResponse messages from the plugin manager 240 via the agent's bridge connector 260). By collecting the plugin name and configuration description, a compliance server (e.g., agent data consumers 190) can determine whether a particular plugin has an incorrect or outdated configuration, and address the configuration (by, e.g., sending a new plugin configuration to the agent). Shutdown In some examples of the disclosed technology, plugins can be shut down as follows. A plugin's host agent sends a Shutdown message to the plugin instructing it to shut down. The plugin persists its state (e.g., by storing state information in a computer-readable storage media coupled to the agent) and shuts down. A brief period of time after sending the Shutdown message, the agent closes pipes to/from the plugin and sends the plugin's associated process a SIGTERM signal. ConfigRequest In some examples of the disclosed technology, the capabilities of plugins can be enhanced using a ConfigRequest to exploit the autonomous nature of agent operations described herein. In a ConfigRequest pattern, a plugin provides a ConfigRequest capability. The plugin responds to ConfigRequests by sending ConfigResponse messages and, when configured to do so, additional data with the ConfigResponse. In some examples of the disclosed technology, a ConfigRequest message for a plugin includes a serial number and a description of a configuration. When a new ConfigRequest is received by a plugin, the plugin replaces its current configuration with the new configuration described in the ConfigRequest. Plugins can store their current configuration and runtime state so that the plugin can resume operation using the current configuration and runtime state when the plugin is restarted. ConfigResponse The plugin responds with a ConfigResponse message, which tells the server that a new serial number configuration was received and processed. If the plugin cannot service the entirety of the ConfigRequest, it can include error information in the ConfigResponse. The requested configuration may have been partially applied or rejected completely, as defined by the plugin.
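Returning to the status requests described above, one way a plugin might translate the reported spooler delta into a batch size is sketched below; the thresholds are assumptions, and only the rough 1 Megabyte upper bound comes from the description.

    #include <cstddef>
    #include <iostream>

    // Illustrative batching policy based on the spooler delta carried in a status
    // request (seconds between the last message written to the spool and the last
    // message read and sent to the bridge).
    std::size_t TargetBatchBytes(unsigned spooler_delta_seconds) {
        constexpr std::size_t kSmallBatch = 16 * 1024;       // agent is keeping up
        constexpr std::size_t kLargeBatch = 1024 * 1024;     // agent far behind (~1 MB cap)
        if (spooler_delta_seconds < 10) return kSmallBatch;  // spool nearly drained
        if (spooler_delta_seconds < 300) return 256 * 1024;  // moderately behind
        return kLargeBatch;                                  // disconnected or far behind
    }

    int main() {
        std::cout << TargetBatchBytes(2) << " " << TargetBatchBytes(120) << " "
                  << TargetBatchBytes(900) << "\n";
    }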
Data Messages Once a configuration is accepted and applied, the plugin begins sending data messages to the agent data consumers 190 via the agent bridge 160. Heartbeat Messages The agent periodically sends heartbeat messages. The heartbeat messages can contain information including, but not limited to, the current wall clock time, the current spooler minor/major number, currently-available messages that can be sent to/from plugins for servicing, and/or messages that cannot currently be serviced by any plugin on the agent. E. Agent Spooler FIG. 3 further outlines the capabilities and structure of the agent spooler 270. The agent spooler 270 includes a number of subcomponents, including a spool file manager 370, which includes one or more spool files 380, a message searcher 372 for searching for data in the spool files, and a priority queue 375. The spooler stores data as a number of relatively small (e.g., 32 Megabyte) files on disk to form a complete spool (e.g., a 1 Gigabyte spool comprising 32 files of 32 Megabytes each). Storing spooled data in small files can be desirable, as it limits data loss in the event of inadvertent agent or plugin shutdown or corruption of a smaller 32-Megabyte spool file. Within a spool file, the data can be further segmented into a series of headers and data, where the header indicates a variable length of the data in the segment. As shown in FIG. 3, the spool files 380 are stored in a number of computer-readable storage media and categorized according to the state of the data in the individual files, for example, completed files 381, pending files 382, the current read file 383, and the current write file 384. Further, markers indicating, e.g., the current read and write position of the spool files can be stored in a read position file 385. As will be readily understood to one of ordinary skill in the art, the spool files 380 are not limited to storage in, for example, a hard drive, but can also be stored in a solid state drive (SSD), non-volatile or volatile memory, a database, or other suitable storage means. The spool file manager manages the various spool files 381-385 illustrated in FIG. 3. Completed files 381 are spool files that contain messages that have already been sent to the agent bridge 160. Pending files 382 are spool files that contain unread messages ready to be read and sent to the agent bridge 160. The current read file 383 is an open spool file that is being read by the bridge writer thread 256 and concurrently forwarded to the bridge connector 260. After each message is read, the offset of the previously read message and the spool file name is updated in the read position file 385. This offset and file name lag the last message read by one, thereby enabling recovery if the bridge connector 260 is disconnected from the agent bridge 160 while a message is in flight (being sent). When the connection is reestablished, the message that was in flight is sent again. The agent platform server 150 and the agent data consumers 190 are configured to tolerate duplicate messages. The current write file 384 is the spool file that is open for writing of messages received from the plugins via the message dispatcher 230. At times, the current read file 383 and current write file 384 can be the same file.
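The read-position bookkeeping described above (persisting the position of the previously read message so that it lags the last message sent by one) can be sketched as follows; the one-line file format is an assumption.

    #include <cstdint>
    #include <fstream>
    #include <string>
    #include <utility>

    // Sketch of the read-position bookkeeping: after each message is read and handed
    // to the bridge connector, the position of the *previous* message is persisted,
    // so a crash or disconnect replays at most the in-flight message.
    class ReadPositionTracker {
    public:
        explicit ReadPositionTracker(std::string position_path)
            : position_path_(std::move(position_path)) {}

        // Called after a message at (spool_file, offset) has been read and sent.
        void OnMessageSent(const std::string& spool_file, std::uint64_t offset) {
            if (has_previous_) Persist(prev_file_, prev_offset_);  // lags by one message
            prev_file_ = spool_file;
            prev_offset_ = offset;
            has_previous_ = true;
        }

    private:
        void Persist(const std::string& file, std::uint64_t offset) {
            std::ofstream out(position_path_, std::ios::trunc);
            out << file << " " << offset << "\n";  // assumed "name offset" format
        }
        std::string position_path_;
        std::string prev_file_;
        std::uint64_t prev_offset_ = 0;
        bool has_previous_ = false;
    };

    int main() {
        ReadPositionTracker tracker("read_position.txt");
        tracker.OnMessageSent("spool_0007.dat", 0);     // nothing persisted yet
        tracker.OnMessageSent("spool_0007.dat", 4096);  // persists spool_0007.dat, offset 0
    }

Because the persisted position always trails by one message, a reconnect replays at most the message that was in flight, which the agent platform server and agent data consumers are configured to tolerate.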
The spool files 381-385 illustrated in FIG. 3 are one suitable configuration for use with the spooler 270, but it will be readily understood by one of ordinary skill in the art that other suitable spool file configurations can be employed. Messages in the spooler 270 can be identified using a unique session identifier (e.g., unique at least for a particular session of operation on a particular agent). In some examples, the unique session identifier includes a major and a minor number. The major number is a unique identifier for the plugin session. The minor number is unique for each message of that type within the corresponding major number run session. In some examples of the disclosed technology, session identifiers are timestamp agnostic; in other words, the current time on the host computer is not relevant to the ordering of messages according to session identifiers. The session identifiers can be used for relative positioning and sequencing of the messages. In some examples, the spooler 270 can be configured to overwrite a selected portion of the spool (e.g., 20% of the spool). In some examples, there is one spool file per plugin, while in other examples, two or more plugins share the same spool file. In some examples, message data are encrypted. In some examples, the integrity and authenticity of messages can be verified using HMAC (Hash-based Message Authentication Codes) or other suitable methods to prevent tampering with messages sent by the agent or plugins. The spooler 270 supports disconnected operations. The supported disconnected operations include spooling when some plugins are disabled and during network disconnections (e.g., intentional or unintentional loss of network connectivity to the agent bridge). By spooling data, the agent and its plugins can operate semi-autonomously. Plugins can receive data and configuration information and perform operations (e.g., vulnerability scans, monitoring watched logs, or other operations) on a regular basis, and the data can be returned regardless of whether the plugin is enabled or disabled. When the agent 110 reconnects with the bridge 160, it can send all or a portion of its spooled data to one or more of the agent data consumers 190 via the agent bridge 160. In some examples, the sizes of the spool file(s) are determined based at least in part on the rate at which messages are being sent. For example, if 0 to 5 messages are being sent, a 100 MB spool file may be sufficient, while sending a larger number of messages may consume a 1 GB or even larger spool file, depending on the amount of data spooled and associated overhead. Data stored in the spool files can be secured using, e.g., obfuscation by compression, or encryption. In some examples, spool file data is not encrypted, but data in messages sent from the agent to the bridge is encrypted using, e.g., transport layer security (TLS) encryption. The agent should have at least some read access to the spooler data in order to determine, e.g., spool information such as message type, sequence major/minor numbers, and time stamps. Upon agent startup, the agent spooler 270 creates a new spool file, so that in the event the end of the previous spool file was corrupted, new data will be added to an uncorrupted file, thereby avoiding appending data to a corrupt spool file that may be unreadable. The priority queue 375 can be used to route some messages to the bridge faster, according to priorities assigned to the messages or plugins producing the messages. For example, for a plugin that processes SCAP data, the most recent messages are the most important and thus are desirable to send earlier than lower-priority data. In some examples, all the higher priority messages in the priority queue 375 are sent first, while in other examples, the sending of messages is load balanced so that at least some of the lower-priority messages are sent before the higher priority messages are sent. In some instances, there may be more than one instance of the agent spooler 270. For example, spooler instances can be dedicated to individual plugins, or shared by related plugins.
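A minimal sketch of the priority-queue behavior described above, including the load-balanced variant in which some lower-priority messages are interleaved, is shown below; the every-fourth-slot rule is an arbitrary illustration of one possible balancing policy.

    #include <iostream>
    #include <queue>
    #include <string>
    #include <utility>
    #include <vector>

    // Illustrative outbound queue in the spirit of priority queue 375: high-priority
    // messages (e.g., recent SCAP/OVAL results) normally go first, but every Nth send
    // is taken from the low-priority side so that bulk data is never starved.
    struct Outbound {
        int priority = 0;      // larger = more urgent
        std::string payload;
    };
    struct ByPriority {
        bool operator()(const Outbound& a, const Outbound& b) const { return a.priority < b.priority; }
    };

    class FairSender {
    public:
        void Enqueue(Outbound m) { (m.priority > 0 ? high_ : low_).push(std::move(m)); }

        bool Next(Outbound* out) {
            ++tick_;
            bool favor_low = (tick_ % 4 == 0) && !low_.empty();  // every 4th slot, if available
            auto& q = (favor_low || high_.empty()) ? low_ : high_;
            if (q.empty()) return false;
            *out = q.top();
            q.pop();
            return true;
        }

    private:
        std::priority_queue<Outbound, std::vector<Outbound>, ByPriority> high_, low_;
        unsigned tick_ = 0;
    };

    int main() {
        FairSender s;
        s.Enqueue({2, "OVAL result (newest)"});
        s.Enqueue({0, "FIM baseline chunk"});
        s.Enqueue({1, "WEL batch"});
        Outbound m;
        while (s.Next(&m)) std::cout << m.payload << "\n";
    }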
IV. Example Agent Platform Server As shown in FIG. 1, the agent platform server provides an agent bridge 160 that can receive messages from agents (e.g., agents 110-112), a message broker 170 (including one or more message queues 175-178), and an agent manager. The message broker 170 can be used to route messages between the agent bridge 160 and any of the agent data consumers 190. Agents (e.g., agents 110-112) can establish a network connection (e.g., to a number of agent data consumers 190, including the compliance server 191, the log server 192, the policy server 193, the change management server 194, etc.) to the agent bridge 160 that is hosted on an agent platform server. The agent connection to the agent bridge 160 can be encrypted using, e.g., Transport Layer Security, Secure Sockets Layer, or another suitable encryption scheme. In TCP/UDP examples, the default port used to connect to the bridge is port number 5670, but other suitable network ports can be employed. To provide additional security, the agent can be configured so that it does not listen for incoming connections from the bridge or compliance server. Instead, the agent initiates communication to these network targets. Agents can use the underlying operating system routing information to determine how to connect to the agent bridge. Since agents create connections to the bridge, no return routing (from the bridge to an agent) is necessary. Once the connection is established, messages are sent in both directions. For example, an agent bridge can send advisory messages to its associated agents instructing the agents to hold messages, or to hold particular types of messages, until a subsequent advisory message is sent indicating that the agent should resume sending of messages (or resume sending particular types of messages). The agents 110-112 and the agent bridge 160 can each establish identifiers (e.g., UUIDs) for uniquely identifying the agents and bridges, thereby avoiding reliance on other identifiers, such as IP addresses or MAC addresses, that may change or may not provide a unique identifier in virtual environments. In some operational scenarios, one or more agent data consumers 190 may be shut down for maintenance or may be temporarily off-line and undergoing a fault recovery operation. The agent bridge 160 monitors the message queues to determine whether the agent data consumers 190 are removing messages in a timely fashion. If message removal slows or stops, the agent bridge 160 sends advisory messages to all agents informing them to stop or restart specific message types. This provides a level of fairness for all messages such that a non-operational agent data consumer does not block the traffic of messages to other operational agent data consumers. The message fairness delivery algorithm adds resiliency and robustness to the communication channel from the plugins, through the agent and the spooler, through the connection to the agent bridge, to the message broker, and to the agent data consumers. In some examples, the agent 110 can have more than one spooler 270 dedicated to a particular plugin or message type, facilitating message delivery fairness.
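One way the bridge's queue monitoring and advisory behavior might be sketched is shown below; the high- and low-water marks and the Advisory shape are assumptions rather than details taken from the disclosure.

    #include <cstddef>
    #include <iostream>
    #include <map>
    #include <string>

    // Hypothetical advisory sent to agents: hold or resume a specific message type.
    struct Advisory {
        std::string message_type;
        bool hold;  // true = stop sending this type, false = resume
    };

    // Minimal sketch of per-consumer queue monitoring: when a queue stops draining,
    // hold the corresponding message type; when it drains again, resume it.
    class QueueMonitor {
    public:
        // Called periodically with the current depth of the queue feeding one consumer.
        // Returns true and fills *out when the hold/resume state changes.
        bool Check(const std::string& message_type, std::size_t queue_depth, Advisory* out) {
            const std::size_t kHoldAt = 10'000;   // assumed high-water mark
            const std::size_t kResumeAt = 1'000;  // assumed low-water mark
            bool& held = held_[message_type];
            if (!held && queue_depth >= kHoldAt) { held = true;  *out = {message_type, true};  return true; }
            if (held && queue_depth <= kResumeAt) { held = false; *out = {message_type, false}; return true; }
            return false;
        }

    private:
        std::map<std::string, bool> held_;
    };

    int main() {
        QueueMonitor monitor;
        Advisory a;
        if (monitor.Check("WelData", 12'000, &a))  // consumer not draining: hold WelData
            std::cout << (a.hold ? "hold " : "resume ") << a.message_type << "\n";
        if (monitor.Check("WelData", 500, &a))     // consumer caught up: resume WelData
            std::cout << (a.hold ? "hold " : "resume ") << a.message_type << "\n";
    }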
Messages (e.g., messages from any of the agent data consumers 190) are sent to and from agents via the agent bridge 160. The agent's bridge subscribes to a topic through which all agent-bound messages travel. Messages can be transported within an envelope. The envelope includes source and/or destination information, and in some examples includes a time stamp, the associated agent identifiers (e.g., UUIDs), the message type, the sequence major and/or minor numbers, or other information that can be used to route and process the messages contained in the envelope. In some examples, the agent platform server 150 is hosted on a separate server from any of the agents 110-112, while in other examples, the agent platform server 150 can reside on the same server as the agents 110-112 and/or the agent data consumers 190. In some examples, the agent platform server 150 resides in the same local area network (LAN) as the agents 110-112 and/or the agent data consumers, while in other examples, the agent platform server 150 resides at a separate location or in a computing cloud. A. Agent Bridge The agent bridge 160 receives and sends messages from and to agents (e.g., agents 110-112). The agent bridge 160 can notify other components when connections to agents 110-112 are created or destroyed, using, e.g., AgentConnect and AgentDisconnect messages. The agent bridge 160 can generate a bridge identifier (e.g., a UUID) for itself that can be used by the agents 110-112 and the agent data consumers 190 to uniquely identify the agent bridge in subsequent messages. AgentConnect/AgentDisconnect When an agent connects to the bridge, the bridge creates an AgentConnect message. An agent manager (e.g., executing on one or more of the agent data consumers 190) subscribes to these messages. Similarly, when an agent disconnects from the bridge, the bridge creates an AgentDisconnect message. The agent manager 180 subscribes to these messages. B. Agent Manager In some examples of the disclosed technology, the computing environment 100 includes the agent manager 180, which can execute on, for example, the agent platform server, or another suitable environment. The agent manager can manage the status of agents and provide information for agent data consumers 190 upon request. The agent manager 180 sends messages to the agent 110 to configure the agent's associated plugins, and processes data received from the agent 110. Some of the types of agent data that can be tracked by the agent manager include: heartbeats, host names, IPv4/IPv6 addresses, capabilities, capability configurations (capability and serial number), and host platform. In some examples, configuration messages can be initiated by the agent 110 itself, or by one of the agent data consumers 190. OnlineAgentsRequest/OnlineAgentsResponse In some examples of the disclosed technology, the agent manager 180 can also provide OnlineAgentsRequest services. One or more of the agent data consumers 190 sends an OnlineAgentsRequest message to get a list of online agents. The agent manager 180 responds to this request by sending an OnlineAgentsResponse message.
The OnlineAgentsResponse message can be used in conjunction with the ongoing AgentOnline messages to track online agents. The agent manager180sends AgentOnline messages when, for example, an agent connects to the agent bridge or when an agent's capabilities change. In the event that a plugin dies and its associated agent does not restart the plugin, the capabilities for that plugin will no longer be included in catalog published with the associated agents' heartbeat. Upon recognizing the change, the agent manager180can publish a new AgentOnline message for the agent. This allows the agent data consumers190to discover new plugins or determine that plugins are no longer operational, based on changes in the message catalog reported in the heartbeat messages. C. Message Broker The agent platform server150can use the message broker170for distributing messages between the agent bridge160, the agent manager180, and the agent data consumers190. The message queues175-178allow messages to be temporarily stored before sending on to their destination, and can be used to buffer traffic in the event of a failure in the connecting network, one or more of the agents110-112, or one or more of the agent data consumers190. V. Example Agent Data Consumers FIG.1illustrates a number of agent data consumers190, including a compliance server191, a log server192, a policy server193, a change management server194, a file integrity monitoring server195, etc. Although a finite number of agent data consumers190are shown inFIG.1, it will be readily understood to one of ordinary skill in the art that any number of servers can consume agent data, and that some of the agent data consumers190can be omitted, depending on the deployment environment. VI. Example Techniques Performed in Exemplary Agent Systems A. Disconnected Mode The agents disclosed herein are designed to be semi-autonomous. For example, agent plugins are designed to accept a complete configuration describing what acts the agent is to perform and when. As a result, plugins can continue operating according to the current configuration without communicating with the agent manager180or the agent data consumers190. In other words, an agent and its plugins can continue their normal operations of watching a target machine even while being unable to communicate with the agent bridge160, agent manager180, and/or agent data consumers190. Messages and data generated by the plugin can be spooled by the agent until a connection to the bridge can be re-established and the spooled messages sent. B. Example Agent Configuration and Registration FIG.4is a diagram400that outlines an example of communication between an agent and an agent platform server during agent registration, as can be used in some examples of the disclosed technology. The respective acts and messages performed by an agent platform server and an agent are shown along a timeline. It will be readily understood that this is an example, and that some embodiments of the disclosed technology can add, omit, or rearrange the acts outlined inFIG.4. In some examples, the agent is the agent110and the agent platform server150, as described above, although other agent and agent platform server structures can be employed. At act410, the agent turns Federal Information Processing Standards (FIPS) mode on for use in subsequent communications. At act411, the agent checks to determine whether it has already been registered. 
If the agent has not been registered, then the agent generates keys and a certificate signing request (CSR) at act 412. Further, at act 412 the agent generates a digital signature and client authorization information. At act 415, the agent turns FIPS mode off. In other examples, the method proceeds without turning FIPS mode off. At act 420, the agent sends a client hello message to the agent platform server. In some examples of the disclosed technology, the agent initiates an anonymous Transport Layer Security (TLS) handshake request with the agent platform server that is further detailed below regarding FIG. 5. Responsive to receiving the client hello message, the agent platform server sends a server hello message to the agent at act 421. At act 440, the agent builds an agent registration request, and sends the request to the agent platform server at act 445. The agent registration request includes the keys, CSR, digital signature, and client authorization information that were previously generated. When the agent platform server receives the agent registration request, the server verifies the request. At acts 450 and 451, the agent platform server verifies the registration key and CSR received with the agent registration request. At act 455, the agent platform server signs the CSR, and sends the signed CSR to the agent in an agent registration response message at act 460. At act 470, the agent converts the signed CSR to a Privacy Enhanced Mail (PEM) format certificate and stores the PEM certificate in local computer-readable storage media. At act 475, the agent disconnects from the agent platform server. AgentUUIDChange Message In some examples of the disclosed technology, agents are uniquely identified by an agent UUID, which is 128 bits in length. Some examples of when an agent data consumer 190 can receive an AgentUUIDChange message from an agent include: registration of a new agent, corruption of agent state files, change in one or more MAC addresses on the system, cloning of a virtual machine (VM), and/or replacement of one or more network cards on the agent's host machine. In the cases of a new agent registration or corruption of an agent state file, an AgentUUIDChange message arrives at the compliance server with a new current UUID, and the previous UUID is not set because it did not exist or was unreadable. In the case of MAC changes, VM changes, and NIC changes, the AgentUUIDChange message can arrive with both the new current and the previous UUID set. In this case, the server should be configured to take appropriate action, e.g., merging agent data if the messages are from the same system, or treating the data as separate in the case of a cloned system. The server needs to determine whether the assets associated with the agent are the same node or different nodes and whether to associate the previous data stream with the new data stream. In some examples, the server is configured to make these determinations automatically, while in other examples, the server receives input (e.g., from a user or a configuration file) to make the determination. In the event that a virtual machine with the agent installed is cloned, but not yet started, then the cloned agent will be authenticated (e.g., using password and/or public-key certificates, as described above) and obtain new certificates for connecting to the bridge. The agent will then generate a new UUID as described above. In the event that a virtual machine with the agent installed is cloned after agent startup, then upon startup, the new agent will connect to the bridge using the existing certificates stored on the cloned system. If all MAC addresses on the agent's host system have changed, the agent will generate a new UUID and send an AgentUUIDChange message with the new and previous UUIDs for the agent. The server will create a new asset record corresponding to the new agent's host system. If not all of the MAC addresses on the agent's host system are changed, then the agent will connect to the bridge using the certificate and existing UUID. This allows for the same agent identifier to be used in, for example, maintenance scenarios where a network interface card is replaced in a host having more than one network interface card, without generating a new agent identifier.
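The server-side triage of an AgentUUIDChange message described above might be sketched as follows; the Decision values and the same_system flag (which in practice may come from operator input or a configuration file) are illustrative assumptions.

    #include <iostream>
    #include <optional>
    #include <string>

    // Hypothetical decision outcomes for handling an AgentUUIDChange message.
    enum class Decision { CreateNewAsset, MergeWithPrevious, TreatAsSeparate };

    struct AgentUUIDChange {
        std::string current_uuid;
        std::optional<std::string> previous_uuid;  // unset for new registration / corrupt state
    };

    Decision Triage(const AgentUUIDChange& msg, bool same_system) {
        if (!msg.previous_uuid) {
            // New registration or unreadable state file: no history to reconcile.
            return Decision::CreateNewAsset;
        }
        // MAC/VM/NIC change: both UUIDs present. Merge only if this is the same node;
        // otherwise (e.g., a cloned VM) keep the data streams separate.
        return same_system ? Decision::MergeWithPrevious : Decision::TreatAsSeparate;
    }

    int main() {
        AgentUUIDChange fresh{"uuid-b", std::nullopt};
        AgentUUIDChange nic_swap{"uuid-c", std::string("uuid-b")};
        std::cout << static_cast<int>(Triage(fresh, false)) << " "
                  << static_cast<int>(Triage(nic_swap, true)) << "\n";
    }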
C. Example Agent Certificate Handshake FIG. 5 is a diagram 500 that outlines an example of communication between an agent and an agent platform server during agent registration, as can be used in some examples of the disclosed technology. The respective acts and messages performed by an agent platform server and an agent are shown along a timeline. At act 510, the agent turns FIPS mode on for use in subsequent communication. At act 511, the agent loads a certificate issued by a certificate authority. In some examples, the certificate authority is CAcert, although other certificate authorities can be used. At act 512, the agent loads an agent identification certificate. At act 513, the agent loads an agent privacy key. After loading the certificates and key at acts 511-513, the agent turns FIPS mode off at act 514 and issues a hello message to the agent platform server at act 520. The hello message includes the certificates loaded at acts 511 and 512 and is encrypted using the agent privacy key loaded at act 513. In other examples, the method proceeds without turning FIPS mode off. After receiving the hello message, the agent platform server verifies the agent peer information at act 530, the certificate at act 531, and the agent identification certificate at act 532. Once the information is verified at acts 530-532, the agent platform server responds to the agent with a hello message at act 540. After the agent receives the agent platform server's hello message, it responds by verifying the server peer information at act 550, the server's certificate authority-issued certificate at act 551, and the bridge identity certificate at act 552. Once the information in the server hello message has been successfully verified, the agent responds with an agent heartbeat message 560. As further detailed above, the agent heartbeat message includes information that can be used to process spooled-off messages from plugins and to identify and request plugin services, including time data, spooler marker numbers, and available messages that are processed by the agent's plugins. FIG. 6 is a diagram 600 illustrating generation of a certificate 610 by the agent platform server based on both the agent identification certificate 620 loaded at act 512 and the bridge identification certificate 630 sent with the agent platform server hello message at act 540. The certificate 610 can be used to authenticate communications between agents and agent bridges. D. Agent/Bridge Server Messaging Flow FIG. 7 is a diagram 700 that illustrates an example message flow between various agent/bridge components, as can be performed in certain examples of the disclosed technology.
As shown in FIG. 7, a message from the agent bridge 160 (hosted by an agent platform server 150) is asynchronously read 710 by the bridge connector 260 hosted by an agent (e.g., agent 110). The bridge connector 260 then dispatches 715 the received message to the message dispatcher 230. The message dispatcher 230 determines the appropriate plugin for servicing the message, and issues a write message 720 to the appropriate plugin's plugin connector 245. The plugin connector 245 issues an asynchronous write message 730 to the plugin, and responsive to the message, receives an asynchronous read message 735. The plugin connector 245 then sends a dispatch message 740 to the message dispatcher 230, which in turn sends a handle message 750 to the plugin handler 237. The plugin handler sends a write message 760 to the plugin connector 245 and also sends a write message 770 to the spooler 270. The data from the plugin is stored in a spool at the spooler 270 until the message can be sent to the bridge by sending a write message 780 to the bridge connector 260. Once the bridge connector 260 establishes a connection to the bridge, after the plugin services the message, its data is returned to the plugin connector with an asynchronous read. In some instances, message writes and reads may all be synchronous, asynchronous, or a combination of both. E. First Example Agent Message Sequence: Resend Request ResendRequest Message As data is received from one or more agents, the agent data consumers 190 can monitor messages received from the agents (including messages from agent plugins) and request resends of any messages that are missing. In some examples, the bridge (e.g., agent bridge 160) can perform similar monitoring and request resends. Because the order in which messages are delivered to the agent data consumers (e.g., the compliance server 191) is not necessarily guaranteed, it is typically advantageous for the compliance server 191 to have handling capabilities for messages received out of order. One high-level example of how a compliance server handles missing messages is provided below. Assume that a compliance server receives the following stream of data, and that the WelConfigResponse and WelData messages are for the same configuration serial_number. An example of major and minor numbers used to address plugin messages is shown in Table 2 below.

TABLE 2
Message             Major Number    Minor Number
WelConfigResponse   5               1
WelData             5               1
WelData             5               2
WelData             5               3 . . . 50
WelData             5               55
WelData             5               56 . . . 101
WelData             6               1

As shown in Table 2, a number of messages (5-51 through 5-54) have been dropped. The first message is a WelConfigResponse message, which is received in response to a previously-sent WelConfigRequest message. The compliance server then starts receiving a number of WelData messages, which have major number 5 and minor numbers that are consecutive integers from 1 to 50. The next WelData message received by the compliance server has the same major number (5) but a minor number of 55, which indicates a gap in WelData for minor numbers 51-54. Responsive to detecting the missing data, the server sends a ResendRequest message to the agent. An example format for the message is "ResendRequest previous(5, 50)," which indicates that the next five messages starting from minor message number 50 are being requested. The server then waits for data to be sent (e.g., in a ResendResponse message).
Assuming the messages were actually dropped, the missing 4 data messages should be received, followed by a ResendResponse message:

ResendResponse status_code FOUND (5, 51) (5, 54)

As will be readily understood to one of ordinary skill in the art, it should be noted that the message names and formatting are for exemplary purposes, and that other suitable formats of messages can be used in certain embodiments.

Rolled Major Numbers Messages

Continuing the exemplary datastream illustrated in Table 2, above, when the server receives the WelData message (5,101) followed by WelData (6,1), it will realize that some data from the agent may be missing. Responsive to determining a gap in the received messages, the server determines the new major sequence number in the received data stream (6) and identifies possible missed messages. The server constructs and sends a ResendRequest for the possibly missing data. The message can be formed as:

ResendRequest previous(5, 101) next(6, 1)

Upon receipt of this ResendRequest message, the agent scans its spool and resends any messages that are identified as missing by the ResendRequest, in a similar fashion to the ResendResponse message discussed above. Conversely, if the agent determines that there are no missing messages to send in response to the ResendRequest, a message is sent to indicate this to the server:

ResendResponse request=ResendRequest ID status_code NOT_FOUND_MISSING (5, 102) (6, 0)

(where ResendRequest ID is the identifier for the ResendRequest being responded to, and messages 5-102 through 6-0 are the missing messages not found in the spooler). In some examples, messages that appear to be missing will occur normally, as the major number used by the agent is incremented upon starting or restarting the associated plugin.

Missing or Corrupt Messages

Using the same example of a dropped message discussed above, the agent can also send a number of different status codes, depending on the cause of the missing message(s). Table 3 below lists a number of different status codes that can be used in an exemplary embodiment of the disclosed technology:

TABLE 3
Code                   Description
FOUND                  Message was found and resent
NOT_FOUND_CORRUPT      The internal or data message was corrupt in the spool and messages are/may be missing
NOT_FOUND_MISSING      The requested message is not in the spool (message may have never existed)
NOT_FOUND_TOO_OLD      The message is of a range that is before the oldest message in the spool (message may have never existed)
NOT_FOUND_UNKNOWN      Unknown cause of missing message(s)

The following examples illustrate messages being sent in different scenarios responsive to a ResendRequest. For example, if the next message received at the server is (major, minor): (5, 100), then the server would expect to have previously received messages (5,1) through (5,99); otherwise, the messages are determined to be missing. A ResendRequest for messages between (5,1) and (5,100) is sent. If the agent returns any of the NOT_FOUND_* status codes, then the messages are determined to have been lost. In another variation of this example, if the next message received is (6,1), then a ResendRequest is sent to determine whether any messages between (5,100) and (6,0) were not received. A ResendRequest for messages between (5, 100) and (6, 0) is sent. If the agent returns the code NOT_FOUND_CORRUPT, then the server determines that data has been lost. Alternatively, if the agent returns the code NOT_FOUND_MISSING, then the server determines that all the data between (5,100) and (6,0) has been received from the agent. Alternatively, if the agent returns the code NOT_FOUND_TOO_OLD, then the result is indeterminate: the server determines that data from the agent may or may not have been lost.
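The gap detection and resend handling described above can be summarized with a short sketch. The function names, the representation of sequence numbers as (major, minor) tuples, and the spool contents are assumptions made for illustration; the actual ResendRequest and ResendResponse formats are as described in the text and Table 3.

```python
# Sketch of gap detection (server side) and resend handling (agent side).
# The data structures are illustrative assumptions, not the exact message
# formats used by the disclosed agents.
FOUND, NOT_FOUND_MISSING, NOT_FOUND_TOO_OLD = "FOUND", "NOT_FOUND_MISSING", "NOT_FOUND_TOO_OLD"

def detect_gap(previous, received):
    """Return a (previous, next) pair for a ResendRequest, or None.

    previous and received are (major, minor) sequence numbers; a gap exists
    when the minor number skips ahead or the major number rolls over."""
    prev_major, prev_minor = previous
    major, minor = received
    if major == prev_major and minor == prev_minor + 1:
        return None                      # consecutive, nothing missing
    return (previous, received)          # e.g. ((5, 50), (5, 55)) or ((5, 101), (6, 1))

def answer_resend(spool, oldest_kept, request):
    """Agent-side handling of a ResendRequest over a spool of kept messages."""
    prev, nxt = request
    results = [(seq, FOUND) for seq in sorted(spool) if prev < seq < nxt]
    if not results:
        # Nothing in the requested range: either spooled off or never produced.
        code = NOT_FOUND_TOO_OLD if nxt <= oldest_kept else NOT_FOUND_MISSING
        return [(prev, nxt, code)]
    return results

# Server sees (5, 50) then (5, 55): minor numbers 51-54 may be missing.
request = detect_gap((5, 50), (5, 55))
spool = {(5, 51), (5, 52), (5, 53), (5, 54)}
print(answer_resend(spool, oldest_kept=(5, 1), request=request))
```

In this sketch the four requested messages are still in the spool, so they are reported as FOUND and would be resent; if the range had already been spooled off, the single NOT_FOUND_TOO_OLD entry would be returned instead.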
F. Second Example Agent Message Sequence: Resend Request

This section discusses another example of ResendRequest handling, as can be performed in certain embodiments of the disclosed technology. Assume that a system comprising an agent, bridge, and compliance server is in the state depicted in the diagram800ofFIG.8. As shown inFIG.8, a number of messages are, or have been, stored in a spool memory810(indicated by the files within the dashed lines), which includes a number of individual spool files (e.g., two spooled-off files820, four completed files821, etc.). Each rectangle depicted within the spool memory810represents a different spool file. The spool memory810can be implemented within an agent110, as described above regardingFIGS.1-3, although in other examples, different suitable agent architectures can be employed.

Two spooled-off files820have already been removed from the spool and are no longer accessible. The first spooled-off file started at message number (1-10) and ended at (1-25), while the second spooled-off file started at (1-26) and ended at (1-51). Four completed spool files821have been sent to the agent bridge, but have not yet been removed from the spool. The message data in the completed spool files821is available to be re-sent to the agent bridge if requested. As shown inFIG.8, the first completed file spans two major numbers, from (1-52 to 1-61 and 2-1 to 2-10). The major number was advanced from 1 to 2 due to, for example, restarting of the agent or restarting of a plugin associated with the spool. The current read spool file822is shown, which includes messages from 3-21 to 3-60. A current read pointer830indicates the current read position for spool messages being sent from the agent to the agent bridge. The current read pointer830is advanced as additional messages from the spool are sent to the agent bridge. Two pending files823are shown, which are queued to be read after the current read file822has been completely read and sent to the agent bridge. The current write file824is shown, which starts at position3-105and currently ends at3-127. The current write pointer835stores the current position for writing within the current write file824.

As shown inFIG.8, each of the spool files820-824includes one of the time stamps840shown. The time stamps can be used to determine the current lag time between messages. Lag time information can be used to, for example, adjust the rate at which messages are sent to the agent bridge or the rate at which plugins produce data. In some examples of the disclosed technology, each of the spool files820-824includes a header indicating the size of the spool file, an envelope, and message data. Messages used to transmit data from the agent to its agent bridge can include an envelope with data that can be used to identify the agent and/or the plugin producing the message data; for example, the message type, agent UUID, destination UUID, major/minor number, and timestamp can be included in the message envelope.
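A minimal sketch of this spool organization follows, assuming simple Python dataclasses for the message envelope and the spool files. The field and class names are illustrative assumptions; they are not the on-disk format used by the disclosed spooler.

```python
# Minimal sketch of the spool organization described for FIG. 8. Field and
# class names are assumptions for illustration only.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Envelope:
    message_type: str
    agent_uuid: str
    destination_uuid: str
    major: int
    minor: int
    timestamp: str          # e.g. "20140202081200"

@dataclass
class SpoolFile:
    first: Tuple[int, int]  # first (major, minor) message in the file
    last: Tuple[int, int]   # last (major, minor) message in the file
    state: str              # "spooled-off", "completed", "read", "pending", "write"
    messages: List[Envelope] = field(default_factory=list)

    def covers(self, seq: Tuple[int, int]) -> bool:
        return self.first <= seq <= self.last

# A few of the spool files of FIG. 8; only files that have not been spooled
# off remain available for resends.
spool = [
    SpoolFile((1, 52), (2, 10), "completed"),
    SpoolFile((3, 21), (3, 60), "read"),
    SpoolFile((3, 105), (3, 127), "write"),
]
print([f.state for f in spool if f.covers((3, 30))])   # ['read']
```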
As an example, assume that the system ofFIG.1is being used to spool plugin data to the agent data consumers190. One of the agent data consumers190determines that some agent data may be missing, and the agent data consumer sends the agent110a ResendRequest message850. The ResendRequest message850includes an indication of the previous message (1-41) and next message (3-32) before and after the messages that may be missing, thereby indicating a range of messages to be searched. The ResendRequest message850also includes the respective time stamps (20140202081200 and 2014020607007) for the possible missing messages. The agent receives the ResendRequest message850and searches the spool memory810(e.g., using a message searcher372). The agent message search determines that: messages 1-42 through 1-51 are too old and have been deleted from the spool ("TOO_OLD"); messages 1-52 through 1-61, 2-01 through 2-54, and 3-01 through 3-31 were found and are still stored in the spool ("FOUND"); and messages 1-62 through 2-00 and 2-55 through 3-00 are missing ("MISSING") (because the messages were never generated). The agent sends a ResendResponse860indicating the results of the agent message search to one or more of the agent data consumers190. In some examples, the agent also automatically sends the FOUND messages to the agent data consumer that sent the ResendRequest, while in other examples, the agent waits for an additional request to send particular messages from the spool. The agent data consumers190that receive the ResendResponse860can react accordingly. For some applications, lost or missing data may merely be noted, while in other applications, the data can be used to, for example, initiate additional vulnerability scans to replace the missing data.

G. Server/Agent/Plugin Contract

In some examples of the disclosed technology, there is a shared contract for server components, agents, and their plugins. The contract can be expressed in an Interface Definition Language (IDL) and implemented using an Application Programming Interface (API) to coordinate between server components, agents, and their plugins. The contract establishes a syntax for creating properly formed messages, including defining required and optional fields in the messaging protocol. Under the example contract, the agent is responsible for full verification of a plugin package before the plugin is considered and made available to be launched. The plugin package includes: executable files, configuration information, and/or command line arguments including a manifest. The plugin package can be digitally signed using suitable cryptographic techniques. Agents (e.g., agents110-112) can verify the plugin's digital signature before the plugin is considered for operation. In some examples, the plugin package is re-verified every time the plugin is started, while in other examples, the plugin package is re-verified periodically, at system startup, or at other suitable intervals. Plugins can connect to the agent using only the pipes given to them from the command line arguments. Plugins use the configuration and log directories passed to them from the command line arguments. The first message an agent sends to a plugin after connect is a HandshakeRequest message. The agent includes full path information for the plugin's directory and the plugin's executable directory (plugin package directory). The executable directory is used for locating shared libraries the plugin may use or delegate to another application, for example, a Real-Time Manager application (RTMGR). A RTMGR application can communicate with operating system kernel modules to obtain user-specific security information that is associated with a particular event, change, or operation that a plugin is monitoring.
The first message from a plugin sent to the agent is a HandshakeResponse message, sent after the plugin has received a HandshakeRequest. The plugin uses the data directory given to it in the HandshakeRequest it receives from the agent. The agent periodically sends StatusRequest messages to all plugins. Plugins respond to StatusRequest messages from the agent with StatusResponse messages. Plugins are expected to respond immediately to a Shutdown request message. The plugin periodically persists its last working position if the plugin cannot immediately persist when it receives a Shutdown message from the agent. Upon restart, plugins resume work from that state using their last ConfigRequest. The plugin manages any incomplete units of work. If a plugin has no work to do and it is designed for on-demand loading, it informs the agent by sending a PluginExiting message. The agent subsequently moves the plugin's capabilities to the on-demand maps, sends a Shutdown message to the plugin, and removes the plugin from the plugin list. The agent is responsible for re-launching the plugin if a message is received that is contained in the message catalog. Servers, agents, and plugins operate to keep the amount of message data in the spooler at or below a selected limit (e.g., to avoid using more than 80% of the spooler capacity). In some examples, the selected limit for the current capacity can be adjusted. Agent and plugin messages can be formed and enveloped as Protobuf messages. Thus, the agent does not need to have detailed information about the plugin's structure or operation.

H. Agent Authentication

In some examples of the disclosed technology, the bridge is configured to use a registration key for authentication. When in this registration mode, the agent must supply the correct registration key to the bridge upon first connecting to the bridge in order to authenticate to the bridge. The initial key can be sent using anonymous SSL. After successfully authenticating the key, the agent subsequently receives an encryption certificate (e.g., a public-key certificate such as an X.509 certificate) that can be used for encrypting messages sent using subsequent connections to the bridge. If the registration key is changed (e.g., by a system administrator re-configuring the bridge), the agent can continue to use the certificate that was received earlier using the older key. However, any new agent instances will need to use the new registration key to acquire their corresponding certificates.
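The registration-key bootstrap described above can be sketched as follows. The Bridge class, its certificate issuance, and the key-rotation behavior are simplified assumptions for illustration; they do not represent a real PKI or the exact bridge implementation.

```python
# Sketch of the registration-key bootstrap described above. The bridge state
# and certificate issuance are simplified stand-ins, not a real PKI.
import secrets

class Bridge:
    def __init__(self, registration_key: str):
        self.registration_key = registration_key
        self.issued = {}                      # agent UUID -> certificate

    def register(self, agent_uuid: str, key: str) -> str:
        # First connection: the key is checked before a certificate is issued.
        if key != self.registration_key:
            raise PermissionError("bad registration key")
        cert = "cert-" + secrets.token_hex(4)
        self.issued[agent_uuid] = cert
        return cert

    def connect(self, agent_uuid: str, cert: str) -> bool:
        # Later connections authenticate with the certificate, so a changed
        # registration key does not affect already-registered agents.
        return self.issued.get(agent_uuid) == cert

bridge = Bridge(registration_key="initial-key")
cert = bridge.register("agent-42", "initial-key")
bridge.registration_key = "rotated-key"       # administrator changes the key
print(bridge.connect("agent-42", cert))       # True: existing cert still valid
```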
VII. Exemplary Method of Spooling of Host Data

FIG.9is a flow chart900illustrating an example method of spooling host data, as can be performed in some examples of the disclosed technology. For example, an agent hosting an agent spooler270in the computing environment100discussed above can be employed to perform the illustrated method.

At process block910, an agent operating on a computing host having one or more network connections collects host data from, e.g., one or more plugins on the computing host. The agent is configured to collect the host data whether or not the agent can currently send data via any of the network connections. The types of data collected using the plugins can include FIM data, COCR data, WEL data, Windows Registry data, or other suitable data. After collecting the data, the method proceeds to process block920.

At process block920, the agent receives a message from an agent bridge indicating a message type to send to at least one of the agent data consumers. Based on receiving the message, the agent initiates sending of data for the indicated message type. The agent determines whether it can send data via the host computer's network connection(s), and if so, the method proceeds to process block930. If not, the method proceeds to process block940.

At process block930, the agent has determined that it can send data via the network, and the agent proceeds to send at least a portion of the spooled host data to at least one of the agent data consumers. For example, data can be sent to any of the agent data consumers190illustrated inFIG.1via an agent bridge160. In some examples, the collected data is temporarily stored in the spooler before being sent.

At process block940, the agent stores at least a portion of the collected host data in a spooler for later transmission. In some examples, the spooler270discussed above can be employed. In some examples, some of the spooled data can be overwritten or removed based on the priority of the data, or according to other suitable criteria. In some examples, messages of a first message type are stored in a first spool of the spooler and messages of a second message type are stored in a second spool of the spooler. In some examples, the rate at which spooled data is sent to the agent data consumers is increased or decreased based at least in part on lag of the spooler. In some examples, the rate at which data is spooled and/or the rate at which the spooled data is sent to the one or more agent data consumers is based at least in part on currently-available resources of the host computer. In some examples, one or more plugins executable on the host computer collect the host data, the agent sends data to the plugins indicating lag of the spooler, and the plugins adjust the rate of collecting host data based at least in part on the indicated spooler lag. In some examples, the host data is collected by one or more plugins executable on the host computer when the agent cannot send data via the network connection, and the spooled host data is sent to the at least one of the agent data consumers via a bridge executing on an agent platform server.

After the collected host data is sent (according to process block930) or stored (according to process block940), the method can proceed to collect more host data. As the computer network connection becomes available or unavailable, the agent can elect to store data in the spooler or send data accordingly. The rate at which the spooled data is sent can be increased or decreased based on, for example, the lag of the spooler or host computer resources. In some examples, the data that is collected is based on a request for a type of data. The agent searches for plugins that can provide the requested type of data, and invokes a corresponding plugin if found. In some examples, the agent can send a description of the types of data and/or messages that can be produced by the agent. In some examples of the method, a second message is received from an agent bridge indicating a second message type to not send to at least one of the agent data consumers. Based on receiving the second message, the agent stores messages of the second message type in a spooler until receiving a third message indicating that the spooled messages of the second type are to be sent.
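A minimal sketch of the spooling decision of FIG.9 is shown below, assuming a simple in-memory queue in place of the spooler270and a placeholder network check. It illustrates process blocks930and940: send when the network is available, otherwise spool for later transmission.

```python
# Sketch of the spooling decision of FIG. 9. The network check and consumer
# interface are placeholders; real agents would use the bridge connection.
from collections import deque

class SpoolingAgent:
    def __init__(self, network_up=lambda: False):
        self.network_up = network_up
        self.spool = deque()

    def handle(self, message, send):
        # Process blocks 910/920: data has been collected for a requested type.
        if self.network_up():
            send(message)                 # process block 930: send now
        else:
            self.spool.append(message)    # process block 940: store for later

    def drain(self, send, batch=10):
        # When the connection returns, forward spooled data at a bounded rate.
        for _ in range(min(batch, len(self.spool))):
            send(self.spool.popleft())

agent = SpoolingAgent(network_up=lambda: False)
agent.handle("WelData 5-1", send=print)       # spooled, nothing printed
agent.network_up = lambda: True
agent.drain(send=print)                       # prints "WelData 5-1"
```

The bounded batch size in drain() stands in for the rate adjustments based on spooler lag and host resources described above.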
VIII. Exemplary Method of Sending Messages with Agent-Generated Sequence Numbers

FIG.10is a flow chart1000illustrating an example method of resending messages based on agent-generated sequence numbers, as can be performed in some examples of the disclosed technology. For example, a computer hosting an agent110operating in the computing environment100discussed above can be employed to perform the illustrated method.

At process block1010, an agent executing on a host computer sends one or more data messages to a server via the computer network. The data messages include sequence numbers generated by the agent. For example, each of the sequence numbers can include a major number, which is incremented upon starting or restarting the agent, and a minor number, which is incremented with each message sent. Further examples of suitable sequence numbers as can be used in some examples of the disclosed technology are discussed above and illustrated inFIG.8.

At process block1020, the agent receives a resend message from the server indicating that one or more of the data messages are to be resent. The messages to be resent are indicated using at least some of the sequence numbers; for example, a range of sequence numbers can be used. The ResendRequest message850illustrated inFIG.8is an example of a suitable resend message. After receiving the resend message, the method proceeds to process block1030.

At process block1030, the agent searches for the indicated messages based on the one or more of the generated sequence numbers. In some examples, the messages may have already been removed from the agent's host computer. After searching for the messages, the method proceeds to process block1040.

At process block1040, the agent can resend one or more of the requested messages, resending any of the requested messages still stored on the agent's host computer. In some examples, an additional reply message, such as the ResendResponse message860illustrated inFIG.8, can be sent with the resent data messages to describe which of the requested messages have been found, are too old, or are missing.

IX. Exemplary Method of Identifying Agent Messages

FIG.11is a flow chart1100illustrating an example method of identifying agent messages sent on a computer network, as can be performed in some examples of the disclosed technology. For example, a computer hosting an agent110operating in the computing environment100discussed above can be employed to perform the illustrated method.

At process block1110, the agent110self-generates a unique agent identifier for itself. The agent identifier is independent of any network addresses associated with the agent's host computer. The agent identifier can be, for example, a UUID. After generating the agent identifier, the method proceeds to process block1120.

At process block1120, the agent sends a first message, including the agent identifier, to at least one agent data consumer (e.g., one or more of the agent data consumers190). The agent data consumer can store the agent identifier for use in determining the origin of subsequent messages from the agent. After sending the first message, the method proceeds to process block1130.

At process block1130, the agent is moved to a different physical and/or virtual host computer. The agent then sends a second message to the agent data consumer including the same agent identifier. By using the same identifier, the agent data consumer can track messages from the same agent even though the agent is operating from a different network address.
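One way the self-generated, address-independent identifier could be handled is sketched below. The persistence file, the MAC-address comparison, and the regeneration rule (only when no previously recorded MAC address remains, per the clone scenario discussed earlier) are assumptions made for illustration, not the agent's actual implementation.

```python
# Sketch of agent identifier handling: the UUID is self-generated, persisted,
# and kept across host or address changes; it is regenerated only when all
# MAC addresses differ from those recorded earlier (clone case). The file
# name and MAC discovery are illustrative assumptions.
import json, uuid, pathlib

STATE = pathlib.Path("agent_identity.json")

def agent_identity(current_macs: set) -> str:
    if STATE.exists():
        saved = json.loads(STATE.read_text())
        if set(saved["macs"]) & current_macs:          # at least one MAC unchanged
            return saved["uuid"]                       # keep the existing identifier
    new_uuid = str(uuid.uuid4())                       # independent of any address
    STATE.write_text(json.dumps({"uuid": new_uuid, "macs": sorted(current_macs)}))
    return new_uuid

print(agent_identity({"00:11:22:33:44:55"}))           # first call: new UUID
print(agent_identity({"66:77:88:99:aa:bb",             # NIC replaced, one MAC kept:
                      "00:11:22:33:44:55"}))           # same UUID as before
```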
In some examples of the exemplary method, the agent can be replicated (e.g., on one or more additional physical and/or virtual hosts), and each of the replicated agents in turn self-generates a unique agent identifier for itself, thereby allowing agent data consumers to distinguish amongst the replicated agents and the original agent. Further, the same agent identifier can be used in scenarios where, e.g., the agent's network address changes when a network interface card is replaced, without changing the agent identifier.

X. Exemplary Message Delivery Fairness Methods and Apparatus

FIGS.12-16Billustrate methods and apparatus for improved message delivery fairness between agents and servers using a computer network, as can be practiced in some examples of the disclosed technology. Disclosed message delivery fairness methods enable continued delivery of messages from agents, even though one or more consuming servers are offline for maintenance or are undergoing fault recovery operations. For example, the exemplary computing environment100described in further detail above can be used to perform the exemplary methods outlined in each ofFIGS.12-16using the described agents (e.g., agents110-112) and their associated plugins130-135, the agent platform server150and associated agent bridge160, message queues and/or topics, an agent manager, and agent data consumers190(e.g., agent data consumers191-198). As will be readily understood by one of ordinary skill in the art, the disclosed methods can also be practiced in other suitable computing environments that have been adapted for use with one or more of the disclosed methods.

FIG.12is a diagram1200that illustrates an exemplary set of message flows between agents (e.g., agent1210) and agent data consumers1220using a computer network. As shown, the message flow is parallel between a number of agent plugins1230and their associated agents; serial through the agent and spool subsystem to an agent bridge1240; and parallel again once the messages are placed in the message broker server topics. In scenarios where one of the consuming servers is offline for maintenance or is undergoing a fault recovery operation, unconsumed messages will queue in the inbound topic and will eventually fill the message broker's inflight message holding capacity. When a particular broker is full, some or all of the message flow through the broker will stop. In turn, all operational message consumers become starved, as the bridge is not able to place any new messages on the broker's message topics. Subsequently, this stops the message flow from all connected agents. Since the message flow through the connected agents is serial, the spools will eventually fill, causing the agents to shut down their plugins, and hence message collection and flow to the servers will be stopped.

Two factors that can affect message flow between agents and agent data consumers include (1) the total inflight message capacity of the broker (once the message capacity is reached, message flow through the broker stops); and (2) the serial message flow region of agent input to the spooler through the bridge connection(s) (blockage of any message can cause a blockage of all messages behind it, and can eventually force the agent to shut down plugins once the spool is full). One way to compensate for the finite message capacity of a broker is to allow the unconsumed messages of a topic to be deleted if there is no active consumer.
This can be undesirable in some applications, because it can cause thrashing throughout the system as a flood of resend requests and response operations are created, and this can lead to unrecoverable loss of data. In some examples of the disclosed technology, the serial message flow between agents and brokers can be modified to have parallel channels of multiple spoolers (one per message family or per plugin), and additional, parallel transport layer security (TLS) connections can be added between an agent and its bridge. However, adding additional connections to the bridge may be undesirable because of the substantial increase in committed server resources for the bridge, which reduces its connected agent capacity. Using dedicated spoolers per plugin can shorten the serial path between plugins and bridges. In some examples of the disclosed technology, a mechanism is included for the agent bridge1240to communicate a message instructing agents to stop sending message types that are blocked in the message broker, and to send only unblocked messages to the bridge. When the blocked message types are flowing again, the bridge sends a message instructing the agents to start sending messages again. Having separate spools for plugins can thereby simplify implementation of this mechanism in the agent.

FIG.13is a diagram1300illustrating an exemplary message flow, including unidirectional merging, as can be used in some examples of the disclosed technology. As shown inFIG.13, an agent bridge1310monitors the message topics1320to the consuming data servers1330. The agent bridge1310sends advisory messages to some or all of its connected agents if the number of messages in a message topic reaches a configurable limit, dubbed a "Stop Sending Level," which indicates that the consuming server has reached its limit (e.g., because the consuming server is offline, having problems consuming messages at the inbound message rate, or has another issue that causes the limit to be reached). A message topic is a form of a queue, where messages can be sent to more than one consumer (e.g., messages from agents can be queued and then sent to each of a plurality of agent data consumers). If the limit is met or exceeded, an advisory message is sent instructing agents to stop sending messages of the associated type(s). Once the issue with the consuming server is resolved (e.g., the server is back online) and the topic spool is reduced down to a lower configurable limit (dubbed the "Restart Sending Level"), an advisory message is sent to the connected agents instructing them to start sending the appropriate message types. Also shown is an auxiliary bridge1315that can be used to send similar messages as the agent bridge1310when, for example, the agent bridge1310is disabled or unreachable, to offload message load from the agent bridge1310, or to send messages to particular topics and/or data consumers based on an associated message type (e.g., as allocated by an affinity service).

Scenarios in which the agent bridge can be configured to send advisory messages include the following. For example, after an agent has connected to the agent bridge and validated its credentials, the agent bridge sends the agent the most recent valid advisory message. In some examples, the advisory message is only sent to newly-connected agents. Advisory messages can be sent under varying system conditions. For example, the bridge can send advisory messages immediately after an agent has connected and credentials have been validated.
In response, the bridge sends to the newly-connected agent the most recent valid advisory message. In this case, the message is only sent to the newly connected agent. The bridge can also send advisory messages to multiple, or all, connected agents when one or more topics' message counts exceed the stop sending level or drop below the restart sending level. The agents1340depicted inFIG.13have multiple spoolers1350(e.g., one spooler per plugin1360,1361). The agents1340listen for advisory messages from the bridge1310. The agents can send or skip over message types based on the advisory messages.

FIGS.14,15A and15B, and16A and16Bare sequence diagrams that illustrate message flows under normal operation, during the sending of a stop sending advisory message, and during the sending of a restart sending advisory message, respectively.

FIG.14is a sequence diagram1400that illustrates an example message flow during normal operation of an agent with two active plugins, dubbed WEL (a Windows event log plugin) and FIM (a file integrity monitoring plugin), as can be practiced in some examples of the disclosed technology. The agents are connected to the bridge and successfully sending messages that are consumed by agent data consumers. As will be readily understood by one of ordinary skill in the art, the example message flows are not limited to WEL and FIM plugins, but can be readily adapted for use with other suitable agent plugins.

FIGS.15A and15Bdepict a sequence diagram1500that illustrates an example message flow during a stop sending operation, as can be practiced in some examples of the disclosed technology. An agent with two active plugins, WEL and FIM, is connected to the bridge. In this example, the FIM server (an agent data consumer) is taken down for maintenance. The bridge determines that the FIM message topic has reached the Stop Sending Level, and in response sends advisory messages instructing the connected agents to stop sending FIM messages.

FIGS.16A and16Bdepict a sequence diagram1600that illustrates an example message flow during a restart sending operation, as can be practiced in some examples of the disclosed technology. An agent with two active plugins, WEL and FIM, is connected to the bridge. In this example, the FIM server has been restarted, and starts to receive messages from the FIM message topic. As a result, the bridge determines that the count of messages in the FIM topic has dropped below the restart sending level, and in response sends advisory messages to agents instructing them to restart sending FIM messages.

In some examples of the disclosed technology, the bridge is configured so that it will not stop completing agent connection requests in the event message topics in the message broker are blocked. In some examples of the disclosed technology, it may be undesirable for the bridge message queue level to oscillate between the stop sending level and the restart sending level. In some examples, a closed-loop lag controller is therefore employed, using a recent sample history and a computed threshold that must be exceeded before the advisory messages are sent. A sketch of this stop/restart decision appears after this discussion.

In examples including multiple spools, a number of tools can be used to manage the multiple spools. For example, specific plugin spools can be allocated a predefined percentage of the overall spool size. In some examples, learning techniques are used to observe the proportions in which message flows are occurring, and to adjust spool allocation based on actual proportions. In some examples, a combination of predefined percentages and learning techniques can be applied to manage multiple spool allocation.
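The sketch below illustrates the stop/restart advisory decision. The specific thresholds, the averaging over a small window of recent samples, and the class name TopicMonitor are assumptions made for illustration; they stand in for whatever controller (e.g., the closed-loop lag controller mentioned above) a particular implementation uses.

```python
# Sketch of the stop/restart advisory decision. Thresholds and the smoothing
# window are illustrative assumptions, not values from the disclosure.
from collections import deque

class TopicMonitor:
    def __init__(self, stop_level=1000, restart_level=200, window=5):
        self.stop_level, self.restart_level = stop_level, restart_level
        self.samples = deque(maxlen=window)   # recent queue-depth samples
        self.blocked = False

    def observe(self, queue_depth: int):
        """Return 'stop', 'restart', or None for the latest topic sample."""
        self.samples.append(queue_depth)
        avg = sum(self.samples) / len(self.samples)
        if not self.blocked and avg >= self.stop_level:
            self.blocked = True
            return "stop"       # advise agents to stop sending this type
        if self.blocked and avg <= self.restart_level:
            self.blocked = False
            return "restart"    # advise agents to resume sending this type
        return None

fim_topic = TopicMonitor()
for depth in (400, 900, 1600, 1800, 300, 150, 100, 90, 80):
    advisory = fim_topic.observe(depth)
    if advisory:
        print(advisory, "FIM")    # prints "stop FIM", then later "restart FIM"
```

Averaging over the recent samples provides the hysteresis that keeps the queue level from toggling advisories on every brief fluctuation between the two levels.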
In some examples, the bridge has additional support for a mix of agents that do or do not support message delivery fairness techniques. In some examples, agents wait for a slight delay after completing a connection to the bridge, and/or wait for an initial connect advisory message, before the agent begins reading data from its associated spools.

In some examples of the disclosed technology, a single common spool is used for messages from all plugins of an agent. If advisory messages are received from the bridge, the agent can write messages that the bridge has advised against sending to temporary spool files. These temporary spool files contain messages of a single type, in the order in which they were read from the main spool. When the bridge sends advisory messages to start sending, the agent temporarily stops reading data from the main spool and reads the messages in the temporary spool files, sending the temporary spool file messages to the bridge. Once all the temporary spool files are read, the temporary spool files are erased and the agent resumes reading from the main spool. By prioritizing the reading of the temporary spool files, the agent keeps the message sequencing temporally correct for each message type.

XI. Example Affinity Service

FIG.17is a block diagram illustrating an example computing environment1700in which an affinity service can be deployed according to the disclosed technology. For example, the disclosed affinity services can be used with the agents110-112, agent platform server150, and agent data consumers190discussed above with respect toFIG.1. A number of agents1710,1711, and1712are illustrated inFIG.17. Similar to the agents110-112described regardingFIG.1, each of the agents1710-1712can include a local agent process that can manage and communicate with a number of plugins, including a file integrity monitoring (FIM) plugin and a log plugin (e.g., a Windows event log (WEL) plugin) that are configured to extend the functionality of the respective agent. Each of the agents1710-1712communicates with other components in the system depicted in the computing environment1700via an agent platform server1750.

As shown, the agent platform server1750includes an agent bridge1760for sending messages to and from agents (e.g., agents1710-1712). The agent bridge1760can send messages over a computer network to agents executing on other computers, use inter-process and/or inter-thread communication for agents executing on the same computer as the communication bridge, or use other suitable communication means. The illustrated agent platform server1750also includes a message broker1770with one or more message queues for temporarily storing messages received from and sent to, for example, the agent bridge1760, an agent manager1780, an affinity service1785, and agent data consumers1790. The agent platform server1750coordinates operation of the agents by sending and receiving messages using the message broker1770. In some examples, the agent platform server1750is configured to send an advisory message to adjust the rate at which messages are sent by one or more agents.
As shown inFIG.17, the affinity service1785resides as a component of the agent platform server1750(e.g., as a standalone process executing on the agent platform server1750), while in other examples, the affinity service is hosted in an alternate location (e.g., as a thread or other component of the agent manager1780). In some examples of the disclosed technology, for example, in large networks with multiple agent platform servers1750and multiple agent data consumers1790, the affinity service1785can be external to the agent platform server and centralized to improve communications with all instances of the agent platform server and agent data consumers.

As shown inFIG.17, the agent data consumers1790include multiple log servers (1790-1and1790-2) and multiple FIM servers (1795-1,1795-2, and1795-3). In some examples, the multiple log servers and/or the multiple FIM servers are hosted on separate virtual machines on the same physical hardware (e.g., a computing server). In some examples, the multiple log servers and/or the multiple FIM servers are hosted on separate physical machines in the same computer network environment. In some examples, the multiple log servers and/or the multiple FIM servers are hosted on separate physical machines in different computing environments.

The affinity service1785provides mappings to the message broker1770and/or agent bridge1760in order to direct message flow from the agents (e.g., agents1710-1712) to one of the multiple log servers and/or multiple FIM servers. The affinity service1785can utilize UUIDs in order to identify the agents1710-1712and agent data consumers1790. In some examples, the affinity service1785maintains a table representing the associations between agents (e.g., agents1710-1712) and one or more of the agent data consumers1790. The agents can be assigned using a number of methodologies, including but not limited to assignments based on: round robin; load and/or capacity of one or more of the agent data consumers1790; geographic location of the agents and/or the agent data consumers; network topology (e.g., by physical subnets or virtual local area networks (VLANs)); functional roles (e.g., whether a respective consumer and/or agent is deployed for product development, testing, staging, or production); version of an agent; and/or version of an agent data consumer.

In some examples, the affinity service1785directs routing of messages from agents by intercepting an agent online message emitted by the agent manager1780. The agent online message is enhanced by providing the product server UUID assigned to the agent by the affinity service1785. In some examples, the affinity service1785maintains an affinity map that defines relationships between agents and agent data consumers. In some examples, the affinity service is configured to map each of the agents to a respective one of the data consumers. In some examples, the affinity service mapping is based at least in part on one or more of the following: a geographic location of one or more of the agents and/or the agent data consumers; topology of a network carrying communication between the agent data consumers, agent platform servers, and/or agent computing hosts; a functional role of one of the agents and/or one of the agent data consumers; a version of an agent; and/or a version of an agent data consumer.
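A round-robin affinity assignment keyed by UUIDs, as described above, can be sketched as follows. The AffinityService class and its simple dictionary map are illustrative assumptions and are far simpler than the affinity maps of FIGS.19-20B.

```python
# Sketch of a round-robin affinity assignment between agents and agent data
# consumers, keyed by UUID. The map layout is illustrative only.
import itertools

class AffinityService:
    def __init__(self, consumer_uuids):
        self._cycle = itertools.cycle(consumer_uuids)   # round-robin allocator
        self.affinity_map = {}                          # agent UUID -> consumer UUID

    def assign(self, agent_uuid: str) -> str:
        if agent_uuid not in self.affinity_map:         # keep existing assignments
            self.affinity_map[agent_uuid] = next(self._cycle)
        return self.affinity_map[agent_uuid]

service = AffinityService(["fim-server-1", "fim-server-2", "fim-server-3"])
for agent in ("agent-a", "agent-b", "agent-c", "agent-a"):
    print(agent, "->", service.assign(agent))
# agent-a keeps its consumer on the repeated lookup, while agent-b and
# agent-c are spread across the remaining servers.
```

Other assignment methodologies from the list above (load, geography, topology, role, or version) could replace the round-robin allocator without changing the mapping interface.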
FIG.18is a sequence diagram1800illustrating an example of messages sent between an agent, bridge, manager, affinity service component, and data consumer, in an exemplary method of assigning an agent to a data consumer using the affinity service component, as can be performed in some examples of the disclosed technology. For example, hardware in the example computing environment discussed above regardingFIG.17can be used to implement the illustrated components. In the illustrated example, affinity assignment based on a round robin allocation is used, although other allocation schemes may be used in other examples. Upon initial agent discovery by an agent manager (in this example, an agent manager executing a thread for the affinity service), the affinity service hosted by the agent manager will determine which of the data consumers is to be assigned to an agent. The agent manager issues an AgentAffinity message, which includes information about the assignment, including a UUID and an AgentOnline message. In some examples, the affinity service sends the AgentAffinity message to a specific data consumer. In other examples, the AgentAffinity message is broadcast to all or a portion of the data consumers, and the non-addressed consumers drop or ignore the message (e.g., if the UUID does not match the data consumer's assigned UUID). In examples where the AgentAffinity message is for a specific instance of an application (e.g., a CCC tool), the CCC tool can create a new node entry for the newly-associated agent. Thus, consumers can ignore subsequent AgentOnline messages that are not for a previously-created node, and update the node if the AgentOnline message is for an agent associated with that particular instance of a consumer.

An example of an affinity map, before any agents have been assigned by an instance of the affinity service1785, is illustrated as a text listing1900inFIG.19. An example of an affinity map, after a number of agents have been assigned to agent data consumers using the affinity service1785, is illustrated as a text listing2000inFIGS.20A-20B. As illustrated inFIGS.19-20B, affinity maps define relationships used to associate agents and agent data consumers and can be used, for example, to route messages from the agents to the agent data consumers. Examples of hierarchical information that can appear in an affinity map include a map definition, including a unique name for the affinity map, a map type (e.g., a map type could be for one type of agent, but other agents and/or components can have different assigned types), a region (e.g., a geographical region or role), a version identifier, and a valid time (e.g., a field describing the valid time period for the affinity map). Additional information can include server definitions, including a server type, an identification of the method used to assign affinity, a capabilities list (e.g., describing capabilities specified for an agent to be associated, and/or a description of capabilities for the associated agent data consumer), a unique name, a list of agents, and/or a list of associated agent UUIDs.

XII. Example Method of Controlling Message Flow in a Computer Network

FIG.21illustrates an example method of controlling message flow in a computer network that couples a plurality of agents, a plurality of agent data consumers, and an agent message bridge configured to send messages between the agents and the agent data consumers, as can be performed in certain examples of the disclosed technology.
For example, the computing environment1700discussed above regardingFIG.17can be used to implement the illustrated method. At process block2100, an agent bridge receives a set of messages from one or more agents, at least some of the messages including a message type for the type of data carried by the message. After the agent bridge receives the messages, the method proceeds to process block2110. At process block2110, a set of messages are queued in a spooler. For example, messages containing agent data to be sent to the agent bridge can be spooled including an indication of a respective message type for each of the messages. In some examples, the spooler includes distinct spools for queuing the set of messages, and each message of the set of messages is placed in a selected one of the distinct spools based on the type of the message. At process block2120, one or more of the agents receives an indication that sending of one or more, but not all of the messages in the spooler should be delayed for at least one indicated message type. For example, messages of a type to be sent to an agent data consumer that is experiencing network difficulties, high loading, reconfiguration, or other such conditions can be indicated in order to delay sending of agent messages of the indicated type to the agent data consumer. In some examples, the indication is received based on a queue depth reported by a message broker. In some examples, the indication is received from the agent bridge responsive to determining that one or more of the agent message consumers are not receiving messages. At process block2130, at least one message of another type than the type indicated at process block2120is sent to an agent data consumer via the message bridge. Thus, messages that are not indicated to be delayed using the spooler can be sent, for example, to a different agent data consumer than the consumer associated with the indicated message type. In some examples, the selected agent data consumers to which to send messages are determined using an affinity service. In some examples of the illustrated method, the agent bridge later sends an indication that the sending of the delayed messages queued in the spooler can be resumed, and the respective agents proceed to send messages based on spooler data to one or more agent data consumers for the indicated message type. In some examples, the illustrated method further includes mapping each of a plurality of the agents to a respective one of the agent data consumers, the mapping being based on at least one or more of the following: a geographic location of one or more of the agents and/or the agent data consumers; topology of a network carrying communication between the agent data consumers, agent platform servers, and/or agent computing hosts; a functional role of one of the agents and/or one of the agent data consumers; a version of an agent; and/or a version of an agent data consumer. In some examples, the agent bridge monitors a plurality of message topics or queues, each of the message topics or queues having a distinct type, each of the message topics or queues being configured to temporarily store messages received from the agents. When the number of messages queued in a first message topic or queue of the plurality of message topics or queues exceeds a predefined stop sending level, the bridge sends an advisory message from the agent bridge to the agents indicating that messages of the corresponding type should not be sent. 
After sending the stop sending advisory message, when the number of messages queued in the first message topic or queue reaches a predefined restart sending level, the agent bridge sends an advisory message from the message bridge to the agents indicating that sending of messages of the corresponding type can be resumed. XIII. Example Computing Environment FIG.22illustrates a generalized example of a suitable computing environment2200in which described embodiments, techniques, and technologies, including reporting agents and monitor servers, can be implemented. For example, the computing environment2200can implement any of the agents, agent platform servers, and agent data consumers, as described herein. The computing environment2200is not intended to suggest any limitation as to scope of use or functionality of the technology, as the technology may be implemented in diverse general-purpose or special-purpose computing environments. For example, the disclosed technology may be implemented with other computer system configurations, including hand held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. The disclosed technology may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices. With reference toFIG.22, the computing environment2200includes at least one central processing unit2210and memory2220. InFIG.22, this most basic configuration2230is included within a dashed line. The central processing unit2210executes computer-executable instructions and may be a real or a virtual processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power and as such, multiple processors can be running simultaneously. The memory2220may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two. The memory2220stores software2280, images, and video that can, for example, implement the technologies described herein. A computing environment may have additional features. For example, the computing environment2200includes storage2240, one or more input devices2250, one or more output devices2260, and one or more communication connections2270. An interconnection mechanism (not shown) such as a bus, a controller, or a network, interconnects the components of the computing environment2200. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment2200, and coordinates activities of the components of the computing environment2200. The storage2240may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, or any other medium which can be used to store information and that can be accessed within the computing environment2200. The storage2240stores instructions for the software2280, plugin data, and messages, which can be used to implement technologies described herein. 
The input device(s)2250may be a touch input device, such as a keyboard, keypad, mouse, touch screen display, pen, or trackball, a voice input device, a scanning device, or another device, that provides input to the computing environment2200. For audio, the input device(s)2250may be a sound card or similar device that accepts audio input in analog or digital form, or a CD-ROM reader that provides audio samples to the computing environment2200. The output device(s)2260may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment2200.

The communication connection(s)2270enable communication over a communication medium (e.g., a connecting network) to another computing entity. The communication medium conveys information such as computer-executable instructions, compressed graphics information, video, or other data in a modulated data signal. The communication connection(s)2270are not limited to wired connections (e.g., megabit or gigabit Ethernet, Infiniband, Fibre Channel over electrical or fiber optic connections) but also include wireless technologies (e.g., RF connections via Bluetooth, WiFi (IEEE 802.11a/b/n), WiMax, cellular, satellite, laser, infrared) and other suitable communication connections for providing a network connection for the disclosed agents, bridges, and agent data consumers. In a virtual host environment, the communication connection(s) can be a virtualized network connection provided by the virtual host.

Some embodiments of the disclosed methods can be performed using computer-executable instructions implementing all or a portion of the disclosed technology in a computing cloud2290. For example, agents can be executing vulnerability scanning functions in the computing environment while agent platform (e.g., bridge) and agent data consumer services can be performed on servers located in the computing cloud2290.

Computer-readable media are any available media that can be accessed within a computing environment2200. By way of example, and not limitation, within the computing environment2200, computer-readable media include memory2220and/or storage2240. As should be readily understood, the term computer-readable storage media includes the media for data storage such as memory2220and storage2240, and not transmission media such as modulated data signals.

In view of the many possible embodiments to which the principles of the disclosed subject matter may be applied, it should be recognized that the illustrated embodiments are only preferred examples and should not be taken as limiting the scope of the claims to those preferred examples. Rather, the scope of the claimed subject matter is defined by the following claims. We therefore claim as our invention all that comes within the scope of these claims.
116,923
11863461
DETAILED DESCRIPTION OF EMBODIMENTS

The present disclosure is described below based on embodiments, but the present disclosure is not limited solely to these embodiments. In the following detailed description of the present disclosure, some specific details are set forth in detail. One skilled in the art may thoroughly understand the present disclosure even without these specific details. Well-known methods, procedures, flows, elements and circuits are not described in detail herein so as not to obscure the essence of the present disclosure. Furthermore, those of ordinary skill in the art will appreciate that the drawings provided herein are for illustrative purposes and are not necessarily drawn to scale. Unless the context clearly requires otherwise, throughout the description, the words "comprise", "comprising", and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in the sense of "including, but not limited to." In the description of the present disclosure, it is to be understood that the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present disclosure, "a plurality" means two or more unless otherwise specified.

Generally speaking, arbitration schemes for NOC data channels in the existing art include two cases, i.e., a fixed-priority arbitration scheme and a round-robin priority arbitration scheme, in which arbitration of the data channels refers to determining the priority of data according to arbitration, and thus determining which data obtains the usage right of a data channel. An arbitration scheme typically includes multiple levels of arbitration. As an implementation, all data to be transmitted is arbitrated in a first level arbitration. The lower the level of arbitration, the more data channels are involved, and the highest level of arbitration involves only one arbitration channel through which data can reach the destination. The arbitration scheme shown inFIG.1includes three levels of arbitration, i.e., a first level arbitration involving 4 data channels, channel1, channel2, channel3, and channel4; a second level arbitration involving 2 data channels, channel5 and channel6; and a third level arbitration involving 1 data channel, channel7. As another implementation, each level of arbitration involves a same number of channels, and the data participating in a next level of arbitration includes data output from data channels of a previous level and data newly input. As shown inFIG.2, the first level arbitration involves data channel channel1, the second level arbitration involves data channel channel2, the third level arbitration involves data channel channel3, and the fourth level arbitration involves data channel channel4. Either in the arbitration scheme shown inFIG.1or in the arbitration scheme shown inFIG.2, both the fixed-priority arbitration scheme and the round-robin priority arbitration scheme are present.

A first case, i.e., the fixed-priority arbitration scheme, in which priorities of data sources are fixed, is suitable for arbitration with a specific priority order, but cannot implement dynamic adjustment of the priorities.
For example, as shown inFIG.1, assuming that 8 data sources are included, namely, a data0 data source, a data1 data source, a data2 data source, a data3 data source, a data4 data source, a data5 data source, a data6 data source, and a data7 data source, which are transmitted to the destination after three levels of arbitration, since the data channels are limited, more than one data piece may compete for usage right of one data channel. For example, data0 and data1 are arbitrated by an arbiter (ARB)1in the first level arbitration to determine the priorities and compete for usage right of channel1; data2 and data3 are arbitrated by ARB2 in the first level arbitration to determine the priorities and compete for usage right of channel2; data4 and data5 are arbitrated by ARB3 in the first level arbitration to determine the priorities and compete for usage right of channel3; and data6 and data7 are arbitrated by ARB4 in the first level arbitration to determine the priorities and compete for usage right of channel4. Output data from channel1 and output data from channel2 are further arbitrated by ARB5 in the second level arbitration to determine the priorities and compete for usage right of channel5, that is, data competing for usage right of channel5 includes data0, data1, data2 and data3. Output data from channel3 and output data from channel4 are further arbitrated by ARB6 in the second level arbitration to determine the priorities and compete for usage right of channel6, that is, data competing for usage right of channel6 includes data4, data5, data6 and data7. Output data from channel5 and output data from channel6 are further arbitrated by ARB7 in the third level arbitration to determine the priorities and compete for usage right of channel7, that is, data competing for usage right of channel7 includes data0, data1, data2, data3, data4, data5, data6 and data7. By analogy, details will not be described in the present disclosure, until the data is transmitted to the destination.

With the fixed-priority arbitration scheme, priorities of data sources corresponding to individual data pieces can be set in advance, and on the assumption that in ARB1, the priority of the data0 data source is set to be higher than that of the data1 data source, when data from the data0 data source and data from the data1 data source are simultaneously input, the data from the data0 data source obtains usage right of channel1, and the priority cannot be changed.

The first case is described below with reference toFIG.2. As described above, since the data channels are limited, more than one data piece may compete for usage right of one data channel. For example, data0 and data1 are arbitrated by ARB1 to determine the priorities and compete for usage right of channel1. Output data from channel1 and data2 need to be further arbitrated by ARB2 and compete for usage right of channel2. Output data from channel2 and data3 need to be further arbitrated by ARB3 and compete for usage right of channel3. By analogy, details will not be described in the present disclosure, until the data is transmitted to the destination.
With the fixed-priority arbitration scheme, priorities of data sources corresponding to individual data pieces can be set in advance, and on the assumption that in ARB1, the priority of the data0 data source is set to be higher than that of the data1 data source, when data from the data0 data source and data from the data1 data source are simultaneously input, the data from the data0 data source obtains usage right of channel1, and the priority cannot be changed. A second case, i.e., the round-robin priority arbitration scheme, can implement dynamic adjustment of the priorities of the data sources, but cannot relieve an imbalance in data transmission delay. For example, as can be seen fromFIG.1, the data sources data0 to data7 are encoded according to a code order shown in Table 1, and each data source in Table 1 has a code length of 3 bits, specifically:

TABLE 1
Data source    Code
data0          000
data1          001
data2          010
data3          011
data4          100
data5          101
data6          110
data7          111

The above Table 1 shows codes of the data sources, where an initial priority order of the data sources is data0>data1>data2>data3>data4>data5>data6>data7, that is, the corresponding code priority order is 000>001>010>011>100>101>110>111, based on which the round-robin priority arbitration is performed. Assuming that, taking the third level arbitration as an example, data input in ARB7 through channel5 may originate from data sources data0, data1, data2 and data3, and data input through channel6 may originate from data4, data5, data6 and data7, then according to the round-robin arbitration scheme, the arbitration process includes: 1. A first data arbitration: assuming that data0 and data4 compete for usage right of channel7, it is determined that data0 obtains the usage right according to the initial priority order, and due to the round-robin, the priority of data0 becomes the lowest in a next arbitration, that is, the priority order of the data sources is data1>data2>data3>data4>data5>data6>data7>data0, that is, the corresponding code priority order is 001>010>011>100>101>110>111>000; 2. A second data arbitration: assuming that data1 and data4 continue to compete for usage right of channel7, similarly, data1 obtains the usage right, and due to the round-robin, the priority of data1 becomes the lowest in a next arbitration, that is, the priority order of the data sources is data2>data3>data4>data5>data6>data7>data0>data1, that is, the corresponding code priority order is 010>011>100>101>110>111>000>001; 3. A third data arbitration: assuming that data2 and data4 continue to compete for usage right of channel7, similarly, data2 obtains the usage right, and due to the round-robin, the priority of data2 becomes the lowest in a next arbitration, that is, the priority order of the data sources is data3>data4>data5>data6>data7>data0>data1>data2, that is, the corresponding code priority order is 011>100>101>110>111>000>001>010; 4. A fourth data arbitration: assuming that data3 and data4 continue to compete for usage right of channel7, similarly, data3 obtains the usage right, and due to the round-robin, the priority of data3 becomes the lowest in a next arbitration, that is, the priority order of the data sources is data4>data5>data6>data7>data0>data1>data2>data3, that is, the corresponding code priority order is 100>101>110>111>000>001>010>011; 5. 
A fifth data arbitration: assuming that data0 and data4 continue to compete for usage right of channel7, similarly, data4 obtains the usage right, and due to the round-robin, the priority of data4 becomes the lowest in a next arbitration, that is, the priority order of the data sources is data5>data6>data7>data0>data1>data2>data3>data4, that is, the corresponding code priority order is 101>110>111>000>001>010>011>100; 6. A sixth data arbitration: assuming that data0 and data5 continue to compete for usage right of channel7, similarly, data5 obtains the usage right, and due to the round-robin, the priority of data5 becomes the lowest in a next arbitration, that is, the priority order of the data sources is data6>data7>data0>data1>data2>data3>data4>data5, that is, the corresponding code priority order is 110>111>000>001>010>011>100>101; 7. A seventh data arbitration: assuming that data0 and data6 continue to compete for usage right of channel7, similarly, data6 obtains the usage right, and due to the round-robin, the priority of data6 becomes the lowest in a next arbitration, that is, the priority order of the data sources is data7>data0>data1>data2>data3>data4>data5>data6, that is, the corresponding code priority order is 111>000>001>010>011>100>101>110; 8. An eighth data arbitration: assuming that data0 and data7 continue to compete for usage right of channel7, similarly, data7 obtains the usage right, and due to the round-robin, the priority of data7 becomes the lowest in a next arbitration, that is, the priority order of the data sources is data0>data1>data2>data3>data4>data5>data6>data7, that is, the corresponding code priority order is 000>001>010>011>100>101>110>111. According to the above scheme, from the first data arbitration to the fourth data arbitration, data4, data5, data6 and data7 output from channel6 are blocked in the process, and from the fifth data arbitration to the eighth data arbitration, data0, data1, data2 and data3 output from channel5 are blocked in the process, causing imbalance in data transmission delay. In addition to causing the imbalance in data transmission delay, the second case may further result in imbalance in the number of data sources that reach the destination. As shown inFIG.2, during the process of transmitting data to the destination, data0 and data1 compete for usage right of channel1, and a ratio of the data competing with data0 to data0 in this competition is 1:1, data2 and output data from channel1 compete for usage right of channel2, and a ratio of the data competing with data2 to data2 in this competition is 2:1, data3 and output data from channel2 compete for usage right of channel3, and a ratio of the data competing with data3 to data3 in this competition is 3:1, data4 and output data from channel3 compete for usage right of channel4, and a ratio of the data competing with data4 to data4 in this competition is 4:1. Therefore, in the last competition, a larger proportion of data4 reaches the destination than others, causing imbalance in the number of data sources that reach the destination. Due to such imbalance, the balance of the arbitration scheme is affected, and the requirement for priority adjustment cannot be met. In the present disclosure, the data channel may also be referred to as a transmission channel, which is not limited in the present disclosure. 
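To make the described delay imbalance concrete, the following Python sketch (illustrative only; the request pattern, list names and function are assumptions, not part of the disclosure) simulates the round-robin behaviour of ARB7 in the third level arbitration and reproduces the grant sequence of the eight arbitrations above.

```python
# Classic round-robin arbitration as described above: the winner's priority
# drops to the lowest for the next arbitration.

def round_robin_grant(requests, priority_order):
    """Return the granted source and the rotated priority order."""
    winner = next(src for src in priority_order if src in requests)
    rotated = [s for s in priority_order if s != winner] + [winner]
    return winner, rotated

order = list(range(8))                    # data0 > data1 > ... > data7
channel5 = [0, 1, 2, 3, 0, 0, 0, 0]       # source offered by channel5 per cycle
channel6 = [4, 4, 4, 4, 4, 5, 6, 7]       # source offered by channel6 per cycle
for cycle, (a, b) in enumerate(zip(channel5, channel6), start=1):
    winner, order = round_robin_grant({a, b}, order)
    print(f"arbitration {cycle}: data{a} vs data{b} -> data{winner} wins channel7")
# data0..data3 win the first four arbitrations, so the channel6 traffic
# (data4..data7) is blocked until the fifth arbitration -- the delay imbalance
# discussed above.
```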
The inventor of the present disclosure has found that a root cause of the imbalance in data transmission delay and the imbalance in the number of data sources reaching the destination in the round-robin priority arbitration scheme is that: although a same data source has different priority sequences in different arbitrations, the priority sequence number of the data source changes linearly (for example, the priority sequence number of a certain data source gradually increases or decreases as the number of arbitrations increases), and such linear change is too regular, which will inevitably cause non-uniform delay after multiple times of arbitration, as well as imbalance in the number of data sources reaching the destination. In view of this, as an aspect of the present disclosure, there is provided a data processing method which, as shown inFIG.3, includes operations S310to S320. At operation S310, determining a plurality of candidate data pieces, where the candidate data pieces are provided from corresponding data sources. At operation S320, determining a target data piece based on priorities of the data sources corresponding to the plurality of candidate data pieces in a current cycle, where a same data source has different priorities in different processing cycles, and priority sequence numbers of a same data source in different processing cycles satisfy a nonlinear relationship. It should be noted that the “priority sequence number” herein refers to a serial number of a data source in a sequence of data sources arranged according to the priorities. For example, in a first processing cycle, the priority sequence of the data sources is: data0>data1>data2>data3>data4>data5>data6>data7, then the priority sequence numbers of these 8 data sources are as shown in Table 2:

TABLE 2
Data source    Sequence number
data0          1
data1          2
data2          3
data3          4
data4          5
data5          6
data6          7
data7          8

For another example, in another processing cycle, the priority sequence of the data sources is: data3>data1>data0>data2>data6>data4>data5>data7, then the priority sequence numbers of these 8 data sources are as shown in Table 3:

TABLE 3
Data source    Sequence number
data0          3
data1          2
data2          4
data3          1
data4          6
data5          7
data6          5
data7          8

With the data processing method provided in the present disclosure, the priority sequence of a same data source varies irregularly in different processing cycles, and thus, in a plurality of successive processing cycles, the limited data output to a destination comes from a more varied and random set of data sources, and the balance in the number of data sources that reach the destination is improved. Since the priority sequence of the same data source varies irregularly in different processing cycles, the source of the data transmitted through each channel in different cycles also changes so that the time delay during data transmission is more balanced. The following explains “priority sequence numbers of a same data source in different processing cycles satisfy a nonlinear relationship” by way of example. Assuming that in a first processing cycle, a priority sequence number of the data0 data source is 1, then in the following 4 processing cycles, the priority sequence numbers of the data0 data source may be 8, 3, 6 and 5, respectively. It can be seen that 1, 8, 3, 6 and 5 do not satisfy an increasing or decreasing linear relationship. 
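The notion of a priority sequence number can be illustrated with a short Python sketch; the helper name is an assumption introduced only for this example.

```python
# A source's priority sequence number is its 1-based position in the
# priority order of a given processing cycle (cf. Table 2 and Table 3).

def sequence_numbers(priority_order):
    return {source: pos for pos, source in enumerate(priority_order, start=1)}

cycle_1 = ["data0", "data1", "data2", "data3", "data4", "data5", "data6", "data7"]
cycle_k = ["data3", "data1", "data0", "data2", "data6", "data4", "data5", "data7"]

print(sequence_numbers(cycle_1)["data0"])  # 1, as in Table 2
print(sequence_numbers(cycle_k)["data0"])  # 3, as in Table 3
# Over several cycles the numbers for data0 might read 1, 8, 3, 6, 5 --
# neither steadily increasing nor decreasing, i.e. the nonlinear
# relationship referred to above.
```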
As an optional implementation, for any data source, a data source with a priority sequence number 1 less than the priority sequence number of said any data source in different processing cycles is a different data source, and/or a data source with a priority sequence number 1 greater than the priority sequence number of the any data source in two adjacent processing cycles is also a different data source. The above rules are explained below by way of example. In the first processing cycle, the priorities of the plurality of data sources are data0>data1>data2>data3>data4>data5>data6>data 7. For the data4 data source, the data source with a priority sequence number 1 less than the priority sequence number of the data4 data source is data3, and the data source with a priority sequence number 1 greater than the priority sequence number of the data4 data source is data5. In the second processing cycle, the priorities of the plurality of data sources are data3>data1>data0>data2>data6>data4>data5>data7. For the data4 data source, the data source with a priority sequence number 1 less than the priority sequence number of the data4 data source is data6, and the data source with a priority sequence number 1 greater than the priority sequence number of the data4 data source is data5. In the above, two data transmission modes shown inFIGS.1and2are introduced, and how to implement the data processing method of the present disclosure when data transmission is performed according to the data transmission mode shown inFIG.1and how to implement the data processing method of the present disclosure when data transmission is performed according to the data transmission mode shown inFIG.2are respectively described below. Based on the arbitration scheme shown inFIG.1, as shown inFIG.4, in one processing cycle (which may be the first processing cycle or other processing cycles), the operation S320of the data processing method of the present disclosure may include operations S321to S326. At operation S321, determining the number of data sources in an Nthlevel arbitration. Optionally, each data source has a corresponding physical serial number, and in a first level arbitration, every two data sources with adjacent physical serial numbers compete for usage right of one data channel. N is variable and may take any natural number from 1 to M. In the implementation shown inFIG.1, M is 3. In a possible implementation, any level of arbitration (i.e., an Nthlevel arbitration) obtains data from the data sources of the two data channels, and at least 2 data sources participate in the Nthlevel arbitration. For example, as shown inFIG.1, ARB7 of the third level arbitration acquires data from data sources corresponding to channel5 and channel6, where output data from channel5 may be data0, data1, data2 and data3, and output data from channel6 may be data4, data5, data6 and data7, that is, the third level arbitration involves 8 data sources. Here, for each data source, the physical serial number is a fixed serial number that can be used to identify the data source. As an optional implementation, “0” in data0 is the “physical serial number” of the data0 data source, “1” in data1 is the “physical serial number” of the data1 data source, so on and so forth. 
It should be noted that in the data transmission mode shown inFIG.1, in the first level arbitration, the data0 data source and the data1 data source compete for usage right of channel1, the data2 data source and the data3 data source compete for usage right of channel2, the data4 data source and the data5 data source compete for usage right of channel3, and the data6 data source and the data7 data source compete for usage right of channel4. The physical serial number is merely used for identifying the data source, and does not limit the priority of the data source. At operation S322, generating codes from the number of data sources in the Nthlevel arbitration in a set encoding mode. In a possible implementation, a priority order of the codes is determined according to the codes. In a possible implementation, the codes are generated from the number of data sources in a binary mode, and the number of codes is equal to the number of data sources. For example, in the case of 8 data sources, 3-bit binary codes may be adopted, which may include 8 binary codes, i.e., 000, 001, 010, 011, 100, 101, 110, and 111. In the case of 16 data sources, 4-bit binary codes may be adopted, which may include 0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111, 1000, 1001, 1010, 1011, 1100, 1101, 1110 and 1111. In the present disclosure, as the number of data sources increases, the number of bits of each binary code also increases, which may be determined according to actual situations. At operation S323, assigning all the generated codes to all data sources in the Nthlevel arbitration according to a set priority order. Optionally, priority sequence numbers of any two data sources with adjacent physical serial number are not adjacent. For example, the physical serial number of the data0 data source is adjacent to the physical serial number of the data1 data source, but the priority sequence number of the data0 data source is not adjacent to the priority sequence number of the data1 data source. In the existing art as described above, priority sequence numbers of two data sources with adjacent physical serial number are adjacent, resulting in that in the round-robin priority arbitration scheme, the priority sequence number of the same data source in different processing cycles changes linearly. In the present disclosure, since the priority sequence numbers of any two data sources with adjacent physical serial number are not adjacent, it is achieved that the priority sequence number of the same data source in different processing cycles changes non-linearly in the round-robin priority arbitration scheme. In the present disclosure, the codes may be assigned to the data sources in the set order in two ways. In a first way, the plurality of data sources in the Nthlevel arbitration are divided into a plurality of data source groups each including at least one data source, and the operation of assigning all the generated codes to all data sources in the Nthlevel arbitration according to the set order (i.e., the operation S323) may include: determining, according to an order of the codes and the number of data sources in each data source group, codes corresponding to each data source group; and assigning the codes corresponding to the data source group to the data sources in the data source group one by one. 
For example, the codes are sorted from small to large according to the corresponding values, i.e., 000, 001, 010, 011, 100, 101, 110, and 111. The data sources include the data0 data source, the data1 data source, the data2 data source, the data3 data source, the data4 data source, the data5 data source, the data6 data source, and the data7 data source, in which data0, data1, data2, and data3 may be output from a channel A, and data4, data5, data6, and data7 may be output from a channel B. For example, according to the channels where the data sources are output, the data sources are divided into two groups, i.e., a first group including data0, data1, data2, and data3, and a second group including data4, data5, data6, and data7. According to an order of the codes and the number of data sources in each data source group, the codes corresponding to the first group may be determined to be: 000, 001, 010, 011, and the codes corresponding to the second group may be determined to be: 100, 101, 110, 111. Then, the codes corresponding to the first group are firstly assigned to data0, data1, data2 and data3 in the first group one by one, and the codes corresponding to the second group are assigned to the data sources in the second group one by one, thereby obtaining the following Table 4:

TABLE 4
Data source    data0    data1    data2    data3    data4    data5    data6    data7
Code           000      001      010      011      100      101      110      111

It will be appreciated that when assigning the codes to the data sources in the set order, the codes may be further assigned to the data sources one by one in an order of the codes. When the Nthlevel arbitration includes a plurality of (e.g., 2) data source groups, the codes may be assigned to the data sources in the first group one by one in the order of the codes, for example, from small to large, and after the codes are assigned to the first group, the remaining codes may be assigned to the data sources in the second group one by one in an order from small to large. The number of data source groups, the number of data sources in each data source group, and the manner in which the codes are assigned to the data sources according to the set order, are not limited in the present disclosure. In a second way, bit-to-bit swap may be performed on code bits of each code to generate a swapped code corresponding to the code, and the operation of assigning all the generated codes to all data sources in the Nthlevel arbitration according to the set order (i.e., the operation S323) may include: determining, according to an order of the codes, the number of data sources in each data source group, and the swapped code corresponding to each code, swapped codes corresponding to each data source group; and assigning the swapped codes corresponding to the data source group to the data sources in the data source group one by one, so that priority sequence numbers of any two data sources with adjacent physical serial numbers are not adjacent. In a possible implementation, bit-to-bit swap is performed on code bits of the codes to generate the swapped codes. 
Specifically, a centrosymmetric swap (i.e., reversing the bit order of each code) is performed on 000, 001, 010, 011, 100, 101, 110, and 111 to generate the swapped codes, thereby obtaining the following Table 5:

TABLE 5
Code           000    001    010    011    100    101    110    111
Swapped code   000    100    010    110    001    101    011    111

In a possible implementation, as described above, the data sources are divided into two groups, i.e., the first group including the data0 data source, the data1 data source, the data2 data source, and the data3 data source, and the second group including the data4 data source, the data5 data source, the data6 data source, and the data7 data source. According to an order of the codes, the number of data sources in each data source group, and the swapped code corresponding to each code, the swapped codes corresponding to each data source group may be determined. For example, the swapped codes corresponding to the first group are determined to be: 000, 100, 010, and 110, and the swapped codes corresponding to the second group are determined to be: 001, 101, 011, and 111. The swapped codes corresponding to the first group may be assigned to data0, data1, data2 and data3 in the first group one by one, and the swapped codes corresponding to the second group may be assigned to the data sources in the second group one by one, thereby obtaining the following Table 6:

TABLE 6
Swapped code   000      100      010      110      001      101      011      111
Data source    data0    data1    data2    data3    data4    data5    data6    data7

When assigning the codes to the data sources in the set order, the swapped codes corresponding to the codes may be assigned to the data sources one by one in an order of the codes. When a plurality of (e.g., 2) data source groups are included, the swapped codes corresponding to the codes may be assigned to the data sources in the first group one by one in the order of the codes, for example, from small to large; and after the swapped codes are assigned to the first group, the swapped codes corresponding to the remaining codes may be assigned to the data sources in the second group one by one. After the above operations, which need to be performed only once in a single processing cycle, operations S324to S326to be performed in each processing cycle are described below with reference toFIGS.4and5. At operation S324, generating a target code for a data source corresponding to each candidate data piece according to the generated codes. For the arbiter, more than one candidate data piece may be arbitrated at one time. For example, in the implementation shown inFIG.1, ARB7 may arbitrate between a candidate data piece supplied from the data0 data source and a candidate data piece supplied from the data7 data source. In the present disclosure, corresponding to the first way for implementing the operation S323, the operation S324may include: performing bit-to-bit swap on code bits of the code assigned to the data source corresponding to the candidate data piece to generate a swapped code; and determining the swapped code as the target code of the data source corresponding to the candidate data piece. 
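The centrosymmetric swap and the resulting assignment (Tables 5 and 6) can be reproduced with a brief Python sketch; the helper names are assumptions made for illustration only.

```python
# Bit-order reversal of a fixed-width binary code string, e.g. 001 -> 100,
# 011 -> 110, matching the swapped codes of Table 5.

def centrosymmetric_swap(code):
    return code[::-1]

codes = [format(i, "03b") for i in range(8)]          # 000 ... 111
swapped = [centrosymmetric_swap(c) for c in codes]
print(dict(zip(codes, swapped)))
# {'000': '000', '001': '100', '010': '010', '011': '110',
#  '100': '001', '101': '101', '110': '011', '111': '111'}

# Assigning the swapped codes to data0..data7 in order reproduces Table 6,
# so sources with adjacent physical serial numbers (e.g. data0 and data1)
# receive codes whose priorities are not adjacent.
assignment = {f"data{i}": swapped[i] for i in range(8)}
print(assignment)
```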
In a possible implementation, after 000, 001, 010, 011, 100, 101, 110 and 111 are assigned to the data sources one by one in the order of data0, data1, data2, data3, data4, data5, data6 and data7, the code bits of the codes are swapped to generate the swapped codes, thereby obtaining the following Table 7:

TABLE 7
Data source    data0    data1    data2    data3    data4    data5    data6    data7
Code           000      001      010      011      100      101      110      111
Swapped code   000      100      010      110      001      101      011      111

As can be seen from Table 7, the codes include: 000, 001, 010, 011, 100, 101, 110, and 111, and the swapped codes also include: 000, 001, 010, 011, 100, 101, 110, and 111, in which some data sources have codes the same as the swapped codes, while other data sources have codes different from the swapped codes. Corresponding to the second way for implementing the operation S323, the operation S324may include: determining the swapped code of the data source corresponding to the candidate data piece as the target code of the data source corresponding to the candidate data piece. In a possible implementation, the swapped codes are as shown in Table 5, and the generated swapped codes are assigned to the data sources one by one in order, thereby obtaining the same table as Table 6. At operation S325, arbitrating, according to the target code and a priority order corresponding to the target code, the data source corresponding to each candidate data piece to determine the target data piece. At operation S326, updating, according to an arbitration result, a priority order of all data sources at each level of arbitration. After operation S326, a same data source may have different priorities in different processing cycles, and priority sequence numbers of a same data source in different processing cycles satisfy a nonlinear relationship. In a possible implementation, the priority order corresponding to the target code may be determined according to the priority order of the codes, or may also be determined according to the target code itself, such as according to a numerical sequence of the target code, which is not limited in the present disclosure. In a possible implementation, the data source corresponding to the target code is arbitrated according to the target code and the priority order corresponding to the target code to determine the data source obtaining usage right of the data channel, and then a priority of the target code of the data source, which has obtained usage right of the channel, is reduced to the last in the priority order of all data sources (i.e., the priority sequence number becomes the largest). For example, assuming that the initially set priority order is 000>001>010>011>100>101>110>111, according to the above Table 6 or Table 7, the priority order of the data sources is determined to be data0>data4>data2>data6>data1>data5>data3>data7, and assuming that the data0 data source is arbitrated with the data4 data source, the data0 data source obtains the usage right of the data channel, so the priority of the data0 data source becomes the lowest in a next arbitration, and the updated priority order is data4>data2>data6>data1>data5>data3>data7>data0. By analogy, details will not be described in the present disclosure. The data transmission process is described below through specific embodiments. 
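Before the prose walkthrough below, the interplay of swapped target codes and the rotating code priority order can be previewed with the following Python sketch; the input sequences and function names are assumptions chosen to mirror the eight arbitrations described next.

```python
# Arbitration with swapped target codes: the candidate whose target code ranks
# highest in the current code priority order wins, and that code is then
# rotated to the lowest priority.

TARGET = {0: "000", 1: "100", 2: "010", 3: "110",
          4: "001", 5: "101", 6: "011", 7: "111"}   # Table 6 / Table 7

def arbitrate(src_a, src_b, code_order):
    winner = min((src_a, src_b), key=lambda s: code_order.index(TARGET[s]))
    code_order.remove(TARGET[winner])
    code_order.append(TARGET[winner])                # winner drops to lowest
    return winner

order = [format(i, "03b") for i in range(8)]         # 000 > 001 > ... > 111
channel5 = [0, 2, 2, 1, 1, 3, 3, 0]                  # inputs assumed per cycle
channel6 = [4, 4, 6, 6, 5, 5, 7, 7]
print([arbitrate(a, b, order) for a, b in zip(channel5, channel6)])
# -> [0, 4, 2, 6, 1, 5, 3, 7]: grants alternate between channel5 and channel6,
# unlike the plain round-robin case shown earlier.
```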
Taking ARB7 arbitrating channel7 shown inFIG.1as an example, the respective data sources are encoded in the above order generated according to the binary mode, so that the data0 data source has a code 000, the data1 data source has a code 001, the data2 data source has a code 010, the data3 data source has a code 011, and the data source input from channel5 to ARB7 may be data0 data source, data1 data source, data2 data source, or data3 data source, and the data4 data source has a code 100, the data5 data source has a code 101, the data6 data source has a code 110, the data7 data source has a code 111, and the data source input from channel6 to ARB7 may be data4 data source, data5 data source, data6 data source, or data7 data source. Centrosymmetric swap is performed on the codes of the data sources to obtain the swapped codes thereof, in which the data0 data source has a swapped code 000, the data1 data source has a swapped code 100, the data2 data source has a swapped code 010, the data3 data source has a swapped code 110, the data4 data source has a swapped code 001, the data5 data source has a swapped code 101, the data6 data source has a swapped code 011, the data7 data source has a swapped code 111, and the initially set priority order is 000>001>010>011>100>101>110>111. A first data arbitration (i.e., the first processing cycle) is implemented specifically as follows: assuming that the data source input from channel5 is the data0 data source (i.e., data supplied from data0 data source is a candidate data piece) with a target code 000, and assuming that the data source input from channel6 is the data4 data source (i.e., data supplied from data4 data source is a candidate data piece) with a target code 001, since in the priority order, 000>001, it is determined that the data0 data source has a priority higher than the data4 data source, and thus obtains usage right of channel7. Accordingly, in the next processing cycle, the priority of the data0 data source is reduced to the lowest, and the priority order is updated to 001>010>011>100>101>110>111>000. Then, a second data arbitration (i.e., the second processing cycle) is implemented specifically as follows: assuming that the data source input from channel5 is the data2 data source (i.e., data supplied from data2 data source is a candidate data piece) with a target code 010, and assuming that the data source input from channel6 is the data4 data source (i.e., data supplied from data4 data source is a candidate data piece) with a target code 001, since in the priority order, 001>010, it is determined that the data4 data source has a priority higher than the data2 data source, and thus obtains usage right of channel7. In the next processing cycle, the priority of the data4 data source is reduced to the lowest, and the priority order is updated to 010>011>100>101>110>111>000>001. Then, a third data arbitration (i.e., the third processing cycle) is implemented specifically as follows: assuming that the data source input from channel5 is the data2 data source (i.e., data supplied from data2 data source is a candidate data piece) with a target code 010, and assuming that the data source input from channel6 is the data6 data source (i.e., data supplied from data6 data source is a candidate data piece) with a target code 011, since in the priority order, 010>011, it is determined that the data2 data source has a priority higher than the data6 data source, and thus obtains usage right of channel7. 
In the next processing cycle, the priority of the data2 data source is reduced to the lowest, and the priority order is updated to 011>100>101>110>111>000>001>010. Then, a fourth data arbitration (i.e., the fourth processing cycle) is implemented specifically as follows: assuming that the data source input from channel5 is the data1 data source (i.e., data supplied from data1 data source is a candidate data piece) with a target code 100, and assuming that the data source input from channel6 is the data6 data source (i.e., data supplied from data6 data source is a candidate data piece) with a target code 011, since in the priority order, 011>100, it is determined that the data6 data source has a priority higher than the data1 data source, and thus obtains usage right of channel7. In the next processing cycle, the priority of the data6 data source is reduced to the lowest, and the priority order is updated to 100>101>110>111>000>001>010>011. Then, a fifth data arbitration (i.e., the fifth processing cycle) is implemented specifically as follows: assuming that the data source input from channel5 is the data1 data source (i.e., data supplied from data1 data source is a candidate data piece) with a target code 100, and assuming that the data source input from channel6 is the data5 data source (i.e., data supplied from data5 data source is a candidate data piece) with a target code 101, since in the priority order, 100>101, it is determined that the data1 data source has a priority higher than the data5 data source, and thus obtains usage right of channel7. In the next processing cycle, the priority of the data1 data source is reduced to the lowest, and the priority order is updated to 101>110>111>000>001>010>011>100. Then, a sixth data arbitration (i.e., the sixth processing cycle) is implemented specifically as follows: assuming that the data source input from channel5 is the data3 data source (i.e., data supplied from data3 data source is a candidate data piece) with a target code 110, and assuming that the data source input from channel6 is the data5 data source (i.e., data supplied from data5 data source is a candidate data piece) with a target code 101, since in the priority order, 101>110, it is determined that the data5 data source has a priority higher than the data3 data source, and thus obtains usage right of channel7. In the next processing cycle, the priority of the data5 data source is reduced to the lowest, and the priority order is updated to 110>111>000>001>010>011>100>101. Then, a seventh data arbitration (i.e., the seventh processing cycle) is implemented specifically as follows: assuming that the data source input from channel5 is the data3 data source (i.e., data supplied from data3 data source is a candidate data piece) with a target code 110, and assuming that the data source input from channel6 is the data7 data source (i.e., data supplied from data7 data source is a candidate data piece) with a target code 111, since in the priority order, 110>111, it is determined that the data3 data source has a priority higher than the data7 data source, and thus obtains usage right of channel7. In the next processing cycle, the priority of the data3 data source is reduced to the lowest, and the priority order is updated to 111>000>001>010>011>100>101>110. 
Then, an eighth data arbitration (i.e., the eighth processing cycle) is implemented specifically as follows: assuming that the data source input from channel5 is the data0 data source (i.e., data supplied from data0 data source is a candidate data piece) with a target code 000, and assuming that the data source input from channel6 is the data7 data source (i.e., data supplied from data7 data source is a candidate data piece) with a target code 111, since in the priority order, 111>000, it is determined that the data7 data source has a priority higher than the data0 data source, and thus obtains usage right of channel7. In the next processing cycle, the priority of the data7 data source is reduced to the lowest, and the priority order is updated to 000>001>010>011>100>101>110>111. In the present disclosure, the following data arbitrations are performed in the same manner as described above, and are not repeated here. Through the above solution, the data output from channel5 and the data output from channel6 obtain usage right of channel7 alternately so that the problem of imbalance in data transmission delay is solved, and as the number of data sources of channel5 and channel6 increases, the balance effect in data transmission delay becomes more obvious. A specific implementation of the data processing method of the present disclosure based on the scheme shown inFIG.2is described below. Specifically, each of the plurality of candidate data pieces carries an identifier (ID) configured to identify a data source of the candidate data piece. In a possible implementation, assume that 2 candidate data pieces, i.e., a first data piece and a second data piece, are received, and the first data piece and the second data piece each carry an ID configured to identify a data source of the first data piece and a data source of the second data piece, respectively. In a possible implementation, assuming that the received first data piece comes from the data2 data source shown inFIG.2, and the second data piece comes from the data1 data source output from channel1, an ID value of each data source is encoded according to a set binary encoding rule, and each data piece has a different ID value from each other. For example, the data sources are encoded according to 0 to n, where the data0 data source has an ID 000, the data1 data source has an ID 001, the data2 data source has an ID 010, the data3 data source has an ID 011, and the data4 data source has an ID 100. In a possible implementation, the number of bits of each binary code may be set according to actual situations, which is not limited in the present disclosure. In the present disclosure, as shown inFIG.6, operation S320may include operations S321′ to S323′. At operation S321′, determining a logical difference value between the ID carried by each of the plurality of candidate data pieces and a reference ID, where the reference ID has different values in different processing cycles. The reference ID is a value in the same number system as the ID carried by each data piece. For example, when the ID of each data piece is encoded according to a binary encoding rule, the reference ID is a binary value. The logical difference value is a value determined from the reference ID and the ID carried by a data piece according to a subtraction in the number system corresponding to the reference ID. For example, the logical difference value may be a value obtained by subtracting the reference ID from the ID carried by a data piece according to a binary subtraction. 
In a possible implementation, the reference ID is stored in a state register unit. Taking two data pieces as an example, logical difference values between the IDs carried by the first and second data pieces and the ID of the data piece obtaining usage right of a channel in a previous arbitration are respectively determined, where the ID of the data obtaining usage right of the channel is the reference ID stored in the state register unit. For example, assume that binary encoding is adopted, the first data piece comes from the data2 data source and carries an ID 010, the second data piece comes from the data3 data source and carries an ID 011, an ID of the data obtaining usage right of the channel in a previous arbitration is 011 (i.e., the reference ID is 011), then a logical difference value between the ID carried by the first data piece and the reference ID is 010−011=111, and a logical difference value between the ID carried by the second data piece and the reference ID is 011−011=000, where each logical difference value is a value obtained by subtraction of two binary values and calculated according to the binary subtraction. Since the reference ID has different values in different processing cycles, a data source with a same ID in different processing cycles may correspond to different logical difference values. At operation S322′, determining the priorities of the data sources corresponding to the plurality of candidate data pieces from the determined plurality of logical difference values. As described above, a data source with the same ID in different processing cycles may correspond to different logical difference values, which means that the data source with the same ID has different priorities in different processing cycles. As the reference ID changes, priority sequence numbers of a same data source in different processing cycles may satisfy a nonlinear relationship. At operation S323′, taking one of the plurality of candidate data pieces corresponding to the data source with a highest priority as the target data piece. In a possible implementation, the target data piece is one of the plurality of data pieces that obtains usage right of the channel. In a possible implementation, the operation of determining the priorities of the data sources corresponding to the plurality of candidate data pieces from the determined plurality of logical difference values (i.e., the operation S322′) may include: comparing the plurality of logical difference values; and determining a data source corresponding to a largest logical difference value of the plurality of logical difference values as the data source with the highest priority. For example, the ARB3 compares the logical difference value 001 between the ID of the first data piece and the reference ID with the logical difference value 010 between the ID of the second data piece and the reference ID, and obtains that 001<010. Therefore, the second data piece has a priority higher than the first data piece, and obtains usage right of channel4 (i.e., the data source of the second data piece has a priority higher than the data source of the first data piece). 
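The logical difference value and the selection of the highest-priority candidate can be sketched as follows in Python; treating the binary subtraction as modular arithmetic is the only assumption here, and the numeric example mirrors the one given above.

```python
# Logical difference as unsigned binary subtraction modulo 2**WIDTH; the
# candidate with the largest difference from the reference ID wins, and its
# ID becomes the reference ID for the next processing cycle.

WIDTH = 3

def logical_diff(candidate_id, reference_id):
    return (candidate_id - reference_id) % (1 << WIDTH)

def arbitrate(candidate_ids, reference_id):
    winner = max(candidate_ids, key=lambda i: logical_diff(i, reference_id))
    return winner, winner                     # winner's ID is the new reference

print(format(logical_diff(0b010, 0b011), "03b"))   # 111, as in the text
print(format(logical_diff(0b011, 0b011), "03b"))   # 000, as in the text
winner, reference = arbitrate([0b010, 0b011], 0b011)
print(format(winner, "03b"))                       # 010 wins this arbitration
```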
In the process of determining the largest logical difference value among the plurality of logical difference values, it is possible to directly compare the values as described above, or the plurality of logical difference values may be sorted from small to large or from large to small, to determine the largest logical difference value according to the sorting result, which is not limited in the present disclosure. In a possible implementation, the target data piece obtains usage right of the channel, and thus can be transmitted as priority. The target data piece has a priority higher than other data pieces. In the present disclosure, the target data piece in the plurality of data pieces may be determined according to the logical difference values, and since the logical difference value determined in each calculation is different, the determined target data piece is dynamically changed, and since each level of arbitration determines the data to be transmitted as priority, i.e., determines priorities in data transmission, through the above method, the requirement for priority adjustment can be satisfied when the data amount is unevenly distributed. In a possible implement, from a second processing cycle, the ID carried by the target data piece in a previous processing cycle is the reference ID of a current processing cycle. For example, ARB3 may save the ID value 011 of the data3 data source of the second data piece in the state register unit of ARB3 for a next arbitration of ARB3 (i.e., ARB3 enters a next processing cycle). In this manner, the reference ID is determined according to the ID of the target data piece currently obtaining usage right of the channel, the updated reference ID is used for determining the target data piece that obtains usage right of the channel in a next arbitration (i.e., the next processing cycle), and the data source of the target data piece determined in each arbitration (i.e., each processing cycle) is dynamically changed, thereby implementing dynamic adjustment of the priorities of the data sources. Further, since each level of arbitration determines the data to be transmitted as priority, i.e., determines priorities in data transmission, through the above method, the requirement for priority adjustment can be satisfied when the data amount is unevenly distributed. In a possible implementation, when the plurality of candidate data pieces include 2 candidate data pieces, the first data piece may be a data piece from the data0 data source, the data1 data source, the data2 data source, the data3 data source, or the data4 data source, and the second data piece may also be a data piece from the data0 data source, the data1 data source, the data2 data source, the data3 data source, or the data4 data source, which are not limited in the present disclosure, as long as the first data piece and the second data piece are different. In a possible embodiment, an initial value of the state register unit is 0, that is, in the first processing cycle, the reference ID may be 0. The data transmission process is described below through embodiments. Taking ARB3 arbitrating channel3 shown inFIG.2as an example, the respective data sources are encoded in the above order so that data0 and the data0 data source have an ID 000, data1 and the data1 data source have an ID 001, data2 and the data2 data source have an ID 010, data3 and the data3 data source have an ID 011. 
IDs of the data sources input into ARB3 from channel2 may be 000, 001, and 010, and the ID of the data3 input into ARB3 may be ID 011, and an initial value of the state register unit inside ARB3 is 000. A first data arbitration (i.e., the first processing cycle) is implemented as follows: assuming that channel2 inputs a data piece with an ID 010 and a data piece supplied from the data3 data source with an ID 011, it is determined that the logical difference value between the IDs is (010-000)<(011-000), thus the data piece supplied from the data3 data source with the ID 011 obtains usage right of channel3, and the value of the state register unit in ARB3 is updated to 011. Then, a second data arbitration (i.e., the second processing cycle) is implemented as follows: channel2 inputs a data piece with an ID 010, and the data3 data source with the data source ID 011 continues to input the data piece with the data source ID 011, since the value of the state register unit in ARB3 is updated to 011, it is determined that the logical difference value between the IDs is (010-011)>(011-011), thus the data piece supplied from the data source with the ID 010 input from channel2 obtains usage right of channel3, while the value of the state register unit in ARB3 is updated to 010. Then, a third data arbitration (i.e., the third processing cycle) is implemented as follows: assuming that channel2 inputs a data piece supplied from a data source with an ID 000, and the data3 data source supplies a data piece with an ID 011, since the value of the state register unit in ARB3 is updated to 010, it is determined that the logical difference value between the IDs is (000-010)>(011-010), thus the data piece supplied from the data source with the ID 000 input from channel2 obtains usage right of channel3, while the value of the state register unit in ARB3 is updated to 000. A fourth data arbitration (i.e., the fourth processing cycle) is implemented as follows: assuming that channel2 inputs a data piece supplied from a data source with an ID 001, and the data3 data source supplies a data piece with an ID 011, since the value of the state register unit in ARB3 is updated to 000, it is determined that the logical difference value between the IDs is (001-000)<(011-000), thus the data piece with the ID 011 supplied from the data3 data source obtains usage right of channel3, while the value of the state register unit in ARB3 is updated to 011. By analogy, details will not be described in the present disclosure. In some implements, when the arbiter receives 3 data pieces, for example, 3 data pieces with IDs 000, 001, and 010, respectively, and the initial value of the reference ID is 000, the logical difference value between each ID and the reference ID can be determined separately, for example, the logical difference values are 000, 001, and 010, respectively. It may be determined that the maximum logical difference value is 010, and the data piece with the ID 010 obtains usage right of the transmission channel (in the current arbitration, the data piece with the ID 010 has a higher priority than the other two data pieces), thus the data piece with the ID 010 is transmitted, and the value of the reference ID is updated to 010. 
A plurality of data pieces, for example, 3 data pieces with IDs 000, 001 and 010 respectively, are further received, and logical difference values between these 3 data pieces and the reference ID (010) are determined, for example, to be 110, 111 and 000 respectively, where the maximum logical difference value is 111, and the data piece with the ID 001 obtains usage right of the transmission channel (in the current arbitration, the data piece with the ID 001 has a higher priority than the other two data pieces), thus the data piece with the ID 001 is transmitted, and the value of the reference ID is updated to 001. A plurality of data pieces, for example, 3 data pieces with IDs 000, 001 and 010 respectively, are further received, and logical difference values between these 3 data pieces and the reference ID (001) are determined, for example, to be 111, 000 and 001 respectively, where the maximum logical difference value is 111, and the data piece with the ID 000 obtains usage right of the transmission channel (in the current arbitration, the data piece with the ID 000 has a higher priority than the other two data pieces), thus the data piece with the ID 000 is transmitted, and the value of the reference ID is updated to 000. By analogy, details are not described here. In this manner, dynamic adjustment of priorities of a plurality of data pieces can be implemented so that the plurality of data pieces can obtain usage right of the transmission channel in a more balanced manner. After determining the target data piece, the method further includes operation S330. At operation S330, transmitting the target data piece. In the present disclosure, the arbiter may be a component module of the NOC. FIG.7is a block diagram illustrating an implementation of a data processing apparatus according to the present disclosure. As shown inFIG.7, the data processing apparatus includes: a candidate data determination unit710and a target data determination unit720. The candidate data determination unit710is configured to determine a plurality of candidate data pieces, where the candidate data pieces are provided from corresponding data sources. The target data determination unit720is configured to determine a target data piece based on priorities of the data sources corresponding to the plurality of candidate data pieces in a current cycle, where a same data source has different priorities in different processing cycles, and priority sequence numbers of a same data source in different processing cycles satisfy a nonlinear relationship. The apparatus of the present disclosure is used for implementing the method of the present disclosure, and since the principle and beneficial effects of the method have been described in detail above, details are not repeated here. In order to implement the arbitration scheme shown inFIG.1, optionally, as shown inFIG.8, the target data determination unit720may include a number determination subunit721, a code generation subunit722, an assignment subunit723, a target code generation subunit724, an arbitration subunit725, and an order update subunit726. The number determination subunit721is configured to determine the number of data sources in an Nthlevel arbitration, where N is a natural number from 1 to M, and M is the total number of arbitration levels in the current processing cycle. The code generation subunit722is configured to generate codes from the number of data sources in the Nthlevel arbitration in a set encoding mode. 
The assignment subunit723is configured to assign all the generated codes to all data sources in the Nthlevel arbitration according to a set priority order. The target code generation subunit724is configured to generate a target code for a data source corresponding to each candidate data piece according to the generated codes. The arbitration subunit725is configured to arbitrate, according to the target code and a priority order corresponding to the target code, the data source corresponding to each candidate data piece to determine the target data piece. The order update subunit726is configured to update, according to an arbitration result, a priority order of all data sources at each level of arbitration. In order to implement the arbitration scheme shown inFIG.2, optionally, as shown inFIG.9, the target data determination unit720may include a logical difference value determination subunit721′, a priority determination subunit722′, and a target data determination subunit723′. The logical difference value determination subunit721′ is configured to determine a logical difference value between the ID carried by each of the plurality of candidate data pieces and a reference ID, where the reference ID has different values in different processing cycles. The priority determination subunit722′ is configured to determine the priorities of the data sources corresponding to the plurality of candidate data pieces from the determined plurality of logical difference values. The target data determination subunit723′ is configured to take one of the plurality of candidate data pieces corresponding to the data source with a highest priority as the target data piece. The apparatus may further include a data transmission unit730configured to transmit the target data piece. FIG.10is a schematic structural diagram of an electronic device according to the present disclosure. As shown inFIG.10, the electronic device of the present disclosure includes processing cores11to1L and a network on chip14. The processing cores11-1L are all connected to the network on chip14. The network on chip14is configured for data interaction among the L processing cores and between the cores and outside. The electronic device of the present disclosure can implement the data processing method according to the present disclosure. However, in the present disclosure, the specific form and position for implementing the method are not particularly limited. For example, the method may be implemented by the network on chip14, or may be implemented by at least one of the plurality of processing cores. The method may be implemented by software, or may be implemented by hardware. When the method is implemented by software, instructions may be stored in the network on chip14, and the network on chip14implements the method according to the instructions. When the method is implemented by hardware, corresponding hardware may be configured in the network on chip14to implement the method. Apparently, when the method is implemented by software, instructions may be stored in at least one of the plurality of processing cores, and the processing core storing the instructions implements the method according to the instructions. When the method is implemented by hardware, corresponding hardware may be configured in at least one of the plurality of processing cores to implement the method. As will be appreciated by one skilled in the art, aspects of the disclosed embodiments may be embodied in a system, a method or a computer program product. 
Therefore, various aspects of embodiments of the present disclosure may take the form of: an entirely hardware implementation, an entirely software implementation (including firmware, resident software, microcode, etc.) or an implementation combining software and hardware aspects that may be generally referred to herein as a “circuit,” a “module” or a “system.” Furthermore, various aspects of embodiments of the present disclosure may take the form of: a computer program product embodied in one or more computer-readable media having a computer-readable program code embodied thereon. Any combination of one or more computer-readable media may be utilized. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination thereof. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of the present disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, an apparatus, or a device. A computer-readable signal medium may include a propagated data signal with a computer-readable program code embodied therein, for example, in a baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to: electromagnetic or optical signals, or any suitable combination thereof. The computer-readable signal medium may be any of the following computer-readable media: a computer-readable storage medium and may communicate, propagate or transmit a program for use by or in connection with an instruction execution system, an apparatus, or a device. The program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including, but not limited to, wireless, wireline, optical fiber cable, RF, and the like, or any suitable combination of the foregoing. The computer program code for carrying out operations of various aspects of the embodiments of the present disclosure may be written in any combination of one or more programming languages, including: object-oriented programming languages such as Java, Smalltalk, C++, and the like; and conventional procedural programming languages, such as the “C” programming language, or the like. The program code may be executed entirely or partially on a user computer, as a stand-alone software package; partially on a user computer and partially on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or connected to an external computer (for example, through the Internet provided by an Internet service provider). 
The flowchart illustrations and/or block diagrams of the method, the apparatus (system) and the computer program product according to embodiments of the present disclosure describe various aspects of embodiments of the present disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, a special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus or other devices to operate in a particular manner such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions for implementing the functions/acts specified in the in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions executed on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. The descriptions above are only preferred embodiments of the present disclosure, and are not intended to limit the present disclosure. For those skilled in the art, the present disclosure may have various changes and variations. Any amendments, equivalent substitutions, improvements, or the like within the principle of the present disclosure are all included in the scope of the protection defined by the appended claims of the present disclosure.
65,515
11863462
DESCRIPTION OF EMBODIMENTS Embodiments of the present invention will be described below with reference to the drawings. In the description of the drawings, the same parts are designated by the same reference signs and the description thereof will be omitted. First Embodiment (Network Configuration Including MEC Server) FIG.1shows an example of a network configuration (network system) that includes an MEC server to which resources are allocated according to the present embodiment. The illustrated network configuration includes a cloud server1, MEC server groups3, and a plurality of mobile terminals5. An MEC server group3is located between the cloud server1and a mobile terminal5, having a hierarchical structure according to the area. The illustrated mobile terminal5moves between areas and connects to an MEC server group closest to the mobile terminal5via a base station (radio base station). It is noted that the network configuration is not limited to that shown inFIG.1, and various configurations can be considered. The cloud server1, for example, aggregates information processed in real time by the MEC server group3to generate big data, and perform large-scale analysis and the like. More specifically, the cloud server1performs overall processing in which the real-time requirements are less stringent than the low-latency processing in the MEC server group3located at the edge. Each MEC server group3includes a plurality of virtualized MEC servers. In the present embodiment, the resources of at least one physical server (not shown) are divided into a plurality of MEC servers (MEC server group3) and used. Each MEC server runs an OS and applications and can be used as if it were an independent computer. The mobile terminal5is a terminal (mobile terminal) that performs wireless communication, such as a smart phone or a feature phone. (Configuration of Resource Allocation Device) FIG.2is a functional block diagram showing a configuration of a resource allocation device2of the present embodiment. The resource allocation device2virtualizes the resources of the physical server and allocates the resources to the MEC server group3. The illustrated resource allocation device2includes a location acquisition unit21, a speed/route acquisition unit22, a course estimation unit23, a determination unit24, an execution unit25, a resource monitor unit26, a determination monitor unit27, and a storage unit28. The location acquisition unit21acquires network information about the location of the mobile terminal5from the network device4(NW device), and stores (saves) the network information in the storage unit28. The network device4is, for example, a base station, a device on the core network side or the like. The network information is, for example, connection information between the mobile terminal5and the base station. The connection information indicates, for every unit time, which terminal5is present in which location (cell of the base station). FIG.3shows the location of the mobile terminal5at time t=0 (a.u.). Here, an MEC-specific service in which six mobile terminals a, b, c, d, e and f transmit information to each MEC server group in each area (area A-1-α, area A-1-β, area A-1-γ, and area A-1-δ under area A-1-0), and the MEC server group performs real-time processing will be described as an example. Each area consists of squares1-9. The location acquisition unit21uses the network information to acquire the location of each mobile terminal5for every unit time as shown inFIG.8. 
The speed/route acquisition unit22acquires the movement speed and the movement route of each mobile terminal5for every unit time based on the network information stored in the storage unit28, and stores the acquired movement speed and movement route in the storage unit28. For example, it is assumed that the mobile terminals a, b, c, d, e and f move from the state at time t=0 (a.u.) shown inFIG.3to the state at time t=2 (a.u.) shown inFIG.4. FIG.4shows that the mobile terminals a and b move from the squares9to7via the square8in the area A-1-δ, the terminals c and d move from the squares9to1via the square5in the area A-1-δ, and the terminals e and f move from the squares9to3via the square6in the area A-1-δ. It is assumed that each mobile terminal moves at the same speed, which is 1 square/a.u. (arbitrary time unit). The course estimation unit23estimates the course of each mobile terminal5based on the network information, and estimates the probability that each mobile terminal5is located in each area at the time of prediction. Here, the course estimation unit23estimates the course of each mobile terminal using the movement speed and the movement route acquired by the speed/route acquisition unit22. Specifically, the course estimation unit23estimates the course of the mobile terminal5for every unit time using the movement speed and the movement route of the mobile terminal5acquired from the storage unit28, and stores the estimation result in the storage unit28. FIG.5shows an example of a course estimation result at a predicted future time t=3 (a.u.). The illustrated example indicates that the mobile terminals a and b are in the area A-1-γ at a rate of 90%, and in the area A-1-δ at a rate of 10% at time t=3 (a.u.). The method of estimating the course is not particularly specified, but the course may be predicted in an artificial intelligence manner (for example, using a learned model generated by machine learning) after the accumulated past movement speed and movement route are learned, or may be estimated with reference to a predetermined movement plan of the mobile terminal5. The determination unit24calculates, for each area, the number of mobile terminals5in the area using the probability included in the estimation result and determines whether the maximum value of an overcommit ratio for each area calculated based on the number of mobile terminals5exceeds the upper limit. FIG.6shows an example of a determination result for each area at time t=3 (a.u.). The determination result includes the number of terminals in the area (expected value), resources to be committed, and an overcommit ratio. The determination unit24calculates, for each area, the number of terminals in the area by summing up the percentage of each mobile terminal5in the area at time t=3 (a.u.). For example, in the case of the course estimation result shown inFIG.5, the determination unit24calculates the number of terminals in the area A-1-α as 0.75+0.75=1.5, and calculates the number of terminals in the area A-1-β as 0.1+0.1+0.9+0.9=2. Then, the determination unit24uses the calculated number of terminals in each area rounded up to the nearest whole number as the resources to be committed. For example, the determination unit24uses “2”, which is rounded up from the number of terminals “1.5” in the area A-1-α, as the resources to be committed. 
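By way of a purely illustrative, non-limiting example, the summing of per-terminal probabilities into the expected number of terminals in each area can be sketched in Python as follows. Only the probabilities for the mobile terminals a and b (90% in the area A-1-γ and 10% in the area A-1-δ) are taken from the example above; the function and variable names are assumptions made for illustration only.

def expected_terminals_per_area(course_estimation):
    # course_estimation maps each mobile terminal to {area: probability},
    # i.e., the course estimation result stored in the storage unit.
    expected = {}
    for terminal, probabilities in course_estimation.items():
        for area, probability in probabilities.items():
            expected[area] = expected.get(area, 0.0) + probability
    return expected

# Terminals a and b from the example: 90% in area A-1-gamma, 10% in A-1-delta.
course_estimation = {
    "a": {"A-1-gamma": 0.9, "A-1-delta": 0.1},
    "b": {"A-1-gamma": 0.9, "A-1-delta": 0.1},
}
print(expected_terminals_per_area(course_estimation))
# {'A-1-gamma': 1.8, 'A-1-delta': 0.2}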
Since the resource commitment is performed by a discrete value (integer) in units of the number of mobile terminals5, here, the value obtained by rounding up a fraction of the number of terminals in the area is used as resources to be committed. It is assumed that the resources required for one mobile terminal5are equal. Then, the determination unit24calculates, for each area, the overcommit ratio by the following equation. Overcommit Ratio=Resources to be committed/Existing resources The existing resources are the resources currently allocated to the MEC server group, and are stored in the storage unit28by the resource monitor unit26. The determination unit24acquires the existing resources of each MEC server group corresponding to each area from the storage unit28, and calculates the overcommit ratio. In the example shown inFIG.6, it is assumed that each MEC server group3in each area (area A-1-α, area A-1-β, area A-1-γ, and area A-1-δ) has resources for two mobile terminals. In this case, the determination unit24sets the overcommit ratio of the area A-1-α to 2/2=1, and sets the overcommit ratio of the area A-1-δ to 1/2=0.5. Then, the determination unit24compares the maximum value of the calculated overcommit ratio with the upper limit of the overcommit ratio stored in the storage unit28, and when the maximum value of the overcommit ratio is equal to or less than the upper limit, the determination result including the calculated overcommit ratio is stored in the storage unit28. On the other hand, when the maximum value of the overcommit ratio exceeds the upper limit, the determination unit24stores the determination result including the calculated overcommit ratio in the storage unit28, and changes the resource allocation method of the MEC server group3so that the overcommit ratio does not exceed the upper limit. Thus, the existing resources in the area where the overcommit ratio exceeds the upper limit may be changed, and the overcommit ratio after the change may become equal to or less than the upper limit. In the example shown inFIG.6, the maximum value of the overcommit ratio is “1”. The upper limit of the overcommit ratio is stored in the storage unit28, and is “3” here. In this case, the determination unit24determines that the maximum value “1” of the overcommit ratio is equal to or less than the upper limit “3”, and stores the determination result including the calculated overcommit ratio in the storage unit28. When the maximum value of the overcommit ratio is equal to or less than the upper limit, the execution unit25executes the allocation or release of resources to the MEC server group3located in each area, and when the maximum value of the overcommit ratio exceeds the upper limit, the execution unit25refrains from executing the allocation or release of resources. That is, the execution unit25compares the maximum value of the overcommit ratio of the determination result stored in the storage unit28with the upper limit, and executes the allocation or release of resources according to the comparison result. The resource monitor unit26monitors the utilization status of the resources of the MEC server group3, and stores the utilization status of the resources in the storage unit28. Resources include, for example, CPU, memory, and disk capacity. The determination unit24refers to the resource utilization status stored in the storage unit28, and acquires the existing resources necessary for calculating the overcommit ratio. 
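Continuing the purely illustrative sketch above, the rounding up to the resources to be committed, the calculation of the overcommit ratio, and the comparison against the upper limit can be expressed as follows. The expected values 1.5 and 2 for the areas A-1-α and A-1-β, the existing resources for two mobile terminals per area, and the upper limit of 3 are taken from the example; everything else is an assumption for illustration.

import math

def overcommit_ratios(expected, existing):
    ratios = {}
    for area, n_terminals in expected.items():
        committed = math.ceil(n_terminals)      # resources to be committed
        ratios[area] = committed / existing[area]
    return ratios

expected = {"A-1-alpha": 1.5, "A-1-beta": 2.0}    # number of terminals in the area
existing = {"A-1-alpha": 2, "A-1-beta": 2}        # existing resources per area
ratios = overcommit_ratios(expected, existing)    # {'A-1-alpha': 1.0, 'A-1-beta': 1.0}
within_limit = max(ratios.values()) <= 3          # upper limit "3" -> True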
Further, when the calculated overcommit ratio exceeds the upper limit, the determination unit24changes the allocation of resources with reference to the resource utilization status stored in the storage unit28. When the maximum value of the overcommit ratio exceeds the upper limit, the determination monitor unit27outputs an alert (first alert). Specifically, the determination monitor unit27refers to the determination result and the upper limit of the overcommit ratio stored in the storage unit28, and compares the maximum value of the overcommit ratio included in the determination result with the upper limit. When the maximum value of the overcommit ratio exceeds the upper limit, the determination monitor unit27generates an alert and transmits the alert to an administrator terminal6. An alert is a warning that indicates that an event in which the overcommit ratio exceeds the upper limit may occur, and that the performance of the functions, services and the like of the target MEC server group3may deteriorate. When the maximum value of the overcommit ratio is equal to or less than the upper limit, the determination monitor unit27terminates the processing without generating an alert. For example, when each MEC server group3in each area (area A-1-α, area A-1-β, area A-1-γ, and area A-1-δ) has resources for two mobile terminals, and the resources to be committed in each area are the resources for four, two, two, and one mobile terminal, respectively, the overcommit ratio for each area is 2, 1, 1, and 0.5, respectively. Here, the upper limit of the overcommit ratio is “1”. In this case, since the maximum value “2” of the overcommit ratio included in the determination result exceeds the upper limit “1”, the determination monitor unit27transmits an alert to the administrator terminal6. It is noted that the determination monitor unit27may include an interface that allows the occurrence of an alert to be monitored or acquired from the outside. The administrator terminal6is an external terminal used by a resource administrator such as a maintenance person. The administrator terminal6receives the alert transmitted from the resource allocation device2, and displays the alert on a display or the like to present the alert to the resource administrator. The resource administrator detects the alert and takes measures to avoid the occurrence of the alert, such as increasing the resources of the MEC server group3, and changing the functions/services provided by the MEC server group3. (Operation of Resource Allocation Device) FIG.7is a sequence diagram showing the operation of the resource allocation device2according to the present embodiment. The resource monitor unit26monitors the utilization status of the resources of the MEC server group3(S11), and stores (saves) the utilization status of the resources in the storage unit28(S12). It is noted that the resource monitor unit26constantly monitors the resource utilization status. Thus, the storage unit28stores the resource utilization status of the MEC server group3in chronological order. The location acquisition unit21acquires network information about the location of the mobile terminal5from the network device4(S13), and stores the network information in the storage unit28(S14). The network information is, for example, connection information to a base station that indicates in which base station cell the mobile terminal5is located. 
It is noted that the location acquisition unit21acquires the network information of each mobile terminal5from the network device4at a predetermined timing (predetermined time interval). Thus, the storage unit28stores the network information about the location of each mobile terminal5in chronological order. The speed/route acquisition unit22refers to the network information of each mobile terminal5stored in the storage unit28in chronological order (S15), acquires the movement speed and the movement route of each mobile terminal5for every unit time, and stores the movement speed and the movement route in the storage unit28(S16). The course estimation unit23refers to the movement speed and the movement route of each mobile terminal5stored in the storage unit28(S17), and stores the probability that each mobile terminal is located in each area at the time of prediction in the storage unit28as the course estimation result of each mobile terminal5(S18). The determination unit24refers to the course estimation result, the resource utilization status, the upper limit of the overcommit ratio and the like stored in the storage unit28(S19). The determination unit24calculates, for each area, the number (expected value) of mobile terminals in the area using the probability of the course estimation result, determines whether the maximum value of the overcommit ratio for each area calculated based on the calculated number of mobile terminals exceeds the upper limit, and stores the determination result in the storage unit28(S20). The detailed processing of the determination unit24will be described later. The execution unit25refers to the determination result stored in the storage unit28(S21), and when the maximum value of the overcommit ratio is equal to or less than the upper limit, executes the allocation or release of the resources of the MEC server group3(S22). The detailed processing of the execution unit25will be described later. It is noted that when the maximum value of the overcommit ratio exceeds the upper limit, the execution unit25terminates the processing without executing the allocation or release of the resources of the MEC server group3. In this case, since the determination monitor unit27constantly monitors the determination result stored in the storage unit28, when the determination monitor unit27detects that the determination result in which the maximum value of the overcommit ratio exceeds the upper limit is stored in the storage unit28, the determination monitor unit27generates an alert and transmits the alert to the administrator terminal6. The detailed processing of the determination monitor unit27will be described later. (Operation of Determination Unit) FIG.8is a flowchart specifically showing the processing of the determination unit24in S19and S20shown inFIG.7. The determination unit24acquires the course estimation result from the storage unit28(S31). The course estimation result indicates, for example, as shown inFIG.5, the probability that each mobile terminal is located in each area at the time of prediction. The determination unit24calculates, for each area, the number (expected value) of the mobile terminals in the area using the course estimation result and determines the resources to be committed based on the number of the mobile terminals in the area (S32). In the example shown inFIG.6, the determination unit24rounds up the number of mobile terminals in the area to the nearest whole number to determine the resources to be committed. 
The determination unit24acquires the resource utilization status and the upper limit of the overcommit ratio from the storage unit28(S33). The determination unit24calculates the overcommit ratio for each area by dividing the “resources to be committed” by the “existing resources” of the MEC server group3that is the target of the resource utilization status. The determination unit24determines whether the maximum value of the overcommit ratio for each area is equal to or less than the upper limit (S34). When the maximum value is equal to or less than the upper limit (S34: YES), the determination unit24stores the determination result including the calculated overcommit ratio in the storage unit28(S35). When the maximum value exceeds the upper limit (S34: NO), the determination unit24stores the determination result including the calculated overcommit ratio in the storage unit28(S36). Then, the determination unit24changes the method of allocating the resources of the MEC server group3so that the overcommit ratio does not exceed the upper limit (S37). For example, the determination unit24does not overcommit to the same CPU, but commits to another CPU. When the number of times of executing S37is less than the predetermined number of times (S38: NO), the determination unit24returns to S34, and repeats the subsequent processing. When the number of times of executing S37reaches the predetermined number of times (S38: YES), the determination unit24determines that it is impossible to allocate the resources of the MEC server group3so as not to exceed the upper limit of the overcommit ratio, and terminates the processing. (Operation of Execution Unit) FIG.9is a flowchart specifically showing the processing of the execution unit25in S21and S22shown inFIG.7. The execution unit25acquires a determination result including the overcommit ratio for each area from the storage unit28(S41). When the maximum value of the overcommit ratio is equal to or less than the upper limit of the overcommit ratio stored in the storage unit28(S42: YES), the execution unit25executes the allocation or release of the resources to the MEC server group3. When the maximum value of the overcommit ratio exceeds the upper limit (S42: NO), the execution unit25terminates the processing without executing the allocation or release of the resources. (Operation of Determination Monitor Unit) FIG.10is a flowchart specifically showing the processing of the determination monitor unit27. The determination monitor unit27acquires a determination result including the overcommit ratio for each area from the storage unit28(S51). When the maximum value of the overcommit ratio exceeds the upper limit of the overcommit ratio stored in the storage unit28(S52: YES), the determination monitor unit27generates an alert (S53). An alert is a warning that indicates that an event in which the overcommit ratio exceeds the upper limit may occur, and that the performance of the functions, services and the like of the target MEC server group3may deteriorate. The determination monitor unit27transmits the generated alert to the administrator terminal6(S54). In addition, the determination monitor unit27may include an interface that allows the occurrence of an alert to be monitored or acquired from the outside. When the maximum value of the overcommit ratio is equal to or less than the upper limit (S52: NO), the determination monitor unit27terminates the processing. 
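The determination loop of S34 to S38 can likewise be sketched as follows, purely for illustration. The change_allocation callback stands in for the reallocation of S37 (for example, committing to another CPU), whose details are not modeled here, and the structure of the loop is a simplification of the flowchart rather than a definitive implementation.

def determine(ratios, upper_limit, change_allocation, max_attempts):
    # S34: check whether the maximum overcommit ratio is within the upper limit.
    attempts = 0
    while max(ratios.values()) > upper_limit:
        if attempts >= max_attempts:
            # S38: the predetermined number of reallocations has been reached;
            # allocation within the upper limit is judged impossible.
            return None
        # S37: change the resource allocation method and recalculate the ratios.
        ratios = change_allocation(ratios)
        attempts += 1
    # S35: store/return the determination result including the calculated ratios.
    return ratios

# Illustrative use: halve every ratio on each reallocation attempt.
result = determine({"A-1-alpha": 2.0}, upper_limit=1,
                   change_allocation=lambda r: {a: v / 2 for a, v in r.items()},
                   max_attempts=3)
print(result)  # {'A-1-alpha': 1.0}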
It is noted that, in the present embodiment, when the maximum value of the overcommit ratio is equal to or less than the upper limit as a result of the determination unit24changing the allocation of resources in S37ofFIG.8, the processing is terminated without creating an alert. The resource allocation device2of the present embodiment described above includes: the course estimation unit23that estimates a course of each mobile terminal5based on network information about a location of the mobile terminal5acquired from the network device4, and estimates a probability that each mobile terminal5is located in each area at the time of prediction; the determination unit24that calculates, for each area, the number of mobile terminals5in the area using the probability and determines whether the maximum value of an overcommit ratio for each area calculated based on the number of mobile terminals in the area exceeds an upper limit; and the execution unit25that executes the allocation or release of resources to the MEC server group3(virtual server) located in each area when the maximum value of the overcommit ratio is equal to or less than the upper limit, and refrains from executing the allocation or release of the resources to the MEC server group3when the maximum value of the overcommit ratio exceeds the upper limit. As described above, in the present embodiment, instead of overcommitting uniformly at all times, the overcommitment is controlled based on the course estimation result of the mobile terminal5calculated for each probability by utilizing the network information. In addition, in the present embodiment, by utilizing the network information to control the timing of resource release, the resource allocation status can be adjusted to the actual status of the mobile terminal, and the apparent increase in the overcommit ratio can be avoided. Therefore, in the present embodiment, it is possible to control overcommitment according to the movement status of a mobile terminal and utilize resources effectively. In addition, in the present embodiment, when the MEC-specific functions and services provided by the resources of the MEC server group3are operated, shared resources such as CPU and memory can be overcommitted, and that overcommitment can be dynamically controlled by utilizing the network information. Thus, in the present embodiment, it is possible to efficiently utilize resources that are limited in quantity. In addition, in the present embodiment, by improving the accuracy of the course estimation result of the mobile terminal5and the resource allocation determination result based on the course estimation result, the accommodation rate of the mobile terminal5can be efficiently improved from the design stage of the system. Further, in the present embodiment, the geographical hierarchical structure of the MEC server group3can be utilized to associate the real-time resource utilization status of the MEC server group3and the network information about the location of the mobile terminal5with the current resource allocation. Moreover, in the present embodiment, by estimating the course of the mobile terminal5based on the accumulated network information about the location of the mobile terminal5and performing resource allocation in consideration of the association, it is possible to efficiently utilize limited resources. 
As described above, in the present embodiment, in a system in which the MEC server group3having a hierarchical structure according to a regional division, the mobile terminal5, and the cloud server are connected via the network, the resources of the MEC server group3can be efficiently allocated to the functions and services provided by the MEC server group3. Second Embodiment In the present embodiment, handover information is used as the network information acquired by the location acquisition unit21of the resource allocation device2. The handover information is one type of connection information between the mobile terminal and the base station. By utilizing this handover information, it is possible to specify which area's MEC server group resources are to be used at the timing when the mobile terminal is connected to the base station in that area, regardless of the absolute location of the mobile terminal. Further, as compared with the case in which the information of the terminal alone such as GPS is used, the resources to be used by the MEC server group3can be determined and the resources not to be used can be released at an early stage. Here, it is assumed that different base stations have jurisdiction over different areas. In the case of the estimation result shown inFIG.5, the candidate areas for the mobile terminal c at time t=3 (a.u.) are the area A-1-α, area A-1-β, area A-1-γ and area A-1-δ. At time t=3 (a.u.), for example, when the mobile terminal c moves to the area A-1-α, the mobile terminal c and the MEC server group located in the area A-1-α must be connected via the base station having jurisdiction over the area A-1-α. At time t=2 (a.u.), the mobile terminal c is located in the area A-1-δ, and is connected to the MEC server group in the area via the base station in the area. Therefore, a handover is performed between the base station having jurisdiction over the area A-1-δ and the base station having jurisdiction over the area A-1-α between the time t=2 (a.u.) and the time t=3 (a.u.). That is, the location acquisition unit21subscribes to (receives) the handover information for carrying out this handover from the network device4, whereby the resources to be used can be determined before the mobile terminal actually connects to the MEC server group, and the resources that are not used can be released. When information acquired only by the terminal (e.g., GPS) is used, not only can the allocation or release of resources not be determined unless the absolute location of the mobile terminal changes, but also, if the acquired absolute location of the mobile terminal is incorrect, resources may not be allocated properly. In the present embodiment, handover information (network information) that mediates the connection between the mobile terminal and the MEC server group is utilized in order to grasp the location of the mobile terminal and estimate the course of the mobile terminal, rather than information on the terminal alone such as GPS. Thus, in the present embodiment, it is possible to specify which area's MEC server group3resources are to be used when the mobile terminal is connected to the base station in the area where the mobile terminal is located, and it is possible to determine the resources to be used and release the resources not to be used at an early stage. Therefore, in the present embodiment, it is possible to further improve the resource utilization efficiency by resource allocation including overcommitment. 
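As a purely illustrative, non-limiting sketch of how subscribed handover information could drive early allocation and release, consider the following Python fragment. The event fields, the allocator interface, and the assumption that each base station corresponds to exactly one area follow the description above, but all names are hypothetical and do not appear in the embodiment.

class SimpleAllocator:
    # Minimal stand-in for the execution unit; the actual allocation and
    # release of MEC server group resources is outside the scope of this sketch.
    def allocate(self, area, terminal):
        print(f"allocate resources in {area} for terminal {terminal}")

    def release(self, area, terminal):
        print(f"release resources in {area} for terminal {terminal}")

def on_handover(handover_event, allocator):
    # handover_event is assumed to identify the terminal and the areas under
    # the jurisdiction of the source and target base stations.
    terminal = handover_event["terminal"]
    source_area = handover_event["source_area"]
    target_area = handover_event["target_area"]
    # Determine the resources to be used in the target area before the
    # terminal actually connects to the MEC server group located there ...
    allocator.allocate(area=target_area, terminal=terminal)
    # ... and release the resources that will no longer be used.
    allocator.release(area=source_area, terminal=terminal)

# Terminal c handed over from area A-1-delta to area A-1-alpha between
# t=2 (a.u.) and t=3 (a.u.), as in the description above.
on_handover({"terminal": "c", "source_area": "A-1-delta",
             "target_area": "A-1-alpha"}, SimpleAllocator())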
Third Embodiment In the present embodiment, a case where the first embodiment is applied to the resource management of the MEC server group in the FaaS (Function as a Service) will be described. The resource allocation device of the present embodiment grasps in advance the requirements of the function/service and the resources to be used when accommodating the MEC-specific FaaS in the MEC server group. Then, the resource allocation device grasps, for each area, whether the requirements of the FaaS are satisfied based on the resource allocation including the overcommitment, in light of the resource utilization status. The resource administrator operates the MEC server group in the area where the requirements of the FaaS are satisfied with the current resources. For the MEC server groups in the areas where the requirements of the FaaS are not satisfied, including performance degradation, the resource administrator takes measures such as increasing resources and changing specifications for the functions and services provided by the MEC server group. The determination of whether the FaaS requirements are satisfied is performed by the determination monitor unit27of the resource allocation device2. Specifically, the determination monitor unit27determines whether the MEC server group satisfies the requirements of the FaaS by using at least one of the number of times the alert (first alert) is output when the maximum value of the overcommit ratio exceeds the upper limit, and the number of times the resource allocation is changed when the maximum value of the overcommit ratio exceeds the upper limit (FIG.8: S37), and outputs a second alert if the requirements are not satisfied. For example, when the number of the first alerts exceeds a first threshold, or the number of times the resource allocation is changed exceeds a second threshold, the determination monitor unit27determines that the requirements of the FaaS are not satisfied. The number of the first alerts and the number of times the resource allocation is changed may be the cumulative number of times or may be the number of times per predetermined unit time. When the requirements are not satisfied, the determination monitor unit27generates a second alert indicating that the requirements of the FaaS are not satisfied, and transmits the second alert to the administrator terminal6. In addition, the determination monitor unit27may include an interface that allows the occurrence of the second alert to be monitored or acquired from the outside. In the present embodiment, unlike a general FaaS, which need not be aware of the physical resources to which a function is to be deployed, the MEC-specific FaaS needs to be aware of the area to be served and the physical resources to which the function is to be deployed; when such a FaaS is accommodated in the MEC server group, the present embodiment can be used to improve the accommodation rate of the FaaS in the MEC server group and to efficiently augment and maintain the facilities. This is because the resources as a whole are treated not as a uniform pool but as having regional characteristics and real-time changes in utilization rate, and, by using the network information, these changes, along with the possibility of FaaS performance degradation, are considered predictable. (Hardware Configuration of Resource Allocation Device) It is noted that a general-purpose computer system as shown inFIG.11can be used for the resource allocation device2described above, for example. 
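A minimal, purely illustrative sketch of the second-alert decision described above is given below. The logic (requirements judged not satisfied when either count exceeds its threshold) follows the description, while the function name, the concrete counts, and the threshold values are assumptions made for illustration.

def faas_requirements_satisfied(first_alert_count, reallocation_count,
                                first_threshold, second_threshold):
    # The requirements are judged not to be satisfied when the number of first
    # alerts exceeds the first threshold or the number of resource allocation
    # changes (S37 in FIG.8) exceeds the second threshold; either count may be
    # cumulative or per predetermined unit time.
    return (first_alert_count <= first_threshold
            and reallocation_count <= second_threshold)

# Illustrative values only; the thresholds are design parameters.
if not faas_requirements_satisfied(first_alert_count=7, reallocation_count=3,
                                   first_threshold=5, second_threshold=10):
    print("second alert: the requirements of the FaaS are not satisfied")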
The illustrated computer system includes a CPU (Central Processing Unit, processor)901, a memory902, a storage903(HDD: Hard Disk Drive, SSD: Solid State Drive), a communication device904, an input device905, and an output device906. The memory902and the storage903are storage devices. In this computer system, the CPU901executes a predetermined program loaded on the memory902, thereby realizing the function of the resource allocation device2. Further, the resource allocation device2may be implemented by a single computer or a plurality of computers. Moreover, the resource allocation device2may be a virtual machine implemented in a computer. The program of the resource allocation device2can be stored in a computer-readable recording medium such as an HDD, SSD, USB (Universal Serial Bus) memory, CD (Compact Disc) and DVD (Digital Versatile Disc), or can be distributed via a network. Other Embodiments It is noted that the present invention is not limited to the above embodiments, and many modifications are possible within the scope of the gist thereof. For example, the determination unit24of the above embodiments calculates the “number of terminals in the area” (expected value) shown inFIG.6by summing up the percentage (course estimation result) of each mobile terminal5in the area. However, the determination unit24may calculate the overcommit ratio using a safety value obtained by multiplying the “number of terminals in the area” by a predetermined safety coefficient. In addition, the determination unit24may calculate the overcommit ratio using a calculated value determined by applying a predetermined formula unique to the user to the “number of terminals in the area”. In addition, the determination unit24may use any of the expected value of the above embodiment, the safety value obtained by multiplying the expected value by a predetermined safety coefficient, and the calculated value obtained by applying a mathematical formula to the expected value, either fixedly every time or while dynamically switching among them. In the case of dynamic change, for example, the determination unit24may improve the resource utilization rate by using the difference between the determination result of the past resource allocation and the actual resource utilization status, or may choose the safer option to reduce the resource utilization rate. That is, the determination unit24may dynamically change the calculation method of the “number of terminals in the area” for calculating the overcommit ratio by using the feedback control. Therefore, the determination unit24of the present invention may use, as the number of terminals in the area, at least one of an expected value obtained by summing up the probabilities, a safety value obtained by multiplying the expected value by a safety coefficient, and a calculated value obtained by applying a predetermined formula to the expected value. REFERENCE SIGNS LIST
1 Cloud server
2 Resource allocation device
21 Location acquisition unit
22 Speed/route acquisition unit
23 Course estimation unit
24 Determination unit
25 Execution unit
26 Resource monitor unit
27 Determination monitor unit
28 Storage unit
3 MEC server group
4 Network device
5 Mobile terminal
6 Administrator terminal
34,460
11863463
DETAILED DESCRIPTION Some embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. Indeed, various embodiments of the invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like reference numerals refer to like elements throughout. As used herein, the terms “data,” “content,” “information,” and similar terms may be used interchangeably to refer to data capable of being transmitted, received and/or stored in accordance with embodiments of the present invention. Thus, use of any such terms should not be taken to limit the spirit and scope of embodiments of the present invention. Additionally, as used herein, the term ‘circuitry’ refers to (a) hardware-only circuit implementations (e.g., implementations in analog circuitry and/or digital circuitry); (b) combinations of circuits and computer program product(s) comprising software and/or firmware instructions stored on one or more computer readable memories that work together to cause an apparatus to perform one or more functions described herein; and (c) circuits, such as, for example, a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation even if the software or firmware is not physically present. This definition of ‘circuitry’ applies to all uses of this term herein, including in any claims. As a further example, as used herein, the term ‘circuitry’ also includes an implementation comprising one or more processors and/or portion(s) thereof and accompanying software and/or firmware. As another example, the term ‘circuitry’ as used herein also includes, for example, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, other network device, and/or other computing device. As defined herein, a “computer-readable storage medium,” which refers to a non-transitory physical storage medium (e.g., volatile or non-volatile memory device), can be differentiated from a “computer-readable transmission medium,” which refers to an electromagnetic signal. As used herein, a “request data object” or “request object” is any data object that contains a request from a user or other entity for access to and/or use of system resources and an indication of the requirements and/or other parameters associated with the request. As used herein, a “network response asset” is a finite network asset that may be paired with a request data object and is capable of providing network or other system resources in response to the request data object, and/or causing, through the interaction with other systems, the deployment of network and/or other system resources to fulfill the requirements and other parameters of a request data object. Turning now to the Figures,FIG.1shows an example system environment100in which implementations involving the efficient pairing and authorization of request data objects and network response assets in accordance with an example embodiment of the present invention may be performed. 
The depiction of environment100is not intended to limit or otherwise confine the embodiments described and contemplated herein to any particular configuration of elements or systems, nor is it intended to exclude any alternative configurations or systems for the set of configurations and systems that can be used in connection with embodiments of the present invention. Rather,FIG.1, and the environment100disclosed therein is merely presented to provide an example basis and context for the facilitation of some of the features, aspects, and uses of the methods, apparatuses, and computer program products disclosed and contemplated herein. It will be understood that while many of the aspects and components presented inFIG.1are shown as discrete, separate elements, other configurations may be used in connection with the methods, apparatuses, and computer programs described herein, including configurations that combine, omit, and/or add aspects and/or components. Embodiments implemented in a system environment such as system environment100advantageously provide for the pairing of request data objects and network response assets through, for example, the receipt of request data objects, the extraction of request parameters from each request data object, the identification of a set of network response assets, the determination of a set of network response asset characteristics, the selection of a network response asset based at least in part on the request parameters and network response asset characteristics, and the transmission of a portion of a request data object to the network response asset. Some such embodiments leverage a hardware and software arrangement or environment for request data object-to-network response asset pairing in accordance with the present invention. As shown inFIG.1, an object-asset pairing system102includes an online pairing system module102A which is configured to receive, process, transform, transmit, communicate with, and evaluate request data objects, network response assets, and data and systems associated therewith via a web server, such as pairing system server102B. The pairing system server102B is connected to any of a number of public and/or private networks, including but not limited to the Internet, the public telephone network, and/or networks associated with particular communication systems or protocols, and may include at least one memory for storing at least application and communication programs. As shown inFIG.1, object-asset pairing system102also includes a pairing database102C that may be used to store information associated with request data objects, network response assets, and/or information related to the pairing thereof, which can be accessed by the pairing system module102A and/or the pairing system server102B. WhileFIG.1depicts pairing system database102C as a single structure, it will be appreciated that pairing system database102C may additionally or alternatively be implemented to allow storage in a distributed fashion and/or at facilities that are physically remote from each other and/or from the other components of object-asset pairing system102. Request data objects and/or additional information to be associated with one or more request data objects may originate from a client system such as request object system104. 
A user of request object system104may use a request object device104B, such as a laptop computer, desktop computer, or mobile device, for example, to interface with a request object module104A to generate a request data object and/or information to be included in a request data object, such as instructions associated with the request data object, intermediate and/or target destinations associated with the request object, and/or other information to be conveyed from a user as part of a request for a response to be conveyed to an object-asset pairing system, such as object-asset pairing system102. In some example implementations, such as those that arise in contexts and situations where users seek to have goods, materials, and/or other resources delivered from one location to another, a request object system such as request object system104may take the form of, or be incorporated into, a user's mobile device which is configured to accept request information, such as an order for food from a restaurant, and transmit that information in the form of a request data object to an object-asset pairing system. While only one request object system104is depicted inFIG.1in the interest of clarity, it will be appreciated that numerous other such systems may be present in system environment100, permitting numerous users to develop and transmit request data objects to the object-asset pairing system102. As shown inFIG.1, system environment100also includes response asset system106, which comprises a response asset module106A and a response device106B. While only one response asset system106is depicted inFIG.1in the interest of clarity, it will be appreciated that numerous other such systems may be present in system environment100, permitting numerous, distributed network response assets to be paired with request data objects and fulfill the requests contained therein. The response asset device106B may comprise and/or incorporate a laptop computer, desktop computer, mobile device, or the like, for example, and is configured to interface with a response asset module106A to interact with object-asset pairing system102to fulfill the request(s) associated with one or more request data objects that have been paired with the network response asset. The response asset system106is also capable of communicating with object-asset pairing system102to provide information that the object-asset pairing system102may need when determining whether to pair a particular network response asset with a particular request data object. For example, response asset system106may, such as via the capabilities of response asset device106B, ascertain the location of response asset system106through the use of a global positioning system (GPS) interface, cellular location protocols, and/or other location protocols that involve triangulating and/or otherwise determining a position of response asset device106B and/or other components associated with response asset system106. In some example implementations, such as those that arise in contexts or situations involving the delivery of goods, materials, and/or other resources, for example, the response asset system may include and/or be incorporated into a vehicle. 
Overall, and as depicted in system environment100, object-asset pairing system102engages in machine-to-machine communication with request object system104and response asset system106, via a network, to facilitate timely processing and pairing of request data objects such that request data objects received from request object system104are paired to a network response asset, and a system associated with that network response asset, such as response asset system106, can be activated to fulfill the parameters of the requests contained in one or more received request data objects. Based upon the parameters associated with a request data object and the characteristics of the available network response assets (and the systems related to such network response assets), a request data object may be paired with an authorized network response asset to facilitate the fulfillment of the request contained in the request data object. In this regard, a request data object may be paired with a network response asset by an apparatus200as depicted inFIG.2. The apparatus may be embodied by the object-asset pairing system102, or any of the components shown or otherwise contemplated therein, or any of the other devices discussed with respect toFIG.1, and/or devices that may be incorporated or otherwise associated with environment100. Alternatively, the apparatus200may be embodied by another computing device, external to such devices. For example, the apparatus may be embodied by a personal computer, a computer workstation, a server or the like, or by any of various mobile computing devices, such as a mobile terminal, e.g., a smartphone, a tablet computer, etc. Regardless of the manner in which the apparatus200is embodied, the apparatus of an example embodiment is configured to include or otherwise be in communication with a processor202and a memory device204and optionally the user interface206and/or a communication interface208. In some embodiments, the processor (and/or co-processors or any other processing circuitry assisting or otherwise associated with the processor) may be in communication with the memory device via a bus for passing information among components of the apparatus. The memory device may be non-transitory and may include, for example, one or more volatile and/or non-volatile memories. In other words, for example, the memory device may be an electronic storage device (e.g., a computer readable storage medium) comprising gates configured to store data (e.g., bits) that may be retrievable by a machine (e.g., a computing device like the processor). The memory device may be configured to store information, data, content, applications, instructions, or the like for enabling the apparatus to carry out various functions in accordance with an example embodiment of the present invention. For example, the memory device could be configured to buffer input data for processing by the processor. Additionally or alternatively, the memory device could be configured to store instructions for execution by the processor. As described above, the apparatus200may be embodied by a computing device. However, in some embodiments, the apparatus may be embodied as a chip or chip set. In other words, the apparatus may comprise one or more physical packages (e.g., chips) including materials, components and/or wires on a structural assembly (e.g., a baseboard). The structural assembly may provide physical strength, conservation of size, and/or limitation of electrical interaction for component circuitry included thereon. 
The apparatus may therefore, in some cases, be configured to implement an embodiment of the present invention on a single chip or as a single “system on a chip.” As such, in some cases, a chip or chipset may constitute means for performing one or more operations for providing the functionalities described herein. The processor202may be embodied in a number of different ways. For example, the processor may be embodied as one or more of various hardware processing means such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), a processing element with or without an accompanying DSP, or various other processing circuitry including integrated circuits such as, for example, an ASIC (application specific integrated circuit), an FPGA (field programmable gate array), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like. As such, in some embodiments, the processor may include one or more processing cores configured to perform independently. A multi-core processor may enable multiprocessing within a single physical package. Additionally or alternatively, the processor may include one or more processors configured in tandem via the bus to enable independent execution of instructions, pipelining and/or multithreading. In an example embodiment, the processor202may be configured to execute instructions stored in the memory device204or otherwise accessible to the processor. Alternatively or additionally, the processor may be configured to execute hard coded functionality. As such, whether configured by hardware or software methods, or by a combination thereof, the processor may represent an entity (e.g., physically embodied in circuitry) capable of performing operations according to an embodiment of the present invention while configured accordingly. Thus, for example, when the processor is embodied as an ASIC, FPGA or the like, the processor may be specifically configured hardware for conducting the operations described herein. Alternatively, as another example, when the processor is embodied as an executor of software instructions, the instructions may specifically configure the processor to perform the algorithms and/or operations described herein when the instructions are executed. However, in some cases, the processor may be a processor of a specific device (e.g., a pass-through display or a mobile terminal) configured to employ an embodiment of the present invention by further configuration of the processor by instructions for performing the algorithms and/or operations described herein. The processor may include, among other things, a clock, an arithmetic logic unit (ALU) and logic gates configured to support operation of the processor. In some embodiments, the apparatus200may optionally include a user interface206that may, in turn, be in communication with the processor202to provide output to the user and, in some embodiments, to receive an indication of a user input. As such, the user interface may include a display and, in some embodiments, may also include a keyboard, a mouse, a joystick, a touch screen, touch areas, soft keys, a microphone, a speaker, or other input/output mechanisms. Alternatively or additionally, the processor may comprise user interface circuitry configured to control at least some functions of one or more user interface elements such as a display and, in some embodiments, a speaker, ringer, microphone and/or the like. 
The processor and/or user interface circuitry comprising the processor may be configured to control one or more functions of one or more user interface elements through computer program instructions (e.g., software and/or firmware) stored on a memory accessible to the processor (e.g., memory device204, and/or the like). The apparatus200may optionally also include the communication interface208. The communication interface may be any means such as a device or circuitry embodied in either hardware or a combination of hardware and software that is configured to receive and/or transmit data from/to a network and/or any other device or module in communication with the apparatus. In this regard, the communication interface may include, for example, an antenna (or multiple antennas) and supporting hardware and/or software for enabling communications with a wireless communication network. Additionally or alternatively, the communication interface may include the circuitry for interacting with the antenna(s) to cause transmission of signals via the antenna(s) or to handle receipt of signals received via the antenna(s). In some environments, the communication interface may alternatively or also support wired communication. As such, for example, the communication interface may include a communication modem and/or other hardware/software for supporting communication via cable, digital subscriber line (DSL), universal serial bus (USB) or other mechanisms. Some example implementations of the embodiments described herein are particularly advantageous in contexts and situations that involve the deployment of network resources to satisfy requests that require the acquisition and physical movement of goods, materials, and/or other resources, such as the delivery to a customer of food from a restaurant. As such, some example implementations are directed to providing for delivery dispatching (i.e., the deployment of response asset systems associated with network response assets, wherein such response asset systems are capable of transporting materials from one place to another, such as when a response asset system is incorporated into a vehicle, for example) which minimizes delivery time while accounting for driver fairness and/or a relatively even distribution and deployment of system resources (such as the equitable distribution of request data objects in the form of delivery requests amongst vehicle-based response asset systems). In such example implementations, the system (i.e., the object-asset pairing system and/or related components) attempts to optimize the number of orders or other requests dispatched to each response asset system (and/or driver or other operator related thereto) while trying to ensure that all deliveries are made properly and in a minimum delivery time. In some such example implementations, in making a dispatching decision, the system (i.e., the object-asset pairing system or related components), which may be referred to in such contexts as a “dispatcher,” attempts to determine which vehicle-based response asset system and/or driver to dispatch and when to dispatch them. With regard to when to dispatch a driver for an order, the order should be picked up in a timely manner for delivery, as the restaurant and the end customer want the order delivered quickly. However, a driver should not arrive too early to pick up an order, thus causing the driver to wait for the order to be ready to take for delivery. 
Thus, in determining when to dispatch an order, lead time may be built into the order for a predicted “make-time”. For example, some orders may be prepared quickly (e.g. immediately) while other orders may take 20 to 30 minutes to prepare. Consequently, when an order is placed, based on the make time, the system may be able to discern or estimate how many orders will be ready to dispatch in the next 15 to 20 minutes, or within another relevant time window. However, make times may be dynamic and/or otherwise inconsistent based on a number of variables, such as delays introduced during peak periods or additional time required for larger orders. Thus, some implementations of the present invention provide for adjustments based on make time to minimize the amount of time a driver waits for the order to be completed in the restaurant, yet still minimize the delivery time to the end customer. In some embodiments, predicted make times may be increased or decreased based on current circumstances. In some embodiments, make time predictions may be based on historical order notifications and driver notifications. Example implementations contemplate the use of a plurality of modes and approaches to making dispatching decisions. For example, one approach may involve causing the system to cycle through order deliveries that are deliverable (e.g., orders ready to be dispatched so that a driver arrives at the restaurant approximately when the order is ready) from oldest to newest and attempting to match the delivery with an available driver, based on distance and/or other driver criteria. Some implementations may use other criteria in making dispatching decisions, for example, determining whether a driver already has an order and/or how many orders a driver has already delivered. Using such criteria may assist in providing driver “fairness”, that is, ensuring that each driver gets dispatched an equitable number of deliveries. In some implementations, distance calculations may be determined based on linear distance between points or may be determined based on a drive time between locations. However, constantly trying to determine drive times for every driver to every restaurant may be overwhelming and/or otherwise exceed the computing capabilities of one or more systems, particularly in larger and/or more complex network environments. Consequently, some implementations use a “beacon” system that may be used in determining the times necessary to proceed from one location to another. In some implementations of such a beacon system, a grid may be overlaid on a map of a relevant geographic area, and each grid point may be a beacon. Based on the geographical features, some of the grid point beacons may then be removed, such as beacons positioned in a body of water, for example. Travel times from the precalculated “beacon” locations to each potential location (such as a restaurant location and/or other location associated with the system and/or request data objects received by the system) may be determined and stored in a database, and updated on a regular basis. The travel time for a driver may then be determined by finding the closest beacon to the driver and using that beacon's precalculated drive time to estimate the drive time for the driver. 
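By way of a purely illustrative, non-limiting sketch, the beacon-based estimation described above might look as follows in Python. The grid coordinates, travel times, and the use of linear distance to pick the closest beacon are assumptions made for illustration; in practice the travel times would be precalculated, stored in a database, and updated on a regular basis as described.

import math

# Precalculated beacon locations (grid points) and drive times, in minutes,
# from each beacon to each known destination; all values are illustrative.
BEACONS = {"b1": (0.0, 0.0), "b2": (0.0, 1.0), "b3": (1.0, 0.0)}
DRIVE_MINUTES = {("b1", "restaurant_42"): 12,
                 ("b2", "restaurant_42"): 7,
                 ("b3", "restaurant_42"): 15}

def estimated_drive_time(driver_position, destination):
    # Find the beacon closest to the driver (linear distance here) and use
    # that beacon's precalculated drive time as a proxy for the driver's.
    closest = min(BEACONS, key=lambda b: math.dist(driver_position, BEACONS[b]))
    return DRIVE_MINUTES[(closest, destination)]

print(estimated_drive_time((0.1, 0.8), "restaurant_42"))  # closest beacon b2 -> 7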
Said differently, to reduce required bandwidth and to increase efficiency (e.g., by requiring fewer location and distance calculations), travel time calculations can be efficiently estimated by calculating the distance between the beacon or virtual beacon and an intermediate destination and/or target destination. Thus, by maintaining travel times with respect to beacons, travel times for drivers can be estimated based on a determination as to which beacon is closest to the particular driver. The closest beacon is then usable as a proxy by which an estimated travel time for the driver can be calculated.
FIGS.3-8present an example process flow300and several portions thereof that may be performed by the apparatus200ofFIG.2and/or by the apparatus200in conjunction with additional components, including but not limited to those discussed or otherwise contemplated in connection withFIG.1. In this regard, the apparatus200and any related system is configured to perform the functions described herein, and includes components, such as the processor202, the memory204, the user interface206, the communication interface208or the like, for doing so. As discussed throughout this disclosure, embodiments of the invention are generally directed to the optimal pairing of request objects with network response assets that are capable of (and authorized to) meet the requirements and/or other parameters of the request object. Many example implementations of such embodiments involve receiving a request data object; extracting a set of request parameters from the request data object; identifying a set of network response assets; determining a set of response asset characteristics for each response asset; selecting a response asset based on the request parameters and the response asset characteristics; and transmitting a portion of the request data object to the response asset. As discussed herein, some example implementations of embodiments of the invention are particularly advantageous in contexts where request data objects are associated with time-sensitive requests, the fulfillment of which requires physical interaction with systems or other entities at particular geographic locations. Some such contexts include, but are not limited to, systems that support the ordering and delivery of food or other goods and services by automatically optimizing the pairing of requests with response asset systems that are associated with delivery drivers or other similar entities.
FIG.3depicts a portion of a process flow300. Some example implementations of process flow300contemplate, and can take into account, situations where the profile of an optimized pairing of request objects to response assets may shift over a relatively short time basis, and may be adjusted based on the particular priorities that are deemed optimal in a given instance and/or the quantity and characteristics of the particular request objects and response assets available at any given time. One such example implementation is depicted inFIG.3, which contemplates two primary modes: a “normal mode” which is preferred and particularly advantageous in most instances, and a “limited capacity mode”, which may be advantageous in situations where the relative demands and responsive resources available at a given time result in the system being at or near capacity or otherwise busy. As shown inFIG.3, one portion of process300involves determining and selecting the primary mode in which the system will operate when pairing requests and authorized response assets.
Process300commences at block302, which indicates that a dispatch cycle (namely, the pairing of unpaired requests received during a particular time window to potential network response assets) is initiated for every active service at a predetermined time interval. Some example implementations of process300in general, and block302in particular, contemplate a system that is capable of managing request object-to-response asset pairing in numerous geographically distinct regions, referred to herein as “services”, which may be active at any given time. It will be appreciated that the predetermined time interval may be any time interval, and may be based on the volume of request objects received and the timing parameters associated with the request objects. In many example implementations, the time interval is on the order of seconds, such that incoming request objects are rapidly paired with their respective network response asset. As shown inFIG.3, process300continues to block304. At block304, the system commences determining which dispatch mode it should operate in when pairing request objects with response assets. As such, implementations of block304and the related elements inFIG.3and elsewhere contemplate that one of the initial tasks necessary to determining an optimized pairing of request objects to response assets may include ascertaining the framework and criteria against which pairings will be evaluated. At block306, the system determines whether an alternate mode, such as a limited capacity or “busy” mode, is available for a particular service. In some example implementations, such a limited capacity mode is unavailable, unnecessary, and/or otherwise unimplemented in a given region. In such regions, it may be determined at block306that such a mode is not available, and the process300proceeds to block308, wherein the system enters and/or is otherwise set to a normal operations mode. In locations and situations where an alternate mode is available, the process300is depicted as proceeding to block310, wherein the ratio of request data objects to response assets is determined. In some example implementations of block310, this ratio is determined by ascertaining the number of unpaired (e.g., undispatched) request objects, ascertaining the number of available network response assets, and dividing the number of unpaired request objects by the number of available response assets. As shown inFIG.3, the process300then proceeds to block312, wherein the ratio computed in block310is compared to a predetermined threshold. In some example implementations, the predetermined threshold is selected to reflect conditions wherein the system may be considered to be at or near capacity, namely, the conditions under which the ability of the available response assets to properly fulfill further requests becomes limited or otherwise compromised. In instances where the ratio calculated in block310is determined at block312to meet or exceed the predetermined threshold, process300proceeds to block314, where the system enters a limited capacity mode, which is described in more detail in connection with the process portion800shown inFIG.8. In instances where the ratio calculated in block310is determined at block312to be less than the predetermined threshold, process300proceeds to block308, where the system enters a normal mode, which is discussed in more detail in connection with the process portions shown inFIGS.4-7. FIG.4depicts an example process portion400, which is a portion of the process300presented inFIG.3. 
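A minimal sketch of the mode-selection check described in connection with blocks306-314is shown below; the function name, the treatment of an empty asset list, and the example threshold are assumptions introduced for illustration rather than values taken from the disclosure.

```python
def select_dispatch_mode(unpaired_requests, available_assets, busy_threshold, busy_mode_available):
    """Return 'limited_capacity' or 'normal' for one dispatch cycle (blocks 304-314).

    The ratio of undispatched request data objects to available response assets
    is compared against a predetermined threshold.
    """
    if not busy_mode_available:
        return "normal"                      # block 306: alternate mode not offered in this service
    if not available_assets:
        return "limited_capacity"            # assumed handling: no assets at all is treated as at-capacity
    ratio = len(unpaired_requests) / len(available_assets)   # block 310
    return "limited_capacity" if ratio >= busy_threshold else "normal"  # block 312


# Example: 12 undispatched orders, 5 available drivers, and an illustrative threshold of 2.0.
mode = select_dispatch_mode(list(range(12)), list(range(5)), 2.0, busy_mode_available=True)
print(mode)  # -> "limited_capacity"
```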
As shown inFIG.4, process portion400may be incorporated into process300via block308(and, in some implementations, block832, which is discussed in connection withFIG.8), and may be particularly advantageous in example implementations where the system is configured to operate in a “normal” mode of operations. Overall, process portion400is directed at selecting a submode that reflects the particular criteria and/or operating principles that will be used to ascertain whether a request object-to-response asset pairing is acceptable and/or optimal. As shown inFIG.4, process portion400includes block402, wherein unpaired or otherwise undispatched request data objects are fetched for pairing and other processing. As shown in block404, a submode is selected for each such unpaired request data object. InFIG.4, three submodes are depicted. Submode1, depicted at block410, generally involves criteria and processes aimed at automatically and efficiently grouping request data objects with a single response asset, such that the system prioritizes grouping undispatched requests together with other undispatched requests (and/or other previously paired/dispatched requests) over ensuring that a greater portion of the available response assets are assigned to a request object at any given time. Submode1, and/or similarly constructed submodes, may be particularly advantageous in contexts where a system associated with a response asset must physically move between multiple locations to fulfill a request. In the case of a food delivery, the use of Submode1may lead to an increase in instances where a single delivery driver is tasked with the collection and delivery of a number of orders from restaurants that are located near each other and are scheduled to be ready for delivery at similar times. In instances where Submode1is selected, process portion400proceeds to block502of process portion500, as discussed in connection withFIG.5. Submode2, shown at block408inFIG.4, generally involves criteria and processes aimed at ensuring request objects and response assets are paired in a manner that reduces the time that any particular response asset is idle. Implementations of Submode2may be particularly advantageous in situations where there is an ample supply of response assets and the timing constraints associated with a significant number of request objects are such that any actual and/or perceived delay associated with grouping of multiple request objects may be deemed unacceptable. In instances where Submode2is selected, process portion400proceeds to block602and process portion600, as discussed in connection withFIG.6. Submode3, shown at block406, generally involves criteria and processes aimed at ensuring request objects and response assets are paired in a manner that prioritizes grouping in instances where multiple request objects involve the same location, but otherwise seeks to reduce the time that any particular response asset is idle. In example implementations involving deliveries of goods and services, multiple request objects may require a response asset to acquire goods from a particular location, such as a particular restaurant. In Submode3, the system will attempt (in accordance with other criteria discussed and otherwise contemplated in connection with process portion700inFIG.7) to group orders associated with a single restaurant with one response asset, and otherwise seek to assign idle response assets to unpaired request objects. 
In instances where Submode3is selected, process portion400proceeds to block702and process portion700, as discussed in connection withFIG.7. It will be appreciated that while three Submodes are depicted and described in connection withFIG.4and process portion400, these Submodes are merely examples of submodes and/or other indicia of pairing and optimization goals and parameters that may be advantageous in particular implementations of embodiments of the invention. As such, implementations that involve more and/or fewer submodes, including but not limited to implementations that do not contemplate the selection from amongst submodes, may also be performed without departing from the spirit and scope of embodiments of the invention.
FIG.5depicts a process portion500associated with Submode1discussed in connection with block410ofFIG.4. As shown inFIG.5, process portion500commences at block502when process300transitions from block410(or block618, as discussed herein with respect toFIG.6). As discussed in connection with Submode1and block410, block502indicates that process portion500is configured to attempt to group request objects that involve locations that are identical to or near each other. In the context of dispatching drivers to physically pick up and deliver goods and services, such as food orders, implementations of process portion500seek to assign multiple orders to a single driver such that the driver is able to stop at a number of restaurants that are near each other, pick up the various requested items within a particular time window, and then proceed to the respective delivery locations and effectuate on-time deliveries.
As depicted by block504, process portion500includes fetching all previously assigned request data objects that meet a particular response state condition, and identifying the response asset associated with each such object. In implementations of block504that arise in system environments similar to system environment100, the object-asset pairing system102may fetch the relevant request data objects by having pairing system module102A and/or pairing system server102B query the pairing system database102C or another data repository and processing the returned query results to identify the desired data. In implementations involving physical deliveries, upon receiving an unpaired/undispatched request, block504involves fetching all other deliveries which have been dispatched and/or otherwise assigned to a particular driver, but for which the driver has not yet picked up the order. As such, block504contemplates detecting a delivery state in which a driver may be en route to a restaurant or other location, but, because the driver has not yet picked up the food or other goods, the driver may be able to be efficiently rerouted to pick up another delivery and maintain the ability to successfully perform all of the necessary steps associated with each of the multiple deliveries. Once the potentially relevant request data objects are acquired and the potential response assets identified, process portion500proceeds to block506, which is the starting point of a set of qualification criteria against which all of the identified response assets are compared. It will be appreciated that whileFIG.5and process portion500depict a series of criteria (shown at blocks508-520), alterations to the criteria may be made (including but not limited to the addition, subtraction and/or modification of the criteria) without departing from the spirit and scope of embodiments of the invention.
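The submode selection of block404can be pictured as routing each undispatched request data object to a handler for its submode, as in the Python sketch below; the Submode enumeration and the placeholder handlers are assumptions standing in for process portions500,600, and700, not a prescribed interface.

```python
from enum import Enum, auto

class Submode(Enum):
    GROUP_NEARBY = auto()         # Submode 1: prioritize grouping requests (block 410)
    PAIR_IDLE = auto()            # Submode 2: keep response assets busy (block 408)
    GROUP_SAME_LOCATION = auto()  # Submode 3: group only identical intermediate locations (block 406)

def dispatch_unpaired_request(request, submode, handlers):
    """Route an undispatched request data object to the handler for its submode (block 404).

    `handlers` maps each Submode to a callable that attempts a pairing and returns
    either a selected response asset or None; the handler bodies would implement
    the criteria of process portions 500, 600, and 700 respectively.
    """
    handler = handlers[submode]
    return handler(request)

# Usage with placeholder handlers standing in for process portions 500/600/700.
handlers = {
    Submode.GROUP_NEARBY: lambda req: None,
    Submode.PAIR_IDLE: lambda req: None,
    Submode.GROUP_SAME_LOCATION: lambda req: None,
}
print(dispatch_unpaired_request({"id": 1}, Submode.PAIR_IDLE, handlers))
```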
Moreover, and regardless of the precise criteria applied and/or the mechanism used to establish the criteria, example implementations of blocks508-520may accomplish the evaluation of each response asset through the interaction of system components, such as the apparatus200, and/or other components, such as object-asset pairing system102, and/or querying a database that houses the relevant information, such as pairing system database102C. In other example implementations, information used to evaluate a particular response asset may be obtained by seeking and/or receiving an indication associated with the response asset's characteristics by establishing communication with a user device and/or application module associated with the response asset, such as, for example, the response asset module106A and the response asset device106B which are shown inFIG.1as part of a response asset system106. As shown at block508, one of the qualification criteria in process portion500involves determining the response asset's availability. Some implementations of block508contemplate determining whether a given response asset is available to be paired with and satisfy the requirements associated with a request object. For example, implementations of block508may include ascertaining whether a particular response asset is in use, online or offline, and whether systems associated with the network response asset are inside or outside of a particular geographic region, or otherwise presenting indicia of availability, and/or otherwise ascertaining the availability status of the response asset. If the particular response asset is deemed unavailable, process portion500proceeds from block508to block524, where it is determined if there are any additional response assets to evaluate for potential assignment. In instances where a particular response asset is determined to be available in block508, process portion500proceeds to block510, where it is determined whether the particular response asset is under a grouping limit. Implementations of block510recognize that, depending on the specific characteristics of the technical environment associated with any particular implementation, there is a limit on the number of request objects that can be grouped with a response asset while still maintaining the successful and accurate satisfaction of the requests contained in the grouped request data objects. Particularly in the case of non-uniform, distributed systems, the grouping capacity of any particular response asset may vary from response asset to response asset. In example situations that arise in the context of response assets that are associated with systems that are tasked with traveling to particular geographic locations to acquire materials and/or other goods for delivery to another geographic location, the ability of a particular response asset system to properly acquire, track, and transport goods associated with multiple orders may be a function of the capacity of the vehicle associated with the response asset, the ability demonstrated by agents and/or other users associated with the response asset system, and/or other factors that bear on the response asset's ability to successfully satisfy the requirements associated with multiple concurrent request data objects. 
If the particular response asset is determined to be at or over the grouping limit set for the particular response asset, process portion500proceeds from block510to block524, where it is determined if there are any other response assets to evaluate for potential assignment.
In situations where a particular response asset is determined to be under the grouping limit for that response asset, process portion500proceeds to block512, which includes determining a grouping range status associated with the particular response asset under evaluation. In addition to determining a limit on a particular response asset's ability to accommodate groupings over a certain size, as performed in block510, process portion500in general and block512in particular contemplate the existence of a limit on the distance that a system associated with a particular response asset may be able to travel in connection with any particular grouping of request objects while maintaining the ability to satisfy the requirements of each request data object within the group. Some example implementations, such as those that arise in the context of requests that require traveling to multiple geographic locations to obtain and deliver goods and/or other materials needed to satisfy a request, contemplate at least two destinations associated with a particular request object. The first, an intermediate destination, is often the location at which goods, materials, and/or other resources necessary to fulfill the requirements of the request object are acquired. The second location, a target destination, is often the location at which the goods, materials, and/or other resources are to be delivered. Such example implementations recognize that, in most instances, there is a practical limit on the distance that a system associated with a response asset can effectively travel between intermediate locations while optimally and/or efficiently satisfying the grouped requests. For example, if a delivery driver is tasked with a group of orders that require him to pick up food items from a group of restaurants that are in relatively close proximity to each other, adding an order that requires the driver to drive across town to pick up another order would likely impair the ability of the driver to successfully complete the other deliveries. However, if the potential additional order involved picking up an order from the same restaurant as one already in the driver's group, or a restaurant located reasonably nearby, it may be advantageous to task the driver with picking up the additional order in the course of satisfying the other grouped orders. Implementations of block512contemplate that the precise distance established as a response asset's grouping range may vary based on the characteristics of the particular system associated with the response asset, the geographic area or region in which the response asset operates, the time of day, traffic and/or infrastructure conditions, and other factors.
If the particular response asset is determined to be in a condition such that assigning the unassigned request data object to the response asset would put the group associated with the response asset at or over the grouping range limit set for the particular response asset, process portion500proceeds from block512to block524, where it is determined if there are any other response assets to evaluate for potential assignment.
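As one illustration of the grouping range criterion of block512, the following sketch checks whether a candidate intermediate destination stays within a per-asset range limit of the pickups already grouped with the response asset. The straight-line haversine distance and the 2 km limit are simplifying assumptions made here for the example; an actual implementation might instead use drive times, such as the beacon-based estimates discussed earlier.

```python
import math

def haversine_km(a, b):
    """Straight-line (great-circle) distance in kilometers between (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (a[0], a[1], b[0], b[1]))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def within_grouping_range(current_pickups, candidate_pickup, range_limit_km):
    """Return True if the candidate intermediate destination stays within the
    grouping range limit of every pickup already assigned to the response asset."""
    return all(
        haversine_km(existing, candidate_pickup) <= range_limit_km
        for existing in current_pickups
    )

# A driver already picking up from two nearby restaurants; a third pickup less than a
# kilometer away passes a 2 km limit, while one several kilometers across town does not.
assigned = [(40.741, -73.989), (40.744, -73.985)]
print(within_grouping_range(assigned, (40.747, -73.982), range_limit_km=2.0))  # True
print(within_grouping_range(assigned, (40.820, -73.950), range_limit_km=2.0))  # False
```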
In situations where it is determined that adding the currently unassigned request data object to the response asset under evaluation would allow the response asset to stay within the grouping range limit set for that response asset, process portion500proceeds to block514, which includes determining an additional grouping distance status associated with the response asset and the unassigned request object. While implementations of block512are primarily focused on the proximity of intermediate destinations (such as locations at which goods and/or other materials associated with fulfilling the request objects grouped with that response asset must be acquired), implementations of block514are primarily focused on the target destinations associated with the request objects, namely, the locations designated as delivery points for the request data objects. For example, in the context of a delivery service, it is generally efficient to group orders where the intermediate locations associated with resource acquisition are within a certain range of each other, and where the requested target destinations are within a certain range of each other, such that the system associated with the response asset is not required to travel far distances for each step in satisfying a request object. As such, implementations of block514involve determining whether the addition of a particular request object would require a system associated with a response asset to travel a distance that exceeds a predetermined limit. If the particular response asset is determined to be in a condition such that assigning the unassigned request data object to the response asset would put the group of request data objects associated with the response asset at or over the additional grouping distance limit set for the particular response asset, process portion500proceeds from block514to block524, where it is determined if there are any other response assets to evaluate for potential assignment.
In situations where the response asset is able to accommodate the additional request object without exceeding the additional grouping distance limit, process portion500proceeds to block516, which includes determining a grouping timing status. Implementations of block516recognize that grouping request data objects such that a single response asset is responsible for all of the steps associated with satisfying each request data object may be particularly advantageous when certain timing parameters associated with the request data objects are aligned. For example, in implementations that arise in the context of acquiring goods and/or other materials and delivering them to a target destination associated with a request object, grouping a number of request objects with a particular response asset may be optimal or otherwise efficient when the goods and/or other materials are likely to be available for pickup at or near the same time. For example, in the context of grouped food deliveries, it is generally preferable for a system associated with a response asset to pick up orders from multiple nearby restaurants if the orders are scheduled to be available at or near the same time. On the other hand, if one or more orders are set to be available substantially earlier or later than the other grouped orders, the ability to efficiently pick up all the orders and deliver them in a timely manner that otherwise satisfies the requirements associated with each grouped request object may be compromised.
Any of a number of approaches to determining when an order is likely to be available, including but not limited to the approaches to ascertaining “make time” and similar timing parameters discussed elsewhere herein, may be used in implementations of block516. If the particular response asset is determined to be in a condition such that assigning the unassigned request data object to the response asset would put the group associated with the response asset at or over the grouping timing limit set for the particular response asset, process portion500proceeds from block516to block524, where it is determined if there are any other response assets to evaluate for potential assignment.
In situations where it is determined that assigning a particular request object to a response asset would not violate the group timing limit of the response asset, process portion500proceeds to block518, which includes determining a grouping delay status associated with the request data object. In some implementations of process300, some intermediate locations (i.e., restaurants and/or other locations where goods, materials and/or other resources must be acquired to fulfill the requirements associated with the request data object) can be designated as fast or slow. In such implementations, intermediate locations that are designated as “fast” are generally incompatible with grouping, based at least in part, for example, on the timing constraints and other factors associated with such intermediate locations and the requests involving those locations. In such implementations, in order to permit grouping of multiple request data objects with a single response asset, the request data object (and the request data objects already associated with a particular response asset) must be such that the intermediate location is designated as slow. If the particular request object and/or response asset is determined to be in a condition such that assigning the unassigned request data object to the response asset would cause a group associated with the response asset to violate the grouping delay status set for the particular response asset, process portion500proceeds from block518to block524, where it is determined if there are any other response assets to evaluate for potential assignment.
In situations where it is determined that assigning a particular request object to a response asset would not violate the grouping delay status set for the particular response asset, process portion500proceeds to block520, which includes determining a grouping authorization status. In some example implementations of process portion500in general and block520in particular, an intermediate location (such as a restaurant in the context of food delivery systems) may not permit request data objects that involve that particular intermediate location to be grouped with any other request data objects. Consequently, in implementations of block520, regardless of whether a request data object would be otherwise suitable for grouping, and regardless of whether a particular response asset would be otherwise capable of accommodating the request object as part of a grouped set of request objects, a determination that grouping is not authorized for a particular request data object causes process portion500to proceed to block524, where it is determined if there are any other response assets to evaluate for potential assignment.
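Taken together, the criteria of blocks508-520amount to a chain of qualification checks applied to each candidate response asset, with the first failed check sending evaluation on to the next candidate (block524). The Python sketch below illustrates that structure under stated assumptions; the placeholder predicates and field names are hypothetical stand-ins for the availability, grouping limit, range, distance, timing, delay, and authorization determinations described above.

```python
def qualifies_for_grouping(asset, request, checks):
    """Apply the grouping criteria of blocks 508-520 in order; the first failed
    check disqualifies the response asset and identifies which criterion failed."""
    for name, check in checks:
        if not check(asset, request):
            return False, name
    return True, None

def evaluate_candidates(assets, request, checks):
    """Return the first response asset that passes every grouping check, or None
    if the candidate list is exhausted (the block 524 fall-through)."""
    for asset in assets:
        ok, _ = qualifies_for_grouping(asset, request, checks)
        if ok:
            return asset
    return None

# Placeholder predicates standing in for the availability, grouping limit, range,
# distance, timing, delay, and authorization checks; real implementations would
# consult the asset and request characteristics maintained by the pairing system.
checks = [
    ("available",         lambda a, r: a["available"]),
    ("under_group_limit", lambda a, r: len(a["orders"]) < a["group_limit"]),
    ("in_group_range",    lambda a, r: r["pickup_km"] <= a["range_limit_km"]),
    ("in_dropoff_range",  lambda a, r: r["dropoff_km"] <= a["dropoff_limit_km"]),
    ("timing_aligned",    lambda a, r: abs(r["ready_minute"] - a["next_pickup_minute"]) <= 10),
    ("slow_location",     lambda a, r: not r["fast_location"]),
    ("grouping_allowed",  lambda a, r: r["grouping_authorized"]),
]

asset = {"available": True, "orders": [1], "group_limit": 3, "range_limit_km": 2.0,
         "dropoff_limit_km": 3.0, "next_pickup_minute": 25}
request = {"pickup_km": 1.2, "dropoff_km": 2.1, "ready_minute": 30,
           "fast_location": False, "grouping_authorized": True}
print(evaluate_candidates([asset], request, checks))  # this asset passes every check
```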
In situations where it is determined that a particular request object may be grouped with others, process portion500proceeds to block522, which includes dispatching the response asset. In implementations of block522, having ascertained that the request object and the response asset may be grouped, the request data object is assigned to the response asset, and a portion of the request data object is transmitted to the network response asset and systems associated therewith. In some example implementations, the portion of the request data object includes an identification of an intermediate destination, instructions associated with the intermediate destination, an identification of a target destination, and/or any additional information that may be necessary to satisfy the requirements of the request data object. As shown inFIG.5, process portion500also includes block524, which causes process portion500to iteratively evaluate potential response assets for pairing with a request object until either a response asset is identified or the list of potential response assets is exhausted. In situations where the list of potential response assets is exhausted, process portion500proceeds to block622, which is depicted as part of process portion600inFIG.6.
FIG.6depicts a process portion600associated with Submode2discussed in connection with block408ofFIG.4. As shown inFIG.6, process portion600commences at block602when process portion400transitions from block408(or block724, as discussed herein with respect toFIG.7). As discussed in connection with Submode2and block408, block602indicates that process portion600is configured to attempt to pair incoming, unassigned request data objects with response assets that are not assigned a request object and/or are otherwise idle. In the context of dispatching drivers to physically pick up and deliver goods and services, such as food orders, implementations of process portion600tend to result in situations where drivers deliver one order at a time, and do not tend to group orders that share nearby intermediate and/or target destinations.
As depicted by block604, process portion600includes fetching all available response assets. In implementations of block604that arise in system environments similar to system environment100, the object-asset pairing system102may fetch the relevant response assets by having pairing system module102A and/or pairing system server102B query the pairing system database102C or another data repository and processing the returned query results to identify the desired data. In implementations involving physical deliveries, upon receiving an unpaired/undispatched request, block604involves fetching all potential delivery systems for further processing to ascertain how a received request data object should be paired. Once the available response assets have been identified, process portion600proceeds to block606, which is the starting point of a set of qualification criteria against which all of the identified response assets are compared. It will be appreciated that whileFIG.6and process portion600depict a series of criteria (shown at blocks606-616), alterations to the criteria may be made (including but not limited to the addition, subtraction and/or modification of the criteria) without departing from the spirit and scope of embodiments of the invention.
Moreover, and regardless of precise criteria applied and/or the mechanism used to establish the criteria, example implementations of blocks606-616may accomplish the evaluation of each response asset through the interaction of system components, such as the apparatus200, and/or other components, such as object-asset pairing system102, and/or querying a database that houses the relevant information, such as pairing system database102C. In other example implementations, information used to evaluate a particular response asset may be obtained by seeking and/or receiving an indication associated with the response asset's characteristics by establishing communication with a user device and/or application module associated with the response asset, such as, for example, the response asset module106A and the response asset device106B which are shown inFIG.1as part of a response asset system106. In general, blocks606-616involve applying a series of criteria to conditionally exclude and/or disqualify potential response assets from being paired with a request data object, ranking the qualified response assets, and selecting a response asset to be paired with the particular request data object. As shown at block606, process portion600includes excluding response assets based on distance and activity. As noted above, implementations of process portion600are advantageous in situations where it is deemed preferable to ensure that all possible response assets are active at any given time. Consequently, in implementations of block606, response assets that have already been paired with a request data object and have not completed the satisfaction of that request data object are excluded. Some example implementations of block606contemplate response assets that are near-idle, in the sense that they have nearly completed addressing their currently-paired request data object, and are anticipated to be in an idle condition within a relatively short period of time. For example, in the context of the delivery of food or other goods and/or materials, a response asset whose associated system may be at or near a target destination (such as a customer drop-off location) may be deemed near-idle, and not excluded in a particular implementation of block606. Once certain of the response assets have been excluded at block606, process portion600proceeds to block608, which includes conditionally excluding non-idle response assets. As noted above with respect to block606, some implementations of process portion600contemplate a near-idle status of occupied response assets that are anticipated to return to an idle state within a relatively short period of time. At block608, process portion600determines whether the particular service in which process600is implemented supports the pairing of incoming request data objects with response assets that are in a near-idle status. In situations where such pairing is not supported, near-idle response assets are excluded, and process portion600proceeds to block610. As shown in block610, process portion600includes excluding response assets that are unauthorized at a specific location. In some situations, a response asset and/or a system associated therewith may not be authorized to interact with systems and/or enter locations that are associated with a particular request object. 
For example, in the context of the delivery of goods and/or other materials, an intermediate location associated with the request object (such as a restaurant or other location at which goods and/or other materials must be acquired in order to satisfy the requirements of the request object) may have rules that bar individuals or entities associated with particular response assets from entering the premises. In another example, controlled-access buildings and/or other facilities may require advanced authorization of response assets and systems associated therewith, such that response assets that are not authorized will be unable to cause the full satisfaction of the requirements of the request object.
As shown in block612, process portion600includes excluding response assets based on a utilization time limit. In some example implementations, response assets and/or systems associated therewith have limits (such as daily and/or weekly maxima) on the time during which they can be used. As such, implementations of block612involve determining whether pairing a request object with a particular response asset would cause that response asset to violate a time limit associated with that response asset during the course of satisfying the requirements associated with the request object.
As shown in block614, process portion600includes ranking the remaining response assets. Any of a number of criteria may be used to rank the remaining response assets, including but not limited to how many times a response asset has been dispatched over the course of a particular time period, a predetermined schedule, past performance data associated with the response asset, the location of a system associated with the response asset with respect to locations associated with the request object, an estimate of the time it would likely take for a particular response asset to meet the requirements of the request object, and the like. After applying the ranking criteria to the response assets, process portion600proceeds to block616, which includes determining whether any response assets remain, such that the request object may be paired with a response asset. In situations where there are remaining response assets, process portion600proceeds to block624, where the request object is paired with the highest-ranked response asset. In situations where there is not an available response asset, process portion600proceeds to block618.
As shown in block618, process portion600contemplates multiple submodes that may involve the application of different criteria when attempting to pair request objects with response assets. In the example depicted by process portion600, if the submode associated with the particular request object is Submode2, as discussed herein, process portion600proceeds to block502, where the request object may be further processed and potentially grouped with a response asset. If the submode associated with the request object is not Submode2, process portion600transitions to block620, wherein the request object is not dispatched, as there is no suitable response asset capable of handling the request object.
FIG.6also depicts block622. As noted herein with respect to process portion500generally and block524in particular, process300proceeds from block524to block622in situations where a request object was unable to be grouped in the course of process portion500. Block622includes determining whether the submode associated with the particular request object is Submode1, as discussed in connection withFIGS.4and5.
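The exclusion, ranking, and selection sequence of blocks606-624can be summarized in a short sketch such as the one below. The field names, the fairness-then-proximity ranking key, and the simplified eligibility tests are assumptions chosen for illustration; they are not the only criteria the disclosure contemplates.

```python
def pair_with_idle_asset(request, assets, service_allows_near_idle, rank_key):
    """Sketch of blocks 606-624: exclude ineligible response assets, rank the
    remainder, and pair the request with the highest-ranked asset (or None)."""
    candidates = []
    for a in assets:
        if a["state"] not in ("idle", "near_idle"):                      # block 606: busy assets excluded
            continue
        if a["state"] == "near_idle" and not service_allows_near_idle:   # block 608: near-idle support
            continue
        if request["pickup_location"] in a["barred_locations"]:          # block 610: location authorization
            continue
        if a["hours_used"] + request["estimated_hours"] > a["hours_limit"]:  # block 612: time limit
            continue
        candidates.append(a)
    if not candidates:
        return None                                # fall through to block 618
    return min(candidates, key=rank_key)           # blocks 614-616 and 624

assets = [
    {"id": "d1", "state": "idle", "barred_locations": set(), "hours_used": 5,
     "hours_limit": 8, "dispatch_count": 4, "eta_minutes": 12},
    {"id": "d2", "state": "near_idle", "barred_locations": set(), "hours_used": 2,
     "hours_limit": 8, "dispatch_count": 1, "eta_minutes": 9},
]
request = {"pickup_location": "restaurant_a", "estimated_hours": 1}
# Rank primarily by fewest dispatches so far (fairness), then by shortest estimated arrival.
best = pair_with_idle_asset(request, assets, service_allows_near_idle=True,
                            rank_key=lambda a: (a["dispatch_count"], a["eta_minutes"]))
print(best["id"])  # -> "d2"
```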
In situations where the submode associated with the request object is Submode1, process portion600proceeds from block622to block602, for attempted pairing with an idle or near-idle response asset. In instances where the submode associated with the incoming request object at block622is not Submode1, process portion600proceeds to block620, wherein the request object is not dispatched, as there is no suitable response asset capable of handling the request object.
FIG.7depicts a process portion700associated with Submode3discussed in connection with block406ofFIG.4. As shown inFIG.7, process portion700commences at block702when process300transitions from block406. As discussed in connection with Submode3and block406, block702indicates that process portion700is configured to attempt to group request objects that involve intermediate locations that are identical to each other. In the context of dispatching drivers to physically pick up and deliver goods and services, such as food orders, implementations of process portion700seek to assign multiple orders to a single driver such that the driver is able to stop at a particular restaurant, pick up all of the various items that are scheduled to be ready within a particular time window, and then proceed to the respective delivery locations and effectuate on-time deliveries.
As depicted by block704, process portion700includes fetching all previously assigned request data objects that meet a particular response state condition, and identifying the response asset associated with each such object. In implementations of block704that arise in system environments similar to system environment100, the object-asset pairing system102may fetch the relevant request data objects by having pairing system module102A and/or pairing system server102B query the pairing system database102C or another data repository and processing the returned query results to identify the desired data. In implementations involving physical deliveries, upon receiving an unpaired/undispatched request, block704involves fetching all other deliveries which have been dispatched and/or otherwise assigned to a particular driver, but for which the driver has not yet picked up the order. As such, block704contemplates detecting a delivery state in which a driver may be en route to a particular restaurant or other location, but, because the driver has not yet picked up the food or other goods, the driver may be able to be efficiently notified to pick up another delivery at the same restaurant and maintain the ability to successfully perform all of the necessary steps associated with each of the multiple deliveries. Once the potentially relevant request data objects are acquired and the potential response assets identified, process portion700proceeds to block706, which is the starting point of a set of qualification criteria against which all of the identified response assets are compared. It will be appreciated that whileFIG.7and process portion700depict a series of criteria (shown at blocks708-720), alterations to the criteria may be made (including but not limited to the addition, subtraction and/or modification of the criteria) without departing from the spirit and scope of embodiments of the invention.
Moreover, and regardless of precise criteria applied and/or the mechanism used to establish the criteria, example implementations of blocks708-720may accomplish the evaluation of each response asset through the interaction of system components, such as the apparatus200, and/or other components, such as object-asset pairing system102, and/or querying a database that houses the relevant information, such as pairing system database102C. In other example implementations, information used to evaluate a particular response asset may be obtained by seeking and/or receiving an indication associated with the response asset's characteristics by establishing communication with a user device and/or application module associated with the response asset, such as, for example, the response asset module106A and the response asset device106B which are shown inFIG.1as part of a response asset system106.
It will be appreciated that blocks708,710, and714-722are similar to blocks508,510, and514-522, such that the discussions of those blocks are applicable to their respective counterparts, and that any implementation of blocks508,510, and514-522may be used in implementations of their respective counterparts in process portion700. Block712, which includes determining an intermediate location parameter associated with a request object and a response asset, is similar to block512inFIG.5, but differs in the sense that instead of setting a grouping range limit on the intermediate locations which may be suitably close to each other to facilitate grouping, process portion700in general, and block712in particular, require that the intermediate location associated with a request object that is currently assigned to a response asset under evaluation and the intermediate location associated with an unpaired request object be identical. Block724also slightly differs from its counterpart at block524inFIG.5in the sense that, upon determining that the list of potential response assets to be evaluated in process portion700is exhausted, process portion700proceeds to block602, where the request object may be paired with an idle response asset.
FIG.8depicts a process portion800associated with the limited capacity or “busy” mode referenced and discussed in connection with block314and other portions ofFIG.3. As shown inFIG.8, process portion800commences at block801when process300transitions from block314. In general, process portion800is aimed at addressing system overload and near-overload conditions by attempting to pair request data objects with the network response asset that is most likely to address and satisfy the requirements of the request in the minimum time. In the context of dispatching drivers or other systems to physically pick up and deliver goods and services, such as food orders, implementations of process portion800seek to rapidly identify eligible response assets, calculate the time it is likely to take systems associated with the eligible response assets to respond to the request, and select the response asset that is most likely to be able to effectuate the fulfillment of the request in the minimum amount of time. As depicted by block801, process portion800includes fetching all unassigned request data objects and all available response assets.
In implementations of block801that arise in system environments similar to system environment100, the object-asset pairing system102may fetch the relevant request data objects by having pairing system module102A and/or pairing system server102B query the pairing system database102C or another data repository and processing the returned query results to identify the desired data. In implementations involving physical deliveries, upon receiving an unpaired/undispatched request, block801also involves fetching all available response assets for evaluation and potential pairing.
As depicted by block802, process portion800includes, for each unpaired request object and each available response asset, calculating a response time for the intermediate destination. In implementations of block802that arise in the context of physically acquiring and delivering resources, goods, and/or other materials, the system calculates, such as through the use of the beacon-based approach described herein, for example, the estimated time necessary for a system associated with a response asset to move from its current location to the intermediate destination, where the required resources, goods, and/or other materials are to be acquired and moved to the target destination. Upon completion of the response time calculation in block802, process portion800proceeds to block804, where it commences, for each unpaired request object, a series of steps806-832, in an effort to assign the unpaired request object to a response asset. It will be appreciated that whileFIG.8and process portion800depict a series of criteria (shown at blocks806-832), alterations to the criteria may be made (including but not limited to the addition, subtraction and/or modification of the criteria) without departing from the spirit and scope of embodiments of the invention. Moreover, and regardless of precise criteria applied and/or the mechanism used to establish the criteria, example implementations of blocks806-832may accomplish the evaluation of each response asset through the interaction of system components, such as the apparatus200, and/or other components, such as object-asset pairing system102, and/or querying a database that houses the relevant information, such as pairing system database102C. In other example implementations, information used to evaluate a particular response asset may be obtained by seeking and/or receiving an indication associated with the response asset's characteristics by establishing communication with a user device and/or application module associated with the response asset, such as, for example, the response asset module106A and the response asset device106B which are shown inFIG.1as part of a response asset system106.
As shown at block806, process portion800includes fetching a list of eligible response assets. An example approach to stepping through an example set of criteria is shown inFIG.8at blocks808-824, which are described in more detail below. After the list of eligible response assets is fetched at block806, process portion800transitions to block826, which includes verifying whether any response assets exist. In some situations, the list of eligible response assets may include a number of response assets that may be paired with a request object. However, depending on the characteristics of the request object and the available response assets, as well as the general state of the system environment, it is possible that there may not be any available response assets at a given time for a given request object.
As shown at block828, in situations where there are eligible response assets, the response asset associated with the shortest response time is selected, and process portion800subsequently transitions to block830, where the request object is paired with the response asset and the response asset is dispatched to address the request object. As shown at block832, in situations where there are no eligible response assets, process portion800contemplates switching to a “normal” operation mode and proceeding to block402to further process the request object and attempt to pair the request object with a response asset.
As shown inFIG.8, blocks808-824present an example approach to stepping through an example set of criteria. It will be appreciated that many of the criteria set out in blocks808-824are similar to criteria and other aspects discussed in connection withFIGS.3-7. Consequently, it will be appreciated that any of the criteria disclosed or otherwise contemplated in connection with the process portions set out inFIGS.3-7, and any approaches to implementing such criteria, may be used in example implementations of process portion800in general and blocks808-824in particular. As shown at block808, each available response asset is evaluated to develop a list of eligible response assets that may be used in connection with blocks826-832of process portion800. As shown at block810, process portion800includes determining a resource acquisition status of a response asset. Some implementations of block810consider that response assets that are already engaged in satisfying a request object to the point that all materials necessary to satisfy the request have been acquired are ineligible to be paired with other request objects until completing the tasks associated with their current request objects. For example, if a system associated with a response asset has been paired with a request object that seeks a food delivery, implementations of block810may include determining whether the response asset system has already acquired the requested items from the intermediate destination and/or is en route to the target destination. If so, as shown inFIG.8, process portion800proceeds to block822, where the response asset is indicated as being ineligible.
As shown in block812, each response asset is also evaluated to determine whether any particular response asset is disqualified from and/or otherwise unauthorized to proceed to the intermediate destination associated with the unpaired request object. As such, block812is similar to block610, and any approach to implementing block610may be used in implementations of block812. As shown inFIG.8, a disqualified response asset is indicated to be ineligible at block822.
As shown in block814, process portion800includes determining an asset dispatch status. In some example implementations of block814, a response asset is evaluated to ascertain whether the response asset and/or a system associated with the response asset is idle and/or near idle. In some example implementations, the response asset is also evaluated to determine whether it is currently paired with a request object that shares an intermediate destination with the unpaired request object. As shown in block814, if the response asset is not idle and/or is not already paired with a request object that shares an intermediate destination with the unpaired request object, the response asset is deemed ineligible.
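A compact sketch of the limited capacity mode selection logic (blocks801-832) is shown below, assuming the per-pair response times and the eligibility test are supplied by the caller; the dictionary-based inputs are hypothetical stand-ins for the precomputed response times and the block808-824criteria.

```python
def busy_mode_pairings(requests, assets, response_time, eligible):
    """Sketch of process portion 800: for each unpaired request, keep only eligible
    response assets and dispatch the one with the shortest estimated response time
    to the intermediate destination (blocks 802 and 826-830); requests with no
    eligible asset are returned for normal-mode processing (block 832)."""
    pairings, fall_back = {}, []
    for req in requests:
        candidates = [a for a in assets if eligible(a, req)]
        if not candidates:
            fall_back.append(req)
            continue
        best = min(candidates, key=lambda a: response_time(a, req))
        pairings[req["id"]] = best["id"]
    return pairings, fall_back

# Illustrative inputs: response time is a precomputed estimate in minutes, and
# eligibility mirrors the block 808-824 criteria in a deliberately simplified form.
requests = [{"id": "r1", "pickup": "restaurant_a"}, {"id": "r2", "pickup": "restaurant_b"}]
assets = [
    {"id": "d1", "minutes_to": {"restaurant_a": 7, "restaurant_b": 15}, "eligible_for": {"restaurant_a", "restaurant_b"}},
    {"id": "d2", "minutes_to": {"restaurant_a": 11, "restaurant_b": 6}, "eligible_for": {"restaurant_b"}},
]
response_time = lambda a, req: a["minutes_to"][req["pickup"]]
eligible = lambda a, req: req["pickup"] in a["eligible_for"]
print(busy_mode_pairings(requests, assets, response_time, eligible))
# -> ({'r1': 'd1', 'r2': 'd2'}, [])
```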
As shown in blocks816,818, and820, each remaining response asset is evaluated to determine the grouping limit status of the response asset, a distance threshold status for the response asset, and a time threshold status for the response asset. Each of these blocks816,818, and820corresponds to one or more of blocks510,512,514,516, and their respective counterparts in process portion700, and is used to determine whether a request object can be grouped with any request objects that have previously been paired to the response asset without violating the limits imposed on the response asset. As shown inFIG.8, response assets that successfully meet all of the applied criteria are deemed eligible to be paired with the unpaired request object, and are added to a list of eligible response assets for processing in accordance with blocks826-832.
FIG.9is a block diagram illustrating a set of operations900performed, such as by the apparatus ofFIG.2, in accordance with an example embodiment of the present invention. It will be appreciated that the apparatus200, through the operation of the processor202, memory204, user interface206, communication interface208, and/or any other components used in connection therewith, is capable of performing each of the operations described herein with respect toFIG.9and depicted therein. As shown at block902, the apparatus200is capable of receiving a request data object. Any of the request data objects discussed or otherwise contemplated herein may be used in connection with implementations of the apparatus200and block902. As discussed herein, many advantageous implementations of the apparatus200and other embodiments of the invention described herein arise in the context of request data objects that seek to cause the movement of goods, materials, and/or other resources from one physical location to another. As such, in some implementations, the request data object may take the form of a data object that conforms to a particular structure and/or format that is designed to be passed to a specific system for processing and pairing such request data objects with network response assets and the systems associated therewith. Regardless of the precise format of the request data object, the request data object contains at least a request from a user or other entity, and an indication of the requirements and/or other parameters associated with the request.
As shown at block904, the apparatus200is also capable of extracting from the request data object a set of request parameters. Upon receipt of the request data object, the apparatus may parse and/or otherwise process the request data object to obtain the request parameters contained within the request data object. In some implementations, the set of request parameters includes a first location identification, wherein the first location identification is associated with a requested intermediate destination. As discussed herein, an intermediate location may include, for example, a location at which resources, such as goods and/or other materials, must be obtained in order to fulfill the requirements of the request data object. Any approach to identifying a location may be used, including but not limited to, providing GPS and/or other coordinates, address information, and/or other indications of a specific location. In some example implementations of block904, the set of request parameters also includes a set of request instructions associated with the requested intermediate destination.
For example, the request instructions may include a list of resources, goods, and/or other materials to be acquired at the intermediate location, and/or other instructions regarding the acquisition thereof. In some example implementations, the set of request parameters also includes a second location identification, wherein the second location identification is associated with a requested target destination. As discussed herein, a requested target destination may be a location at which resources associated with a request, such as goods and/or other materials, are to be delivered. In some example implementations, the target location may be a location associated with the individual or entity responsible for creating the request data object and/or causing the request data object to be created. In some example implementations, the set of request parameters further comprises a time constraint associated with the request data object. In some example implementations, a request data object may be time sensitive, in the sense that fulfillment of the requirements of a request must occur within a particular time limit. For example, in situations that involve the movement of resources, such as sensitive or needed materials, and/or perishable items, delay in fulfillment of request data object requirements may be associated with failure conditions or other negative consequences. By extracting a time constraint from a request data object, the apparatus200is capable of taking into account the time constraint (and any other parameter discussed or contemplated herein) when attempting to pair the request data object with a network response asset.
As shown in block906, the apparatus is capable of identifying a set of response assets. As discussed throughout this disclosure, any of a number of approaches to identifying a set of network response assets may be used, including but not limited to querying a database that contains identifications of network response assets, such as pairing system database102C, establishing communication with systems and/or mobile devices associated with network response assets, such as response asset system106and response asset device106B, or any other approach sufficient to fetch, determine, and/or identify a set of network response assets.
As shown in block908, the apparatus is also capable of, for each network response asset, determining a set of network response asset characteristics. In some example implementations, the set of network response asset characteristics comprises a third location identification, wherein the third location identification comprises a triangulation location of a mobile device associated with the network response asset. It will be appreciated that a triangulation location may be any physical location that is ascertained through the use of a triangulation protocol, such as GPS, cellular-based location services, or the like. In many implementations of block908, the triangulation location may be that of a mobile device, such as response asset device106B, which is capable of ascertaining its location and passing that location to the apparatus. In some example implementations, it is particularly advantageous for the triangulation location of the mobile device associated with the network response asset to be an indication of a real-time or a near real-time geographic position of the mobile device.
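For illustration, the request parameters of block904and the response asset characteristics of block908might be represented with simple typed structures such as the following Python sketch; the field names and the dictionary layout of the incoming request data object are assumptions made for the example and do not reflect a required format.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class RequestParameters:
    """Parameters extracted from a request data object (block 904)."""
    intermediate_destination: Tuple[float, float]   # first location identification (pickup)
    request_instructions: List[str]                 # e.g., items to acquire at the pickup
    target_destination: Tuple[float, float]         # second location identification (delivery)
    time_constraint_minutes: Optional[int] = None   # deadline, when the request is time-sensitive

@dataclass
class ResponseAssetCharacteristics:
    """Characteristics determined for each network response asset (block 908)."""
    triangulation_location: Tuple[float, float]     # real-time position of the associated mobile device
    response_state: str                             # e.g., "idle", "en_route", "acquired"
    authorized: bool                                # whether the asset may be paired with this request

def extract_request_parameters(request_data_object: dict) -> RequestParameters:
    """Parse a (hypothetical) request data object dictionary into typed parameters."""
    return RequestParameters(
        intermediate_destination=tuple(request_data_object["pickup"]),
        request_instructions=list(request_data_object.get("items", [])),
        target_destination=tuple(request_data_object["dropoff"]),
        time_constraint_minutes=request_data_object.get("deadline_minutes"),
    )

params = extract_request_parameters(
    {"pickup": [40.75, -73.99], "items": ["order 1042"], "dropoff": [40.73, -73.95], "deadline_minutes": 45}
)
print(params.target_destination)  # (40.73, -73.95)
```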
In some such example implementations, the mobile device may be used to track the location of a system associated with a network response asset as the system moves to fulfill the requirements of a request data object. In some example implementations, the set of network response asset characteristics further comprises a response state of the network response asset and/or an authorization status of the network response asset. For example, a response state may indicate whether a network response asset is idle, paired with one or more request data objects, and/or the status of the fulfillment of the requirements of one or more request data objects. In implementations that involve an authorization status, the authorization status may, for example, provide an indication regarding whether a particular network response asset is authorized to be paired with a particular request data object, and/or otherwise authorized to interact with systems and/or other entities associated with one or more intermediate destinations and/or target destinations. Moreover, it will be appreciated that any of the characteristics of a network response asset described or otherwise contemplated herein, particularly with reference to those presented in connection withFIGS.3-8, may be included in the set of response asset characteristics. As shown in block910, the apparatus is also capable of selecting a network response asset based at least in part on the set of request parameters and at least one network response asset characteristic. Any of the approaches described or contemplated herein for selecting and/or pairing a network response asset, including but not limited to those presented in connection withFIGS.3-8, may be used in example implementations of block910. For example, selecting a network response asset may comprise determining an ordered list of network response assets and selecting the highest-ordered network response asset. As shown in block912, the apparatus is also capable of transmitting to the network response asset a portion of the request data object. In some implementations, the portion of the request data object includes the request parameters, such as the intermediate destination, any instructions associated therewith, and/or the target destination. However, it will be appreciated that any information associated with a request data object may be transmitted to the network response asset in connection with implementations of block912. Moreover, in some example implementations, transmitting a portion of the request data comprises causing a communication channel to be opened with the mobile device associated with the network response asset and causing an indication of the request data to be displayed on a user interface of the mobile device. As described above,FIGS.3,4,5,6,7,8and9illustrate flowcharts of an apparatus, such as apparatus200, a method, and a computer program product according to example embodiments of the invention. It will be understood that each block of the flowcharts, and combinations of blocks in the flowcharts, may be implemented by various means, such as hardware, firmware, processor, circuitry, and/or other devices associated with execution of software including one or more computer program instructions. For example, one or more of the procedures described above may be embodied by computer program instructions.
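By way of a hedged illustration only, the parameter extraction and selection logic of blocks904,906,908, and910might be sketched in Python as follows; the class names, field names, and the distance-based ordering used here (RequestDataObject, NetworkResponseAsset, select_response_asset) are hypothetical simplifications introduced for this sketch and are not drawn from the apparatus200itself.

import math
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class RequestDataObject:
    intermediate_destination: Tuple[float, float]    # first location identification (block 904)
    target_destination: Tuple[float, float]          # second location identification
    instructions: List[str]                          # request instructions
    time_constraint_minutes: Optional[float] = None  # optional time constraint

@dataclass
class NetworkResponseAsset:
    asset_id: str
    location: Tuple[float, float]  # triangulation location of the associated mobile device (block 908)
    response_state: str            # e.g., "idle" or "paired"
    authorized: bool               # authorization status

def _distance(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    # Simple planar distance; a real system might use a geodesic calculation.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def select_response_asset(request: RequestDataObject,
                          assets: List[NetworkResponseAsset]) -> Optional[NetworkResponseAsset]:
    # Keep only assets that are authorized and currently able to respond.
    eligible = [a for a in assets if a.authorized and a.response_state == "idle"]
    if not eligible:
        return None
    # Determine an ordered list of assets (here, by distance to the requested
    # intermediate destination) and select the highest-ordered asset (block 910).
    ordered = sorted(eligible, key=lambda a: _distance(a.location, request.intermediate_destination))
    return ordered[0]

In such a sketch, the selected asset would then receive a portion of the request data object, for example over a communication channel opened with the associated mobile device, consistent with block912.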
In this regard, the computer program instructions which embody the procedures described above may be stored by the memory device204of an apparatus employing an embodiment of the present invention and executed by the processor202of the apparatus. As will be appreciated, any such computer program instructions may be loaded onto a computer or other programmable apparatus (e.g., hardware) to produce a machine, such that the resulting computer or other programmable apparatus implements the functions specified in the flowchart blocks. These computer program instructions may also be stored in a computer-readable memory that may direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture the execution of which implements the function specified in the flowchart blocks. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operations to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide operations for implementing the functions specified in the flowchart blocks. Accordingly, blocks of the flowcharts support combinations of means for performing the specified functions and combinations of operations for performing the specified functions. It will also be understood that one or more blocks of the flowcharts, and combinations of blocks in the flowcharts, can be implemented by special purpose hardware-based computer systems which perform the specified functions, or combinations of special purpose hardware and computer instructions. In some embodiments, certain ones of the operations above may be modified or further amplified. Furthermore, in some embodiments, additional optional operations may be included. Modifications, additions, or amplifications to the operations above may be performed in any order and in any combination. Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these inventions pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the inventions are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, although the foregoing descriptions and the associated drawings describe example embodiments in the context of certain example combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions may be provided by alternative embodiments without departing from the scope of the appended claims. In this regard, for example, different combinations of elements and/or functions than those explicitly described above are also contemplated as may be set forth in some of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
88,800
11863464
DETAILED DESCRIPTION Overview The present disclosure describes management of individual sub-tunnels of a Resource Reservation Protocol (RSVP) tunnel. In the following description, for purposes of explanation, numerous examples and specific details are set forth to provide a thorough understanding of the present disclosure. It will be evident, however, to one skilled in the art that the present disclosure as expressed in the claims may include some or all of the features in these examples, alone or in combination with other features described below, and may further include modifications and equivalents of the features and concepts described herein. Each of a plurality of sub-tunnels of a given RSVP tunnel is monitored, and the number and/or bandwidth attributes of the sub-tunnels may be individually adjusted based on network traffic conditions. An aggregate bandwidth of the plurality of sub-tunnels is calculated for a given time interval. A data point for the aggregate bandwidth is recorded in an aggregate bandwidth data set for a plurality of time intervals. A desired number of sub-tunnels may be determined based on the aggregate bandwidth data set. The number of sub-tunnels and/or bandwidth of each sub-tunnel may be adjusted based on the aggregate bandwidth data set. The term “sub-tunnel,” as used herein, refers to a network tunnel of a plurality of tunnels established between two adjacent nodes in a network. The plurality of tunnels are monitored as a group, and each sub-tunnel has one or more attributes that are controlled to achieve desired operational characteristics for the network tunnel. The plurality of sub-tunnels are logically associated together as a group in some embodiments. The plurality of sub-tunnels are provided in the same layer of an Open Systems Interconnection (OSI) model. The term “bandwidth,” as used herein, refers to network capacity. More particularly, “bandwidth” refers to the amount of data that can be transferred through a network device or collection of network devices over a given time interval. In some embodiments, bandwidth refers to the average bit rate that a network device or collection of network devices can sustain over a given time interval. In some embodiments, bandwidth refers to the peak bit rate that a network device or collection of network devices can sustain over a given time interval. It is understood that network resources of a network device or a collection of network devices may be adjusted to increase or decrease bandwidth. The number of sub-tunnels may be reduced in some instances. For a given adjustment interval, the aggregate bandwidth of all the sub-tunnels is determined for intervals in the adjustment interval. If the aggregate bandwidth for every interval in the given adjustment interval is less than a difference between the maximum aggregate value for the sub-tunnels and a bandwidth for a single sub-tunnel, the number of sub-tunnels may be reduced to improve resource efficiency. The bandwidth for one or more of the sub-tunnels may be reallocated based on a least-fill technique. If the current number of sub-tunnels cannot accommodate the maximum aggregate bandwidth using the least-fill technique, the number of sub-tunnels may be increased and the bandwidth redistributed among the sub-tunnels. According to the least-fill technique, a maximum allowed value is defined (e.g., by a network administrator) for sub-tunnel bandwidth.
Based on the aggregate bandwidth data set, the maximum bandwidth usage for a given adjustment interval and for each sub-tunnel is determined. If the highest bandwidth usage for any of the sub-tunnels exceeds the maximum allowed value, an iterative process is performed to redistribute the excess bandwidth. To redistribute the excess bandwidth, the total excess bandwidth for the adjustment interval is calculated. The plurality of sub-tunnels are sorted according to maximum bandwidth usage (e.g., lowest-to-highest) for the given adjustment interval. The excess bandwidth is equally distributed over values corresponding to the maximum bandwidths of the sub-tunnels for a successively increasing subset of the sub-tunnels. For instance, the excess bandwidth is added to the lowest maximum bandwidth among the sub-tunnels. If the result of the addition still exceeds the maximal allowed bandwidth, then the excess bandwidth is distributed among the two lowest output bandwidths of the sub-tunnels and the result is compared with the maximal allowed bandwidth. This process is repeated until either (i) the redistributed bandwidth for every sub-tunnel is below the maximal allowed bandwidth, or (ii) an additional sub-tunnel is added to accommodate the excess bandwidth if distribution of the excess bandwidth does not produce a result below the maximal allowed bandwidth. Modifications involving collection size and/or types of data collections may be included in the foregoing techniques to improve speed and/or reduce the amount of data stored. As a result of the foregoing techniques, bandwidth usage efficiency of a tunnel is improved. Moreover, oversubscription and/or saturation of the sub-tunnels is reduced relative to at least some previous solutions. System Architecture FIG.1illustrates an environment100in which a network tunnel102conveys network traffic between network devices according to one or more embodiments. The network tunnel102conveys network traffic comprising data packets between a tunnel source-end or head-end (TSE)104and a tunnel endpoint (TEP)106. The network tunnel102is established and operates according to a Resource Reservation Protocol (RSVP) or protocol related to RSVP (e.g., RSVP—Traffic Engineering). The network tunnel102is established over one or more networks108between the TSE104and the TEP106. In some embodiments, the TSE104may be a headend or source network device and the TEP106may be a tail-end or destination network device. The network tunnel102operates according to RSVP, which is a Multi-Protocol Label Switching protocol. RSVP network tunnels are provided on a source router and installed into a Routing Information Base (RIB) after the corresponding Multi-protocol Label Switching (MPLS) LSPs have been established successfully. In some embodiments, the network tunnel102may be used for communications involving nodes other than the TSE104and the TEP106that are local to the tunnel destination node. The network tunnel102includes a plurality of sub-tunnels110-1,110-2, . . .110-N (collectively “sub-tunnels110”) for conveying network traffic between the TSE104and the TEP106. Each of the sub-tunnels110is established, controlled, and/or maintained according to RSVP. In some embodiments, the network tunnel102is a logical tunnel and the plurality of sub-tunnels110are each individual network tunnels operated according to RSVP. Load-balancing may be implemented in the network tunnel102to manage network traffic and bandwidth utilization. 
In some implementations of the network tunnel102, each of the plurality of sub-tunnels110may have an equal bandwidth and may be established based on current demand. However, a non-uniform distribution of network traffic among the sub-tunnels110may cause oversubscription of one or more of the sub-tunnels110. Moreover, when an amount of network traffic on one or more sub-tunnels110is smaller than a size of the sub-tunnel, the network may become unnecessarily saturated, preventing or limiting allocation of bandwidth for network tunnels other than the network tunnel102. According to one or more embodiments herein, network bandwidth of the sub-tunnels110may be individually monitored and selectively adjusted based on resource utilization of the sub-tunnels110. During a rebalance time interval, the TSE104monitors the sub-tunnels110individually or collectively to collect first resource utilization information. The number of sub-tunnels110may be adjusted (e.g., increased, decreased) based on the first resource utilization information collected. In some instances, the number of sub-tunnels may be increased during or at the end of an adjustment interval. In some instances, the number of sub-tunnels may be decreased during or at the end of a rebalance interval. In some embodiments, the number of sub-tunnels110may be increased for an adjustment interval as a result of a determination that the current number of sub-tunnels110is insufficient to satisfy the aggregate resource utilization. In some embodiments, the number of sub-tunnels110may be decreased for a rebalance interval as a result of a determination that the aggregate resource utilization could be satisfied using fewer sub-tunnels than the current number of sub-tunnels110. In some implementations, there may be other network tunnels established between the TSE104and the TEP106that are not considered as being sub-tunnels110. In such implementations, the other network tunnels are not monitored and controlled in connection with the sub-tunnels110, as described herein. The rebalance interval observed by the TSE104may be set or selectively adjusted by an authorized user, such as a network administrator. During an adjustment time interval, the TSE104may monitor the sub-tunnels110to collect second resource utilization information. Resources of the sub-tunnels110may be allocated or adjusted at the end of the adjustment interval based on the second resource utilization information. According to the features described herein, utilization of network resources at the individual sub-tunnel level may be monitored. Network resources for individual sub-tunnels110may be reserved and/or allocated based on one or more parameters, such as network resource utilization. The TSE104may adjust the number of sub-tunnels110operating at a given time based on data collected during the rebalance interval. The TSE104and/or the TEP106may mediate communications between computing devices. The TSE104may be communicatively coupled with a first set of host devices112. The TEP106may be communicatively coupled with a second set of host devices114. Non-limiting examples of host devices include servers, laptops, desktops, and mobile devices (e.g., smart phones, tablet computers). The TSE104may convey network traffic originating from one of the host devices112over one of the sub-tunnels110to the TEP106, which may provide the network traffic to one of the host devices114based on identifying information associated with the network traffic.
Each of the sub-tunnels110may have additional identifying information that the TSE104and the TEP106use to distinguish the individual sub-tunnels110. Such a tunnel identifier may be included in the routing and/or reachability information advertised to the host devices112and/or114. FIG.2illustrates a diagram200representing time intervals during which data sets are collected and analyzed by a network device according to one or more embodiments. The diagram200includes rebalance intervals202in which the TSE104obtains first resource utilization data associated with the sub-tunnels110. The diagram200includes a set of adjustment intervals204-1,204-2, . . .204-N (collectively “adjustment intervals204”) during which the TSE104obtains second resource utilization data associated with the sub-tunnels110. An individual adjustment interval204is a shorter time interval than the rebalance interval202. The first resource utilization data includes data regarding the aggregate bandwidth collectively utilized by the plurality of sub-tunnels110during the rebalance interval202-1. The second resource utilization data includes data regarding the bandwidth utilized by the individual sub-tunnels110during the adjustment interval204-N1. In the adjustment, the TEP determines whether to adjust the number of sub-tunnels110based on the first resource utilization data. The TEP also determines whether to adjust network resource utilization for one or more of the sub-tunnels110based on the second resource utilization data in the adjustment. Further description of the adjustment is provided with respect toFIGS.4through7Band elsewhere herein. The rebalance interval202includes a plurality N of the adjustment intervals204. The TSE104and/or the TEP106may store values in memory controlling a length of time of the rebalance interval202and/or a length of time of each of the adjustment intervals204. In some embodiments, temporally adjacent adjustment intervals204are consecutive without an intervening break—for instance, the adjustment interval204-2begins when the adjustment interval204-1ends. Adjustments of one or more attributes of the sub-tunnels110may be performed at an end of each adjustment interval204based on resource utilization for the preceding rebalance interval202and/or the preceding adjustment interval204. At an end of each adjustment interval204, the first and second resource utilization data are assessed to determine whether to adjust attributes of the network tunnel102, as discussed with respect toFIGS.3,4, and elsewhere herein. A tunnel node (e.g., the TSE104) collects first resource utilization data regarding network resources utilized for the plurality of sub-tunnels110for a rebalance interval202-1. The tunnel node (e.g., the TSE104) also collects second resource utilization data regarding network resources utilized for the plurality of sub-tunnels110for an adjustment interval204. After the adjustment interval204-N1, the tunnel node evaluates the first resource utilization data for the rebalance interval202-1and evaluates the second resource utilization data for the adjustment interval204-N1. The TSE104collects the first resource utilization data for the rebalance interval202-2that includes the adjustment interval204-N2after the adjustment interval204-N1. The TSE104also collects the second resource utilization data for the adjustment interval204-N2after the adjustment interval204-N1. At or near the end of the adjustment interval204-N2, the first and second resource utilization data are evaluated by the tunnel node (e.g., TSE104, TEP106) to determine whether to adjust parameters of the tunnel102.
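As a loose, non-limiting sketch of the collection cadence described above, the following Python fragment shows first (aggregate) data being accumulated over a rebalance interval and second (per-sub-tunnel) data being collected and evaluated once per adjustment interval; the function names and the callback-style evaluation hook are assumptions of this sketch rather than elements of the TSE104.

from typing import Callable, Dict, List

def run_rebalance_interval(sub_tunnel_ids: List[str],
                           sample_utilization: Callable[[str], float],
                           adjustment_intervals_per_rebalance: int,
                           samples_per_adjustment_interval: int,
                           evaluate: Callable[[List[float], Dict[str, List[float]]], None]) -> None:
    # First resource utilization data: one aggregate data point per adjustment
    # interval, accumulated over the whole rebalance interval.
    aggregate_data: List[float] = []
    for _ in range(adjustment_intervals_per_rebalance):
        # Second resource utilization data: per-sub-tunnel samples for this adjustment interval.
        per_tunnel: Dict[str, List[float]] = {t: [] for t in sub_tunnel_ids}
        for _ in range(samples_per_adjustment_interval):
            for tunnel in sub_tunnel_ids:
                per_tunnel[tunnel].append(sample_utilization(tunnel))
        # Aggregate bandwidth for this adjustment interval: peak of the per-sample sums.
        sample_sums = [sum(per_tunnel[t][i] for t in sub_tunnel_ids)
                       for i in range(samples_per_adjustment_interval)]
        aggregate_data.append(max(sample_sums))
        # Evaluate the first and second data at or near the end of the adjustment
        # interval to decide whether tunnel attributes should be adjusted.
        evaluate(aggregate_data, per_tunnel)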
FIG.3shows an environment300in which resource utilization data for a plurality of sub-tunnels is collected and analyzed over one or more time intervals according to one or more embodiments. A network tunnel302that includes a plurality of sub-tunnels304-1,304-2, . . .304-N is established for conveying network traffic as described with respect toFIG.1and elsewhere herein. The tunnel302is established between a tunnel node or point306(e.g., the TSE104) and another tunnel node or point (e.g., the TEP106) for conveying network traffic. The tunnel node306collects sets of network resource utilization data308-1,308-2, . . .308-N (collectively “resource utilization data308”) for an individual adjustment interval204described with respect toFIG.2. The sets of network resource utilization data308-1,308-2, . . .308-N respectively correspond to network resource utilization of the sub-tunnels304-1,304-2, . . .304-N. The network resource utilization data308may indicate the network bandwidth utilized by the sub-tunnels304over the adjustment interval. In some embodiments, the resource utilization data308may include data points corresponding to network resource utilization data at given times during the adjustment interval. For instance, the resource utilization data308-1may include data points X1, X2, . . . XNeach indicating network bandwidth utilization for the sub-tunnel304-1at given times t1, t2, . . . tNin an adjustment interval. The term “network bandwidth utilization,” as used herein, refers to the network throughput or amount of network data successfully transferred through a sub-tunnel or a tunnel in a given time interval. For instance, network bandwidth utilization may refer to a peak throughput of data conveyed through a tunnel or sub-tunnel over a given time interval t or may refer to a total throughput of data conveyed through a tunnel or sub-tunnel over the given time interval t. For a given adjustment interval, the tunnel node306processes the resource utilization data308and determines whether to rebalance the sub-tunnels304based on a result of the processing. More particularly, the tunnel node306updates a set of aggregate network resource utilization data310for a current rebalance interval that includes the current adjustment interval for which the sets of resource utilization data308were obtained. For an adjustment interval, the tunnel node306determines the set of aggregate network resource utilization310for the sub-tunnels304for individual times t1, t2, . . . tNduring the adjustment interval. By way of non-limiting example, the tunnel node306calculates an aggregate resource utilization Ut1of a given set of data points X11, X21, . . . XN1indicating respective resource utilization for sub-tunnels304-1,304-2, . . .304-N for a first given time or time interval t1in the adjustment time interval204-N1. The tunnel node306sets the aggregate resource utilization Ut1as the maximum aggregate resource utilization UMAXfor the adjustment time interval. The tunnel node306then calculates an aggregate resource utilization Ut2of a given set of data points X12, X22, . . . XN2indicating respective resource utilization for sub-tunnels304-1,304-2, . . .304-N for a second given time or time interval t2in the adjustment time interval204-N1.
If the aggregate resource utilization Ut2is greater than the aggregate resource utilization Ut1, the aggregate resource utilization Ut2is set as the new maximum aggregate resource utilization UMAXfor the adjustment time interval. Otherwise, the aggregate resource utilization Ut1remains the maximum aggregate resource utilization UMAXand the tunnel node306calculates an aggregate resource utilization Ut3of a given set of data points X13, X23, . . . XN3for a third given time or time interval t3in the adjustment time interval204-N1. As a result of processing the network resource utilization data308for the adjustment time interval204-N1, the tunnel node306updates the aggregate network utilization data310to include the maximum aggregate resource utilization UMAXfor the adjustment interval204-N1. In some embodiments, the tunnel node306may remove or exclude from consideration maximum aggregate resource utilization for adjustment intervals that are not within the current rebalance interval202-1, such as a maximum aggregate resource utilization UMAXfor an adjustment interval204-0preceding the adjustment interval204-1. The tunnel node306performs an assessment312of the updated maximum aggregate resource utilization UMAXand determines, based on a result of the assessment312, whether to perform a rebalance procedure314. In the assessment312, the tunnel node306determines a minimal number of sub-tunnels304over the entire rebalance interval202-1that can satisfy the maximum aggregate resource utilization UMAX. In some situations, the tunnel node306may reduce the number of sub-tunnels304as a result of a determination that a smaller number of sub-tunnels can accommodate the maximum aggregate resource utilization UMAX. In some situations, the tunnel node306may increase the number of sub-tunnels304as a result of a determination that the current number of sub-tunnels is insufficient to accommodate the maximum aggregate resource utilization UMAX. Incrementing the number of sub-tunnels304may be performed at the end of an adjustment interval (e.g., the resource adjustment316) in at least some embodiments. Further description regarding the rebalance procedure314is provided with respect toFIG.4infra. The tunnel node306also evaluates each set of network resource utilization data308and determines whether to perform a resource adjustment316involving the sub-tunnels304. The tunnel node306performs a comparison318between network resource utilization during a current adjustment interval for each of the sub-tunnels304and a defined utilization threshold320. The tunnel node306updates excess utilization data322to indicate amounts of the resource utilization that exceed the defined utilization threshold320for the adjustment interval. The tunnel node306performs the resource adjustment316for the sub-tunnels304based on the excess utilization data322. In some embodiments, performance of the resource adjustment316is performed subsequent to the rebalance procedure314. Further description of the resource adjustment316is provided with respect toFIGS.4through6Einfra.
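The running-maximum computation described above for the aggregate network resource utilization data310can be sketched, under assumed and simplified data structures, as the following Python function; it simply sums the per-sub-tunnel samples at each time slot and keeps the largest sum as UMAX.

from typing import Dict, List

def max_aggregate_utilization(resource_data: Dict[str, List[float]]) -> float:
    # resource_data maps a sub-tunnel identifier to its utilization samples
    # X1, X2, ..., XN, all taken at the same times t1, t2, ..., tN.
    sub_tunnels = list(resource_data)
    num_samples = len(resource_data[sub_tunnels[0]])
    u_max = 0.0
    for i in range(num_samples):
        # Aggregate utilization U_ti across all sub-tunnels at time t_i.
        u_ti = sum(resource_data[t][i] for t in sub_tunnels)
        # Keep the running maximum, which becomes UMAX for the adjustment interval.
        u_max = max(u_max, u_ti)
    return u_max

# Example: three sub-tunnels sampled at three times; the aggregate peaks at t3.
samples = {"sub-1": [1.0, 2.0, 1.5], "sub-2": [0.5, 1.0, 2.5], "sub-3": [2.0, 1.0, 1.0]}
print(max_aggregate_utilization(samples))  # 5.0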
FIG.4shows a method400for managing operation of a plurality of sub-tunnels of a network tunnel according to one or more embodiments. The method400may be performed by an appropriate device, system, or entity described herein, such as the tunnel node306. The method400includes collecting402network resource utilization data regarding network resource utilization by the sub-tunnels304of the network tunnel302. The resource utilization data is collected for individual sub-tunnels304over a current adjustment interval, such as the adjustment interval204-N1described with respect toFIG.2. For example, the tunnel node306may collect, for an adjustment interval (e.g., 5 minutes), a set of resource utilization data308for each of the sub-tunnels304. The method400includes updating, at404, aggregate resource utilization data based on the resource utilization data collected in402. The aggregate resource utilization data includes data indicating an aggregate of the resources collectively used by the sub-tunnels304for each of the adjustment intervals204-1,204-2,204-3, . . .204-N1. The tunnel node306may select an aggregate value of a sub-interval to use to represent the aggregate resource utilization data for each adjustment interval204. By way of non-limiting example, for a given adjustment interval204, the tunnel node306may select a highest collective resource utilization, an average collective resource utilization, or a median collective resource utilization. Updating, in404, involves adding the selected collective resource utilization to the aggregate resource utilization data310. The method400also includes determining, at406, whether the resource utilization of the sub-tunnels304implemented for the adjustment interval (e.g., adjustment interval204-1inFIG.2) exceeds the aggregate bandwidth. More specifically, the peak aggregate resource utilization UPEAKof the plurality of sub-tunnels110may be compared with the aggregate bandwidth of the current number of sub-tunnels304. As a result of a determination that the current number of sub-tunnels is insufficient to accommodate the peak aggregate resource utilization UPEAK, the method400proceeds to increasing, at408, the number of sub-tunnels for an adjustment interval. Increasing the number of sub-tunnels in408may include determining a number of new sub-tunnels304to add to the network tunnel302to bring the collective resource utilization of the sub-tunnels304below, or within the defined range of, the maximum resource capacity. The number of sub-tunnels304is increased, in408, by the number determined. The peak aggregate resource utilization UPEAKinvolved in406and408is the peak collective resource utilization for the sub-tunnels304observed during an adjustment period. To increase the number of sub-tunnels304in408, the tunnel node306may, in some embodiments, consider the effect of adding a new sub-tunnel at a lowest or minimum capacity setting and incrementally increase the capacity of the new sub-tunnel until the peak aggregate resource utilization UPEAKis below, or within the defined range of, the maximum resource capacity. If a maximum capacity of the new sub-tunnel is insufficient to bring the collective resource utilization below, or within the defined range of, the maximum resource capacity, the tunnel node306may consider the effect of adding a second new sub-tunnel to the sub-tunnels304. The tunnel node306may increment the capacity of the first and/or second new sub-tunnels until the maximum collective resource utilization for the rebalance interval is below, or within the defined range of, the maximum resource capacity for the interval. As a result of increasing the number of sub-tunnels in408, the method400proceeds to adjusting, at414, resource utilization of the sub-tunnels.
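A minimal sketch of the check at406and the increase at408, assuming a single uniform per-sub-tunnel capacity and a simple ceiling-based sizing rather than the incremental capacity stepping described above, might look as follows in Python; the function name is illustrative only.

import math

def sub_tunnels_to_add(u_peak: float, current_count: int, tunnel_capacity: float) -> int:
    # Block 406: does the peak aggregate utilization exceed the aggregate bandwidth
    # of the current number of sub-tunnels?
    if u_peak <= current_count * tunnel_capacity:
        return 0
    # Block 408: add enough sub-tunnels for the aggregate capacity to cover U_PEAK.
    required = math.ceil(u_peak / tunnel_capacity)
    return required - current_count

# Example: a peak of 42 units against four sub-tunnels of 10 units each.
print(sub_tunnels_to_add(42.0, 4, 10.0))  # 1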
As a result of a determination, in406, that the resource utilization for an adjustment period does not exceed the available aggregate bandwidth of the sub-tunnels304, the method400proceeds to410. At410, the method400includes determining whether the number of current sub-tunnels304is superfluous for the rebalance interval. For instance, the tunnel node306may evaluate whether the collective resource utilization capacity (e.g., bandwidth) of the sub-tunnels304for a given adjustment interval is superfluous. The tunnel node306may determine that the current number N of sub-tunnels304is superfluous if a number M of the sub-tunnels less than the number N would satisfy the collective resource utilization for the rebalance interval. The determination in410may involve evaluating whether the maximum aggregate resource utilization of the sub-tunnels304during the rebalance interval would be within the maximum resource capacity of a fewer number of sub-tunnels304—for instance, (N−1) sub-tunnels. The tunnel node306may, for instance, evaluate whether a collective maximum utilization capacity of a fewer number of the sub-tunnels304exceeds a peak aggregate resource utilization during the rebalance interval. If so, the tunnel node306may determine how many sub-tunnels are sufficient to accommodate the collective maximum utilization capacity during the rebalance interval. The tunnel node306, for example, may determine a new number of sub-tunnels304to be implemented for the next adjustment interval according to the following Equation 1: N = UMAX/CT   [1] where N is the number of tunnels to be implemented, UMAXis the maximum resource utilization of the tunnel during the rebalance interval, and CTis a resource capacity of a single tunnel. The number N of tunnels may be subtracted from the current number of sub-tunnels304to determine the number of tunnels to be removed or discontinued for the next adjustment interval. As a result of a determination, in410, that the current number of sub-tunnels304is superfluous, the method400proceeds to412. At412, the method400includes decreasing the number of sub-tunnels304. The tunnel node306may select a sub-tunnel to be discontinued having the highest resource utilization among the sub-tunnels304for the adjustment interval in some embodiments. The tunnel node306may wait to discontinue or remove the selected sub-tunnel until after a defined interval of time. For instance, the tunnel node306may wait for a user-defined time interval (e.g., 1 hour) or a defined number of adjustment intervals204before the selected sub-tunnel is removed or discontinued. During the defined interval of time, the tunnel node306may reincorporate the selected sub-tunnel into the plurality of sub-tunnels304and begin reusing the selected sub-tunnel to satisfy demand for resource utilization. For instance, the tunnel node306may convey network traffic using the selected sub-tunnel to accommodate a spike in demand for network resources during the defined interval of time. Because the process of initiating a new sub-tunnel involves computational resources, waiting for a defined interval of time helps to conserve use of computational resources that would otherwise be directed to reinitiating a sub-tunnel that has already been deleted.
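Under the assumption that Equation 1 is rounded up so that the N sub-tunnels can actually carry UMAX, the sizing performed at410and the resulting removal count for412can be sketched as follows; the function names are illustrative only.

import math

def desired_sub_tunnel_count(u_max: float, tunnel_capacity: float) -> int:
    # Equation 1: N = UMAX / CT, rounded up here so that N sub-tunnels of
    # capacity CT can accommodate the maximum aggregate utilization UMAX.
    return max(1, math.ceil(u_max / tunnel_capacity))

def sub_tunnels_to_remove(u_max: float, current_count: int, tunnel_capacity: float) -> int:
    # Number of sub-tunnels that could be discontinued for the next adjustment
    # interval; actual removal may be deferred for a defined interval of time.
    return max(0, current_count - desired_sub_tunnel_count(u_max, tunnel_capacity))

# Example: UMAX of 23 units with 10-unit sub-tunnels needs three sub-tunnels,
# so two of five current sub-tunnels could be discontinued.
print(sub_tunnels_to_remove(23.0, 5, 10.0))  # 2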
Subsequent to the determination in410or decreasing the number of sub-tunnels in412, the method400proceeds to414. At414, the method400involves adjusting the resource utilization of the sub-tunnels304. Adjusting, in414, may include redistributing network traffic among the sub-tunnels304and may include adjusting bandwidth allocated among the sub-tunnels304. Redistributing network traffic may be performed according to a known procedure or technique, such as equal-cost multi-path (ECMP) routing. Adjusting bandwidth allocated among the sub-tunnels304may include implementing an automatic bandwidth adjustment procedure, as implemented in connection with Multi-Protocol Label Switching (MPLS) techniques. Subsequent to adjusting resource utilization in414, the method400proceeds to416. At416, the method400includes determining whether the network resource utilization of any individual sub-tunnel304during the current or preceding adjustment interval exceeds a defined utilization threshold. The defined utilization threshold may be a user-defined value set by an authorized user, such as a network administrator. If the resource utilization of an individual sub-tunnel304exceeds the defined utilization threshold, the method400proceeds to performing, at418, a least-fill technique for resource utilization. Further description of the resource adjustment procedure is provided with respect toFIGS.5through6Einfra. Subsequent to the determination in416or performance of the least-fill technique in418, the method400proceeds back to402. The method400may be performed once for a given adjustment interval. FIG.5shows a method500for performing a least-fill technique to adjust resource utilization of a plurality of sub-tunnels according to one or more embodiments. The method500may be performed by any appropriate device, system, or entity described herein, such as the tunnel node306. The method500may be performed as at least part of performance of the least-fill technique in418of the method400. Performance of the method500is described in connection withFIGS.6A through6E. The method500, as well as the diagrams shown inFIGS.6A through6E, involves performance of the least-fill technique referenced elsewhere herein. FIG.6Ashows a diagram600A indicating resource utilization by each of the sub-tunnels304. The resource utilization shown in the diagram600A illustrates, for example, network bandwidth utilization or throughput for individual sub-tunnels in a given time interval during an adjustment interval (e.g., adjustment intervals204ofFIG.2). The values shown in the diagram600A may be in data throughput per unit time, such as gigabits per second (Gb/s). The values selected for resource utilization in the method500may be, by way of non-limiting example, the maximum resource utilization, the average resource utilization, or a result of a statistical calculation of resource utilization for the individual sub-tunnels during the adjustment interval. The diagram600A includes a resource usage602of a first sub-tunnel, a resource usage604of a second sub-tunnel, a resource usage606of a third sub-tunnel, a resource usage608of a fourth sub-tunnel, and a resource usage610of a fifth sub-tunnel. The diagram600A also includes a defined utilization threshold612having a value of 5 (e.g., 5 Gb/s). As shown inFIG.6A, the tunnel node306detects that the resource utilizations604and610of the second and fifth sub-tunnels exceed the defined threshold612, as described with respect to416of the method400(seeFIG.4). The method500includes sorting, at502, the current sub-tunnels304by order of increasing resource utilization.FIG.6Bshows a diagram600B of the sub-tunnels304sorted by resource utilization during the adjustment interval.
Specifically, the tunnel node306sorts the sub-tunnels according to the following order: the third sub-tunnel has a resource utilization606of 1, the fourth sub-tunnel has a resource utilization608of 2, the first sub-tunnel has a resource utilization602of 3, the second sub-tunnel has a resource utilization604of 7, and the fifth sub-tunnel has a resource utilization610of 9. The method500includes determining, at504, an aggregate excess resource utilization of the sub-tunnels304for the adjustment interval. Determining in504may include calculating the sum of the excess resource utilizations by the individual sub-tunnels304. Referring toFIG.6B, the tunnel node306may determine that the second sub-tunnel has an excess resource utilization of 2 and the fifth sub-tunnel has an excess resource utilization of 4. The tunnel node306calculates that the aggregate excess utilization for the sub-tunnels is 6. At506, the method500includes setting an initial number of sub-tunnels for distribution of excess resource utilization. The initial number of sub-tunnels may be set in506based on a result of the rebalancing procedure performed at408of the method400. In some instances, the initial number of sub-tunnels may be different than the number of sub-tunnels304for the current adjustment interval. The method500includes determining, at508, a distribution for the excess resource utilization among the initial number of sub-tunnels. Determining the distribution in508is, in some implementations, an iterative process in which the excess resource utilization determined in504is successively distributed among the sub-tunnels having a resource utilization less than or equal to the next highest resource utilization among the sub-tunnels. FIG.7shows a method700for distributing the excess resource utilization among the sub-tunnels according to one or more embodiments. The method700is performed as part of or in connection with determining a distribution in508of the method500. The method700includes determining, at702, a proposed distribution of the excess resource utilization among a subset of the sub-tunnels304. During an initial iteration of the method700, the subset of the sub-tunnels304is set to the sub-tunnel having the lowest resource utilization among the sub-tunnels304. FIG.6Cshows a diagram600C of a proposed excess resource utilization allocated to a sub-tunnel according to one or more embodiments. More particularly, the diagram600C illustrates a proposed excess resource utilization618allocated to the third sub-tunnel606, which is the sub-tunnel having the lowest resource utilization, as shown in the diagram600B. The proposed excess resource utilization618corresponds to the excess resource utilization determined in504of the method500. For an implementation in which one or more new sub-tunnels are added as a result of the rebalancing procedure408, the new sub-tunnel(s) is/are considered as having a zero-resource utilization for purposes of the distribution in508. For an implementation in which one or more sub-tunnels are removed or discontinued as a result of the rebalancing procedure, the resource utilization for the removed sub-tunnel(s) is added to the proposed excess resource utilization618and the removed sub-tunnel is removed from consideration as a candidate in the distribution process of508. The method700includes assessing, at704, whether the proposed resource utilization is less than a next highest resource utilization among the sub-tunnels.
More particularly, the tunnel node306performs a comparison between the next highest resource utilization among the sub-tunnels and the combined resource utilization of the excess resource utilization618and the sub-tunnel having the lowest resource utilization among the sub-tunnels—the third sub-tunnel606in this instance. As a result of determining that the combined resource utilization for the third sub-tunnel606and the excess resource utilization618is not less than the next highest resource utilization (the resource utilization of the fourth sub-tunnel608in this case), the method700proceeds to706. With reference toFIG.6C, for instance, the combined value of seven (7) for the resource utilization of the third sub-tunnel606and the excess resource utilization618is greater than the resource utilization of two (2) for the fourth sub-tunnel608. At706, the method involves adding the sub-tunnel having the next lowest resource utilization to the subset of the sub-tunnels for which the distribution was determined in702. In the diagram600C, for instance, the tunnel node306may add the resource utilization of the fourth sub-tunnel608to the subset of sub-tunnels consisting of the third sub-tunnel606. The method700may include determining, at708, whether the size of the subset of sub-tunnels is greater than or equal to the number of sub-tunnels304. If, at708, the tunnel node306determines that the size of the subset of sub-tunnels is less than the number of sub-tunnels304, then the method700proceeds back to702for the next iteration. At702, the method700includes determining a proposed distribution of the excess resource utilization618among a subset of the sub-tunnels304, which now includes the resource utilization of the third sub-tunnel606and the resource utilization of the fourth sub-tunnel608. FIG.6Dshows a diagram600D of a proposed excess resource utilization allocated to a plurality of sub-tunnels according to one or more embodiments. More particularly, the diagram600D shows a proposed excess resource utilization618allocated to the third sub-tunnel606and the fourth sub-tunnel608, which comprise the subset of sub-tunnels. In the distribution shown inFIG.6D, the excess resource utilization618is distributed to create equal resource utilization among the subset of sub-tunnels, such that the resource utilizations for the third and fourth sub-tunnels606and608are equal. The tunnel node306may calculate the equalized resource utilization amounts according to the following Equation 2: US = (UE + Σx=1 to N Ux)/N   [2] wherein USis the equalized resource utilization, UEis the excess resource utilization618, UXis the resource utilization of the individual sub-tunnels of the subset of sub-tunnels, and N is the number of sub-tunnels in the subset of sub-tunnels. In the diagram600D and according to Equation 2, a proposed resource utilization620for the third sub-tunnel606and a proposed resource utilization622for the fourth sub-tunnel608are equalized at 4.5. Using Equation 2, the sum of the excess resource utilization618and the resource utilizations606and608is 9, which is divided by the number of sub-tunnels (2) to achieve the equalized resource utilization value of 4.5. The method700proceeds again to assessing, at704, whether the proposed distribution determined in702is less than the next highest resource utilization among the sub-tunnels304. Referring toFIG.6D, the proposed resource utilizations620and622exceed the next highest resource utilization, which is the resource utilization for the first sub-tunnel602.
As a result, the method700proceeds again to706, wherein the sub-tunnel having the next highest resource utilization—the first sub-tunnel602—is added to the subset of sub-tunnels. The method700proceeds back to determining, at702, another proposed distribution of resource utilization among the subset of sub-tunnels.FIG.6Eshows a diagram600E of a proposed excess resource utilization allocated to a plurality of sub-tunnels according to one or more embodiments. More specifically, the diagram600E illustrates a proposed distribution of the excess resource utilization618among the third sub-tunnel606, the fourth sub-tunnel608, and the first sub-tunnel602, which comprise the subset of sub-tunnels to be considered. The proposed distribution in the diagram600E includes a proposed utilization624for the third sub-tunnel606, a proposed utilization626for the fourth sub-tunnel608, and a proposed utilization628for the first sub-tunnel602. The proposed utilizations624,626, and628are determined at least in part based on Equation 2 supra. The method700proceeds to assessing, at704, whether the proposed distribution of resource utilization determined in702is less than the next highest resource utilization among the sub-tunnels304. Referring toFIG.6E, the proposed utilizations624,626, and628are less than the next highest resource utilization of seven (7) for the second sub-tunnel604. The method700proceeds to adjusting, at708, the resource utilization of the subset of sub-tunnels according to the distribution determined in702. In this instance, the excess resource utilizations614and616are equitably distributed among the first sub-tunnel602, the third sub-tunnel606, and the fourth sub-tunnel608, thereby reducing the load on the second sub-tunnel604and the fifth sub-tunnel610. As a result of completing the adjustment in708, the tunnel node306returns to402of the method400, wherein the tunnel node306collects network resource utilization data for the sub-tunnels304.
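A compact Python sketch of the distribution of the method500and the method700is given below. It assumes that over-threshold sub-tunnels are capped at the defined utilization threshold, that their combined excess is spread over a growing subset of the least-filled sub-tunnels according to Equation 2, and that a None result signals that an additional sub-tunnel would be needed; the function name and these boundary assumptions are illustrative and are not required by the method.

from typing import Dict, Optional

def least_fill_distribution(utilization: Dict[str, float],
                            threshold: float) -> Optional[Dict[str, float]]:
    # Aggregate excess (block 504) and cap the over-threshold sub-tunnels.
    excess = sum(max(0.0, u - threshold) for u in utilization.values())
    proposed = {t: min(u, threshold) for t, u in utilization.items()}
    # Sort sub-tunnels by increasing resource utilization (block 502).
    ordered = sorted(utilization, key=utilization.get)
    for k in range(1, len(ordered) + 1):
        subset = ordered[:k]
        # Equation 2: U_S = (U_E + sum of the subset utilizations) / N.
        u_s = (excess + sum(utilization[t] for t in subset)) / k
        next_highest = utilization[ordered[k]] if k < len(ordered) else threshold
        if u_s < next_highest:
            for t in subset:
                proposed[t] = u_s  # equalize the subset (blocks 702 through 708)
            return proposed
    return None  # the excess cannot be absorbed; an additional sub-tunnel is needed

# Worked example of FIGS. 6A through 6E with a threshold of 5:
usage = {"first": 3.0, "second": 7.0, "third": 1.0, "fourth": 2.0, "fifth": 9.0}
print(least_fill_distribution(usage, 5.0))
# {'first': 4.0, 'second': 5.0, 'third': 4.0, 'fourth': 4.0, 'fifth': 5.0}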
Management of sub-tunnel resource utilization described herein further includes improvements to collection of resource utilization data. More particularly, the collection and storage of network resource utilization data and the updating of aggregate resource utilization data include improvements to data storage and retrieval. In the techniques described herein, a sub-tunnel's historical resource utilization values may be stored for a defined amount of time (e.g., 1 hour) to determine maximum utilization values over that time. In connection with delaying discontinuation of a selected sub-tunnel, as described with respect toFIG.4and elsewhere herein, the resource utilization history of the selected sub-tunnel may also be preserved. Preserving the resource utilization data of a sub-tunnel selected for discontinuation or removal also involves significant data storage and computational resources. Several techniques are implemented to reduce the data storage and computational resources involved with preserving such resource utilization data. In some embodiments, the tunnel node306could store the historical resource utilization values in a data set and iterate over the data set to find a maximum value and remove values that are too old (e.g., exceed a defined time range). However, this approach may be too slow and may occupy a significant amount of data storage space. The tunnel node306, in some embodiments, implements two modifications that improve the efficiency of storing and retrieving the historical values for such operations. First, the tunnel node306implements two collections or sets of data: one collection includes a queue to remove values exceeding a defined time range prior to a current time, and another collection is sorted by bandwidth to efficiently find a maximum utilization value over the range. Second, the tunnel node306establishes a maximum collection size by discarding utilization values that are identified as being unnecessary. The techniques of the present disclosure include features for reducing the amount of data storage utilized for preserving resource utilization data of a sub-tunnel selected for discontinuation or removal. A defined window or time range (e.g., one or more rebalance intervals202, one or more adjustment intervals204) may be provided in which new data points may be collected at a defined interval (e.g., once every few seconds). As a result, the data set for a window may grow large over time, occupying a significant amount of data storage space. Network devices, such as the tunnel node306, configured according to one or more embodiments herein may implement various techniques for reducing the number of data points stored for a defined retention window. One set of such techniques is described with respect toFIGS.8A and8B.FIG.8Ashows a diagram800A of a set of data points representing resource utilization of a sub-tunnel over a time interval. The diagram800A includes a plurality of data points802, at least some of which were collected during a defined time frame804. The inventors of the present disclosure observed that some of the collected data points do not significantly affect a maximum resource utilization value for the time frame804. In the diagram800A, a recently captured or penultimate data point808is collected in a sub-window806at the end of the window804. A data point810is collected subsequent to the data point808. The data point808cannot be a maximum resource utilization value for the window804because the data point810has a greater value than the data point808and occurs after the data point808. For purposes of determining a maximum resource utilization value for the window804, if a data point810collected after the data point808within a defined sub-window806has a value greater than or equal to a value of one or more data points808in the sub-window806, the one or more data points808can be discarded or removed from consideration. The sub-window806may be a time-range, at the end of the defined window804, defined by an authorized user (e.g., network administrator). FIG.8Bshows a diagram800B of a set of data points representing resource utilization of a sub-tunnel over a time interval. The diagram800B includes a plurality of data points812, at least some of which were collected during a defined time frame814. The inventors of the present disclosure observed that some of the collected data points do not significantly affect a maximum resource utilization value for the time frame814. In the diagram800B, a recently captured or penultimate data point818is collected in a sub-window816at the end of the window814. A data point820is collected subsequent to the data point818. The data point820cannot be a maximum resource utilization value for the window814because the data point820has a lower value than the data point818and occurs after the data point818.
For purposes of determining a maximum resource utilization value for the window814, if a data point820collected after one or more data points818within a defined sub-window816has a value less than a value of one or more data points818in the sub-window816, the data point820may be discarded or removed from consideration in some embodiments. The sub-window816may be a time-range, at the end of the defined window814, defined by an authorized user (e.g., network administrator). In some embodiments, a tunnel node, such as a network device that is a tunnel source end or head-end of a network tunnel, according to the present disclosure may be configured to implement techniques for reducing the number of data points stored for a defined retention window, as described with respect toFIGS.8A and8B. A maximum data storage size (in number of data points) and/or a window size (in unit time) may be defined in memory of the tunnel node for the windows804and/or814. The tunnel node may determine a size (e.g., time interval) of the sub-windows806and/or816based on the maximum data storage size and/or the window size. For instance, the size of the sub-windows806and816may be a ratio of the window size to the maximum data storage size. Discarding superfluous data points in the sub-windows806and/or816can lead to a significant reduction in storage space used to collect resource utilization data points according to the features disclosed herein. Operation of data collection and storage can be tuned to adjust accuracy of the data points collected. Another set of techniques for reducing the number of data points stored for a defined retention window are described with respect toFIGS.9A and9B.FIG.9Aillustrates a diagram900A of a set of data points representing resource utilization of a sub-tunnel during a first time interval.FIG.9Billustrates a diagram900B of data points representing resource utilization of a sub-tunnel during a second time interval subsequent to the first time interval. The diagram900A includes data points collected during a first time frame904A. The diagram900B includes data points collected during a time frame904B that includes a portion of data points in the time frame904A. More particularly, the time frame904A inFIG.9Ais from a first time916A to a second time916B. The time frame904B inFIG.9Bis from a third time916C to a fourth time916D. In the time frame904B, the third time916C is after the first time916A and the fourth time916D is after the second time916B. A scenario may arise in which a resource utilization data point is not obtained for a long period of time. For instance, a data point908in the time frame904A may be collected at a first time and then a second data point910may be collected at a second time long after the first data point908. If the resource utilization value of the second data point910is lower than the resource utilization value of the first data point908, the situation described with respect toFIG.8Bmay arise in which the second data point910may become the new maximum value for the window904A. For instance, the second data point910may become the new maximum value for the window904A even though the first data point908has a higher value for a time within the window904A. A tunnel node implementing the techniques discussed with respect toFIGS.8A and8Bmay be configured to detect that, at a time at which the second data point910was sampled, a defined sub-window912does not include another data point. 
The sub-window912includes a fifth time914that is a defined amount of time prior to the second time916B. The second time916B may be a current time in some implementations. In response to detecting the absence of another data point in the sub-window912and collecting the second data point910, the tunnel node306is configured to adjust a time of the first data point908to an adjusted time918that precedes the sub-window912by a time Δt. During the time frame904B and as a result of adjusting the time of the first data point908to the adjusted time918, the tunnel node306still considers an adjusted first data point908-A as the maximum resource utilization value for the window904B. The time Δt between the fifth time914and an adjusted time918of the adjusted first data point908-A is, in some embodiments, defined in memory of the tunnel node. In some embodiments, the time Δt is based on a sample rate of the network device. For instance, the time Δt may be one sampling interval prior to the fifth time914or a time interval corresponding to a multiple of the sampling interval of the network device. In some embodiments, the time Δt is selectively adjustable by an authorized user. The sub-window912corresponds to the sub-window806described with respect toFIGS.8A and8B. As a result of the features described with respect toFIGS.9A and9B, in conjunction with the features ofFIGS.8A and8B, a previous value from the time frame904A is not discarded during the time frame904B due to a long interval between data points.
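One way the retention-window handling described above might be approximated in code is the following Python sketch of a windowed maximum tracker. It is only an approximation: it combines the time-based queue eviction and the FIG.8A discard rule in a single monotonic deque rather than the two separate collections described above, and it does not show the FIG.8B rule or the timestamp adjustment ofFIGS.9A and9B.

from collections import deque
from typing import Deque, Tuple

class WindowMaxTracker:
    def __init__(self, window_seconds: float) -> None:
        self.window = window_seconds
        self._samples: Deque[Tuple[float, float]] = deque()  # (timestamp, value)

    def add(self, timestamp: float, value: float) -> None:
        # FIG. 8A style discard: an older sample whose value is less than or equal
        # to a newer sample can never be the maximum for a window containing both.
        while self._samples and self._samples[-1][1] <= value:
            self._samples.pop()
        self._samples.append((timestamp, value))
        self._evict(timestamp)

    def maximum(self, now: float) -> float:
        self._evict(now)
        return self._samples[0][1] if self._samples else 0.0

    def _evict(self, now: float) -> None:
        # Queue-style eviction of samples that have aged out of the retention window.
        while self._samples and self._samples[0][0] < now - self.window:
            self._samples.popleft()

# Example: a 60-second retention window over a few utilization samples.
tracker = WindowMaxTracker(60.0)
for t, v in [(0, 3.0), (10, 5.0), (20, 2.0), (75, 1.0)]:
    tracker.add(t, v)
print(tracker.maximum(80.0))  # 2.0: the 5.0 sample at t=10 has aged out of the window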
Each packet processor1012a-1012pcan comprise a forwarding hardware component configured to make wire speed decisions on how to handle incoming (ingress) and outgoing (egress) network packets. In some embodiments, the forwarding hardware can comprise an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital processing unit, or other such collection of configured logic. FURTHER EMBODIMENTS Embodiments disclosed herein include a method comprising establishing, at a first time, a first distribution for resource utilization of a plurality of sub-tunnels of a network tunnel between a first tunnel endpoint and a second tunnel endpoint, the plurality of sub-tunnels operating according to a Resource Reservation Protocol and including a first sub-tunnel having a first resource utilization that is different than a second resource utilization of a second sub-tunnel; receiving, subsequent to the first time, a data set for a first time period indicating network resource utilization for each sub-tunnel of the plurality of sub-tunnels; detecting an excess resource utilization of a first set of sub-tunnels of the plurality of sub-tunnels based on the data set, the excess resource utilization exceeding a defined utilization threshold; determining a second distribution of the excess resource utilization over a second set of sub-tunnels of the plurality of sub-tunnels in response to detecting the excess resource utilization, the second distribution including a first equalized resource utilization over the second set of sub-tunnels that is less than a first next highest resource utilization among an ordered set of the plurality of sub-tunnels; and establishing, at a second time after the first time, the second distribution for the second set of sub-tunnels. In some embodiments, a resource utilization of each of the second set of sub-tunnels is less than the defined utilization threshold for the first time period. In some embodiments, the method comprises determining a second equalized resource distribution of the excess resource utilization over a third set of sub-tunnels that is smaller than the second set of sub-tunnels; detecting that the second equalized resource distribution is equal to or exceeds a second next highest resource utilization among the ordered set; and identifying the second set of sub-tunnels as a result of detecting that the second equalized resource distribution is equal to or exceeds the second next highest resource utilization. In some embodiments, the method comprises identifying a maximum collective resource utilization of the plurality of sub-tunnels during the first time period; detecting that the maximum collective resource utilization for the first time period exceeds a defined utilization threshold; and adding a new sub-tunnel to the plurality of sub-tunnels as a result of detecting that the maximum collective resource utilization exceeds the defined utilization threshold, wherein the new sub-tunnel is included in the second set of sub-tunnels.
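A minimal sketch of the equalized redistribution recited in these embodiments follows, assuming utilization is expressed as a single number (e.g., Mbps) per sub-tunnel. The function name, the dictionary representation, and the choice to cap contributing sub-tunnels at the threshold are assumptions added for illustration; the growth of the receiving set until the equalized level falls below the next highest utilization follows the ordered-set comparison described above.

```python
from typing import Dict, List, Tuple


def redistribute_excess(utilization: Dict[str, float],
                        threshold: float) -> Dict[str, float]:
    """Spread the excess of over-threshold sub-tunnels over the least-loaded ones.

    Sub-tunnels above `threshold` contribute their excess (the amount above the
    threshold); the pooled excess is applied as a single equalized level over
    the lowest-utilized sub-tunnels, growing that receiving set whenever the
    equalized level would reach the next highest utilization in the ordered set.
    Returns the proposed utilization per sub-tunnel (the "second distribution").
    """
    # Pool the excess from the over-threshold ("first") set of sub-tunnels.
    excess = sum(u - threshold for u in utilization.values() if u > threshold)
    proposal = {k: min(u, threshold) for k, u in utilization.items()}
    if excess <= 0:
        return proposal

    # Ordered set: candidate receivers below the threshold, sorted ascending.
    ordered: List[Tuple[str, float]] = sorted(
        ((k, u) for k, u in utilization.items() if u < threshold),
        key=lambda kv: kv[1])

    for n in range(1, len(ordered) + 1):
        members = ordered[:n]
        # Equalized level that absorbs the excess on top of the members' load.
        level = (excess + sum(u for _, u in members)) / n
        next_highest = ordered[n][1] if n < len(ordered) else float("inf")
        if level < next_highest:
            # A fuller implementation might also require level < threshold and
            # otherwise add a new sub-tunnel, as described in these embodiments.
            for name, _ in members:
                proposal[name] = level
            return proposal

    return proposal   # no under-threshold sub-tunnels available to receive
```

For instance, redistribute_excess({"t1": 95.0, "t2": 40.0, "t3": 20.0}, threshold=80.0) pools 15 units of excess and returns {"t1": 80.0, "t2": 40.0, "t3": 35.0}, since the equalized level of 35 is still below the next highest utilization of 40.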
In some embodiments, the method comprises determining that a subset of the plurality of sub-tunnels can accommodate a resource utilization of the network tunnel based on aggregate resource utilization data that includes a maximum collective resource utilization of the plurality of sub-tunnels during the first time period; and discontinuing a sub-tunnel of the plurality of sub-tunnels in response to determining that the fewer number of sub-tunnels can accommodate the resource utilization of the network tunnel, wherein a resource utilization of the sub-tunnel is included in the excess resource utilization. In some embodiments, the method comprises updating aggregate resource utilization data to include a maximum collective resource utilization of the plurality of sub-tunnels during the first time period, the aggregate resource utilization data including maximum collective resource utilization data for the plurality of sub-tunnels over a second time period longer than the first time period. In some embodiments, the method comprises sorting the plurality of sub-tunnels into the ordered set according to increasing resource utilization based on the data set for the first time period. In some embodiments, the method comprises detecting, in a second time period within the first time period, that a first utilization data point has a lower resource utilization value than a second utilization data point, one data point of the first data point and the second data point being a most recent data point; and discarding the first utilization data point as a result of detecting that the first utilization data point has the lower resource utilization value. Embodiments of the present disclosure include a network device storing instructions that, as a result of execution by the network device, cause the network device to detect an excess resource utilization of a first set of a plurality of sub-tunnels based on network resource utilizations of the plurality of sub-tunnels; determine a first distribution of the excess resource utilization over a second set of the plurality of sub-tunnels; determine a second distribution of the excess resource utilization over a third set of the plurality of sub-tunnels in response to a determination that a first resource utilization associated with the first distribution exceeds a first next highest resource utilization among the plurality of sub-tunnels relative to the first resource utilization; and establish the second distribution for the third set in response to a determination that a second resource utilization associated with the second distribution is less than a second next highest resource utilization among the plurality of sub-tunnels relative to the second resource utilization. In some embodiments, execution of the instructions causes the network device to update aggregate resource utilization data to include a maximum collective resource utilization of the plurality of sub-tunnels during a first time period, the aggregate resource utilization data including maximum collective resource utilization for a network tunnel including the plurality of sub-tunnels over a second time period that includes the first time period; determine that the aggregate resource utilization data fails to satisfy a set of resource utilization criteria; and adjust a number of the plurality of sub-tunnels for a third time period after the first time period in response to determination that the aggregate resource utilization data fails to satisfy the set of resource utilization criteria. 
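One possible reading of the add/discontinue logic in these embodiments is sketched below: the number of sub-tunnels is sized so that the maximum collective utilization observed over the retention window fits under a per-sub-tunnel threshold. The per_tunnel_threshold parameter and the one-at-a-time adjustment are assumptions for illustration, not values stated in the disclosure.

```python
import math


def plan_sub_tunnel_count(max_collective_utilization: float,
                          per_tunnel_threshold: float,
                          current_count: int,
                          min_count: int = 1) -> int:
    """Grow or shrink the sub-tunnel count by one based on whether the observed
    maximum collective utilization fits under the per-tunnel threshold."""
    needed = max(min_count,
                 math.ceil(max_collective_utilization / per_tunnel_threshold))
    if needed > current_count:
        return current_count + 1      # add a new sub-tunnel
    if needed < current_count:
        return current_count - 1      # a sub-tunnel may be discontinued
    return current_count
```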
In some embodiments, determination of the second distribution is based on adjustment of the number of the plurality of sub-tunnels. In some embodiments, the second resource utilization is an equalized resource utilization for the third set of sub-tunnels resulting from the second distribution. In some embodiments, the third set includes a greater number of the plurality of sub-tunnels than the second set. In some embodiments, execution of the instructions causes the network device to sort the plurality of sub-tunnels into an ordered set according to increasing resource utilization for the first time period, wherein determination of the second distribution is based on the ordered set. Embodiments of the present disclosure include one or more non-transitory computer readable media storing instructions that, as a result of execution by one or more processors, cause the one or more processors to monitor resource utilization of each sub-tunnel of a plurality of sub-tunnels of a network tunnel implementing a Resource Reservation Protocol; as a result of a detection that a resource utilization of a first set of sub-tunnels of the plurality of sub-tunnels exceeds a defined utilization threshold, determine an adjusted resource utilization of a second subset of the plurality of sub-tunnels based on resource utilization of each of the sub-tunnels during a current measurement period, wherein the resource utilization of each sub-tunnel of the second subset of sub-tunnels during the current measurement period is less than the defined utilization threshold; and establish the adjusted resource utilization for the second subset of sub-tunnels for a next measurement period, wherein the adjusted resource utilization of each sub-tunnel of the second subset of sub-tunnels is less than a lowest resource utilization among the first set of sub-tunnels. In some embodiments, execution of the instructions stored on the one or more non-transitory computer readable media cause the one or more processors to store a first collection of network resource utilization data and a second collection of network resource utilization data for the plurality of sub-tunnels; determine that the first collection includes first resource utilization data associated with a time exceeding a defined time range prior to the current measurement period; delete the first resource utilization data as a result of a determination that the time exceeds the defined time range; sort the second collection by magnitude of the resource utilization; and identify a maximum resource utilization in the second collection for a defined time period. In some embodiments, execution of the instructions stored on the one or more non-transitory computer readable media cause the one or more processors to receive a plurality of data points indicating the resource utilizations of the plurality of sub-tunnels; detect that a first utilization data point of the plurality of data points has a lower resource utilization value than a second utilization data point of the plurality of data points, one data point of the first data point and the second data point being a most recent data point and another data point of the first data point and the second data point being a second most recent data point; and discard the first utilization data point as a result of detecting that the first utilization data point has the lower resource utilization value.
In some embodiments, execution of the instructions stored on the one or more non-transitory computer readable media cause the one or more processors to receive first data indicating a first resource utilization of a sub-tunnel of the plurality of sub-tunnels, the first data associated with a first time; receive second data indicating a second network resource utilization of the sub-tunnel, the second data associated with a second time subsequent to the first time; detect that the first time is outside of a defined time period; and adjust the first time to an adjusted first time after the first time and before the second time in response to detection that the first time is prior to the defined time period. In some embodiments, the third set includes a greater number of the plurality of sub-tunnels than the second set. In some embodiments, the adjusted resource utilization is a distribution of excess resource utilization among the second subset of sub-tunnels. The various embodiments further can be implemented in a wide variety of operating environments, which in some cases can include one or more user computers, computing devices or processing devices which can be used to operate any of a number of applications. User or client devices can include any of a number of computers, such as desktop, laptop or tablet computers running a standard operating system, as well as cellular, wireless and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols. Such a system also can include a number of workstations running any of a variety of commercially available operating systems and other known applications for purposes such as development and database management. These devices also can include other electronic devices, such as dummy terminals, thin-clients, gaming systems and other devices capable of communicating via a network. These devices may include virtual devices such as virtual machines, hypervisors and other virtual devices capable of communicating via a network. Various embodiments of the present disclosure utilize at least one network that would be familiar to those skilled in the art for supporting communications using any of a variety of commercially-available protocols, such as Transmission Control Protocol/Internet Protocol (“TCP/IP”), User Datagram Protocol (“UDP”), protocols operating in various layers of the Open System Interconnection (“OSI”) model, File Transfer Protocol (“FTP”), Universal Plug and Play (“UpnP”), Network File System (“NFS”), Common Internet File System (“CIFS”) and AppleTalk. The network can be, for example, a local area network, a wide-area network, a virtual private network, the Internet, an intranet, an extranet, a public switched telephone network, an infrared network, a wireless network, a satellite network, and any combination thereof. In some embodiments, connection-oriented protocols may be used to communicate between network endpoints. Connection-oriented protocols (sometimes called connection-based protocols) are capable of transmitting data in an ordered stream. Connection-oriented protocols can be reliable or unreliable. For example, the TCP protocol is a reliable connection-oriented protocol. Asynchronous Transfer Mode (“ATM”) and Frame Relay are unreliable connection-oriented protocols. Connection-oriented protocols are in contrast to packet-oriented protocols such as UDP that transmit packets without a guaranteed ordering. 
In embodiments utilizing a web server, the web server can run any of a variety of server or mid-tier applications, including Hypertext Transfer Protocol (“HTTP”) servers, FTP servers, Common Gateway Interface (“CGI”) servers, data servers, Java servers, Apache servers, and business application servers. The server(s) also may be capable of executing programs or scripts in response to requests from user devices, such as by executing one or more web applications that may be implemented as one or more scripts or programs written in any programming language, such as Java®, C, C# or C++, or any scripting language, such as Ruby, PHP, Perl, Python or TCL, as well as combinations thereof. The server(s) may also include database servers, including without limitation those commercially available from Oracle®, Microsoft®, Sybase and IBM® as well as open-source servers such as MySQL, Postgres, SQLite, MongoDB, and any other server capable of storing, retrieving, and accessing structured or unstructured data. Database servers may include table-based servers, document-based servers, unstructured servers, relational servers, non-relational servers, or combinations of these and/or other database servers. The environment can include a variety of data stores and other memory and storage media as discussed above. These can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In a particular set of embodiments, the information may reside in a storage-area network (“SAN”) familiar to those skilled in the art. Similarly, any necessary files for performing the functions attributed to the computers, servers or other network devices may be stored locally and/or remotely, as appropriate. Where a system includes computerized devices, each such device can include hardware elements that may be electrically coupled via a bus, the elements including, for example, at least one central processing unit (“CPU” or “processor”), at least one input device (e.g., a mouse, keyboard, controller, touch screen, or keypad) and at least one output device (e.g., a display device, printer, or speaker). Such a system may also include one or more storage devices, such as disk drives, optical storage devices, and solid-state storage devices such as random-access memory (“RAM”) or read-only memory (“ROM”), as well as removable media devices, memory cards, flash cards, etc. Such devices also can include a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device, etc.), and working memory as described above. The computer-readable storage media reader can be connected with, or configured to receive, a computer-readable storage medium, representing remote, local, fixed, and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information. The system and various devices also typically will include a number of software applications, modules, services, or other elements located within at least one working memory device, including an operating system and application programs, such as a client application or web browser. In addition, customized hardware might also be used and/or particular elements might be implemented in hardware, software (including portable software, such as applets) or both. 
Further, connection to other computing devices such as network input/output devices may be employed. Storage media and computer readable media for containing code, or portions of code, can include any appropriate media known or used in the art, including storage media and communication media, such as, but not limited to, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules or other data, including RAM, ROM, Electrically Erasable Programmable Read-Only Memory (“EEPROM”), flash memory or other memory technology, Compact Disc Read-Only Memory (“CD-ROM”), digital versatile disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices or any other medium which can be used to store the desired information and which can be accessed by the system device. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the disclosure as set forth in the claims. Other variations are within the spirit of the present disclosure. Thus, while the disclosed techniques are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the disclosure to the specific form or forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the disclosure, as defined in the appended claims. The use of the terms “a” and “an” and “the” and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. The term “connected,” when unmodified and referring to physical connections, is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein and each separate value is incorporated into the specification as if it were individually recited herein. The use of the term “set” (e.g., “a set of items”), unless otherwise noted or contradicted by context, is to be construed as a nonempty collection comprising one or more members. The term “subset” (e.g., “a subset of a set of items”) may be construed as a proper subset comprising a fewer number of members than the referenced set. 
Conjunctive language, such as phrases of the form “at least one of A, B, and C,” or “one or more of A, B, and C,” unless specifically stated otherwise or otherwise clearly contradicted by context, is otherwise understood with the context as used in general to present that an item, term, etc., may be either A or B or C, or any nonempty subset of the set of A and B and C. For instance, in the illustrative example of a set having three members, the conjunctive phrase “at least one of A, B, and C” refers to any of the following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of A, at least one of B and at least one of C each to be present. In addition, unless otherwise noted or contradicted by context, the term “plurality” indicates a state of being plural (e.g., “a plurality of items” indicates multiple items). The number of items in a plurality is at least two but can be more when so indicated either explicitly or by context. Operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. Processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. The code may be stored on a computer-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors. The computer-readable storage medium may be non-transitory. In some embodiments, the code is stored on set of one or more non-transitory computer-readable storage media having stored thereon executable instructions that, when executed (i.e., as a result of being executed) by one or more processors of a computer system, cause the computer system to perform operations described herein. The set of non-transitory computer-readable storage media may comprise multiple non-transitory computer-readable storage media and one or more of individual non-transitory storage media of the multiple non-transitory computer-readable storage media may lack all of the code while the multiple non-transitory computer-readable storage media collectively store all of the code. Further, in some examples, the executable instructions are executed such that different instructions are executed by different processors. As an illustrative example, a non-transitory computer-readable storage medium may store instructions. A main CPU may execute some of the instructions and a graphics processor unit may execute other of the instructions. Generally, different components of a computer system may have separate processors and different processors may execute different subsets of the instructions. Accordingly, in some examples, computer systems are configured to implement one or more services that singly or collectively perform operations of processes described herein. Such computer systems may, for instance, be configured with applicable hardware and/or software that enable the performance of the operations. 
Further, computer systems that implement various embodiments of the present disclosure may, in some examples, be single devices and, in other examples, be distributed computer systems comprising multiple devices that operate differently such that the distributed computer system performs the operations described herein and such that a single device may not perform all operations. In some embodiments, the techniques and methods described herein (e.g., methods400,500,700,700B) may be performed in a different order. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments of the disclosure and does not pose a limitation on the scope of the disclosure unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the disclosure. Embodiments of this disclosure are described herein, including the best mode known to the inventors for carrying out the disclosure. Variations of those embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventors expect skilled artisans to employ such variations as appropriate and the inventors intend for embodiments of the present disclosure to be practiced otherwise than as specifically described herein. Accordingly, the scope of the present disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the scope of the present disclosure unless otherwise indicated herein or otherwise clearly contradicted by context.
77,596
11863465
DETAILED DESCRIPTION FIG.1is a block diagram of an example communication environment100that includes a service provider102that provides one or more services (e.g., video, telephony, Internet data, etc.) to a plurality of subscribers (generally108) via a connection to one or a combination of networks. As an example, the service provider102may provide a subscriber's premises with a connection to an access network104comprising a fiber to the premises (FTTP), a hybrid fiber coax communication infrastructure, or other communication infrastructure configured to provide access to cable television (CATV) services and access to a wide area network (WAN)116(e.g., the Internet). For example, the WAN116may provide access to various remote servers118a-n(generally118), which may operate as sources of information and various services. The service provider102may provide services according to a standardized communication protocol, such as a version of the DOCSIS® standard, for example. The standardized communication protocol according to which the service provider102operates may define upstream and downstream channels to enable bidirectional communications between the service provider102and the subscriber108. According to an aspect, the subscriber108premises includes a router112configured to provide a Local Area Network (LAN)106that various devices110a-n(generally110) are enabled to connect to for accessing services provided by the service provider102via the access network104. The subscriber108premises further includes a modem114configured to connect the service provider's access network104to the router112. In some examples, the modem114may be embodied as a gateway device that includes the router112. In other examples, the router112and the modem114are separate devices that are communicatively connected. In various aspects, the router112may include one or more wired interfaces that enable one or more devices110to connect to the router112via an Ethernet cable. In additional aspects, the router112may include an access point212(FIG.2) configured with wireless access point functionality that connects wireless-compatible devices110to the LAN106wirelessly using radio frequencies in various frequency bands. Accordingly, the LAN106may include a wireless LAN (WLAN). Devices110may include various types of computing devices such as servers, workstations, set top boxes, desktop computers, laptop computers, tablets, mobile phones, smart devices, gaming devices, video streaming devices, IoT devices, cameras, smart cars, one or more databases, etc. Further details of the computing devices and variations thereof can be found inFIGS.6and7. According to an aspect, the subscriber's connection to the service provider's access network104may be constrained to a limited amount of available bandwidth, wherein the available bandwidth may be a volume of information per unit of time that a transmission medium (e.g., the subscriber's network connection) can handle. For example, a network connection with a larger bandwidth may be enabled to move a set amount of data (e.g., a video file) faster than a network connection with a lower bandwidth. Bandwidth may be expressed in bits per second (e.g., 60 Mbps may describe a data transfer rate of 60 million bits (megabits) of data every second), and the amount of available bandwidth may be dependent on various factors, such as network infrastructure constraints, Wi-Fi-® airtime contention, capacity issues, on a subscription type or level, etc. 
As can be appreciated, as the amount of available bandwidth increases, the amount of data that can flow through the network connection to/from devices110connected to the LAN106increases. The bandwidth available to a subscriber108premises may be shared amongst the various devices110connected to the subscriber's LAN106. As the number of connected devices110downloading and/or uploading data increases, the amount of data that can be provided to each of the devices110in a given amount of time may be reduced. Accordingly, the various devices110may receive a portion of the full capacity of available bandwidth, which can slow transmission speeds and negatively affect time-sensitive activities. According to various aspects of the disclosure, a subscriber108may be provided a limited amount of bandwidth, and an Intelligent Bandwidth Prioritization (IBP) manager214(FIG.2) described herein may be used to manage how the bandwidth is allocated to the various devices110communicating on the LAN106. For example, allowing the devices110on the LAN106to battle for priority can result in negative user experiences, such as poor video streaming, choppy VoIP (Voice over Internet Protocol) calls, or lag in online gaming sessions. As an example, a time-sensitive activity may include a video-over-Internet call where the data (e.g., audio and video data) may be created and transmitted in real-time. Thus, the data may not be buffered (e.g., as opposed to downloading content that may be queued or buffered), and slower speeds may cause data packets to be delayed and dropped, resulting in poor video/audio quality (e.g., video and/or audio dropouts). As will be described in further detail below, the video-over-Internet call, which is latency-sensitive, may have a data transfer pattern (e.g., lower bandwidth data transfers sent at shorter intervals) that may correspond with a traffic classification and be used to identify, classify, and prioritize traffic flows that are latency-sensitive. Currently, the router112may be configured to allocate bandwidth to various requesting devices110as needed without regard for the services/applications that may be consuming the bandwidth. Accordingly, some applications, which may be time-sensitive, may run slowly and/or experience poor video/audio quality, particularly if a large-bandwidth-usage application is using the available bandwidth. In other examples, the router112may be configured with QoS policies and QoS functions that allow a user to assign a priority value or level to a certain application, device110, or traffic type, which the router112may apply in a QoS process as part of prioritizing traffic flows on the LAN106and determining how to allocate bandwidth between the prioritized traffic flows. For example, QoS rules/policies may be defined to prioritize and ensure good performance of time-sensitive applications or services on the LAN106and of internet access from the network, compared to applications or services that may be less sensitive to lag. In some examples, a minimal amount of lag may be noticeable by a time-sensitive application/service and/or may render a time-sensitive application/service unusable. Some non-limiting examples of traffic flows of time-sensitive applications/services may include video streams, VoIP calls, and online gaming. Some non-limiting examples of traffic flows of less time-sensitive applications/services may include file downloads, torrents, software updates, emails, and general web browsing.
As can be appreciated, performance of some time-sensitive applications/services may be more important to a subscriber108than performance of other time-sensitive applications/services, which may not be taken into consideration by the router112. For example, one user at a subscriber108premises may be a parent using a device110for an online video conference call and another user at the premises may be a child using another device110to play an online video game. Both traffic flows may be considered time-sensitive and prioritized, and the router112may share the available bandwidth between the two services without considering which may be more important, and accordingly, the quality of the online video conference call may drop. Moreover, configuring QoS settings in this way can be cumbersome, may require user knowledge of data usage of applications/services, knowledge of protocols, details about how the router112operates, and networking in general. In some cases, configuring a router's QoS settings may depend on a user-input of maximum upload and download speeds that the service provider102supports. Oftentimes, the user may not know such information and may enter incorrect values, or may configure QoS settings that cause traffic on the LAN106to perform worse instead of better. In other cases, users may be unaware of QoS features or may be intimidated (e.g., due to the above and/or other reasons) to reconfigure router112settings. Additionally, some applications/services may not support multiplexing by TCP (Transmission Control Protocol) port. As an example, both high and low priority flows may use TCP and use a same TCP port (e.g., TCP port443). Currently, without incorporating aspects of the present disclosure, the router112may be unable to determine whether the traffic is latency-sensitive or not based on the destination port and IP address (e.g., the IP address may continually change). For example, one method to differentiate between high and low priority flows may include deep packet inspection; however, this may not be possible with TLS traffic due to encryption. As another example, another method to differentiate between high and low priority flows may be to compare the IP address against lists of cloud service IP addresses; however, such lists may not generally be available. As such, a legacy QoS mechanism may not be effectively configured through a static prioritization configuration (e.g., by port or IP address). Accordingly, the available bandwidth may not be optimally distributed amongst devices110communicating on the LAN106and various devices110may experience latency and/or poor audio/video quality. According to an implementation example, the router112can include or be communicatively connected to the IBP manager214(FIG.2), described below, that operates in part to automatically monitor, measure, and manage bandwidth allocation for the router112to improve data transfer and network performance on the LAN106. In some examples, the router112can work in conjunction with a cloud controller208(FIG.2) of the service provider102as part of monitoring, measuring, and managing bandwidth allocation to optimize performance of data flows. 
In some examples, as part of managing bandwidth allocation, the IBP manager214may be configured to collect data associated with the subscriber's network connection, the LAN106, the router112, access points212, and devices110operating and communicating on the LAN106; use the collected data as parameters evaluated in one or a combination of prioritization rules used to prioritize traffic flows; assign priority values to the traffic flows based on the evaluation(s), and determine and implement one or more bandwidth allocation optimizations that optimize performance of data flows. The bandwidth allocation optimizations may be configured to prevent congestion, packet delays, and packet loss of all or a prioritized set of traffic, thus improving application/service performance and user experience. In some examples, a particular bit rate, delay, delay variation, packet loss, or bit error rate may be guaranteed. In some examples, one or more components of the IBP manager214may be included in the router112. In other examples, one or more components may be included in a separate device local to the LAN106and operatively connected to the router112. In other examples, one or more components may operate on a remote server118in a distributed network, such as the WAN116, on a server or other network device at the service provider head end202(FIG.2), or in the service provider access network104. FIG.2is a block diagram of an example system incorporated in an operating environment200where the service provider access network104is embodied as an HFC network and the subscriber LAN106is implemented at a subscriber's108premises, such as a residential or a business location. As should be appreciated, in other examples, the service provider access network104may be implemented as an FTTP network or other type of other communication infrastructure. The term “subscriber” may generally include the individual or business entity in an agreement with the service provider102to receive services and the premises at which those services are received. Components of the system200can be integrated, distributed, and/or provided in any combination of separate systems/components, whereinFIG.2provides one implementation example. The HFC network may extend from a head end202of the service provider102to a plurality of network nodes206, where each node serves a plurality of subscribers108a-n(generally108). For example, each node206may serve200to1,000subscribers108within a service area where the subscribers may subscribe for residential and/or business services. The service provider102may use a cable modem termination system (CMTS)204located at the head end202to provide high speed data services such as cable, Internet, voice, data, etc., to the various subscribers108. For example, the CMTS204may encode, modulate, and upconvert one or more of the services onto radio frequency (RF) carriers, combine the RF carriers into a single electrical signal, and provide the electrical signal to a broadband optical transmitter. The broadband optical transmitter may convert the electrical signal to a downstream optically modulated signal that is sent to one or more of the network nodes206over one or more fiber optic cable lines or conveying infrastructure. In an HFC network, each node206may include a broadband optical receiver to convert the downstream optically modulated signal to an electrical signal (e.g., translate the signal from a light beam to RF). 
The node206may transmit the electrical signal over one or more coaxial cable lines to a modem114connected to a router112or to a gateway device comprising a modem114and a router112of a subscriber108aserviced by the node206. Each subscriber108within a set of subscribers108a-nmay have at least one router112at the subscriber premises. Upon receipt of the electrical signal, the modem114included in the gateway device112(or as a separate device) may demodulate the electrical signal in order to deliver the services to one or more devices110a-nof the subscriber108, including desktop computers, laptop computers, mobile phones, tablets, gaming devices, televisions, IoT devices, cameras, among other examples. In an FTTP network, a node206may operate as a distribution hub, and the modem114may comprise or be operatively connected to an Optical Network Terminator (ONT) configured to convert the downstream optically modulated signal to an electrical signal. The access network104may operate bi-directionally, whereby signals are transmitted in both downstream and upstream directions. For example, downstream signals may transmit data from the head end202to the modem114and router112via the respective node206. The data transmitted in the downstream signals may include content associated with the one or more services being provided, such as video content, voice data, and Internet data, among other examples. The upstream signals may transmit data from the router112and modem114to the head end202via the node206. With continuing reference toFIG.2, the router112enables various connected devices110to join a LAN106and to communicate on the LAN106and to other devices and servers118connected to the WAN116via the service provider access network104(e.g., HFC network). As mentioned above, in an example aspect, the router112may include an access point212configured to connect devices110to the LAN106wirelessly (e.g., provide a wireless network). In some examples, the router112may also include or be operatively connected to a MoCA® interface operative or configured to support MoCA® communications in association with utilizing a coaxial cable network at the subscriber108premises to broadcast signals to connected devices110within a defined communications frequency range as defined by MoCA® standards. In an implementation example, the router112may be configured with control circuitry to provide control signals to select communication components (e.g., devices110, (integrated and additional) access point(s)212, a priority scheduler224, IBP manager214components) to implement intelligent bandwidth allocation on the LAN106. The IBP manager214of an embodiment includes a data collector216, a classifier218, an optimizer220, and a data store222. In some examples, various components may be combined and/or distributed amongst a combination of separate systems/devices, which in some examples may include remote systems (e.g., cloud controller208). The data collector216is illustrative of a software application, module, or computing device operative or configured to collect various data from the router112, access point(s)212, and the devices110connected to the LAN106. The data collector216may be further configured to communicate with other data sources (e.g., service provider102), nodes206, etc., for collecting various data. The collected data may include information about the access network104, the connection to the subscriber's LAN106, the LAN106, the router112and access point(s)212, and about the various devices110connected to the LAN106. 
More specifically, the collected data may include information about the number and types of devices110connected to the LAN106, an amount of bandwidth consumed by each device, applications/services used and the data usages of the applications/services, protocols used, types of data, dates and times of data usages, utilized frequency bands and channels, signal levels, network throughput, error rates, dropped frames, transmission delays, connection speeds, users, whether a device110is in an attended versus unattended state, etc. In some examples, the devices110may send explicit signaling to the data collector216comprising information about the devices, network connection/performance information, application/service usage, and/or other usage data. For example, a device110may be configured to explicitly signal the application/service being utilized in association with a communication to the router112. In some examples, the explicit signal may be provided as a DSCP (Differentiated Services Code Point) value included in the IP header. This information may be used alone or in combination with other collected/received data (e.g., IP address and port combination) to classify a traffic flow. For example, an IP address and port combination alone that may be included in the communication header may not be sufficient for determining whether an associated communication is latency-sensitive and should therefore be prioritized. Due at least in part to continually-changing cloud IP addresses, general non-availability of a list of cloud service IP addresses with which the IP address can be compared against, and/or other factors, the router112may not be able to effectively prioritize a traffic flow based on just an identification of the port and IP address. However, according to an aspect of the present disclosure, a traffic flow may be determined to be latency-sensitive and/or a degree of latency sensitivity may be determined for a traffic flow based at least in part on explicit signalling of an identification of the application/service associated with the traffic flow, which may correspond with a learned data transfer pattern and/or a traffic classification. In some examples, the data collector216may be configured to request and collect data obtained by the router112as part of receiving and routing traffic between the access network104and devices110connected to the LAN106. In other examples, the data collector216may be configured to apply one or more device discovery protocols according to characteristics of particular communication networks, devices and/or interfaces. Some example device discovery protocols may include a Dynamic Host Configuration Protocol (DHCP) Server discovery protocol, a DHCP version 6 (v6) Server discovery protocol, an Internet Protocol (IP)v6 Neighbor discovery protocol, an IPv6 Router advertisements discovery protocol, an Address Resolution Protocol (ARP) discovery protocol, a Simple Service Discovery Protocol (SSDP), a Universal Plug and Play (UPnP), a MoCA® controller discovery protocol, and/or Digital Living Network Alliance (DLNA) discovery protocol, a multicast Domain Name System (mDNS) discovery protocol, a Wi-Fi® Access Point discovery protocol, etc. An order, usage frequency, select protocol combinations, times, and/or intervals of protocol applications may be configurable and implemented in various ways. 
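Returning briefly to the explicit DSCP signal described above, a collector or classifier can derive a flow key that combines the address/port information with that value read directly from the IP header. The sketch below parses a raw IPv4 packet; it is illustrative only and assumes a well-formed header and a TCP or UDP payload.

```python
import socket
import struct


def flow_key_from_ipv4(packet: bytes):
    """Return (src, dst, protocol, sport, dport, dscp) for a raw IPv4 packet.

    Minimal sketch: no option or length validation beyond the IHL field, and
    ports are only extracted for TCP (protocol 6) and UDP (protocol 17).
    """
    version_ihl, tos = packet[0], packet[1]
    ihl = (version_ihl & 0x0F) * 4          # IPv4 header length, in bytes
    dscp = tos >> 2                         # DSCP is the top 6 bits of the TOS byte
    protocol = packet[9]
    src = socket.inet_ntoa(packet[12:16])
    dst = socket.inet_ntoa(packet[16:20])
    sport = dport = None
    if protocol in (6, 17):                 # TCP or UDP
        sport, dport = struct.unpack("!HH", packet[ihl:ihl + 4])
    return (src, dst, protocol, sport, dport, dscp)
```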
As should be appreciated, different and new types of discovery protocols can be implemented according to the particular hardware and communication architecture utilized in the service provider access network104, the LAN106, and/or other network and are within the scope of the present disclosure. Data collected by the data collector216and from an execution of any corresponding device discovery protocols may be stored in the data store222. For example, the data store222may store discovery information collected and/or determined by one or more components of the IBP manager214. All or a portion of the information may be obfuscated (e.g., using cryptographic hashing) to maintain privacy and/or compliance with any applicable privacy laws. In some cases, the discovery information associated with devices110can be deleted from the data store222. According to an aspect, the collected data may be monitored and measured. The collected data may be tracked over time and stored in the data store222. In some examples, the optimizer220and/or one or more other components of the IBP manager214and/or cloud controller208are configured as or are operatively connected to a machine learning engine that uses machine learning capabilities to analyze the collected data for generating a profile about the subscriber108including information about the LAN106and the devices110communicating on the LAN106. For example, a subscriber profile may comprise a data-based representation or model of the data transmission capabilities and data usages, and data transfer patterns/behaviors associated with the subscriber108network. The optimizer220may be configured to analyze the subscriber profile in view of business rules and implemented prioritization policies for identifying where bottlenecks and other network performance issues may exist on the LAN106, such as congestions, crowded channels, bandwidth being allocated to lower priority devices110and affecting performance of higher priority devices, etc. The optimizer220is illustrative of a software application, module, or computing device operative or configured to use machine learning capabilities to learn about the LAN106and the devices110that communicate on the LAN in order to determine criteria and priority values for automatically classifying traffic flows and to determine bandwidth allocation rules, optimizations, and other traffic management instructions based on known and learned information that can be used by the router112to optimize performance of data flows. For example, the optimizer220may be configured to identify various parameters and attributes associated with bottlenecks and other network performance issues such that the conditions may be corrected or modified to improve performance of data flows on the LAN106and the quality of experience from a user perspective. In some examples, the optimizer220may be configured to access and analyze the rules/policies used by the priority scheduler224to prioritize traffic and control how QoS is handled on the LAN106. The optimizer220may be further configured to create and apply one or more additional or alternative traffic prioritization policies that may be used by the priority scheduler224to improve network performance and user experience. 
For example, network performance may be improved by utilizing the optimizer220to dynamically create and configure traffic prioritization policies that define different traffic classification types that a data flow can be classified as and different optimizations that can be performed to improve network performance and user experience. Each traffic classification type may have an associated ranked priority value, different traffic classification criteria, an assigned queue having different queue parameters (e.g., reserved bandwidth amount, buffer size, ranking amongst other traffic classifications), etc. The traffic classifications may be based on business rules, user-defined rules, and the machine learned subscriber profile. That is, the classifications, classification criteria, mappings of the classifications to queues, queue parameters, queue scheduling, and optimizations may be customized for the subscriber LAN106based on the learned data transmission capabilities, data usages, and data transfer patterns/behaviors associated with the subscriber's network, user preferences, and business rules. In some examples, machine-learning may be utilized to empirically learn historical traffic patterns and an association between these patterns and IP address and port combinations and/or an explicit signal (e.g., indication of the application/service associated with the communication) provided by a device110. A traffic pattern and an associated IP address/port combination and/or explicit signal may be used to create an inference between an application/service associated with the IP address/port combination and/or explicit signal and a classification type. As an example, the optimizer220may learn a particular traffic pattern and an association between the traffic pattern and a particular IP address/port combination or a particular explicit signal. The traffic pattern may have a learned or assigned traffic classification type, wherein the traffic classification type may be associated with a particular queue having a priority ranking that may be used to control one or a combination of: order, transmission rate, bandwidth allocation, buffer size, scheduling, and/or other parameter of a traffic flow classified as the traffic classification type and assigned to the queue. As such, when the particular IP address/port combination and/or explicit signal is identified in association with a traffic flow, the traffic flow may be classified as the particular traffic classification type, assigned to the particular queue, and prioritized based at least in part on an identification of its IP address/port combination and/or explicit signal. One example traffic pattern may include a data transfer pattern comprised of short but fast downloads of video segments at relatively long intervals (e.g., every 30 seconds). In contrast and as another example, an online game session may have a data transfer pattern comprised of small payloads (e.g., <1000 bytes) transmitted at shorter intervals (e.g., every 10 ms). In contrast and as another example, a telemedicine session, where high quality video for treatment may be desired, may have a data transfer pattern comprised of larger payloads transmitted at shorter intervals. As can be appreciated additional and/or alternative traffic patterns learned in association with a particular device110, explicit signal, application/service, server, user, or combination thereof are possible and are within the scope of the present disclosure. 
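The learned associations between data transfer patterns and classification types can be approximated, purely for illustration, by simple statistics over a flow's recent samples. The thresholds below are placeholders loosely based on the examples above (roughly 10 ms cadence for gaming, segment fetches tens of seconds apart for streaming); a deployed optimizer would learn such boundaries from historical data rather than hard-code them.

```python
from dataclasses import dataclass
from statistics import mean
from typing import List


@dataclass
class FlowSample:
    timestamp: float      # seconds
    payload_bytes: int


def infer_pattern(samples: List[FlowSample]) -> str:
    """Very rough pattern inference based on payload size and inter-arrival time."""
    if len(samples) < 2:
        return "unknown"
    intervals = [b.timestamp - a.timestamp
                 for a, b in zip(samples, samples[1:])]
    avg_interval = mean(intervals)
    avg_payload = mean(s.payload_bytes for s in samples)

    if avg_payload < 1000 and avg_interval < 0.05:
        return "online-gaming"            # small payloads, ~10 ms cadence
    if avg_payload > 100_000 and avg_interval > 10:
        return "segmented-video-stream"   # bursty segment downloads
    if avg_payload > 10_000 and avg_interval < 0.1:
        return "real-time-video"          # e.g., telemedicine-style traffic
    return "unknown"
```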
The router112may be configured to support multiple traffic classification types and classification-based queuing. For example, the optimizer220may be configured to define and apply one or more traffic prioritization policies that may be used by the classifier218to classify a traffic flow into a traffic classification. For example, a traffic classification of a traffic flow can be compared against the traffic classification of another traffic flow for enabling the router112to weigh the relative importance of one packet against another. The classifier218is illustrative of a software application, module, or computing device operative or configured to analyze traffic packets for determining a classification for a traffic flow according to a prioritization policy comprising one or more classification rules. In some examples, the classifier218may be stateful so that it can handle IP fragmentation correctly. Queues may be created for different traffic classifications, wherein the queues may have different parameters/attributes that define how the queue output may be configured and scheduled. In some examples, a queue may be a dedicated portion of memory. The classification of the packet may be used to determine how the packet is queued and handled along its path to its destination on the LAN106. Non-limiting examples of collected data that may be used as traffic classification criteria may include information about the device110, application/service, data type, user, whether the device is attended or unattended, and about the status of the network (e.g., router speed, channels and capacities, number of connected devices110, available bandwidth, signal strengths). As an example, a traffic prioritization policy may define a plurality of ranked traffic classifications and a set of parameters that may include various pieces of collected data to evaluate for classifying a traffic flow into the classification. As an example, a traffic classification may define a set of criteria, parameters, or conditions of a traffic flow (e.g., a particular application operating on a particular device110and in an attended state) that, when satisfied, may cause a traffic flow to be assigned to the traffic classification. The traffic flow may be marked with a designation of the traffic classification, which may be mapped to a queue configured with a particular priority value/level that may define parameters used to service the traffic flow. In some examples, the traffic classification may be used to control the flow of traffic by managing bandwidth allocations (e.g., limit amounts, maximum and committed rate parameters, allocated queue/memory buffer sizes, drop profiles, bandwidth dependency associations). For example, a traffic flow with a higher-ranked traffic classification may be assigned a larger amount of bandwidth of the total bandwidth available on the LAN106, while another traffic flow with a lower-ranked traffic classification may be assigned less bandwidth. Traffic classifications and queue characteristics may be dynamically implemented based on various conditions (e.g., time of day, day of week, current network conditions, amounts of certain types of traffic). For example, a set of prioritization policies may optimize traffic flows on the LAN106at certain times, while another set of prioritization policies may optimize traffic flows on the LAN at other times. The various types and conditions that may be associated with a set of prioritization policies may be learned by the machine learning engine. 
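A compact sketch of the classification-to-queue mapping follows; the classification names, priority ranks, reserved rates, and buffer depths are illustrative placeholders rather than parameters from the disclosure.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class QueueConfig:
    name: str
    priority_rank: int        # lower rank is serviced first
    reserved_mbps: float      # bandwidth reserved for this queue
    buffer_packets: int       # queue depth

# Illustrative mapping of traffic classification types to queues.
QUEUE_FOR_CLASS = {
    "real-time":   QueueConfig("rt",    priority_rank=0, reserved_mbps=20, buffer_packets=64),
    "interactive": QueueConfig("inter", priority_rank=1, reserved_mbps=10, buffer_packets=128),
    "bulk":        QueueConfig("bulk",  priority_rank=2, reserved_mbps=5,  buffer_packets=512),
}


def queue_for(classification: str) -> QueueConfig:
    """Return the queue a classified flow is assigned to (defaulting to bulk)."""
    return QUEUE_FOR_CLASS.get(classification, QUEUE_FOR_CLASS["bulk"])
```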
According to an implementation example, traffic may be classified based on a variety of criteria. In one example, the classifier218may be configured to classify packets based on an evaluation of one or various combinations of: source/destination IP address, protocol (e.g., TCP, UDP, etc.), source/destination port, DSCP (Differentiated Services Code Point) value (e.g., provided by a device110and that may indicate a particular application/service associated with the communication traffic), and source/destination MAC address. In some examples, additional criteria may be used as part of classifying traffic, such as whether the device is attended or unattended, the number of other devices110communicating on the LAN106, available communication channels, device signal strengths, etc. For example, the one or a combination of various criteria may be evaluated by the classifier218for classifying a traffic flow according to one or more classification rules. In some examples, at least some of the various criteria are associated with learned data transfer patterns. According to an implementation example, a criterion for a traffic classification may include a determination of a particular device110and/or device type receiving and/or sending the traffic. For example, the device110may be identified by a MAC address, an IP address, or other identifier. In some examples, a listing of devices110, associated identifiers, communication parameters, priority parameters, etc., may be stored in the data store222. As will be described in further detail below, in some examples, various traffic patterns of a device110may be learned and stored in association with the device110in the data store222. According to another implementation example, traffic may be classified based at least in part on a determination of whether the device110is in an attended versus unattended state. For example, sensor data may be captured and provided by a device110(e.g., with the user's permission), which can be evaluated and used to determine whether the device110is attended or unattended. An attended versus unattended state of a device110may be used as a factor in determining a priority classification of traffic flow to/from the device110(i.e., traffic classification), wherein an unattended device may be classified in a queue with lower priority than an attended device. Examples of sensor data may include microphone data, camera data, accelerometer data, etc. In some examples, device attendance states may be monitored by the classifier218. In other examples, device attendance may be monitored by a separate component that provides device attendance information to the classifier218. Other methods of determining an attended versus unattended status may be used and are within the scope of the present disclosure. In some examples, the various traffic patterns of a device110that may be learned and stored in association with the device110in the data store222may include device attendance state information. According to another implementation example, a criterion for a traffic classification may include a determination of a particular communication protocol. For example, a particular communication protocol may be used as a criterion evaluated by the classifier218for classifying a traffic flow according to one or more classification rules.
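The criteria enumerated above (protocol, ports, DSCP value, MAC address, attended state, and so on) can be expressed as ordered rules in which any unset field acts as a wildcard. The sketch below is a simplified stand-in for the classifier's rule evaluation; the class and field names are assumptions.

```python
from dataclasses import dataclass
from typing import Any, Dict, List, Optional


@dataclass
class ClassificationRule:
    classification: str
    protocol: Optional[int] = None        # e.g., 6 = TCP, 17 = UDP
    dst_port: Optional[int] = None
    dscp: Optional[int] = None
    device_mac: Optional[str] = None
    attended: Optional[bool] = None       # attended vs. unattended device

    def matches(self, flow: Dict[str, Any]) -> bool:
        # A criterion left as None is treated as "don't care".
        for attr in ("protocol", "dst_port", "dscp", "device_mac", "attended"):
            want = getattr(self, attr)
            if want is not None and flow.get(attr) != want:
                return False
        return True


def classify(flow: Dict[str, Any], rules: List[ClassificationRule],
             default: str = "best-effort") -> str:
    """Return the classification of the first matching rule (rules are ordered)."""
    for rule in rules:
        if rule.matches(flow):
            return rule.classification
    return default
```

For example, ClassificationRule("real-time", protocol=17, attended=True) matches attended UDP flows before the classifier falls through to a best-effort default.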
As an example, UDP (User Datagram Protocol) traffic, which is oftentimes used in time-sensitive communications where speed may be preferred over occasionally-dropped packets, may be identified and classified as non-queue-building, whereas TCP/IP (Transmission Control Protocol/Internet Protocol) traffic, which may be used by applications/services that require high reliability and where transmission time may be relatively less critical, may be identified and classified as queue-building. Examples of TCP applications/services may include video streaming applications/services, application downloads/updates, email applications, etc. Other traffic types that may be classified as non-queue-building based on a communication protocol may further include L4S (Low Latency, Low Loss, Scalable throughput) traffic. In some examples, the communication protocol may be identified based on information included in an IP header. According to another implementation example, traffic may be classified based at least in part on the application/service sending/receiving the traffic flow. As part of using the application/service as a classification parameter, the classifier218may be configured to identify the application/service. In some examples, an application/service endpoint (e.g., device110or server118) may apply a tag or marking, such as a value, to a traffic flow (e.g., in a DSCP field of the IPv4 or IPv6 packet header) that may be recognized by the classifier218. In some examples, the value may correspond with and indicate a particular application/service and/or server118, such that the classifier218is able to determine the application/service and/or server associated with the traffic flow. In other examples, the value may correspond with and indicate whether the traffic is queue-building or non-queue-building. In other examples, the value may correspond with and indicate whether the traffic is related to a particular traffic classification. In some examples, the application/service may be identified based on an IP address and port combination that may be learned by the optimizer220by observing historical traffic patterns and IP address and port combinations. According to another implementation example, traffic may be classified based at least in part on business rules and user-input priority settings as classification criteria/rules. For example, a subscriber108or other user may be enabled to designate a priority setting for a particular data flow, device110, application/service, data type, explicit signal, etc., which may be used as a criterion when classifying a data flow. In some examples, a user-input priority setting may be included as a traffic classification type, wherein one or a combination of criteria for the classification type, priority value, and queue parameters may be set by the user or automatically determined and applied to the classification type. According to another implementation example, a criterion for a traffic classification may include a determination of whether a traffic flow is queue-building or non-queue-building. In one example, traffic determined to be queue-building may be classified into one classification mapped to a particular queue, while traffic determined to be non-queue-building may be classified into another classification mapped to another queue. In another example, traffic determined to be queue-building may meet one criterion of a set of criteria used to classify and map traffic flows into a particular queue. 
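A minimal sketch of a protocol- and marking-based queue-building hint follows, reflecting the UDP versus TCP treatment just described. The use of DSCP value 45 as a non-queue-building (NQB) marking is an assumption borrowed from proposed NQB signaling conventions; the values and rules actually applied by the classifier218may differ.

def queue_building_hint(protocol: str, dscp: int) -> str:
    # Heuristic only: NQB-marked or UDP traffic is treated as non-queue-building,
    # everything else (e.g., bulk TCP transfers) as queue-building.
    if dscp == 45:                    # assumed NQB marking
        return "non-queue-building"
    if protocol.upper() == "UDP":
        return "non-queue-building"
    return "queue-building"

print(queue_building_hint("TCP", 0))   # queue-building
print(queue_building_hint("UDP", 0))   # non-queue-building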
The classifier218may be configured to determine whether a traffic flow is queue-building or non-queue-building based on a set of classification rules defining classification criteria. In some examples, the optimizer220is configured to use machine learning to determine the set of criteria used to determine whether a traffic flow is queue-building or non-queue-building based on observed traffic/data transfer patterns. As an example, queue-building traffic flows may include latency-insensitive traffic that may be sent by an application or service at a relatively low data rate and/or in a smooth and consistent manner, such as downloads, file transfers, etc. As another example, non-queue-building traffic flows may include latency-sensitive traffic that may be characterized by smaller payloads transmitted at shorter intervals. For example, latency-sensitive traffic, which may result in glitches or unresponsive service at low data rates, may include traffic such as VoIP (Voice over Internet Protocol), video conferencing, online/cloud gaming, live streaming, DNS (Domain Name System) lookups, etc. The optimizer220may be configured to use machine learning techniques to analyze data collected by the data collector216and data from the classifier218, observe traffic patterns, identify traffic flows that may cause queue build-up, and learn attributes of the traffic flows and queue-building behaviors. In some examples, the optimizer220may include or be in communication with a flow analyzer configured to identify flows that may cause queue build-up. In some examples, the flow analyzer may utilize per-flow traffic statistics included in historical data collected and stored by the data collector216to identify whether a traffic flow may exceed the available capacity of the subscriber's link and/or LAN106, and thus may cause a queue to form. Attributes of the flow, the access network104link, and/or the LAN106may additionally be collected and stored by the data collector216and used by the optimizer220to learn associations between attributes and queue-building behavior, and store (e.g., in the data store222) the learned associations as classifier rules that can be used by the classifier218to classify traffic flows as queue-building, non-queue-building, and/or into other traffic classifications. As an example, attributes may include information about the device110, application/service used, port, IP address, protocol used, data type, date and time, network speed, available bandwidth, wireless band and channel used, signal strength, number of other devices110communicating on the LAN106, performance metrics, etc. That is, various sets of attributes may be learned that correlate with a queue-building, non-queue-building, or other type of traffic flow, wherein a particular traffic flow may be classified as queue-building, non-queue-building, or other traffic classification at different times and/or when different attributes are observed. In some examples, the optimizer220may be further configured to define and enable one or more optimizations that may be applied to a traffic flow based on a set of defined conditions/criteria to improve network performance. 
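Returning to the flow-analyzer check described above, the following sketch flags a flow as potentially queue-building when its offered rate, added to the rest of the observed traffic, would exceed the capacity of the subscriber's link. The function and parameter names are hypothetical simplifications of the per-flow statistics kept by the data collector216.

def may_build_queue(flow_rate_bps: float, other_flows_bps: float,
                    link_capacity_bps: float) -> bool:
    # A queue forms when the total offered load exceeds the available capacity.
    return flow_rate_bps + other_flows_bps > link_capacity_bps

print(may_build_queue(40e6, 70e6, 100e6))   # True: 110 Mbps offered > 100 Mbps capacity
print(may_build_queue(2e6, 30e6, 100e6))    # False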
Non-limiting examples of collected data that may be used as criteria for an optimization may include information about the device110(e.g., device type, screen size, application/service, data type, user, whether the device is attended or unattended) and about the status of the network (e.g., router speed, channels and capacities, number of connected devices110, available bandwidth, signal strengths). Non-limiting examples of optimizations may include actions such as a designation of a particular frequency channel and band (e.g., use a less-crowded higher frequency band for a latency-sensitive (e.g., non-queue-building) traffic flow and a lower frequency band for a traffic flow that is not latency-sensitive (e.g., queue-building)), a reduction of quality of a stream (e.g., reduce image quality of a video stream to a small-screen device110), assigning a traffic flow to be serviced by a particular access point, etc. As another example, an optimization may include an instruction to the access point212to shut down one antenna and increase strength to another antenna for increasing the signal strength and throughput to a particular device110. Other optimizations are possible and are within the scope of the present disclosure. In some examples, classification types, classification criteria/rules, priority values, bandwidth allocation rules, optimizations, and/or other traffic management instructions may be dynamically created and/or updated by the optimizer220. The classifications, classification criteria/rules, bandwidth allocation rules, optimizations, and/or other traffic management instructions may be stored in the data store222. With reference now toFIG.3, a data flow diagram300is illustrated showing a traffic prioritization and bandwidth allocation example. In the illustrated example, three devices110a-cmay be connected to a LAN106provided by a router112that includes or is connected to an IBP manager214. In some examples, each of the devices110a-cmay report information to the IBP manager214. In some examples, the devices110a-cmay report to the IBP manager214in response to a beacon or other request for response. In other examples, the devices110a-cmay be configured to report to the IBP manager214continually. The devices110a-cmay be known by the IBP manager214based on previous reporting data and communications with the router112, and may be stored in a database or table included in the data store222. In some examples, the access point212and any other access points on the LAN106may provide reports to the IBP manager214on information about the LAN106, such as a number of connected devices110, utilization (e.g., percentage of time) of a channel, available bandwidth capacity for a new device110, an MCS (Modulation and Coding Scheme) index value indicative of a data rate of a link (e.g., based on a number of spatial streams, modulation types, and coding rates), available channels, channel widths, signal strength values, etc. In some examples, information reported to the IBP manager214may be used in real time or near-real time to determine a current status of the LAN106and devices110connected to the LAN106. In other examples, information reported to the IBP manager214may be stored in the data store222and used to learn behaviors and usage profiles of the LAN106and the devices110connected to the LAN106that can be used to classify and prioritize traffic. 
As an example, a first device110amay be a mobile phone reporting that a particular video streaming application is being used on the device110ato request video streaming data. In an example aspect, the first device110amay further report that it is in an attended state, wherein the attended state may describe a state where the device110adetects the presence of a user. For example, the presence of the user may be detected via interaction with the first device110aand/or information detected via a camera, microphone, or other sensor included in or associated with the first device110a. A second device110bmay be a tablet device reporting that it is in a stand-by mode and is requesting a download of data as part of an automatic operating system or application update. The second device110bmay further report that it is in an unattended state, wherein the unattended state may describe a state where the device110bdoes not detect the presence of a user via user interaction with the device110bor via information detected via the camera, microphone, or other sensor. Past collected reporting data from the device110bmay further indicate that the device110bis rarely in an attended state, and accordingly, likelihood of active engagement with a user may be low. A third device110cmay be a laptop computer reporting that an online videoconferencing application is being used on the device110cto stream (e.g., upload and download) audio and video data. The third device110cmay further report that it is in an attended state based on detected user interaction and/or user presence. The access point212may additionally report information to the IBP manager214. Information that may be included in or determined from the reported information and/or past collected and analyzed data, may include information about the devices110a-c(e.g., number of devices, device types, screen sizes, processor and memory capabilities, signal strengths, capacities of links to the devices), bandwidth capacity, channel usages, etc. The IBP manager214may use the classifier218to classify the traffic flows302a-c(generally302) of the three devices110a-cinto one or more traffic classification types. The classifier218may access a prioritization policy stored in the data store222that includes classification rules that define various criteria to evaluate for determining traffic classifications for the traffic flows302. For example, the prioritization policy may define to classify packets based on one or a combination of: source/destination IP address, protocol, source/destination port, explicit signal (e.g., DiffServ field), and source/destination MAC address. In an illustrative example, the prioritization policy may include, as a criterion, to evaluate the attendance state of the devices110as part of classifying the traffic flows302. According to the example, based on an evaluation of the attended/unattended states of the devices110and other criteria, a determination may be made to classify the third traffic flow302cassociated with the third device110cas a first classification type, the first traffic flow302aassociated with the first device110aas a second classification type, and the second traffic flow302bassociated with the second device110bas a third classification type. The first classification type may have a corresponding priority value that may be ranked highest in priority amongst the classification types included in the prioritization policy. 
The second classification type may have a corresponding priority value that may be ranked below the first classification type, and the third classification type may have a corresponding priority value that may be ranked below the second classification type. Each classification type may be mapped to a different queue304a-c(generally304), wherein each queue may have different parameters assigned to it that define the queue's priority ranking, transmission rate, bandwidth allocation, buffer size, scheduling, etc. The classifier218may be configured to mark the traffic flows302with an indication of the classification type, which may be used to map/assign the traffic flows302to the correct queues304. In some examples, the classifier218may be further configured to mark one or more of the traffic flows302with an indication of one or more optimizations that may be determined for the traffic flows302to increase network performance. As an example, the video quality of video streaming data delivered to the first device110amay be reduced based on the screen size of the device110a. Additionally, the first traffic flow302amay be transmitted on a different band (than the third traffic flow302cand the second traffic flow302b) based on an evaluation of usages of the frequency bands/channels, the signal strength of the connection with the first device110a, etc. According to an aspect, the priority scheduler224may be configured to process the traffic flows302a-cbased on the queues304to which the traffic flows302are directed. For example, based on the traffic classifications determined by the classifier218, the traffic flows302may be prioritized and delivered (e.g., to/from an intended device110) according to priority, bandwidth allocation rules, optimizations, and other traffic management instructions. As described above, the traffic classifications, prioritizations, bandwidth allocation rules, optimizations, and other traffic management instructions may be created and implemented based on data collected about the subscriber LAN106and may be configured to optimize traffic flows on the LAN. Accordingly, the traffic flows302may be delivered with increased throughput and minimal lag, dropped packets, etc., for improved bandwidth allocation and better quality of service and experience. FIG.4is a flow chart depicting general stages of an example process or method400for generating prioritization policies for optimizing performance of data flows on a LAN106according to an embodiment. The method400starts at OPERATION402and proceeds to OPERATION404, where various information about the LAN106, the router112and access point(s)212, and about the various devices110connected to the LAN106may be collected and stored. According to an example, the data collector216may be used to collect information associated with available amounts and usages of bandwidth, available and utilized frequency bands and channels, a number of connected devices110, device identification (e.g., IP address, MAC address) and usage, application/service identification (e.g., explicit signal, IP address and port combination) and usage, queue-building versus non-queue-building traffic classification criteria, whether a device is being actively or passively used (i.e., attendance state), signal levels, network speeds, etc. In some examples, the devices110may send explicit signaling to the data collector216comprising information about the devices, network connection/performance information, application/service usage, and other usage data. 
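As a simplified illustration of how the priority scheduler224described above might service the queues that classification types map to, the sketch below always drains the highest-ranked non-empty queue first. The queue names, rankings, and strict-priority discipline are assumptions for illustration; a production scheduler would typically combine priority with the per-queue rate and buffer limits mentioned above so that lower-ranked queues are not starved.

from collections import deque

class PriorityScheduler:
    def __init__(self, priorities: dict):
        # priorities: queue name -> rank (lower rank is served first)
        self.priorities = priorities
        self.queues = {name: deque() for name in priorities}

    def enqueue(self, queue_name: str, packet) -> None:
        self.queues[queue_name].append(packet)

    def dequeue(self):
        # Serve the highest-ranked (lowest-numbered) non-empty queue first.
        for name in sorted(self.queues, key=lambda n: self.priorities[n]):
            if self.queues[name]:
                return name, self.queues[name].popleft()
        return None

sched = PriorityScheduler({"conferencing": 0, "streaming": 1, "background": 2})
sched.enqueue("background", "os-update-chunk")
sched.enqueue("conferencing", "rtp-audio")
print(sched.dequeue())   # ('conferencing', 'rtp-audio') is served before the update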
In other examples, the data collector216may be configured to request and collect data obtained by the router112as part of receiving and routing traffic between the access network104and devices110connected to the LAN106. At OPERATION406, one or more machine learning algorithms may be applied to the collected information, and at OPERATION408, one or more prioritization policies may be determined. For example, the cloud controller208and/or IBP manager214may utilize machine learning to observe traffic patterns and learn data usage/transfer behaviors (e.g., usage of a particular delivery network, amount of usage, time of usage, payload sizes, transmission intervals, etc.). For example, network configuration data and usage/transfer data may be stored in the data store222, and machine learning algorithms may be used to analyze the stored data for learning the network topologies and data usage/transfer patterns and behaviors (e.g., of a subscriber, a plurality of subscribers on a same node206, different devices110connected to a LAN106, different applications/services, different IP address and node combinations, etc.). Accordingly, the cloud controller208and/or IBP manager214may be enabled to select a prioritization policy to apply in different scenarios based at least in part on machine-learned network information, data usage/transfer patterns and behaviors, various business rules, etc. According to an aspect, a prioritization policy may include a plurality of classification types, wherein each classification type may have a corresponding priority value and queue304that may be used to prioritize traffic streams. Each traffic classification type may correspond with a set of criteria, such that when attributes of a traffic flow satisfy a set of criteria, the traffic flow may be associated with a particular classification and mapped to a particular queue304. Additionally, a prioritization policy may further include criteria corresponding to various bandwidth allocation rules and other traffic management instructions, such that when attributes of a traffic flow satisfy a set of criteria, one or more bandwidth allocation rules or other traffic management instructions may be applied to the traffic flow. Further, various criteria/conditions for various optimization actions may be determined. At OPERATION410, the one or more prioritization policies and optimizations may be stored in the data store222where they can be accessed by the router112to prioritize traffic. The process400ends at OPERATION498. FIG.5is a flow chart depicting general stages of an example process or method500for implementing a prioritization policy for optimizing performance of data flows on a LAN106according to an embodiment. The method500starts at OPERATION502and proceeds to OPERATION504, where a prioritization policy may be selected for use by the router112. In some examples, the prioritization policy may be selected based on one or more attributes/factors of the LAN106. For example, the state of the LAN106may be monitored, and depending on the number of devices110connected to the LAN106, which devices110may be connected to the LAN, current activities/usages of the devices110, etc., a particular prioritization policy may be selected. The LAN106attributes/factors that correspond with a particular prioritization policy may be based on machine-learned network and device utilization and behavior information. 
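To make the policy-selection step more concrete, the following sketch shows one hypothetical way the monitored LAN state could be mapped to a stored prioritization policy. The thresholds and policy names are invented; in the disclosure this mapping would come from machine-learned utilization and behavior information rather than hand-written rules.

def select_policy(lan_state: dict) -> str:
    # Hand-written stand-in for a learned mapping from LAN conditions to a policy.
    if lan_state["device_count"] > 8 and lan_state["hour"] >= 18:
        return "evening-busy"
    if lan_state.get("gaming_active"):
        return "low-latency"
    return "default"

print(select_policy({"device_count": 10, "hour": 20}))                        # evening-busy
print(select_policy({"device_count": 3, "hour": 10, "gaming_active": True}))  # low-latency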
At OPERATION506, a traffic flow302may be received by the router112, and at OPERATION508, the classifier218may be used to evaluate various criteria to classify the traffic flow302according to a traffic classification type included in the prioritization policy. The determined traffic classification type may correspond with a priority value, and the traffic flow302may be marked with an indication of the determined traffic classification type and/or priority value. At OPTIONAL OPERATION510, one or more optimizations may be determined to improve performance of the data flow302(and/or higher prioritized data flows), and may be applied to the traffic flow302. At OPERATION512, the data flow302may be assigned to a queue corresponding to the determined traffic classification type, wherein the queue may be assigned with a certain amount of bandwidth, memory, ranking, transmission rate, etc. The data flow302may be processed based on the queue parameters and delivered to its intended device110. At OPERATION514, the traffic on the LAN106may be monitored. For example, various information about the LAN106, the router112and access point(s)212, and about the various devices110connected to the LAN106may be collected and monitored. According to an example, the data collector216may be used to collect information associated with available amounts and usages of bandwidth, available and utilized frequency bands and channels, a number of connected devices110, device identification and usage, application/service identification, queue-building versus non-queue-building traffic classification criteria, whether a device is being actively or passively used, signal levels, network speeds, etc. At OPERATION516, a determination may be made as to whether to modify the prioritization policy to another prioritization policy or to continue using the current prioritization policy. For example, the determination may be based on a determination of how the network is performing based on how traffic prioritizations and associated bandwidth allocations are affecting throughput and application/service performance. As an example, if increased lag is detected, a determination may be made to return to OPERATION504to select and implement another prioritization policy. The method500may end at OPERATION598. FIG.6is a block diagram illustrating example physical components of a computing device600or system with which embodiments may be practiced. It should be appreciated that in other embodiments, different hardware components other than those illustrated in the example ofFIG.6may be used. Computing devices may be implemented in different ways in different embodiments. For instance, in the example ofFIG.6, the computing device600includes a processing system604, memory602, a network interface card606(wired and/or wireless), a secondary storage device608, an input device610, a video interface612, a display unit614, and a communication medium617. In other embodiments, the computing device600may be implemented using more or fewer hardware components (e.g., a video interface, a display unit, or an input device) or in combination with other types of computer systems626and program modules. The memory602includes one or more computer-readable storage media capable of storing data and/or computer-executable instructions. 
Memory602may store the computer-executable instructions that, when executed by a processor or processing unit of the processing system604, cause operations, such as the operations described above with respect toFIGS.4and5, to provide intelligent bandwidth prioritization and allocation. In various embodiments, the memory602is implemented in various ways. For example, the memory602can be implemented as various types of computer-readable storage media. Example types of computer-readable storage media include, but are not limited to, solid state memory, flash memory, dynamic random access memory (DRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), DDR2 SDRAM, DDR3 SDRAM, read-only memory (ROM), reduced latency DRAM, electrically-erasable programmable ROM (EEPROM), and other types of devices and/or articles of manufacture that store data. The term computer-readable storage medium may also refer to devices or articles of manufacture that store data and/or computer-executable instructions readable by a computing device. The term computer-readable storage media encompasses volatile and non-volatile, removable and non-removable media implemented in various methods or technologies for storage and retrieval of information. Such information can include data structures, program modules, computer-executable instructions, or other data. The processing system604includes one or more processing units (e.g., one or more processors), which may include tangible integrated circuits that selectively execute computer-executable instructions. In various embodiments, the processing units in the processing system604are implemented in various ways. For example, the processing units in the processing system604can be implemented as one or more processing cores. In this example, the processing system604can comprise one or more microprocessors. In another example, the processing system604can comprise one or more separate microprocessors. In yet another example embodiment, the processing system604can comprise Application-Specific Integrated Circuits (ASICs) that provide specific functionality. In yet another example, the processing system604provides specific functionality by using an ASIC and by executing computer-executable instructions. The computing device600may be enabled to send data to and receive data from a communication network via a network interface card606. In different embodiments, the network interface card606is implemented in different ways, such as an Ethernet interface, a token-ring network interface, a fiber optic network interface, a wireless network interface (e.g., Wi-Fi®, Wi-Max, etc.), or another type of network interface. The network interface may allow the device to communicate with other devices, such as over a wireless network in a distributed computing environment, a satellite link, a cellular link, and comparable mechanisms. Other devices may include computer device(s) that execute communication applications, storage servers, and comparable devices. The secondary storage device608includes one or more computer-readable storage media, and may store data and computer-executable instructions not directly accessible by the processing system604. That is, the processing system604performs an I/O operation to retrieve data and/or computer-executable instructions from the secondary storage device608. 
In various embodiments, the secondary storage device608can be implemented as various types of computer-readable storage media, such as by one or more magnetic disks, magnetic tape drives, CD-ROM discs, DVD-ROM discs, BLU-RAY discs, solid state memory devices, and/or other types of computer-readable storage media. The input device610enables the computing device600to receive input from a user. Example types of input devices include, but are not limited to, keyboards, mice, trackballs, stylus input devices, key pads, microphones, joysticks, touch-sensitive display screens, and other types of devices that provide user input to the computing device600. The video interface612outputs video information to the display unit614. In different embodiments, the video interface612is implemented in different ways. For example, the video interface612is a video expansion card. In another example, the video interface612is integrated into a motherboard of the computing device600. In various embodiments, the display unit614can be an LCD display panel, a touch-sensitive display panel, an LED screen, a projector, a cathode-ray tube display, or another type of display unit. In various embodiments, the video interface612communicates with the display unit614in various ways. For example, the video interface612can communicate with the display unit614via a Universal Serial Bus (USB) connector, a VGA connector, a digital visual interface (DVI) connector, an S-Video connector, a High-Definition Multimedia Interface (HDMI) interface, a DisplayPort connector, or another type of connection. The communications medium617facilitates communication among the hardware components of the computing device600. In different embodiments, the communications medium617facilitates communication among different components of the computing device600. For instance, in the example ofFIG.6, the communications medium617facilitates communication among the memory602, the processing system604, the network interface card606, the secondary storage device608, the input device610, and the video interface612. In different embodiments, the communications medium617is implemented in different ways, such as a PCI bus, a PCI Express bus, an accelerated graphics port (AGP) bus, an InfiniBand® interconnect, a serial Advanced Technology Attachment (ATA) interconnect, a parallel ATA interconnect, a Fibre Channel interconnect, a USB bus, a Small Computer System Interface (SCSI) interface, or another type of communications medium. The memory602stores various types of data and/or software instructions. For instance, in the example ofFIG.6, the memory602stores a Basic Input/Output System (BIOS)618, and an operating system620. The BIOS618includes a set of software instructions that, when executed by the processing system604, cause the computing device600to boot up. The operating system620includes a set of software instructions that, when executed by the processing system604, cause the computing device600to provide an operating system that coordinates the activities and sharing of resources of the computing device600. The memory602also stores one or more application programs or program code622that, when executed by the processing system604, cause the computing device600to provide applications to users. The memory602also stores one or more utility programs624that, when executed by the processing system604, cause the computing device600to provide utilities to other software programs. 
Embodiments may be used in combination with any number of computer systems, such as in server environments, desktop environments, laptop or notebook computer systems, multiprocessor systems, micro-processor based or programmable consumer electronics, networked PCs, mini computers, main frame computers and the like. Embodiments may be utilized in various distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network in a distributed computing environment, and where program code may be located in local and/or remote memory storage (e.g., memory and/or disk(s)). All system components described herein may be communicatively coupled via any method of network connection known in the art or developed in the future including, but not limited to wired, wireless, modem, dial-up, satellite, cable modem, Digital Subscriber Line (DSL), Asymmetric Digital Subscriber Line (ADSL), Virtual Private Network (VPN), Integrated Services Digital Network (ISDN), X.25, Ethernet, token ring, Fiber Distributed Data Interface (FDDI), IP over Asynchronous Transfer Mode (ATM), Infrared Data Association (IrDA), wireless, WAN technologies (T1, Frame Relay), Point-to-Point Protocol over Ethernet (PPPoE), etc. including any combination thereof. FIG.7is a block diagram illustrating a cable television services system700(hereafter referred to as “CATV”) architecture providing an operating environment according to an aspect. According to aspects, the service provider102may operate in the form of a CATV system700as illustrated and described inFIG.7. As should be appreciated, a CATV services system700is but one of various types of systems that can be utilized for providing an operating environment for providing supplemental content functionality described herein. Referring now toFIG.7, digital and analog video programming, information content and interactive television services are provided via an HFC network702to a television set716for consumption by a cable television/services system customer. As is known to those skilled in the art, HFC networks702combine both fiber optic cable lines and coaxial cable lines. Typically, fiber optic cable lines run from the cable head end720to neighborhoods of subscribers. Coaxial cable lines run from the optical fiber feeders to each customer or subscriber. The CATV system700is in the form of a distributed client-server computing system for providing video and data flow across the HFC network702between server-side services providers (e.g., cable television/services providers) via a server-side head end720and a client-side customer via a set-top box (STB)718functionally connected to a customer receiving device, such as the television set716. The functionality of the HFC network702allows for efficient bidirectional data flow between the set-top box718and an application server740of the server-side head end720. As is understood by those skilled in the art, modern CATV systems700can provide a variety of services across the HFC network702including traditional digital and analog video programming, telephone services, high speed Internet access, video-on-demand, and other services. On the client side of the CATV system700, digital and analog video programming and digital and analog data are provided to the customer television set716via the STB718. Interactive television services that allow a customer to input data to the CATV system700likewise are provided by the STB718. 
As illustrated inFIG.7, the STB718is a multipurpose computing device having a computer processor, memory, and an input/output mechanism. The input/output mechanism receives input from server-side processes via the HFC network702and from customers via input devices such as a remote control device728, keyboard730, or other computing device, such as a tablet/slate computer, smart phone, etc. The remote control device728and the keyboard730can communicate with the STB718via a suitable communication transport such as the infrared connection732. The remote control device728can include a biometric input module729. The STB718also includes a video processor for processing and providing digital and analog video signaling to the television set716via a cable communication transport734. A multi-channel tuner is provided for processing video and data to and from the STB718and the server-side head end720, described below. The STB718also includes an operating system722for directing the functions of the STB718in conjunction with a variety of client applications725. For example, if a client application725requires a news flash from a third-party news source to be displayed on the television716, the operating system722can cause the graphics functionality and video processor of the STB718, for example, to output the news flash to the television716at the direction of the client application725responsible for displaying news items. Because a variety of different operating systems722can be utilized by a variety of different brands and types of set-top boxes718, a middleware layer724can be provided to allow a given software application to be executed by a variety of different operating systems. According to an embodiment, the middleware layer724can include a set of application programming interfaces (APIs) that are exposed to client applications and operating systems722that allow client applications725to communicate with the operating systems722through common data calls understood via the API set. As described below, a corresponding middleware layer742is included on the server side of the CATV system700for facilitating communication between the server-side application server and the client-side STB718. The middleware layer742of the server-side application server and the middleware layer724of the client-side STB718can format data passed between the client side and server side according to the Extensible Markup Language (XML). According to one aspect, the STB718passes digital and analog video and data signaling to the television716via a one-way communication transport734. According to other aspects, two-way communication transports can be utilized, for example, via High-Definition Multimedia Interface (HDMI) ports. The STB718can receive video and data from the server side of the CATV system700via the HFC network702through a video/data downlink and data via a data downlink. The STB718can transmit data from the client side of the CATV system700to the server side of the CATV system700via the HFC network702via a data uplink. The video/data downlink is an “in band” downlink that allows for digital and analog video and data signaling from the server side of the CATV system700through the HFC network702to the STB718for use by the STB718and for distribution to the television set716. As is understood by those skilled in the art, the “in band” signaling space operates at a relatively high frequency, e.g., between 54 and 1000 megahertz. 
The signaling space is generally divided into 6 megahertz channels, in each of which a single analog signal or a greater number (e.g., ten) of digital signals can be transmitted. The data downlink and the data uplink, illustrated inFIG.7, between the HFC network702and the set-top box718comprise “out of band” data links. As is understood by those skilled in the art, the “out of band” frequency range is generally at a lower frequency than “in band” signaling. For example, the “out of band” frequency range can be between zero and 54 megahertz. Data flow between the STB718and the server-side application server740is typically passed through the “out of band” data links. Alternatively, an “in band” data carousel can be positioned in an “in band” channel into which a data feed can be processed from the application server740through the HFC network702to the STB718. Operation of data transport between components of the CATV system700, described with reference toFIG.7, is well known to those skilled in the art. Referring still toFIG.7, the head end720of the CATV system700is positioned on the server side of the CATV system and includes hardware and software systems responsible for originating and managing content for distribution through the HFC network702to client-side STBs718for presentation to customers. As described above, a number of services can be provided by the CATV system700, including digital and analog video programming, interactive television services, telephone services, video-on-demand services, targeted advertising, and/or provision of supplemental content. The application server740can be configured as a computing system operative to assemble and manage data sent to and received from the STB718via the HFC network702. As described above, the application server740includes a middleware layer742for processing and preparing data from the head end720of the CATV system700for receipt and use by the client-side STB718. For example, the application server740via the middleware layer742can obtain supplemental content from third-party services746via the Internet744for transmitting to a customer through the HFC network702, the STB718, and recording by a local or remote DVR. For example, content metadata from a third-party content provider service can be downloaded by the application server740via the Internet744. When the application server740receives the downloaded content metadata, the middleware layer742can be utilized to format the content metadata for receipt and use by the STB718. Therefore, content metadata can be sent and categorized based on the availability to the customer's program guide data. According to one embodiment, data obtained and managed by the middleware layer742of the application server740is formatted according to the Extensible Markup Language and is passed to the STB718through the HFC network702where the XML-formatted data can be utilized by a client application725in concert with the middleware layer724, as described above. As should be appreciated by those skilled in the art, a variety of third-party services data746, including news data, weather data, sports data and other information content can be obtained by the application server740via distributed computing environments such as the Internet744for provision to customers via the HFC network702and the STB718. Additionally, the application server740may receive data via the Internet744. 
According to aspects, the application server740obtains customer support services data, including billing data, information on customer work order status, answers to frequently asked questions, services provider contact information, and the like from data services726for provision to the customer via an interactive television session. The data services726include a number of services operated by the services provider of the CATV system700which can include profile and other data associated with a given customer. A billing system762can include information such as a customer's name, street address, business identification number, Social Security number, credit history, and information regarding services and products subscribed to by the customer. According to embodiments, the billing system762can also include billing data for services and products subscribed to by the customer for bill processing, billing presentment and payment receipt. An authentication system766can include information such as secure user names, subscriber profiles, subscriber IDs, and passwords utilized by customers for access to network services. A customer information database768can include general information about customers such as place of employment, business address, business telephone number, and demographic information such as age, gender, educational level, and the like. The customer information database768can also include information on pending work orders for services or products ordered by the customer. The customer information database768can also include general customer information such as answers to frequently asked customer questions and contact information for various service provider offices/departments. As should be understood, this information can be stored in a variety of disparate databases operated by the cable services provider. Referring still toFIG.7, web services system750is illustrated between the application server740and the data services726. According to aspects, web services system750serves as a collection point for data requested from each of the disparate data services systems comprising the data services726. According to aspects, when the application server740requires customer services data from one or more of the data services726, the application server740passes a data query to the web services system750. The web services system750formulates a data query to each of the available data services systems for obtaining any required data for a requesting customer as identified by a set-top box identification associated with the customer. The web services system750serves as an abstraction layer between the various data services systems and the application server740. That is, the application server740is not required to communicate with the disparate data services systems, nor is the application server740required to understand the data structures or data types utilized by the disparate data services systems. The web services system750is operative to communicate with each of the disparate data services systems for obtaining necessary customer data. The customer data obtained by the web services system is assembled and is returned to the application server740for ultimate processing via the middleware layer742, as described above. As should be understood by those skilled in the art, the disparate systems750,762,766,768can be integrated or provided in any combination of separate systems, whereinFIG.7shows only one example. 
Aspects, for example, are described above with reference to block diagrams and/or operational illustrations of methods, systems, and computer program products according to embodiments. The functions/acts noted in the blocks can occur out of the order as shown in any flowchart or described herein. For example, two processes shown or described in succession can in fact be executed substantially concurrently or the blocks can sometimes be executed in the reverse order, depending upon the functionality/acts involved. While certain embodiments have been described, other embodiments may exist. Furthermore, although embodiments have been described as being associated with data stored in memory and other storage mediums, data may also be stored on or read from other types of computer-readable storage media. Further, the disclosed processes may be modified in any manner, including by reordering and/or inserting or deleting a step or process, without departing from the embodiments. The foregoing description has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the invention be limited not with this detailed description, but rather by the claims appended hereto.
DESCRIPTION OF THE EXAMPLES Reference will now be made in detail to the present examples, including examples illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. FIGS.1and2provide illustrations of analytics systems that can carry out the various methods described herein. These drawings and their accompanying descriptions lay out the basic framework of the relevant systems and provide the relevant context for understanding the remaining disclosures herein.FIGS.3and4relate to a specific implementation of an analytics system, particularly with respect to core sets, core set managers, and methods for phasing in and out core set managers and their associated core sets.FIG.5relates to another implementation of an analytics system, specifically relating to streaming data clustering such that a data stream can be separated into different clusters.FIGS.6A and6Billustrate example diagrams of multi-cluster datasets and associated trend lines.FIG.7illustrates an example GUI for interacting with one or more of the systems and methods described herein. FIG.1provides an illustration of an analytics system for providing capacity forecasting for high-usage periods of a computing infrastructure. Although this disclosure mentions capacity forecasting for “high-usage” periods, the disclosed capacity forecasting can be applied to various subsets of usage, such as high, medium, or low usage levels. In some examples, the usage levels can be subdivided into ten, twenty, or any other number of levels, and capacity forecasting can be provided for any of those levels or combinations of them. As such, the phrasing regarding “forecasting for high-usage periods” is not intended to limit the examples to only that category of forecasting. FIG.1shows an architecture of an example analytics management system100that can execute on one or more servers. The analytics management system100includes an analytics services manager102and multiple metric processors104,106,108. The analytics management system100receives streams of metric data represented by directional arrows, such as directional arrow112. The analytics management system100enables a user to create one or more metric processors104,106,108from configurable performance models described below and assigns to each metric processor104,106,108one or more streams of metric data. Each metric processor104,106,108is registered with a registration key that the analytics services manager102uses to route one or more streams of metric data to a corresponding metric processor104,106,108. The analytics services manager102can execute on one or more servers of the analytics management system100. The metric processors104,106,108can likewise execute on the server(s). The analytics management system100can copy each stream of metric data to a database110to create a history for each metric. For example, the history stored in the database110can be used to construct core sets for a predictive model as described herein. A core set can be a set of data that approximates a larger data set. In some examples, the core set can be a fixed size and can be updated as new data is added to the overall data set, such that the core set can evolve and remain an accurate approximation of the larger data set. Each metric processor104,106,108can generate forecast metric data, such as by implementing one or more of the predictive models described herein. 
The metric processors104,106,108can also detect anomalous behavior, and provide information and recommendations to a user, such as a data center client114,116,118, application owner, or an IT administrator. The user can elect to take remedial measures or make other changes, which can be carried out by the analytics management system100. In some examples, the analytics management system100can automatically perform remedial measures in response to a notification that anomalous behavior has been detected. FIG.2shows an example of a virtualization layer202located above a physical data center204. For the sake of illustration, the virtualization layer202is separated from the physical data center204by a virtual-interface plane208. The physical data center204is an example of a distributed computing system or computing infrastructure. The physical data center204comprises physical objects, including a management server computer216, any of various computers, such as PC218, on which a virtual-data-center (“VDC”) management interface can be displayed to system administrators and other users, server computers, such as server computers230,232,234,236,238,240,242,244, data-storage devices, and network devices. The server computers230-244can be networked together to form networks within the data center204. The example physical data center204includes three networks that each directly interconnects a bank of eight server computers and a mass-storage array. For example, network220interconnects server computers230-244and a mass-storage array214. Different physical data centers can include many different types of computers, networks, data-storage systems, and devices connected according to many different types of connection topologies. The virtualization layer202includes virtual objects, such as VMs, applications, and containers, hosted by the server computers in the physical data center204. The virtualization layer202can also include a virtual network (not illustrated) of virtual switches, routers, load balancers, and network interface cards formed from the physical switches, routers, and network interface cards of the physical data center204. In some examples, server computers can host VMs and containers. For example, server computer234hosts two containers224, server computer246hosts four VMs222, and server computer248hosts one VM226. According to some examples, server computers can host applications. For example, server computer242hosts four applications228. The virtual-interface plane208abstracts the resources of the physical data center204to one or more VDCs comprising the virtual objects and one or more virtual data stores, such as virtual data stores210and212. For example, one VDC can comprise VMs222and virtual data store210. As used herein, the term “object” can refer to a physical object or a virtual object which generates streams of metric data associated with components of a computing infrastructure such as the one shown inFIG.2. A physical object can be a server computer, network device, workstation, desktop computer, laptop computer, or tablet of a distributed computing system, for example. A virtual object can be an application, a VM, a virtual network device, or a container of a distributed computing system. 
The term “resource” can refer to a physical resource of a distributed computing system, such as, but not limited to, a processor, core, memory, network connection, network interface, data-storage device, mass-storage device, switch, router, and any other component of the physical data center204. Resources of a server computer and clusters of server computers can form a resource pool for creating virtual resources of a virtual infrastructure used to run virtual objects. The term “resource” can also refer to a virtual resource, which can be formed from physical resources used by a virtual object. For example, a resource can be a virtual processor formed from one or more cores of a multicore processor, virtual memory formed from a portion of physical memory, virtual storage formed from a sector or image of a hard disk drive, a virtual switch, and a virtual router. Processes and systems described herein are implemented in a management system that monitors performance of resources and objects of a distributed computing system by collecting one or more streams of time-dependent metric data associated with one or more resources of the computing infrastructure. Each stream of metric data can be time series data generated by a metric source. The metric source can be an operating system of an object, a guest operating system, an object, an application, or a resource. A stream of metric data comprises a sequence of time-ordered metric values that can be recorded at spaced points in time called time stamps. A stream of metric data can also be called a sequence of metric data or simply a “metric.” The streams of metric data include, but are not limited to, CPU usage, amount of memory, network throughput, network traffic, and amount of storage. CPU usage is a measure of CPU time used to process instructions of an application program or operating system as a percentage of CPU capacity. High CPU usage can be an indication of unusually large demand for processing power, such as when an application program enters an infinite loop or when a CPU is processing a heavy workload. Amount of memory is the amount of memory (e.g., GBs) a computer system or other device uses at a given time. Network throughput is the number of bits of data transmitted to and from a server computer or data-storage device and is often recorded in megabits, kilobits or simply bits per second. Network traffic at a server computer or mass-storage array is a count of the number of data packets received and sent at a given time. The streams of metric data can include virtual object metrics, such as error rates, application calls, and response times. Turning back toFIG.2, the drawing also shows arrows representing streams of metric data provided to the management system206from various components of the computing infrastructure. In some examples, the management system206can be the analytics management system100shown inFIG.1and described above. The management system206can be located in the virtualization layer202and implemented in one or more VMs to receive and process the various streams of metric data. For example,FIG.2shows a directional arrow from virtual data store212to the management system206, where the directional arrow represents a stream of metric data relevant to virtual data store212. As another example,FIG.2shows a directional arrow from VM222to the management system206, where the directional arrow represents a stream of metric data relevant to VM222. 
As another example,FIG.2shows a directional arrow from application228to the management system206. Although not shown in the drawing, similar data streams can be provided to the management system206from other components in the computing infrastructure, such as from mass-storage array214, management server computer216, PC218, and server computers230,232,234,236,238,240,242,244,246. In some examples, these various sources of metric data streams can send metric values as those metric values are generated, while other sources can only send metric values at certain times or in response to a request from the management system206. The management system206ofFIG.2, which can also include the analytics management system100ofFIG.1, can execute one or more predictive algorithms using the various data streams received from components in the computing infrastructure. The system206can allow for large-scale, concurrent metric processors that can produce forecasts at any time without requiring preprocessing at the time of forecast. The system206can provide a general purpose library that provides real-time, reliable time-series forecasts with configurable models and a small memory footprint. A user can create a metric processor, such as metric processors104,106,108ofFIG.1, with a set of configurable statistical models to handle an individual time series metric, load metric streams in tuples (timestamp, value), and query for forecast results with confidence intervals as an array of tuples (forecast, upper, lower) starting from the last seen timestamp or any time in the future. The system206can cause the metric processors to update all model parameters in a single pass as each timepoint arrives. The system206can utilize a subscribe-publish-query pattern. After a metric processor is registered with a resource key in the analytics service, metric timepoints in the form of (resourceKey, timestamp, value) tuples are routed to the corresponding metric processor. An infrastructure tenant can subscribe to a set of metric processors and then can query the metric processor on demand for a forecast of that metric into the future. In this manner, the system206need not store an entire history of datapoints to be reprocessed in the future. Instead, the system206can store up-to-date statistics, model parameters, and a short sliding window of metric history. The short sliding window can be defined by one or more core sets that are phased out over time. This phasing-out process allows old data to be discarded to maintain a small footprint for the library. But because the metric processors update their relevant models in response to new data, the discarded data is still reflected in some way by the remaining models. Furthermore, the use of core sets itself allows for a smaller footprint, as the core sets can be constructed with data points that are representative of a larger data set, such that a forecast can be provided based on the representative core set rather than each and every data point over a period of time. FIG.3provides an illustration of a representation300of core set swapping, which can be used to phase out older data over time. The illustration shows a timeline with time periods0through10. These time periods can be set to any length of time. In some examples, each period is equivalent to about one month. However, the time periods can represent any time length, including one hour, one day, or one year, as some examples. The illustration also shows core set managers below the timeline. 
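Before turning to the core set managers shown inFIG.3, the subscribe-publish-query flow described above can be sketched as follows. The running mean-and-variance model, the class names, and the resource key format are assumptions used only to show single-pass updates and (forecast, upper, lower) style query results; they are not the library's actual models or API.

from collections import defaultdict

class MetricProcessor:
    # Updates running statistics in a single pass (Welford's method) and answers
    # forecast queries; a constant-mean model stands in for the configurable models.
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # running sum of squared deviations
        self.last_ts = None    # last seen timestamp, from which forecasts would start

    def load(self, timestamp: float, value: float) -> None:
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)
        self.last_ts = timestamp

    def forecast(self, horizon: int):
        std = (self.m2 / self.n) ** 0.5 if self.n else 0.0
        # One (forecast, upper, lower) tuple per future timepoint, using a 2-sigma band.
        return [(self.mean, self.mean + 2 * std, self.mean - 2 * std)
                for _ in range(horizon)]

class AnalyticsService:
    # Routes (resourceKey, timestamp, value) tuples to the registered processor.
    def __init__(self):
        self.processors = defaultdict(MetricProcessor)

    def publish(self, resource_key: str, timestamp: float, value: float) -> None:
        self.processors[resource_key].load(timestamp, value)

    def query(self, resource_key: str, horizon: int):
        return self.processors[resource_key].forecast(horizon)

svc = AnalyticsService()
for i, v in enumerate([41.0, 45.5, 39.2, 50.1]):
    svc.publish("vm-22:cpu-usage", 1_700_000_000 + 300 * i, v)
print(svc.query("vm-22:cpu-usage", horizon=2))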
A core set manager can be any type of process that manages a core set, such as by initiating a new core set or discarding an old core set. In this example, there are two core set managers denoted as m0and m1. The illustration ofFIG.3also includes various sections302,304,306,308,310that span multiple time periods and are identified for purposes of describing the core set swapping. Each section can correspond to a lifecycle of a particular core set. For example, section302covers time periods0and1, during which time core set manager m0is active. As shown herein, when a core set manager is active, it is maintaining a core set and updating it with relevant representative data as new data points arrive. FIG.3shows section304spanning time periods1-3, during which a new core set manager m1is executed. As shown in this example, time period1includes overlapping sections302,304which correspond to overlapping core set managers m0, m1, and therefore overlapping core sets. In this example, the older core set (corresponding to section302) can be based on data occurring in earlier time periods, such as time period0and optionally earlier time periods as well. On the other hand, the newer core set (corresponding to section304) can be based on data occurring in more recent time periods, such as time period1. In some examples, a newer core set can rely on data from a time period before the first time period in which the core set is utilized. For example, a new core set may be implemented at time period1, at which point it can be used in forecasting future metrics, but that new core set may rely on some data occurring in the preceding period0in order to populate and be usable. During an overlapping period, such as time period1, the system can utilize two core sets when forecasting future usage metrics. In some examples, the overlapping core sets will be different from one another and, standing alone, would provide differing forecast results. Rather than immediately jumping from one core set to another, which could result in a sudden change in forecasting results, relying on both core sets for an overlapping period allows the forecasting models to transition more gradually to the new core set without sudden changes in forecasting results. This overlapping process can continue as shown inFIG.3. For example, core set manager m1can manage a core set from time period1to time period3. At time period3, core set manager m0can reinitialize and load a new core set. During time period3, one or more metric processors can utilize both core sets managed by managers m0and m1. At time period4, core set manager m1is terminated and the data from that core set can be discarded. At time period5, core set manager m1can be reinitialized with a new core set for overlapping use during time period5. At time period6, core set manager m0can be terminated and the data from that core set can be discarded. At time period7, core set manager m0can be reinitialized with a new core set for overlapping use during that time period. At time period8, core set manager m1can be terminated and the data from that core set can be discarded. This process can continue into the future indefinitely, such that core sets maintain only recent data and the transitions between older and newer core sets are effected in a smooth manner that provides consistent forecasting results. AlthoughFIG.3describes two core set managers m0and m1, any number of core set managers can be used. 
For example, the initialization of a previously terminated core-set-manager process can be considered a new core set manager and can be denoted m2, m3, m4, and so on. Additionally, although only two core sets are shown overlapping at any given time, in some examples more than two core sets can be used simultaneously in an overlapping manner. Furthermore, in some examples core sets can be overlapped for more than one time period or can be disbanded for more than one time period. FIG.4provides a flowchart of an example method for capacity forecasting using core set phasing as described above with respect toFIG.3. At stage402, the management system206can receive a data stream, such as any of the data streams identified inFIG.2provided to the management system206from the virtual data store212, VM222, application228, mass-storage array214, management server computer216, PC218, and server computers230,232,234,236,238,240,242,244,246. These data streams are also represented by directional arrows112inFIG.1, for example. At stage404, the management system206can segment a first portion of the data stream. In some examples, the segmentation at this stage can be performed based on a single data point. In other examples, the segmentation can be performed based on data received after a specific time stamp, where data received prior to that time stamp is segmented differently in association with an older core set. At stage406, the management system206can generate a first core set for a forecasting model using the first portion of the data stream that was segmented at stage404. Although referred to as the “first” core set, this core set need not actually be the first core set used by the system; instead, the term “first” is used merely to distinguish from other core sets described herein and is not intended to be limiting in any way. The first core set can be a set of data that is representative of a larger data set. In some examples, the core set is a fixed-size buffer that contains a fixed amount of data. When new data is received, the core set can be updated, if necessary, with a new data point by replacing an existing data point in the core set. In some examples, the core set is a ring buffer with a fixed number of data fields. At stage408, the management system206can predict future usage of relevant computing resources based on the first core set. In some examples, this prediction can be performed in a streaming fashion, such that each new data point causes a potential update to the core set and an associated update to the resulting prediction. More detail regarding the prediction methods and models is provided later in this disclosure. At stage410, the management system206can segment a second portion of the data stream. The second portion of the data stream can include more-recent data relative to the first portion described above. In some examples, the second portion of the data stream does not share any data points with the first portion of the data stream. At stage412, the management system206can generate a second core set for the forecasting model using the second portion of the data stream that was segmented at stage410. At stage414, the management system206can predict future usage of the computing resources based on both the first and second core sets. This stage can correspond to a time period of overlapping core set usage, such as time periods1,3,5,7, and9identified inFIG.3and described above.
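Stage406describes the core set as a fixed-size buffer, and in some examples a ring buffer with a fixed number of data fields. A minimal sketch of such a structure, assuming a hypothetical class name CoreSetRingBuffer and an overwrite-the-oldest replacement policy that the disclosure does not mandate, could look like the following:

// Hypothetical fixed-size core set backed by a ring buffer, as described at
// stage 406. The replacement policy (overwrite the oldest entry once full) is
// one possible choice; the disclosure leaves the policy open.
public class CoreSetRingBuffer {
  private final double[] values;
  private int next;    // index of the slot to overwrite next
  private int size;    // number of populated slots

  public CoreSetRingBuffer(int capacity) {
    values = new double[capacity];
  }

  // Add a representative data point, replacing the oldest entry once full.
  public void add(double value) {
    values[next] = value;
    next = (next + 1) % values.length;
    if (size < values.length) size++;
  }

  public int size() { return size; }

  // Snapshot of the current core set contents for use by a forecasting model.
  public double[] toArray() {
    double[] out = new double[size];
    System.arraycopy(values, 0, out, 0, size);
    return out;
  }
}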
Although the input to the prediction model would increase during these time periods, the overall computational load should remain low based on the fixed size of the core sets used for the forecasting. At stage416, the management system206can determine that a relevant time period has elapsed, such as time period1illustrated inFIG.3. At stage418, the management system206can phase out the first core set. This can include terminating the relevant core set manager, such as core set manager m0inFIG.3, which is shown being used during time period1but not time period2. The method can then continue to stage420, which can include predicting future resource usage based on the second core set but not the first core set. For example, at time period2inFIG.3, only core set manager m1is active and maintaining a core set for forecasting use. This method can therefore gracefully transition between old and new core sets, maintaining fresh data in a lightweight format to promote low resource usage while discarding old data. As mentioned in the background section of this disclosure, some tenants of a computing infrastructure would find value in recognizing and isolating certain usage patterns specific to their business, as well as forecasting values for an isolated portion of that pattern. As an example, a tenant can experience distinct high and low usage periods in their business. The tenant might be interested in forecasting only the high usage periods in an example. That tenant would be less interested in forecasts that average the high and low usage periods, instead preferring to forecast the high usage periods specifically while excluding the low usage periods.FIG.5provides a flowchart of an example method for capacity forecasting using streaming data clustering to accomplish these goals. At stage502, the management system206can receive a data stream, such as any of the data streams identified inFIG.2provided to the management system206from the virtual data store212, VM222, application228, mass-storage array214, management server computer216, PC218, and server computers230,232,234,236,238,240,242,244,246. These data streams are also represented by directional arrows112inFIG.1, for example. At stage504, the management system206can generate a core set for a predictive model. This can include segmenting a portion of the data stream and generating a core set based on that segmented portion, as described with respect to stages404and406ofFIG.4. At stage506, the management system206can define at least two clusters of data. In some examples, this stage is performed based on the core set itself, while in other examples a larger set of data is utilized. Additional detail on the clustering mechanism is provided later in this disclosure. With the at least two clusters defined, the management system206can place new data into one of those clusters. For example, at stage508the management system206can receive a new data point, and at stage510that data point can be assigned to one of the clusters defined at stage506. In some examples, this stage includes updating the core set with a new data point, although this updating can be performed later as part of stage522. In the example ofFIG.5, the new data point can be assigned to one of three clusters. In particular, it can be assigned to a high-usage cluster at stage512, a medium-usage cluster at stage514, or a low-usage cluster at stage516. These clusters are exemplary only, and any number of clusters can be used based on the data or the needs of the tenant.
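One straightforward way to realize the assignment at stage510is to compare the new data point against each cluster center and choose the nearest one. The sketch below assumes one-dimensional usage values, hypothetical names, and centers ordered from low to high usage; the disclosure's own clustering code appears later in this description.

// Hypothetical nearest-center assignment for stage 510, assuming
// one-dimensional usage values and cluster centers indexed from low usage
// (index 0) to high usage (last index).
public final class ClusterAssigner {
  private ClusterAssigner() {}

  // Returns the index of the cluster whose center is closest to the value.
  public static int assign(double value, double[] centers) {
    int best = 0;
    double bestDist = Math.abs(value - centers[0]);
    for (int i = 1; i < centers.length; i++) {
      double d = Math.abs(value - centers[i]);
      if (d < bestDist) {
        bestDist = d;
        best = i;
      }
    }
    return best;
  }

  public static void main(String[] args) {
    double[] centers = { 10.0, 50.0, 90.0 };   // low, medium, high usage
    System.out.println(assign(82.0, centers)); // prints 2 (the high-usage cluster)
  }
}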
In this example, the data point is added to the high-usage cluster at stage512. At stage518, the management system206can run a predictive analysis on the updated cluster, which in this example is the high-usage cluster. The predictive analysis can be limited to the relevant cluster and can incorporate the new data point assigned at stage510. The predictive analysis at this stage can be specific to a computing resource, such as CPU, memory, or storage resources, or it can include multiple resources. Results of the predictive analysis can be output at stage520, such as by displaying a trend line or prediction line on a graph of a GUI. This can allow, for example, a tenant to visualize a predicted resource usage specific to a particular usage cluster. In other words, in this example, the tenant can visualize expected changes in the high-usage workload periods. At stage522, the management system206can update or transition the core set as needed. For example, the management system206can replace an entry in the core set with the new data point received at stage508. As mentioned above, this replacement can also occur before the predictive analysis is run, such as at stage510. In another example, the management system206can transition the core set, such as by initiating a new core set manager or retiring an existing core set manager, as explained with respect toFIG.3. At stage524, the management system206can receive user input regarding taking an action based on the results output at stage520. For example, and as described in more detail with respect toFIG.7, the GUI that displays results can warn the user that a particular resource is expected to fall below a particular threshold within a period of time. This can include, for example, a warning message that storage capacity is expected to be reached within two days. In an example, the GUI can provide an option for the user to change a resource allocation associated with that resource. In the example of storage capacity exceeding a limit, the GUI can prompt the user to allocate greater storage capacity to their resource allocation. The user can provide input to that prompt as part of stage524, and the management system206can carry out the allocation change, or any other relevant change, at stage526. With respect to forecasting for specific clusters, in one example, the management system206can utilize a streaming mixed Gaussian optimized approach.
This can be implemented, at least in part, by the following code:

public void init(double[] w) {
  int n = w.length;
  // initialize K random centers (means)
  Random rand = new Random();
  for (int r = 0; r < K; r++) mu[r] = w[rand.nextInt(n)];
  // Note: initial centers could be set by running a fast k-means pass over the data.
  // compute sub-sample variance
  // ybar = 1/n * Σi yi
  double ybar = 0;
  for (int i = 0; i < n; i++) ybar += w[i];
  ybar /= n;
  // initialize with subsample variance
  double s2 = 0;
  for (int i = 0; i < n; i++) {
    double z = (w[i] - ybar);
    s2 += z * z;
  }
  s2 /= n;
  for (int r = 0; r < K; r++) sigma2[r] = s2;
  // initialize weights: set all equal to 1/K
  for (int r = 0; r < K; r++) pi[r] = 1.0 / (double) K;
  if (DEBUGGING) {
    System.out.println("Initialization:");
    System.out.println("  N = " + N);
    System.out.println("  K = " + K);
    for (int r = 0; r < K; r++) {
      System.out.println("  μ[" + r + "] = " + mu[r]
          + ", σ2[" + r + "] = " + sigma2[r]
          + ", π[" + r + "] = " + pi[r]);
    }
  }
}

For each data load, the following code can be applied, which reflects batches swapping out with each other to allow for pseudo-streaming behavior:

public void load(double y) {
  oy.add(y);
  if (N < WARMUP) {
    warmBuf[N++] = y;
    return;
  }
  if (N == WARMUP) {
    init(warmBuf);
    N++;
    for (int r = 0; r < K; r++) {
      omu[r] = mu[r];
      osigma2[r] = sigma2[r] * N;
    }
  }
  // Expectation step: r-th center, i-th sample, θr = (μr, σr2)
  // γr = πr*φ[θr](yi) / Σv πv*φ[θv](yi), where Σv πv = 1
  double z = 0; // normalization
  for (int r = 0; r < K; r++) {
    z += pi[r] * phi(mu[r], sigma2[r], y);
  }
  for (int r = 0; r < K; r++) {
    double x = phi(mu[r], sigma2[r], y);
    ogamma[r] = pi[r] * x / z;
  }
  if (!DECAY) {
    for (int q = 0; q < 10; q++) {
      for (int r = 0; r < K; r++) {
        g[r] += ogamma[r];
      }
      for (int r = 0; r < K; r++) {
        omu[r] += ogamma[r] * y;
        mu[r] = omu[r] / g[r];
      }
      for (int r = 0; r < K; r++) {
        z = (y - mu[r]);
        osigma2[r] += ogamma[r] * z * z;
        sigma2[r] = osigma2[r] / g[r];
      }
      z = 0;
      for (int r = 0; r < K; r++) {
        z += g[r];
      }
      for (int r = 0; r < K; r++) {
        opi[r] += g[r];
        pi[r] = opi[r] / z;
      }
    }
  } else { // DECAY
    for (int r = 0; r < K; r++) {
      g[r] *= decayLambda;
      g[r] += ogamma[r];
    }
    for (int r = 0; r < K; r++) {
      omu[r] = decayLambda * omu[r] + ogamma[r] * y;
      mu[r] = omu[r] / g[r];
    }
    for (int r = 0; r < K; r++) {
      z = (y - mu[r]);
      osigma2[r] = decayLambda * osigma2[r] + ogamma[r] * z * z;
      sigma2[r] = osigma2[r] / g[r];
    }
    z = 0;
    for (int r = 0; r < K; r++) {
      z += g[r];
    }
    for (int r = 0; r < K; r++) {
      opi[r] += g[r];
      pi[r] = opi[r] / z;
    }
  }
  // check affinity
  double maxVal = Double.MIN_VALUE;
  int maxIndex = -1;
  for (int r = 0; r < K; r++) {
    if (ogamma[r] > maxVal) {
      maxVal = ogamma[r];
      maxIndex = r;
    }
  }
  oaff.add(maxIndex);
}

In another example, the management system206can utilize an incremental k-means approach. When a new data point is loaded, the system can determine the core set, as shown with the example code below:

public void load(double x) {
  if (!init) {
    init();
  }
  buf.add(x);
  // initial fill: continue loading.
  // if (buf.size() < buf.capacity()) return;
  // new minibatch not yet complete: continue loading
  if (++batchCounter < batchFrequency) return;
  batchCounter = 0;
  // new minibatch: compute model
  cluster(buf.getArray());
  clusters.clear();
  for (int i = 0; i < clusterCount; i++) {
    if (countv[i] > 0) {
      Cluster c = new Cluster(countv[i], muv[i], sigma2v[i]);
      clusters.add(c);
    }
  }
  // sort prior and new before comparing
  Collections.sort(clusters);
  if (DEBUGGING) {
    MSG("New clusters:");
    for (int i = 0; i < clusters.size(); i++) {
      MSG("[" + i + "] = " + clusters.get(i));
    }
  }
}

In another example, the management system206can utilize a streaming k-means++ approach, where a new data point is added, the core set is calculated incrementally, and the membership of the core set is established.
This allows for two separate core sets to be maintained at any given time, one being built up from scratch while the other is established and being used by the model. After a period of time, the model under use is replaced with the newly trained model and a new model is created. This strategy can put a limit on the amount of memory and CPU that any particular model is using. The approach below operates with the models overlapping in less of a binary fashion:

if (0 == (1 + ticks) % regime) {
  int nBlock = (1 + ticks) / regime;
  if (0 == nBlock % 2) {   // even: switch
    if (manager0_ != null) {
      manager = manager0_;
      manager0_ = null;
    }
  } else if (1 == nBlock % 2) {   // odd: allocate
    manager0_ = new BucketManager(buckets, dim, coresetSize, seed);
  }
}
// load the point, tick the clock
Point p = new Point((float) v, ticks++);
manager.insertPoint(p);
if (ticks >= warmup) status = ForecastStatusEnum.STABLE_FORECAST;
// if swap manager has been allocated, start warming it up
if (manager0_ != null) manager0_.insertPoint(p);

Relatedly, the example code provided below can be used to rebuild a core set:

// each 'refresh' ticks we recompute the coreset kMeans++
if (0 == (1 + ticks) % refresh) {
  ArrayList<Point> coreset = buildManagerCoreset();
  // compute QUORUM clusterings with kMeans++, and take the best
  float minCost = 0.0f;
  float curCost = 0.0f;
  CoresetCostTriple triple = lloydPlusPlus(clusters, this.coresetSize, dim, coreset);
  if (triple != null) {
    minCost = triple.getCoresetCost();
    for (int j = 0; j < clusters; j++) {
      // coresetCenters[j] = triple.getCoresetCenters()[j].clone();
    }
    curCost = minCost;
    CoresetCostTriple oldTriple = triple;
    for (int i = 1; i < quorum; i++) {
      triple = lloydPlusPlus(clusters, this.coresetSize, dim, coreset);
      if (triple == null) {
        triple = oldTriple;
        break;
      }
      curCost = triple.getCoresetCost();
      if (curCost < minCost) minCost = curCost;
      for (int j = 0; j < clusters; j++) {
        // coresetCenters[j] = triple.getCoresetCenters()[j].clone();
      }
    }
  }
  triple.sort();
}

Next, the example code below can be used to calculate a high-use cluster center and run that through a linear model. For each cluster, the code can determine its center and store that center for each dimension. Then, for all of the centers in each cluster, the data is smoothed and the highest value cluster is passed through a linear model in order to derive a forecast for the highest demand.

if (triple != null) {
  float[] centers = new float[clusters];
  for (int i = 0; i < clusters; i++) {
    Point q = triple.getCoresetCenters()[i];
    centers[i] = q.cc();
  }
  // Add centers to smoothing buffers
  for (int i = 0; i < clusters; i++) {
    datav[i].add(centers[i]);
  }
  // Add top cluster radius to smoothing buffer
  radius.add(triple.getRadii()[clusters - 1]);
  // Load smoothed high value center to linear model.
  RingBuffer b1 = datav[clusters - 1];
  float z = (b1.isFilled() ? b1.median() : b1.avg());
  for (int r = 0; r < refresh; r++) lm.load(z);
}

This disclosure therefore provides multiple approaches to modeling high-demand data. FIG.6Ais an illustration of an example graph600showing a dataset that includes various data points presented in a scatter-plot format. In this example, the data has been analyzed with two potential clusters, which can be considered “high” and “low.” A first line602generally defines the high cluster while a second line604generally defines the low cluster. Additionally, a first trend line606is fit to the first line602associated with the high cluster, while a second trend line608is fit to the second line604associated with the low cluster. The trend lines606,608can be used to project future resource usage.
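The linear model lm referenced in the example code above is not reproduced in this excerpt. A minimal stand-in, assuming an ordinary least-squares fit of the smoothed high-cluster center against its load index, is sketched below; it is one way to produce the kind of trend lines shown inFIG.6AandFIG.6Band the projections shown inFIG.7, not necessarily the model used by the disclosure.

// Hypothetical least-squares stand-in for the linear model ("lm") that the
// smoothed high-usage cluster center is loaded into. Values are loaded in
// arrival order; forecast(k) extrapolates k steps past the last loaded value.
public class SimpleLinearModel {
  private long n;
  private double sumX, sumY, sumXX, sumXY;

  public void load(double y) {
    double x = n++;   // use the load index as the time axis
    sumX += x;
    sumY += y;
    sumXX += x * x;
    sumXY += x * y;
  }

  public double forecast(int stepsAhead) {
    if (n < 2) return n == 1 ? sumY : 0.0;
    double denom = n * sumXX - sumX * sumX;
    double slope = (n * sumXY - sumX * sumY) / denom;
    double intercept = (sumY - slope * sumX) / n;
    return slope * (n - 1 + stepsAhead) + intercept;
  }
}

Upper and lower bounds like those shown around the projection inFIG.7could be layered on top of such a model from the residual variance, though the disclosure does not prescribe a particular confidence calculation.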
In this example, both trend lines606,608are trending upwards. These trend lines606,608can be extrapolated into the future to provide predictions, such as in the format shown inFIG.7. FIG.6Bprovides an illustration of another example graph610showing a different dataset that includes various data points presented in a scatter-plot format. In this example, the data has been analyzed with two potential clusters, which can be considered “high” and “low.” A first line612generally defines the high cluster while a second line614generally defines the low cluster. Additionally, a first trend line616is fit to the first line612associated with the high cluster. The trend lines616can be used to project future resource usage. In this example, the trend line616is trending flat. This can be extrapolated into the future to provide predictions, such as in the format shown inFIG.7. FIG.7provides an illustration of an example GUI702for providing capacity forecasting and taking remedial actions according to one or more methods disclosed herein. The GUI702can be generated by the management system206and can reflect various metrics received from components of the computing infrastructure, as described with respect toFIGS.1and2. The GUI702can be hosted on the management server computer216in some examples. In other examples, the GUI702is hosted in a VM instantiated on a server computer in the computing infrastructure. In this example, the GUI702is intended to reflect an interface that can be provided to a tenant of the computing infrastructure. For example, an IT administrator at a tenant company can view this GUI702to determine resource capacity usage and forecasts, and to make appropriate changes. The GUI702includes a menu bar704that provides various options. In this example, the menu bar704includes options for Summary, Alerts, Metrics, Capacity, Compliance, Events, and More. Menu bar704shows a box surrounding Capacity, indicating that the GUI702is displaying a page in response to a user selection of the Capacity tab. In some examples, the GUI702can be displayed in the Metrics tab or in another tab not shown, such as a Forecasts tab. The GUI702includes various informational boxes that provide useful metric forecasting information to a user. For example, a time remaining box706provides a high-level warning to the user regarding how much time is remaining before a computing resource is forecasted to fall below a relevant threshold. In this example, the time remaining box706shows “2 days,” meaning that the metric forecasting model predicts that a computing resource is forecasted to fall below a threshold within about 2 days. The threshold can be set elsewhere, such as in the Metrics or Compliance tabs in the menu bar704. The GUI702also includes several informational boxes showing time remaining for each of the computing resources. For example, a CPU Demand box710shows that CPU demand is expected to remain within acceptable levels for over 1 year. A Memory Demand box712shows that memory demand is expected to remain within acceptable levels for about 2 days. And a Storage Demand box714shows that storage demand is expected to remain within acceptable levels for about 50 days. The GUI702also includes a capacity details box708, which has a Capacity Remaining section showing that only 10% capacity is currently remaining. The capacity details box708also shows that 3 VMs are available. Finally, the capacity details box708includes a selectable graphical element709for scheduling additional resources. 
In some examples, a user can select element709to provision more resources from the computing infrastructure, as explained above with respect to stages524and526ofFIG.5. For example, selecting element709can cause the management system206to automatically request and provision additional computing resources. In that example, the management system206can provision an amount of resources necessary to return the remaining capacity to above a threshold of some sort. As one example, resources can be provisioned such that remaining capacity is above 25%. In another example, resources can be provisioned such that remaining capacity provides a time remaining of at least 3 months. Any other thresholds can be used, and in some examples these thresholds can be customized or otherwise changed through other tabs in the menu bar704. In some examples, selecting element709can allow a user to make more granular decisions regarding scheduling additional resources, such as by displaying a GUI window or a new GUI page that includes relevant options. Regardless of whether the resulting display is a window within the current GUI page or a new page, the user can be presented with options for increasing or decreasing computing resources. For example, the user can select to instantiate one or more VMs. In another example, the user can select an increased amount of memory, CPU, or storage resources, and the management system206can provision the resources appropriately, such as by instantiating the required number of VMs. The user or management system206can also select the type of VMs, such as a VM provisioned with more memory than another VM which may be provisioned with more storage. The GUI702also includes a utilization section716that provides a graph726as well as selectable options for the graph. A resource field718can allow a user to select between various resources, such as storage demand, CPU demand, and memory demand. In this example, the user has selected storage demand using the resource field718. Similarly, a cluster field720is provided, allowing a user to select from multiple clusters available for the data. As explained above with respect toFIGS.5,6A, and6B, the relevant data can be separated into two or more clusters. In the example ofFIG.7, the data has been separated into high-, medium-, and low-usage clusters, with the high-usage cluster being selected in the cluster field720. In some examples, the selection in this field can cause the management system206to update relevant values in the information boxes above, such as by updating the time remaining, capacity remaining, or resource demand relevant to a particular cluster. As an example, the capacity remaining can be 10% when considering high-usage data, but 50% when considering medium-usage data. In some examples, the informational boxes in the GUI702relate to the highest-usage cluster by default, although the default setting could be modified by the user. The GUI702also includes a history field722that can be used to select the length of history shown in the graph726below the field722. In this example, the history field722provides options for 6 months, 5 months, or 4 months, although any other period of time could be included here. In this example the user has selected 6 months of history using the history field722, which is reflected in the graph726below which shows a history of storage demand from February to August. 
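The time-remaining estimates surfaced in boxes706,710,712, and714can be derived by stepping a forecast forward until it crosses a configured threshold, which is how the threshold line discussed below is interpreted. The sketch that follows is a minimal illustration; the method name and the assumption that one forecast step corresponds to one day are hypothetical.

// Hypothetical "time remaining" calculation: walk a forecast forward until the
// projected value crosses a threshold. Each forecast step is assumed to
// represent one day, and crossing is defined as meeting or exceeding the
// threshold (e.g., storage demand reaching capacity).
public final class TimeRemaining {
  private TimeRemaining() {}

  // Returns the number of days until the forecast crosses the threshold,
  // or -1 if it does not cross within maxDays.
  public static int daysUntilThreshold(double[] dailyForecast, double threshold, int maxDays) {
    int limit = Math.min(maxDays, dailyForecast.length);
    for (int day = 0; day < limit; day++) {
      if (dailyForecast[day] >= threshold) return day + 1;
    }
    return -1;
  }

  public static void main(String[] args) {
    double[] forecast = { 80, 85, 91, 97, 103 };   // projected storage demand
    System.out.println(daysUntilThreshold(forecast, 100.0, 365));   // prints 5
  }
}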
Similarly, the GUI702includes a forecast field724that can be used to select the length of a forecast shown in the graph726below the field724. In this example, the forecast field724provides options for 6 months, 5 months, or 4 months, although any other period of time could be included here. In this example, the user has selected 6 months of forecast using the forecast field724, which is reflected in the graph726below, which shows a forecast of storage demand from August through February of the following year. The graph726itself includes a usage line728that tracks the historical resource usage of the resource selected in the resource field718. Although not shown, the graph726can include labels along the y-axis that denote specific usage levels that can be used to interpret the data on the graph726. The graph726also includes a line734marking the present day, such that the data line to the left of that line734reflects historical data while the data line(s) to the right reflect projections into the future. In this example, the graph726includes a projection730beginning at line734and extending six months into the future. The projection730includes a dotted line that reflects the projection itself, along with upper and lower bounds reflecting a confidence level. A user can select or alter the confidence level through settings not shown in this drawing. The projection730can be compared against a threshold line732, which can indicate when the projected usage is expected to cross a relevant threshold. In this example, the storage demand projection730is expected to exceed the threshold line732in about 50 days. This time period is also reflected in the storage demand box714of the GUI702, as discussed above. Other examples of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the examples disclosed herein. Though some of the described methods have been presented as a series of steps, it should be appreciated that one or more steps can occur simultaneously, in an overlapping fashion, or in a different order. The order of steps presented is only illustrative of the possibilities, and those steps can be executed or performed in any suitable fashion. Moreover, the various features of the examples described here are not mutually exclusive. Rather, any feature of any example described here can be incorporated into any other suitable example. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
43,204
11863467
Throughout the description, similar reference numbers may be used to identify similar elements. DETAILED DESCRIPTION It will be readily understood that the components of the embodiments as generally described herein and illustrated in the appended figures could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of various embodiments, as represented in the figures, is not intended to limit the scope of the present disclosure, but is merely representative of various embodiments. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated. The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by this detailed description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope. Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present invention should be or are in any single embodiment of the invention. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Thus, discussions of the features and advantages, and similar language, throughout this specification may, but do not necessarily, refer to the same embodiment. Furthermore, the described features, advantages, and characteristics of the invention may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize, in light of the description herein, that the invention can be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the invention. Reference throughout this specification to “one embodiment”, “an embodiment”, or similar language means that a particular feature, structure, or characteristic described in connection with the indicated embodiment is included in at least one embodiment of the present invention. Thus, the phrases “in one embodiment”, “in an embodiment”, and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment. Network appliances can have one or more receive media access controllers (MACs) for receiving network traffic. Packets can arrive on the receive MACs at a higher rate than can be processed by the network appliance. A network appliance that has a single ingress queue will drop packets from that single ingress queue after that ingress queue fills. These drops have no tenant discrimination because the packets are dropped before tenant identification and resource consumption evaluation occurs. Thus, a greedy tenant with a high packet rate can impact well behaved tenants before an ingress processing pipeline (e.g., sub line rate packet processing circuit such as a full featured packet processing pipeline, full featured P4 engine, etc.) can exert control. 
In a multi-tenant cloud deployment that uses SmartNIC based pipeline processing (e.g., processing using a full featured P4 engine) for ingressing traffic towards a host, tenant isolation can be critical, and providing a method for identifying the tenant receiving a network packet can help ensure fairness or meet a quality-of-service (QoS) guarantee. The tenant identifications can be used for queuing and scheduling network packets before the cloud services are processed, thereby providing a benefit to the cloud provider in ensuring fairness, meeting QoS guarantees, etc. A data center and a tenant often agree to a service level agreement (SLA). The SLA may guarantee a minimum bandwidth to the tenant. There may be no need to limit any tenant's bandwidth when the network appliance is able to process every packet for every tenant. If every packet cannot be processed, then some packets must be dropped. Tenant discrimination can be used to preferentially drop packets of some tenants instead of others. An in-SLA tenant is a tenant that is consuming network resources (e.g., bandwidth) at or below the level guaranteed by that tenant's SLA. An out of SLA tenant is a tenant who is not an in-SLA tenant or a tenant who has no SLA or no guaranteed minimum service level. In order to meet the data center's commitments, the network appliance should preferentially drop out of SLA tenants' traffic in favor of in-SLA tenants' traffic. Tenant discrimination can be provided by using multiple ingress queues and using a line rate classification circuit to select an ingress queue for each of the packets. The line rate classification circuit can be used to determine tenant IDs from each packet's header data. A tenant ID can be used to select an ingress queue. Packets for high priority tenants, well-behaved tenants, or in-SLA tenants can be queued on a first queue while packets for other tenants are queued on a second queue. More than two input queues may be implemented for finer grained control of which packets get dropped. A scheduler can use a scheduling policy such as weighted round robin (WRR) to select packets from the input queues for processing by the sub line rate packet processing circuit. As discussed below, the sub line rate packet processing circuit can include a configurable parser and match-action units that can be used to implement networking rules (e.g., routing, firewalling, load balancing, network address translation, etc.). The line rate classification circuit can use a much simpler parser to obtain the contents of a few header fields of the packet. For example, a few layer 2 (L2), layer 3 (L3), and layer 4 (L4) fields can be sufficient for producing tenant IDs that support line rate tenant discrimination. Line rate processing is processing that can be performed without dropping a packet. As such, the line rate classification circuit can classify every packet received by the network appliance. Some of those packets may be dropped after line rate classification. One advantage of using a line rate classification circuit is that tenant discrimination is performed before a packet gets dropped. Another advantage is that in-SLA tenants may not be starved of bandwidth by out of SLA tenants. Yet another advantage is that the data center is better able to meet its service level guarantees to all its tenants.
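The queue selection and scheduling just described can be sketched in software as follows. The class name TenantIngress, the simple hash over a handful of L2, L3, and L4 header fields, and the per-queue depth check are illustrative assumptions; an actual line rate classification circuit and scheduler would implement equivalent logic in hardware.

import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical software sketch of tenant discrimination at ingress: a tenant ID
// is derived from a few header fields, mapped to one of several ingress queues,
// and a weighted round robin (WRR) scheduler drains the queues toward the
// sub line rate packet processing pipeline.
public class TenantIngress {
  private final Queue<byte[]>[] queues;
  private final int[] weights;         // WRR weights, e.g., favoring in-SLA tenants
  private final int[] tenantToQueue;   // ingress queue map, rewritten by the control plane

  @SuppressWarnings("unchecked")
  public TenantIngress(int queueCount, int[] weights, int[] tenantToQueue) {
    this.queues = new Queue[queueCount];
    for (int i = 0; i < queueCount; i++) queues[i] = new ArrayDeque<>();
    this.weights = weights;
    this.tenantToQueue = tenantToQueue;
  }

  // Stand-in for the line rate classification: hash a few L2/L3/L4 fields.
  public static int tenantId(long srcMac, int srcIp, int dstIp, int dstPort, int tenantCount) {
    long h = srcMac * 31 + srcIp;
    h = h * 31 + dstIp;
    h = h * 31 + dstPort;
    return (int) Math.floorMod(h, (long) tenantCount);
  }

  // Enqueue on the queue selected for the packet's tenant; drop if that queue is full.
  public boolean enqueue(int tenant, byte[] packet, int maxQueueDepth) {
    Queue<byte[]> q = queues[tenantToQueue[tenant]];
    if (q.size() >= maxQueueDepth) return false;   // per-queue drop, not a global drop
    return q.offer(packet);
  }

  // One WRR scheduling round: take up to weights[i] packets from queue i.
  public int scheduleRound(Queue<byte[]> toPipeline) {
    int scheduled = 0;
    for (int i = 0; i < queues.length; i++) {
      for (int n = 0; n < weights[i] && !queues[i].isEmpty(); n++) {
        toPipeline.offer(queues[i].poll());
        scheduled++;
      }
    }
    return scheduled;
  }
}

In the appliance described later with reference toFIG.4, the corresponding hardware elements are the line rate classification circuit450, the ingress queue map455, the ingress queues451, and the buffer scheduler457.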
In the field of data networking, the functionality of network appliances such as switches, routers, and NICs is often described in terms of functionality that is associated with a “control plane” and functionality that is associated with a “data plane.” In general, the control plane refers to components and/or operations that are involved in managing forwarding information and the data plane refers to components and/or operations that are involved in forwarding packets from an input interface to an output interface according to the forwarding information provided by the control plane. The data plane may also refer to components and/or operations that implement packet processing operations related to encryption, decryption, compression, decompression, firewalling, and telemetry. Aspects described herein process packets using match-action pipelines. A match-action pipeline is a part of the data plane that can process network traffic flows extremely quickly if the match-action pipeline is configured to process those traffic flows. Upon receiving a packet of a network traffic flow, the match-action pipeline can generate an index from data in the packet header. Finding a flow table entry for the network traffic flow at the index location in the flow table is the “match” portion of “match-action”. If there is a “match”, the “action” is performed to process the packet. If there is no flow table entry for the network traffic flow, it is a new network traffic flow that the match-action pipeline is not yet configured to process. If there is no match, then the match-action pipeline can perform a default action. The high-volume and rapid decision-making that occurs at the data plane is often implemented in fixed function application specific integrated circuits (ASICs). Although fixed function ASICs enable high-volume and rapid packet processing, fixed function ASICs typically do not provide enough flexibility to adapt to changing needs. Data plane processing can also be implemented in field programmable gate arrays (FPGAs) to provide a high level of flexibility in data plane processing. FIG.1is a functional block diagram of a network appliance101with a control plane102and a data plane103but without a line rate classification circuit. A network appliance101can have a control plane102and a data plane103. The control plane provides forwarding information (e.g., in the form of table management information) to the data plane and the data plane receives packets on input interfaces, processes the received packets, and then forwards packets to desired output interfaces. Additionally, control traffic (e.g., in the form of packets) may be communicated from the data plane to the control plane and/or from the control plane to the data plane. The data plane and control plane are sometimes referred to as the “fast” plane and the “slow” plane, respectively. In general, the control plane is responsible for less frequent and less time-sensitive operations such as updating Forwarding Information Bases (FIBs) and Label Forwarding Information Bases (LFIBs), while the data plane is responsible for a high volume of time-sensitive forwarding decisions that need to be made at a rapid pace.
The control plane may implement operations related to packet routing that include InfiniBand channel adapter management functions, Open Shortest Path First (OSPF), Enhanced Interior Gateway Routing Protocol (EIGRP), Border Gateway Protocol (BGP), Intermediate System to Intermediate System (IS-IS), Label Distribution Protocol (LDP), routing tables and/or operations related to packet switching that include Address Resolution Protocol (ARP) and Spanning Tree Protocol (STP). The data plane (which may also be referred to as the “forwarding” plane) may implement operations related to parsing packet headers, Quality of Service (QoS), filtering, encapsulation, queuing, and policing. Although some functions of the control plane and data plane are described, other functions may be implemented in the control plane and/or the data plane. Some techniques exist for providing flexibility at the data plane of network appliances that are used in data networks. For example, the concept of a domain-specific language for programming protocol-independent packet processors, known simply as “P4,” has developed as a way to provide some flexibility at the data plane of a network appliance. The P4 domain-specific language for programming the data plane of network appliances is defined in the “P416 Language Specification,” version 1.2.2, as published by the P4 Language Consortium on May 17, 2021, which is incorporated by reference herein. P4 (also referred to herein as the “P4 specification,” the “P4 language,” and the “P4 program”) is designed to be implementable on a large variety of targets including switches, routers, programmable NICs, software switches, FPGAs, and ASICs. As described in the P4 specification, the primary abstractions provided by the P4 language relate to header types, parsers, tables, actions, match-action units, control flow, extern objects, user-defined metadata, and intrinsic metadata. The data plane103includes multiple receive media access controllers (MACs) (RX MAC)111and multiple transmit MACs (TX MAC)110. The RX MACs111implement media access control on incoming packets via, for example, a MAC protocol such as Ethernet. The MAC protocol can be Ethernet and the RX MACs can be configured to implement operations related to, for example, receiving frames, half-duplex retransmission and back-off functions, Frame Check Sequence (FCS), interframe gap enforcement, discarding malformed frames, and removing the preamble, Start Frame Delimiter (SFD), and padding from a packet. Likewise, the TX MACs110implement media access control on outgoing packets via, for example, Ethernet. The TX MACs can be configured to implement operations related to, for example, transmitting frames, half-duplex retransmission and back-off functions, appending an FCS, interframe gap enforcement, and prepending a preamble, an SFD, and padding. As illustrated inFIG.1, a P4 program is provided to the data plane103via the control plane102. Communications between the control plane and the data plane can use a dedicated channel or bus, can use shared memory, etc. The P4 program includes software code that configures the functionality of the data plane103to implement particular processing and/or forwarding logic and to implement processing and/or forwarding tables that are populated and managed via P4 table management information that is provided to the data plane from the control plane. 
Control traffic (e.g., in the form of packets) may be communicated from the data plane to the control plane and/or from the control plane to the data plane. In the context of P4, the control plane corresponds to a class of algorithms, and the corresponding input and output data, that are concerned with the provisioning and configuration of the data plane, while the data plane corresponds to a class of algorithms that describe transformations on packets by packet processing systems. The data plane103includes a programmable packet processing pipeline104that can be programmed using a domain-specific language such as P4. As described in the P4 specification, a programmable packet processing pipeline can include an arbiter105, a parser106, a match-action pipeline107, a deparser108, and a demux/queue109. The data plane elements described may be implemented as a P4 programmable switch architecture, as a P4 programmable NIC, as a P4 programmable router, or some other architecture. The arbiter105can act as an ingress unit receiving packets from RX-MACs111and can also receive packets from the control plane via a control plane packet input112. The arbiter105can also receive packets that are recirculated to it by the demux/queue109. The demux/queue109can act as an egress unit and can also be configured to send packets to a drop port (the packets thereby disappear), to the arbiter via recirculation, and to the control plane102via an output CPU port113. The control plane is often referred to as a CPU (central processing unit) although, in practice, control planes often include multiple CPU cores and other elements. The arbiter105and the demux/queue109can be configured through the domain-specific language (e.g., P4). The parser106is a programmable element that can be configured through the domain-specific language (e.g., P4) to extract information from a packet (e.g., information from the header of the packet). As described in the P4 specification, parsers describe the permitted sequences of headers within received packets, how to identify those header sequences, and the headers and fields to extract from packets. The information extracted from a packet by the parser can be referred to as a packet header vector or “PHV.” The parser can identify certain fields of the header and can extract the data corresponding to the identified fields to generate the PHV. The PHV may include other data (often referred to as “metadata”) that is related to the packet but not extracted directly from the header, including for example, the port or interface on which the packet arrived at the network appliance. Thus, the PHV may include other packet related data (metadata) such as input/output port number, input/output interface, or other data in addition to information extracted directly from the packet header. The PHV produced by the parser may have any size or length. For example, the PHV may be at least 4 bits, 8 bits, 16 bits, 32 bits, 64 bits, 128 bits, 256 bits, or 512 bits. In some cases, a PHV having even more bits (e.g., 6 Kb) may include all relevant header fields and metadata corresponding to a received packet. The size or length of a PHV corresponding to a packet may vary as the packet passes through the match-action pipeline.
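The parser-to-match-action handoff described above can also be sketched in software. The MatchActionSketch class and PacketHeaderVector record below are hypothetical stand-ins for the PHV and the P4 lookup table concepts, with the key built from the packet 5-tuple discussed below with reference toFIG.2; a real P4 target would implement this in pipeline hardware rather than as Java objects.

import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch of a PHV and a match-action lookup keyed on the packet
// 5-tuple, with a default action executed on a flow miss.
public class MatchActionSketch {

  // Minimal PHV: a few extracted header fields plus appliance-generated metadata.
  public record PacketHeaderVector(int srcIp, int dstIp, int srcPort, int dstPort,
                                   int protocol, int ingressPort, long timestamp) {
    // Key construction: the 5-tuple is commonly used to build the lookup key.
    String fiveTupleKey() {
      return srcIp + "|" + dstIp + "|" + srcPort + "|" + dstPort + "|" + protocol;
    }
  }

  private final Map<String, Consumer<PacketHeaderVector>> flowTable = new HashMap<>();
  private final Consumer<PacketHeaderVector> defaultAction =
      phv -> System.out.println("flow miss: " + phv.fiveTupleKey());

  public void installFlow(String key, Consumer<PacketHeaderVector> action) {
    flowTable.put(key, action);   // table contents populated and managed by the control plane
  }

  // The "match" is the table lookup; the "action" is the code run on a hit.
  public void process(PacketHeaderVector phv) {
    flowTable.getOrDefault(phv.fiveTupleKey(), defaultAction).accept(phv);
  }
}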
The deparser108is a programmable element that is configured through the domain-specific language (e.g., P4) to generate packet headers from PHVs at the output of match-action pipeline107and to construct outgoing packets by reassembling the header(s) (e.g., Ethernet and IP headers, InfiniBand PDUs, etc.) as determined by the match-action pipeline. In some cases, a packet/payload may travel in a separate queue or buffer120, such as a first-in-first-out (FIFO) queue, until the packet payload is reassembled with its corresponding PHV at the deparser to form a packet. The deparser may rewrite the original packet according to the PHV fields that have been modified (e.g., added, removed, or updated). In some cases, a packet processed by the parser may be placed in a packet buffer/traffic manager for scheduling and possible replication. In some cases, once a packet is scheduled and leaves the packet buffer/traffic manager, the packet may be parsed again to generate an egress PHV. The egress PHV may be passed through a match-action pipeline after which a final deparser operation may be executed (e.g., at deparser108) before the demux/queue109sends the packet to the TX MAC110or recirculates it back to the arbiter105for additional processing. A network appliance101can have a peripheral component interconnect extended (PCIe) interface such as PCIe media access control (MAC)114. A PCIe MAC can have a base address register (BAR) at a base address in a host system's memory space. Processes, typically device drivers within the host system's operating system, can communicate with a NIC via a set of registers beginning with the BAR. Some PCIe devices are single root input output virtualization (SR-IOV) capable. Such PCIe devices can have a physical function (PF) and multiple virtual functions (VFs). A PF BAR map115can be used by the host machine to communicate with the PCIe card. A VF BAR map116can be used by a virtual machine (VM) running on the host to communicate with the PCIe card. Typically, the VM can access the NIC using a device driver within the VM and at a memory address within the VMs memory space. Many SR-IOV capable PCIe cards can map that location in the VM's memory space to a VF BAR. As such a VM may be configured as if it has its own NIC while in reality it is associated with a VF provided by a SR-IOV capable NIC. As discussed below, some PCIe devices can have multiple PFs. For example, a NIC can provide network connectivity via one PF and can provide an InfiniBand channel adapter via another PF. As such, the NIC can provide “NIC’ VFs and “InfiniBand” VFs to VMs running on the host. The InfiniBand PF and VFs can be used for data transfers, such as remote direct memory access (RDMA) transfers to other VMs running on the same or other host computers. Similarly, a NIC can provide non-volatile memory express (NVMe) and small computer system interface (SCSI) PFs and VFs to VMs running on the host. FIG.2is a high-level diagram illustrating an example of generating a packet header vector206from a packet201according to some aspects. The parser202can receive a packet201that has layer 2, layer 3, layer 4, and layer 7 headers and payloads. The parser can generate a packet header vector (PHV) from packet201. The packet header vector206can include many data fields including data from packet headers207and metadata222. 
The metadata222can include data generated by the network appliance such as the hardware port223on which the packet201was received and the packet timestamps224indicating when the packet201was received by the network appliance, enqueued, dequeued, etc. The source MAC address208and the destination MAC address209can be obtained from the packet's layer 2 header. The source IP address211can be obtained from the packet's layer 3 header. The source port212can be obtained from the packet's layer 4 header. The protocol213can be obtained from the packet's layer 3 header. The destination IP address214can be obtained from the packet's layer 3 header. The destination port215can be obtained from the packet's layer 4 header. The packet quality of service parameters216can be obtained from the packet's layer 3 header or another header based on implementation specific details. The virtual network identifier217may be obtained from the packet's layer 2 header. The multi-protocol label switching (MPLS) data218, such as an MPLS label, may be obtained from the packet's layer 2 header. The other layer 4 data219can be obtained from the packet's layer 4 header. State synchronization data, such as sync data fields220, can be obtained from record transition data that may be in the layer 7 packet in the layer 4 payload. The other header information221is the other information contained in the packet's layer 2, layer 3, layer 4, and layer 7 headers. The packet 5-tuple210is often used for generating keys for match tables, discussed below. The packet 5-tuple210can include the source IP address211, the source port212, the protocol213, the destination IP address214, and the destination port215. Those practiced in computer networking protocols realize that the headers carry much more information than that described here, realize that substantially all of the headers are standardized by documents detailing header contents and fields, and know how to obtain those documents. The parser can also be configured to output a packet or payload205. Recalling that the parser202is a programmable element that is configured through the domain-specific language (e.g., P4) to extract information from a packet, the specific contents of the packet or payload205are those contents specified via the domain specific language. For example, the contents of the packet or payload205can be the layer 3 payload. FIG.3is a functional block diagram illustrating an example of a match-action unit301in a match-action pipeline300according to some aspects.FIG.3introduces certain concepts related to match-action units and match-action pipelines and is not intended to be limiting. The match-action units301,302,303of the match-action pipeline300are programmed to perform “match-action” operations in which a match unit performs a lookup using at least a portion of the PHV and an action unit performs an action based on an output from the match unit. A PHV generated at the parser may be passed through each of the match-action units in the match-action pipeline in series and each match-action unit implements a match-action operation. The PHV and/or table entries may be updated in each stage of match-action processing according to the actions specified by the P4 programming. In some instances, a packet may be recirculated through the match-action pipeline, or a portion thereof, for additional processing. Match-action unit 1301receives PHV 1305as an input and outputs PHV 2306. Match-action unit 2302receives PHV 2306as an input and outputs PHV 3307. 
Match-action unit 3303receives PHV 3307as an input and outputs PHV 4308. An expanded view of elements of a match-action unit301of match-action pipeline300is shown. The match-action unit includes a match unit317(also referred to as a “table engine”) that operates on an input PHV305and an action unit314that produces an output PHV306, which may be a modified version of the input PHV305. The match unit317can include key construction logic309, a lookup table310, and selector logic312. The key construction logic309is configured to generate a key from at least one field in the PHV (e.g., 5-tuple, InfiniBand queue pair identifiers, etc.). The lookup table310is populated with key-action pairs, where a key-action pair can include a key (e.g., a lookup key) and corresponding action code315and/or action data316. A P4 lookup table may be viewed as a generalization of traditional switch tables, and can be programmed to implement, for example, routing tables, flow lookup tables, ACLs, and other user-defined table types, including complex multi-variable tables. The key generation and lookup functions constitute the “match” portion of the operation and produce an action that is provided to the action unit via the selector logic. The action unit executes an action over the input data (which may include data313from the PHV) and provides an output that forms at least a portion of the output PHV. For example, the action unit executes action code315on action data316and data313to produce an output that is included in the output PHV306. If no match is found in the lookup table, then a default action311may be implemented. A flow miss is an example of a default action that may be executed when no match is found. The operations of the match-action unit can be programmable by the control plane via P4 and the contents of the lookup table can be managed by the control plane. FIG.4is a functional block diagram of a network appliance430having an application specific integrated circuit (ASIC)401, according to some aspects. A network appliance may be a network interface card (NIC), SmartNIC, switch, SmartSwitch, router, or other device that handles network traffic. If the network appliance is a network interface card (NIC) then the NIC can be installed in a host computer and can act as a network appliance for the host computer and for virtual machines running on the host computer. Such a NIC can have a PCIe connection431for communicating with the host computer. The network appliance430can have an ASIC401, off-ASIC memory432, and ethernet ports433. The off-ASIC memory432can be one of the widely available memory modules or chips such as double data rate 4 (DDR4) synchronous dynamic random-access memory (SDRAM) such that the ASIC has access to many gigabytes of memory on the network appliance430. The ethernet ports433provide physical connectivity to a computer network such as the internet. The ASIC401is a semiconductor chip having many core circuits interconnected by an on-chip communications fabric, sometimes called a network on a chip (NOC)402. NOCs are often implementations of standardized communications fabrics such as the widely used AXI bus. The ASIC's core circuits can include a PCIe interface427, CPU cores403, P4 packet processing pipeline408elements, memory interface415, on ASIC memory (e.g., SRAM)416, service processing offloads417, a packet buffer422, extended packet processing pipeline423, and packet ingress/egress circuits414. 
The PCIe interface427can be used to communicate with a host computer via the PCIe connection431. The CPU cores403can include numerous CPU cores such as CPU 1405, CPU 2406, and CPU 3407. The P4 packet processing pipeline408can include a pipeline ingress circuit413, a parser circuit412, match-action units411, a deparser circuit410, and a pipeline egress circuit409. The service processing offloads417are circuits implementing functions that the ASIC uses so often that the designer has chosen to provide hardware for offloading those functions from the CPUs. The service processing offloads can include a compression circuit418, decompression circuit419, a crypto/PKA circuit420, and a CRC calculation circuit421. The specific core circuits implemented within the non-limiting example of ASIC401have been selected such that the ASIC implements many, perhaps all, of the functionality of an InfiniBand channel adapter, of an NVMe card, and of a network appliance that processes network traffic flows carried by IP (internet protocol) packets. A network device can include precision clocks that output a precise time, clocks that are synchronized to remote authoritative clocks via PTP, and hardware clocks424. A hardware clock may provide a time value (e.g., year/day/hour/minute/second/ . . . ) or may simply be a counter that is incremented by one at regular intervals (e.g., once per clock cycle for a device having a 10 nsec. clock period). Time values obtained from the clocks can be used as timestamps for events such as enqueuing/dequeuing a packet. The P4 packet processing pipeline408is a specialized set of elements for processing network packets such as IP (internet protocol) packets and InfiniBand PDUs (protocol data units). The P4 pipeline can be configured using a domain-specific language such as the P4 domain specific language. As described in the P4 specification, the primary abstractions provided by the P4 language relate to header types, parsers, tables, actions, match-action units, control flow, extern objects, user-defined metadata, and intrinsic metadata. The network appliance430can include a memory432for running Linux or some other operating system and for storing data used by the processes implementing network services. A network appliance that uses a line rate classification circuit for tenant discrimination can store tenant statistics440and ingress queues management code441. The tenant statistics440can indicate values for statistics that are kept for tenants. The statistics kept for a tenant can include network bandwidth used by the tenant, the number of the tenant's packets processed during a time period, the number of the tenant's packets dropped from the ingress queues, and other statistics. The ingress queue management code441can be code that is executable by the CPU cores403for maintaining or rewriting the ingress queue map455based on the tenant statistics440. The ingress queue management code441can be code that is executable by the CPU cores403for managing the ingress queues451. For example, the ingress queues451may be resized when packets are being dropped from one queue while the other has a number of vacant slots that exceeds a threshold. The CPU cores403can be general purpose processor cores, such as ARM processor cores, MIPS processor cores, and/or x86 processor cores, as is known in the field. 
Each CPU core can include a memory interface, an ALU, a register bank, an instruction fetch unit, and an instruction decoder, which are configured to execute instructions independently of the other CPU cores. The CPU cores may be Reduced Instruction Set Computers (RISC) CPU cores that are programmable using a general-purpose programming language such as C. The CPU cores403can also include a bus interface, internal memory, and a memory management unit (MMU) and/or memory protection unit. For example, the CPU cores may include internal cache, e.g., L1 cache and/or L2 cache, and/or may have access to nearby L2 and/or L3 cache. Each CPU core may include core-specific L1 cache, including instruction-cache and data-cache and L2 cache that is specific to each CPU core or shared amongst a small number of CPU cores. L3 cache may also be available to the CPU cores. There may be multiple CPU cores403available for control plane functions and for implementing aspects of a slow data path that includes software implemented packet processing functions. The CPU cores may be used to implement discrete packet processing operations such as L7 applications (e.g., HTTP load balancing, L7 firewalling, and/or L7 telemetry), certain InfiniBand channel adapter functions, flow table insertion or table management events, connection setup/management, multicast group join, deep packet inspection (DPI) (e.g., URL inspection), storage volume management (e.g., NVMe volume setup and/or management), encryption, decryption, compression, and decompression, which may not be readily implementable through a domain-specific language such as P4, in a manner that provides fast path performance as is expected of data plane processing. The packet buffer422can act as a central on-chip packet switch that delivers packets from the ingress/egress MAC456to packet processing elements of the data plane and vice-versa. The packet processing elements can include a slow data path implemented in software and a fast data path implemented by packet processing circuitry408. The packet processing pipeline circuit408can be a specialized circuit or part of a specialized circuit using one or more ASICs or FPGAs to implement programmable packet processing pipelines such as the programmable packet processing pipeline104ofFIG.1. Some embodiments include ASICs or FPGAs implementing a P4 pipeline as a fast data path within the network appliance. The fast data path is called the fast data path because it processes packets faster than a slow data path that can also be implemented within the network appliance. An example of a slow data path is a software implemented data path wherein the CPU cores403and memory432are configured via software to implement a slow data path. A network appliance having two data paths has a fast data path and a slow data path when one of the data paths processes packets faster than the other data path. All memory transactions in the network appliance430, including host memory transactions, on board memory transactions, and register reads/writes may be performed via a coherent interconnect402. In one non-limiting example, the coherent interconnect can be provided by a network on a chip (NOC) “IP core”. Semiconductor chip designers may license and use prequalified IP cores within their designs. Prequalified IP cores may be available from third parties for inclusion in chips produced using certain semiconductor fabrication processes. A number of vendors provide NOC IP cores. 
The NOC may provide cache coherent interconnect between the NOC masters, including the packet processing pipeline circuits408, CPU cores403, memory interface415, and PCIe interface427. The interconnect may distribute memory transactions across a plurality of memory interfaces using a programmable hash algorithm. All traffic targeting the memory may be stored in a NOC cache (e.g., 1 MB cache). The NOC cache may be kept coherent with the CPU core caches. The ingress/egress MAC can use the ethernet ports433to send packets to a computer network and to receive packets from the computer network. When a packet is received, the ingress/egress MAC456can act as an input port and can store the entire packet directly into the packet buffer422as a buffered network packet while also passing the packet to the line rate classification circuit450. The network packet may be passed to the line rate classification circuit450by passing the entire packet or a predetermined portion of the packet. For example, the input port (ingress/egress MAC456) may be configured to pass the first kilobyte or half kilobyte of the packet to the line rate classification circuit. A dedicated communications circuit458can be used to pass the packet from the input port456to the line rate classification circuit450. As discussed above, a NOC402may be used as a communications fabric within the ASIC401. The NOC402, however, provides communications services to many components and may not be immediately available when a packet is received. A communications circuit other than the NOC, the dedicated communications circuit458, may therefore be used. The dedicated communications circuit458may be an on chip bus or a coherent interconnect that directly connects the input port456to the line rate classification circuit450. The line rate classification circuit450can use the packet contents of the packet to produce a data value (e.g., the tenant ID). For example, field values of L2, L3, and L4 header fields can be used to produce the data value. An ingress queue map455can map the data value to a queue identifier that identifies one of the ingress queues451. As such, one of the ingress queues is selected based on the packet contents of the packet. The ingress queues451can include a first ingress queue452, a second ingress queue453, and a third ingress queue454. Implementations having only two ingress queues may use a one bit queue identifier to indicate the selected ingress queue. The buffer scheduler457can select a packet from the ingress queues451for processing by the packet processing pipeline circuit408or other components of the ASIC. FIG.5illustrates packet headers and payloads of packets for network flows500including layer 7 fields according to some aspects. A network flow500can have numerous network packets such as a first packet550, a second packet551, a third packet552, a fourth packet553, and a final packet554with many more packets between the fourth packet553and the final packet554. The term “the packet” or “a packet” may refer to any of the packets in a network flow. Packets can be constructed and interpreted in accordance with the internet protocol suite. The Internet protocol suite is the conceptual model and set of communications protocols used in the Internet and similar computer networks. A packet can be transmitted and received as a raw bit stream over a physical medium at the physical layer, sometimes called layer 1. The packets can be received by a RX MAC111as a raw bit stream or transmitted by TX MAC110as a raw bit stream. 
The link layer is often called layer 2. The protocols of the link layer operate within the scope of the local network connection to which a host is attached, which includes all hosts accessible without traversing a router. The link layer is used to move packets between the interfaces of two different hosts on the same link. The packet has a layer 2 header501, a layer 2 payload502, and a layer 2 frame check sequence (FCS)503. The layer 2 header can contain a source MAC address504, a destination MAC address505, an optional 802.1Q header506, optional VLAN tag information507, and other layer 2 header data508. The input ports111and output ports110of a network appliance101can have MAC addresses. A network appliance101can have a MAC address that is applied to all or some of the ports. Alternatively, a network appliance may have one or more ports that each have their own MAC address. In general, each port can send and receive packets. As such, a port of a network appliance can be configured with a RX MAC111(input port) and a TX MAC110(output port). Ethernet, also known as Institute of Electrical and Electronics Engineers (IEEE) 802.3, is a layer 2 protocol. IEEE 802.11 (WiFi) is another widely used layer 2 protocol. The layer 2 payload502can include a layer 3 packet. The layer 2 FCS503can include a CRC (cyclic redundancy check) calculated from the layer 2 header and layer 2 payload. The layer 2 FCS can be used to verify that the packet has been received without errors. IEEE 802.1Q is the networking standard that supports VLANs on IEEE 802.3 networks. The optional 802.1Q header506and VLAN tag information507are specified by the IEEE 802.1Q standard. The 802.1Q header is the two-octet value 0x8100 that indicates that VLAN tag information507is present. The VLAN tag information includes a 12-bit VLAN identifier. As such, a LAN can be configured to have 4094 VLANs (0x000 and 0xFFF are reserved values). The internet layer, often called layer 3, is the network layer where layer 3 packets can be routed from a first node to a second node across multiple intermediate nodes. The nodes can be network appliances such as network appliance101. Internet protocol (IP) is a commonly used layer 3 protocol. The layer 3 packet can have a layer 3 header510and a layer 3 payload511. The layer 3 header510can have a source IP address512, a destination IP address513, a protocol indicator514, and other layer 3 header data515. As an example, a first node can send an IP packet to a second node via an intermediate node. The IP packet therefore has a source IP address indicating the first node and a destination IP address indicating the second node. The first node makes a routing decision that the IP packet should be sent to the intermediate node. The first node therefore sends the IP packet to the intermediate node in a first layer 2 packet. The first layer 2 packet has a source MAC address504indicating the first node, a destination MAC address505indicating the intermediate node, and has the IP packet as a payload. The intermediate node receives the first layer 2 packet. Based on the destination IP address, the intermediate node determines that the IP packet is to be sent to the second node. The intermediate node sends the IP packet to the second node in a second layer 2 packet having a source MAC address504indicating the intermediate node, a destination MAC address505indicating the second node, and the IP packet as a payload.
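The layer 2 and layer 3 header fields described above can be extracted directly from the raw frame bytes. The following sketch detects the optional 802.1Q header by its 0x8100 value, recovers the 12-bit VLAN identifier, and reads the IPv4 source address, destination address, and protocol indicator; the function name is hypothetical and the sketch assumes an untagged or single-tagged IPv4 frame.

    import struct

    def parse_l2_l3(frame: bytes):
        # Layer 2 header: destination MAC, source MAC, then EtherType or 802.1Q value.
        dst_mac, src_mac = frame[0:6], frame[6:12]
        (ethertype,) = struct.unpack("!H", frame[12:14])
        vlan_id, offset = None, 14
        if ethertype == 0x8100:                      # optional 802.1Q header present
            (tci,) = struct.unpack("!H", frame[14:16])
            vlan_id = tci & 0x0FFF                   # 12-bit VLAN identifier
            (ethertype,) = struct.unpack("!H", frame[16:18])
            offset = 18
        src_ip = dst_ip = protocol = None
        if ethertype == 0x0800:                      # layer 2 payload is an IPv4 packet
            l3 = frame[offset:]
            protocol = l3[9]                         # protocol indicator (e.g., 6 for TCP)
            src_ip, dst_ip = l3[12:16], l3[16:20]
        return dst_mac, src_mac, vlan_id, src_ip, dst_ip, protocol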
The layer 3 payload511can include headers and payloads for higher layers in accordance with higher layer protocols such as transport layer protocols. The transport layer, often called layer 4, can establish basic data channels that applications use for task-specific data exchange and can establish host-to-host connectivity. A layer 4 protocol can be indicated in the layer 3 header510using protocol indicator514. Transmission control protocol (TCP), user datagram protocol (UDP), and internet control message protocol (ICMP) are common layer 4 protocols. TCP is often referred to as TCP/IP. TCP is connection oriented and can provide reliable, ordered, and error-checked delivery of a stream of bytes between applications running on hosts communicating via an IP network. When carrying TCP data, a layer 3 payload511includes a TCP header and a TCP payload. UDP can provide for computer applications to send messages, in this case referred to as datagrams, to other hosts on an IP network using a connectionless model. When carrying UDP data, a layer 3 payload511includes a UDP header and a UDP payload. ICMP is used by network devices, including routers, to send error messages and operational information indicating success or failure when communicating with another IP address. ICMP uses a connectionless model. A layer 4 packet can have a layer 4 header520and a layer 4 payload521. The layer 4 header520can include a source port522, destination port523, layer 4 flags524, and other layer 4 header data525. The source port and the destination port can be integer values used by host computers to deliver packets to application programs configured to listen to and send on those ports. The layer 4 flags524can indicate a status of or action for a network traffic flow. A layer 4 payload521can contain a layer 7 packet. The application layer, often called layer 7, includes the protocols used by most applications for providing user services or exchanging application data over the network connections established by the lower-level protocols. Examples of application layer protocols include Precision Time Protocol (PTP), Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), and Dynamic Host Configuration Protocol (DHCP). Data coded according to application layer protocols can be encapsulated into transport layer protocol units (such as TCP or UDP messages), which in turn use lower layer protocols to effect actual data transfer. A layer 4 payload521may include a layer 7 packet530. A layer 7 packet can have a layer 7 header531and a layer 7 payload532. The illustrated layer 7 packet is an HTTP packet. The layer 7 header531is an HTTP header, and the layer 7 payload532is an HTTP message body. The HTTP message body is illustrated as a hypertext markup language (HTML) document. HTTP is specified in Requests for Comments (RFCs) published by the Internet Engineering Task Force (IETF). IETF RFC 7231 specifies HTTP version 1.1. IETF RFC 7540 specifies HTTP version 2. HTTP version 3 is not yet standardized, but a draft standard has been published by the IETF as “draft-ietf-quic-http-29”. HTML is a “living” standard that is currently maintained by Web Hypertext Application Technology Working Group (WHATWG). The HTTP header can be parsed by a P4 pipeline because it has a well-known format having well known header fields.
Similarly, HTML documents can be parsed, at least in part, by a P4 pipeline to the extent that the HTML document has specific fields, particularly if those specific fields reliably occur at specific locations within the HTML document. Such is often the case when servers consistently respond by providing HTML documents. FIG.6illustrates a block diagram of a match processing unit (MPU)601, also referred to as an action unit, that may be used within the exemplary system ofFIG.4to implement some aspects. The MPU601can have multiple functional units, memories, and a register file. For example, the MPU601may have an instruction fetch unit605, a register file unit606, a communication interface602, arithmetic logic units (ALUs)607and various other functional units. In the illustrated example, the MPU601can have a write port or communication interface602allowing for memory read/write operations. For instance, the communication interface602may support packets written to or read from an external memory or an internal static random-access memory (SRAM). The communication interface602may employ any suitable protocol such as advanced extensible interface (AXI) protocol. AXI is a high-speed/high-end on-chip bus protocol and has channels associated with read, write, address, and write response, which are respectively separated, individually operated, and have transaction properties such as multiple-outstanding address or write data interleaving. The AXI interface602may include features that support unaligned data transfers using byte strobes, burst based transactions with only start address issued, separate address/control and data phases, issuing of multiple outstanding addresses with out of order responses, and easy addition of register stages to provide timing closure. For example, when the MPU executes a table write instruction, the MPU may track which bytes have been written to (a.k.a. dirty bytes) and which remain unchanged. When the table entry is flushed back to the memory, the dirty byte vector may be provided to AXI as a write strobe, allowing multiple writes to safely update a single table data structure as long as they do not write to the same byte. In some cases, dirty bytes in the table need not be contiguous and the MPU may only write back a table if at least one bit in the dirty vector is set. Though packet data is transferred according to the AXI protocol in the packet data communication on-chip interconnect system according to the present exemplary embodiment in the present specification, it can also be applied to a packet data communication on-chip interconnect system operating according to other protocols supporting a lock operation, such as advanced high-performance bus (AHB) protocol or advanced peripheral bus (APB) protocol in addition to the AXI protocol. The MPU601can have an instruction fetch unit605configured to fetch instructions from a memory external to the MPU based on the input table result or at least a portion of the table result. The instruction fetch unit may support branches and/or linear code paths based on table results or a portion of a table result provided by a table engine. In some cases, the table result may comprise table data, key data and/or a start address of a set of instructions/program. Details about the table engine are described later herein. In some embodiments, the instruction fetch unit605can have an instruction cache604for storing one or more programs.
In some cases, the one or more programs may be loaded into the instruction cache604upon receiving the start address of the program provided by the table engine. In some cases, a set of instructions or a program may be stored in a contiguous region of a memory unit, and the contiguous region can be identified by the address. In some cases, the one or more programs may be fetched and loaded from an external memory via the communication interface602. This provides flexibility to allow for executing different programs associated with different types of data using the same processing unit. In an example, when a management PHV is injected into the pipeline, for example to perform administrative table direct memory access (DMA) operations or entry aging functions (i.e., adding timestamps), one of the management MPU programs may be loaded into the instruction cache to execute the management function. The instruction cache604can be implemented using various types of memories such as one or more SRAMs. The one or more programs can be any programs such as P4 programs related to reading table data, building headers, DMA to/from memory, writing to/from memory, and various other actions. The one or more programs can be executed in any match-action unit. The MPU601can have a register file unit606to stage data between the memory and the functional units of the MPU, or between the memory external to the MPU and the functional units of the MPU. The functional units may include, for example, ALUs, meters, counters, adders, shifters, edge detectors, zero detectors, condition code registers, status registers, and the like. In some cases, the register file unit606may comprise a plurality of general-purpose registers (e.g., R0, R1, . . . Rn) which may be initially loaded with metadata values and then later used to store temporary variables within execution of a program until completion of the program. For example, the register file unit606may be used to store SRAM addresses, ternary content addressable memory (TCAM) search values, ALU operands, comparison sources, or action results. The register file unit of a stage may also provide data/program context to the register file of the subsequent stage, as well as making data/program context available to the next stage's execution data path (i.e., the source registers of the next stage's adder, shifter, and the like). In some embodiments, each register of the register file is 64 bits and may be initially loaded with special metadata values such as hash value from table lookup, packet size, PHV timestamp, programmable table constant and the like. In some embodiments, the register file unit606can have a comparator flags unit (e.g., C0, C1, . . . Cn) configured to store comparator flags. The comparator flags can be set by calculation results generated by the ALU, which in turn can be compared with constant values in an encoded instruction to determine a conditional branch instruction. In some embodiments, the MPU can have one-bit comparator flags (e.g., 8 one-bit comparator flags). In practice, an MPU can have any number of comparator flag units, each of which may have any suitable length. The MPU601can have one or more functional units such as the ALU(s)607. An ALU may support arithmetic and logical operations on the values stored in the register file unit606. The results of the ALU operations (e.g., add, subtract, AND, OR, XOR, NOT, AND NOT, shift, and compare) may then be written back to the register file.
The functional units of the MPU may, for example, update or modify fields anywhere in a PHV, write to memory (e.g., table flush), or perform operations that are not related to PHV update. For example, an ALU may be configured to perform calculations on descriptor rings, scatter gather lists (SGLs), and control data structures loaded into the general purpose registers from the host memory. The MPU601can have other functional units such as meters, counters, action insert units, and the like. For example, an ALU may be configured to support P4 compliant meters. A meter is a type of action executable on a table match used to measure data flow rates. A meter may include a number of bands, typically two or three, each of which has a defined maximum data rate and optional burst size. Using a leaky bucket analogy, a meter band is a bucket filled by the packet data rate and drained at a constant allowed data rate. Overflow occurs if the integration of the data rate exceeding the quota is larger than the burst size. Overflowing one band triggers activity into the next band, which presumably allows a higher data rate. In some cases, a field of the packet may be marked as a result of overflowing the base band. This information might be used later to direct the packet to a different queue, where it may be more subject to delay or dropping in case of congestion. The counter may be implemented by the MPU instructions. The MPU can have one or more types of counters for different purposes. For example, the MPU can have performance counters to count MPU stalls. An action insert unit or set of instructions may be configured to push the register file result back to the PHV for header field modifications. The MPU may be capable of locking a table. In some cases, a table being processed by an MPU may be locked or marked as “locked” in the table engine. For example, while an MPU has a table loaded into its register file, the table address may be reported back to the table engine, causing future reads to the same table address to stall until the MPU has released the table lock. For instance, the MPU may release the lock when an explicit table flush instruction is executed, the MPU program ends, or the MPU address is changed. In some cases, an MPU may lock more than one table address, for example, one for the previous table write-back and another address lock for the current MPU program. In some embodiments, a single MPU may be configured to execute instructions of a program until completion of the program. In other embodiments, multiple MPUs may be configured to execute a program. A table result can be distributed to multiple MPUs. The table result may be distributed to multiple MPUs according to an MPU distribution mask configured for the tables. This provides advantages such as preventing data stalls or a decrease in mega packets per second (MPPS) when a program is too long. For example, if a PHV requires four table reads in one stage, then each MPU program may be limited to only eight instructions in order to maintain 100 MPPS when operating at a frequency of 800 MHz, in which scenario multiple MPUs may be desirable. FIG.7illustrates a block diagram of a packet processing circuit701that may be used within the exemplary system ofFIG.4.
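Referring back to the meter described above, its behavior may be approximated by the following two-band sketch that follows the leaky bucket analogy; the rates, burst sizes, and color marking scheme are hypothetical and are not taken from the P4 specification.

    import time

    class MeterBand:
        def __init__(self, rate_bytes_per_s, burst_bytes):
            self.rate = rate_bytes_per_s
            self.burst = burst_bytes
            self.level = 0.0                 # current bucket fill level
            self.last = time.monotonic()

        def offer(self, packet_len):
            now = time.monotonic()
            # Drain the bucket at the allowed rate, then fill it with the packet.
            self.level = max(0.0, self.level - (now - self.last) * self.rate)
            self.last = now
            self.level += packet_len
            # Overflow occurs when the accumulated excess exceeds the burst size.
            return self.level > self.burst

    def meter_packet(packet_len, base_band, excess_band):
        if not base_band.offer(packet_len):
            return "green"                   # within the base band
        if not excess_band.offer(packet_len):
            return "yellow"                  # overflowed the base band only
        return "red"                         # overflowed both bands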
A P4 pipeline can be programmed to provide various features, including, but not limited to, routing, bridging, tunneling, forwarding, network ACLs, L4 firewalls, flow based rate limiting, VLAN tag policies, membership, isolation, multicast and group control, label push/pop operations, L4 load balancing, L4 flow tables for analytics and flow specific processing, DDOS attack detection, mitigation, telemetry data gathering on any packet field or flow state and various others. A programmer or compiler may decompose a packet processing program into a set of dependent or independent table lookup and action processing stages (i.e., match-action) that can be mapped onto the table engine and MPU stages. The match-action pipeline can have a plurality of stages. For example, a packet entering the pipeline may be first parsed by a parser (e.g., parser704) according to the packet header stack specified by a P4 program. This parsed representation of the packet may be referred to as a packet header vector (PHV). The PHV may then be passed through stages (e.g., stages705,710,711,712,713,714) of the match-action pipeline. Each pipeline stage can be configured to match one or more PHV fields to tables and to update the PHV, table entries, or other data according to the actions specified by the P4 program. If the required number of stages exceeds the implemented number of stages, a packet can be recirculated for additional processing. The packet payload may travel in a separate queue or buffer until it is reassembled with its PHV in a deparser715. The deparser715can rewrite the original packet according to the PHV fields which may have been modified in the pipeline. A packet processed by an ingress pipeline may be placed in a packet buffer for scheduling and possible replication. In some cases, once the packet is scheduled and leaves the packet buffer, it may be parsed again to create an egress parsed header vector. The egress parsed header vector may be passed through a P4 egress pipeline in a similar fashion as a packet passing through a P4 ingress pipeline, after which a final deparser operation may be executed before the packet is sent to its destination interface or recirculated for additional processing. The network appliance430ofFIG.4has a P4 pipeline that can be implemented via a packet processing circuit701. A pipeline can have multiple parsers and can have multiple deparsers. The parser can be a P4 compliant programmable parser and the deparser can be a P4 compliant programmable deparser. The parser may be configured to extract packet header fields according to P4 header definitions and place them in a PHV. The parser may select from any fields within the packet and align the information from the selected fields to create the PHV. The deparser can be configured to rewrite the original packet according to an updated PHV. The pipeline MPUs of the match-action units705,710,711,712,713,714can be the same as the MPU601ofFIG.6. Match-action units can have any number of MPUs. The match-action units of a match-action pipeline can all be identical. A table engine706may be configured to support per-stage table match. For example, the table engine706may be configured to hash, lookup, and/or compare keys to table entries. The table engine706may be configured to control the address and size of the table, use PHV fields to generate a lookup key, and find Session Ids or MPU instruction pointers that define the P4 program associated with a table entry. 
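Referring back to the match-action pipeline flow described above, the overall parse, match-action, and deparse sequence, including recirculation when the required number of stages exceeds the implemented number of stages, can be summarized by the following sketch; the parser, stage, and deparser callables and the recirculation flag are hypothetical placeholders.

    # Hypothetical sketch of PHV flow through a match-action pipeline.
    def run_pipeline(packet, parser, stages, deparser, max_passes=2):
        phv = parser(packet)                      # parse the packet into a PHV
        for _ in range(max_passes):
            for stage in stages:
                phv = stage(phv)                  # match on PHV fields and update the PHV
            if not phv.get("recirculate"):
                break                             # all required stages have been applied
            phv["recirculate"] = False            # recirculate for additional processing
        return deparser(packet, phv)              # rewrite the packet according to the PHV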
A table result produced by the table engine can be distributed to the multiple MPUs. The table engine706can be configured to control a table selection. In some cases, upon entering a stage, a PHV is examined to select which table(s) to enable for the arriving PHV. Table selection criteria may be determined based on the information contained in the PHV. In some cases, a match table may be selected based on packet type information related to a packet type associated with the PHV. For instance, the table selection criteria may be based on packet type or protocols (e.g., Internet Protocol version 4 (IPv4), Internet Protocol version 6 (IPv6), or MPLS), or the next table ID as determined by the preceding stage. In some cases, the incoming PHV may be analyzed by the table selection logic, which then generates a table selection key and compares the result using a TCAM to select the active tables. A table selection key may be used to drive table hash generation, table data comparison, and associated data into the MPUs. In some embodiments, the table engine706can have a hash generation unit707. The hash generation unit may be configured to generate a hash result from a PHV input and the hash result may be used to conduct a DMA read from a DRAM or SRAM array. In an example, the input to the hash generation unit may be masked according to which bits in the table selection key contribute to the hash entropy. In some cases, the same mask may be used by the table engine for comparison with the returning SRAM read data. In some instances, the hash result may be scaled according to the table size, then the table base offset can be added to create a memory index. The memory index may be sent to the DRAM or SRAM array to perform the read. The table engine706can have a TCAM control unit708. The TCAM control unit may be configured to allocate memory to store multiple TCAM search tables. In an example, a PHV table selection key may be directed to a TCAM search stage before an SRAM lookup. The TCAM control unit may be configured to allocate TCAMs to individual pipeline stages to prevent TCAM resource conflicts, or to allocate TCAM into multiple search tables within a stage. The TCAM search index results may be forwarded to the table engine for SRAM lookups. The table engine706may be implemented by hardware or circuitry. The table engine may be hardware defined. In some cases, the results of table lookups or table results are provided to the MPU in its register file. A match-action pipeline can have multiple match-action units such as the six units illustrated in the example ofFIG.7. In practice, a match-action pipeline can have any number of match-action units. The match-action units can share a common set of SRAMs and TCAMs702. The SRAMs and TCAMs702may be components of the pipeline. This arrangement may allow the six match-action units to divide match table resources in any suitable proportion, which provides convenience to the compiler and eases the compiler's task of resource mapping. Any suitable number of SRAM resources and any suitable number of TCAM resources may be used by each pipeline. For example, the illustrated pipeline can be coupled to ten SRAM resources and four or eight TCAM resources. In some instances, TCAMs may be fused vertically or horizontally for a wider or deeper search. FIG.8is a high-level diagram that illustrates an input port801using a dedicated communications circuit458to communicate with a line rate classification circuit450according to some aspects.
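As a simplified illustration of the hash generation described above, the following sketch masks the table selection key according to the bits that contribute to hash entropy, scales the hash result according to the table size, and adds the table base offset to create a memory index; the hash function and sizes are hypothetical.

    import zlib

    def memory_index(selection_key: bytes, entropy_mask: bytes,
                     table_size: int, table_base: int) -> int:
        # Mask off key bits that do not contribute to the hash entropy.
        masked = bytes(k & m for k, m in zip(selection_key, entropy_mask))
        hash_result = zlib.crc32(masked)
        # Scale the hash result to the table size, then add the base offset.
        return table_base + (hash_result % table_size)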
As discussed above, the input port801may write a packet into a packet buffer at the same time as it passes the packet to the line rate classification circuit450. The next packet storage location identifier802can indicate a memory location in the packet buffer at which the next packet is to be written. The dedicated communications circuit458is a communications circuit that is separate and distinct from the NOC402. The input port801may use the dedicated communications circuit458to pass the packet to the line rate classification circuit450. The line rate classification circuit450can include a parser803and a match action unit810. The parser803can parse the packet to obtain header field values804from the packet's header fields. The header field values can include one or more of the packet's destination IP address, VXLAN ID, VLAN tag, MPLS tag, etc. Notice that the listed fields are all layer 2 and layer 3 fields. In many deployments, a tenant can be identified using: the destination IP address and the VXLAN ID; the destination IP address and the VLAN tag; the MPLS tag; or some other combination of header fields. For example, within a rack or group of racks, a tenant may be assigned the subnet 192.168.50.0/24 and VLAN tag33. As such, all packets addressed to that subnet and having that VLAN tag are addressed to one of that tenant's VMs. The parsers704used in full featured packet processing pipeline circuits701are typically configurable such that they can be adapted to produce different PHVs for different situations such as parsing layer 7 HTTP fields. The parser803of a line rate classification circuit can be a much simpler circuit, particularly when it only needs to obtain field values from layer 2 headers, layer 3 headers, and perhaps fields inserted between those layers (e.g., MPLS). In fact, the parser803may be non-configurable. Similarly, the match-action unit810may be simplified when all that is required of it is to produce a tenant ID805by calculating a hash value based on the header field values804. The hash value can be produced by a hash generator. The hash generator may produce the hash value using a hash function. Examples of hashing functions can include cyclic redundancy check algorithms, well-known hashing algorithms, and other algorithms. The tenant ID805may be mapped to an ingress queue indicator806using an ingress queue map455. The ingress queue map may be a table and the tenant ID may be an index into the table. The size of the ingress queue map455may be a function of the size of the tenant ID. For example, a 5 bit tenant ID may be used as an index into a 32 entry table. The ingress queue map may be kept small in order to preserve line rate operation and to minimize the amount of chip area used for storing the ingress queue map near the line rate classification circuit. Based on current data center patterns, a 10 bit tenant ID may suffice. Based on data center growth patterns, a 14 bit tenant ID may be required to meet current and future needs. Here, the data center patterns and needs are related to the numbers of tenants in large scale data centers. The ingress queue controller807can track the locations of the head and tail of each ingress queue. A head location can contain the buffer address of the next packet to be read. As such, the buffer scheduler457can read from the heads of the queues. When a packet is received, its location can be written to the tail of one of the ingress buffers.
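The mapping from header field values to an ingress queue indicator described above can be summarized by the following sketch, in which a hash of the parsed header field values is truncated to a 5 bit tenant ID that indexes a 32 entry ingress queue map; the hash function and the map contents are hypothetical.

    import zlib

    NUM_MAP_ENTRIES = 32                        # a 5 bit tenant ID indexes a 32 entry table
    ingress_queue_map = [0] * NUM_MAP_ENTRIES   # each entry holds an ingress queue indicator

    def tenant_id(dst_ip: bytes, vlan_tag: int) -> int:
        # Hash the header field values that identify the tenant, then keep 5 bits.
        descriptor = dst_ip + vlan_tag.to_bytes(2, "big")
        return zlib.crc32(descriptor) & 0x1F

    def select_ingress_queue(dst_ip: bytes, vlan_tag: int) -> int:
        return ingress_queue_map[tenant_id(dst_ip, vlan_tag)]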
For example, a packet received at the input port801can be stored in the packet buffer422at the location indicated by the next packet storage location identifier802. The line rate classifier450can determine a tenant ID805for the packet that is mapped to ingress queue 0808. As such, the next packet storage location identifier802can be written to the tail of ingress queue 0808. FIG.9is a high-level conceptual diagram that illustrates aspects of selecting an ingress queue and placing a packet on the selected ingress queue according to some aspects. The next packet storage location identifier802indicates that the next packet to be received should be written to the next packet storage location904of the packet buffer422. As such, the input port801writes the network packet901into the next packet storage location904. The input port also passes the network packet to the line rate classification circuit450via the dedicated communications circuit458. The tenant ID produced by the line rate classification circuit450is mapped to an ingress queue indicator via the ingress queue map455. The ingress queue controller807can use the ingress queue indicator to select the tail of an ingress queue902as the next queue entry write location903. The next packet storage location904can be written into the next queue entry write location903such that the next queue entry write location903indicates the location at which the network packet901has been stored. The queue controller may perform other operations such as moving the tail when the queue is written to, moving the head when the queue is read from, and causing the packet to be dropped (or written to a lower priority queue) when a queue is full. A packet may be dropped by leaving the next packet storage location unchanged after a packet is written to the packet buffer. As such, the dropped packet is overwritten by a subsequent packet. FIG.10illustrates a buffer scheduler457selecting the next packet that is to be processed by a sub line rate packet processing circuit1004according to some aspects. The buffer scheduler457can implement a scheduling policy1002such as the well-known weighted round robin (WRR) policy, the highest priority first policy, or other scheduling policies. WRR preferentially selects from the highest priority queue and occasionally selects from lower priority queues. Highest priority first selects from a queue only when all higher priority queues are empty. Based on the scheduling policy1002, the buffer scheduler457can access an ingress queue to obtain the location of the next packet to process1003and can provide that location to the sub line rate packet processing circuit1004. The sub line rate packet processing circuit1004may access the packet buffer422and process the network packet at the location of the next packet to process1003. The sub line rate packet processing circuit1004can produce a processed network packet by processing the network packet. The sub line rate packet processing circuit1004can also maintain resource consumption statistics1010for each tenant. The resource consumption statistics1010may indicate the number of network packets that have been processed (in total or over a time period) for each tenant, the network bandwidth consumed by each tenant over a time period, or other resource consumption statistics. The resource consumption statistics can be used to determine that a tenant is to be moved from one of the ingress queues to a different ingress queue.
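The scheduling policy described above may, for example, be realized as a weighted round robin selection over the ingress queues, as in the following sketch; the weights, queue representation, and buffer locations in the usage example are hypothetical.

    from collections import deque

    def wrr_schedule(queues, weights):
        # queues: deques of packet buffer locations, ordered from highest to lowest priority
        # weights: per-queue service counts per round (a higher weight gets more service)
        while True:
            served = False
            for queue, weight in zip(queues, weights):
                for _ in range(weight):
                    if not queue:
                        break
                    served = True
                    yield queue.popleft()    # location of the next packet to process
            if not served:
                return                       # all ingress queues are empty

    # Usage example with hypothetical buffer locations.
    q0, q1 = deque([0x100, 0x140]), deque([0x180])
    for location in wrr_schedule([q0, q1], weights=[3, 1]):
        pass  # the sub line rate packet processing circuit would process each packet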
The sub line rate packet processing circuit can provide tenant services such as firewalling, load balancing, network address translation, packet rewriting, etc. Packets are provided to the sub line rate packet processing circuit via the ingress queues. As such, a network packet is classified and placed on an ingress queue before tenant services are provided. FIG.11is a high-level diagram illustrating the production and use of a tenant ID1107according to some aspects. The line rate classification circuit can include a line rate parser1101that can extract data field values1102from the header fields of a network packet. The data field values1102can be the network packet's source IP address, destination IP address, MPLS tag, virtual network identifier (e.g., VLAN tag, VXLAN ID, etc.), generic routing encapsulation (GRE) data, and other values from the packet header fields. IETF RFCs1701,1702, and2784are directed to the GRE protocol. The data field values1102can be used to produce a tenant descriptor. For example, the destination IP address (32 bits), VLAN tag (12 bits), and destination port (16 bits) can be concatenated to produce a 60 bit tenant descriptor1103. The tenant descriptor is too large to be used as an index into a table because a 60 bit field can take approximately 10^18 values. The tenant descriptor may therefore be run through a hash function1104. Those practiced in computer science are familiar with hash functions and understand the desirable properties of and the uses of hash functions. The hash function can be a cyclic redundancy check function, folding hash code function, mid-squares hash code function, division hashing function, algebraic coding function, or some other hash function. The hash function output1105can include unused bits1106and the tenant ID1107. For example, the hash function can produce a 12 bit output, of which the most significant 7 bits are unused and the least significant 5 bits are the tenant ID. The tenant ID1107can be used as an index into the ingress queue map1108. Each entry into the ingress queue map can include an ingress queue indicator. As such, the ingress queue map can map a tenant ID to an ingress queue indicator. The ingress queue map1108can be modified over time such that a specific tenant ID is mapped to different ingress queues at different times. The control plane can use the resource consumption statistics1010to determine that a tenant ID should be remapped to a different ingress queue. Tenant ingress policies1111can specify the conditions for selecting an ingress queue for a tenant. For example, a tenant's minimum bandwidth guarantee can be specified in a service level agreement (SLA). That tenant can be moved to the highest priority ingress queue whenever that tenant has been using less than the guaranteed bandwidth. A tenant may be moved to the lowest priority queue when that tenant has been using more bandwidth than a specified maximum value. That maximum value can be, for example, a maximum specified in the SLA, the minimum guaranteed bandwidth plus an allowable excess amount, or some other value. An example of minimum guaranteed bandwidth plus an allowable excess amount is: 500 Mbps (minimum) plus 50 Mbps (allowable excess), which equals 550 Mbps. Many tenants will remain assigned to the ingress queue that they are currently assigned to. A new ingress queue map can be determined by determining the tenant ID for each tenant and setting the values stored in the new ingress queue map as indicated by the resource consumption statistics1010and the tenant ingress policies1111.
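As a worked illustration of the tenant descriptor and hash output described above, the following sketch concatenates a 32 bit destination IP address, a 12 bit VLAN tag, and a 16 bit destination port into a 60 bit tenant descriptor, hashes the descriptor, keeps a 12 bit hash output, and uses the least significant 5 bits as the tenant ID; the choice of hash function is hypothetical.

    import zlib

    def tenant_descriptor(dst_ip: int, vlan_tag: int, dst_port: int) -> int:
        # Concatenate destination IP (32 bits), VLAN tag (12 bits), and
        # destination port (16 bits) into a 60 bit tenant descriptor.
        return (dst_ip << 28) | ((vlan_tag & 0xFFF) << 16) | (dst_port & 0xFFFF)

    def tenant_id_from_descriptor(descriptor: int) -> int:
        # Produce a 12 bit hash output; the most significant 7 bits are unused
        # and the least significant 5 bits are the tenant ID.
        hash_output = zlib.crc32(descriptor.to_bytes(8, "big")) & 0xFFF
        return hash_output & 0x1F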
In some instances, tenant ingress policy reconciliation1112may be necessary because two tenants may have the same tenant ID. Tenants having the same tenant ID will use the same ingress queue because the ingress queue map will map that tenant ID to one ingress queue. The tenant ingress policy reconciliation1112may indicate that tenants having the same tenant ID are to be assigned to the highest priority queue that either tenant is assigned to. FIG.12is a high-level block diagram illustrating network appliances and servers in a server rack1201according to some aspects. Computing equipment is often mounted in racks and a data center can have many rows of racks. A rack can have a top of rack (TOR) switch1202. The TOR switch is a network appliance that is tasked with carrying network traffic between the devices inside the rack and the world outside the rack. InFIG.12, the TOR switch is network appliance1. The devices inside the rack can include servers such as the first server, the second server, the Nth server, and all the servers between the second and the Nth. Each server can include a service card and can run a number of tenant workloads. The service cards can be network appliances such as NICs, smart NICs, or distributed service cards. The server rack1201can have an internal network1206that carries network traffic between the service cards and the TOR switch1202. The workloads can be server processes and VMs running on behalf of the tenants. The service cards provide network services for the tenant workloads. The service cards can be network appliances such as the network appliance430illustrated inFIG.4. The service cards can thereby have numerous ingress queues and can use different ingress queues for different tenants. The first server1203is running tenant workloads for tenant A, tenant D, tenant G, and tenant J. The tenant ID of tenant A is “1”. The tenant ID of tenant D is “5”. The tenant ID of tenant G is “0”. The tenant ID of tenant J is “1”. The service card1205in the first server1203provides network service and connectivity to the tenant workloads1204. The service card can use different ingress queues for the different tenants. However, tenant A and tenant J will use the same ingress queue because they have the same tenant ID. FIG.13illustrates a high-level flow diagram of a process that updates per tenant resource consumption statistics1010according to some aspects. The process illustrated inFIG.13can be performed by the sub line rate packet processing circuit. After the start, a network packet is received at block1301. For example, the sub line rate packet processing circuit can receive a network packet after the buffer scheduler provides the location of the network packet to the sub line rate packet processing circuit. At block1302, the network packet is processed. At block1303, the tenant to whom the network packet has been sent is identified. At block1304, the per tenant resource consumption statistics can be updated for the tenant identified at block1303. FIG.14illustrates a per tenant ingress queue map1401in accordance with some aspects. The per tenant ingress queue map1401can be maintained by the control plane and used to create new ingress queue maps such as new ingress queue map1113. The per tenant ingress queue map1401can be organized as a table that associates tenant names, tenant IDs, and ingress queue indicators. 
FIG.15is a high-level flow diagram illustrating an exemplary process1500that uses a per tenant ingress queue map for maintaining an ingress queue map according to some aspects. After the start, the per tenant ingress queue map is updated at block1501. As discussed above, the resource consumption statistics1010and the tenant ingress policies1111can be used to assign each tenant to an ingress queue. At block1502, the entries in the per tenant ingress queue map can be reconciled. At block1503, the per tenant ingress queue map can be used to produce a new ingress queue map. At block1504, the new ingress queue map can be written into the line rate classification circuit, at which time the new ingress queue map becomes the current ingress queue map455. FIG.16is a high-level flow diagram illustrating an exemplary process1600that updates a per tenant ingress queue map in preparation for writing a new ingress queue map to a line rate classification circuit according to some aspects. After the start, at block1601the process can set the current tenant to the first tenant in the per tenant ingress queue map. At decision block1602, the process can use the resource consumption statistics1010to determine if the current tenant resource usage is above a high threshold. The high threshold can be the specified maximum value for a statistic such as the bandwidth used, number of packets processed, etc. If the current tenant resource usage is below the high threshold, then the process can proceed to decision block1604, otherwise the process can proceed to block1603. At block1603, the process can move the tenant to a lower priority ingress queue by changing the ingress queue indicator associated with the tenant in the per tenant ingress queue map. At decision block1604, the process can use the resource consumption statistics1010to determine if the current tenant resource usage is below a low threshold. The low threshold can be the resource usage level guaranteed by a service level agreement. If the current tenant resource usage is below the low threshold, then the process can proceed to block1605, otherwise the process can proceed to decision block1606. At block1605, the process can move the tenant to a higher priority ingress queue by changing the ingress queue indicator associated with the tenant in the per tenant ingress queue map. At decision block1606, the process can determine if the current tenant is the last tenant in the per tenant ingress queue map1401. If the current tenant is the last tenant in the per tenant ingress queue map, the process is done, otherwise the process can proceed to block1607. At block1607, the process can set the current tenant to the next tenant in the per tenant ingress queue map before looping back to decision block1602. FIG.17is a high-level flow diagram illustrating an exemplary process1700that performs policy reconciliation when two tenants have the same tenant ID according to some aspects. After the start, at block1701the process can initialize a new ingress queue map. At block1702, the process can set the current tenant to the first tenant in the per tenant ingress queue map. When setting the current tenant, the process also sets the current tenant ID and the current tenant ingress queue indicator to the tenant ID and ingress queue indicator associated with that tenant in the per tenant ingress queue map. At decision block1703, the process can determine if the new ingress queue map has a null entry for the current tenant ID.
Here, the new ingress queue map is checked to determine if an ingress queue indicator has already been written into the location indexed by the tenant ID. A null entry indicates no value has yet been written. If the new ingress queue map has a null entry for the current tenant ID, then the process can proceed to block1704, otherwise the process can proceed to decision block1705. At decision block1705, the process can determine if the current tenant ingress queue indicator indicates a higher priority ingress queue than the entry for the current tenant ID in the new ingress queue map. If not, the process can proceed to decision block1706, otherwise the process can proceed to block1704. At block1704, the process can write the current tenant ingress queue indicator into the new ingress queue map. At decision block1706, the process can determine if the current tenant is the last tenant in the per tenant ingress queue map. If the current tenant is the last tenant in the per tenant ingress queue map, the process can proceed to block1708, otherwise the process can proceed to block1707. At block1707, the process can set the current tenant to the next tenant in the per tenant ingress queue map before looping back to decision block1703. When setting the current tenant, the process also sets the current tenant ID and the current tenant ingress queue indicator to the tenant ID and ingress queue indicator associated with that tenant in the per tenant ingress queue map. At block1708, the process can set null entries in the new ingress queue map to indicate the lowest priority ingress queue. Alternatively, at block1708, the process can set null entries in the new ingress queue map to indicate the highest priority ingress queue or some other predetermined ingress queue. At block1709, the process can write the new ingress queue map to the line rate classification circuit. FIG.18is a high-level block diagram illustrating a packet header parser finite state machine (FSM) stage1801according to some aspects. In many applications, match-action units, such as the match action unit810ofFIG.8, are too slow to parse network packets or network packet headers at line speed. Those practiced in the art of packet parsing circuits are aware of finite state machine circuits that can parse network packets at line speed. Such an FSM can obtain tenant descriptors from packets at line speed. The FSM can include a number of FSM stages1801. The inputs to the FSM stage1801can be a network packet, a packet offset, a previous FSM state, a current tenant descriptor, and a tenant descriptor offset. The packet offset can indicate the location of the next packet field that is to be examined by an FSM stage. The FSM state can include data that may be useful for the next FSM stage. The current tenant descriptor can include the header field values that have been determined thus far for producing a tenant descriptor. The tenant descriptor offset can indicate the location in the current tenant descriptor at which the next header field value is to be copied into the current tenant descriptor. The locate and extract blocks can extract a field value from the packet that is located at the packet offset. The compute block can use the field value that has been extracted and can output the next packet offset for use by the next FSM stage. The compute block can also provide the field value and the length of the field value to a tenant descriptor builder.
The tenant descriptor builder can assemble a tenant descriptor by inserting field values into the current tenant descriptor at the tenant descriptor offset. The tenant descriptor builder can produce a next tenant descriptor and a next tenant descriptor offset for use by the next FSM stage. The network packet can pass from one FSM stage to the next. FIG.19is a high-level block diagram illustrating a line rate classification circuit1910that includes a packet header parser FSM1901according to some aspects. The packet header parser FSM1901can include numerous packet header parser FSM stages such as packet header parser FSM stage 01902, packet header parser FSM stage 11903, packet header parser FSM stage 21904, and packet header parser FSM stage 31905. The last stage of the packet header parser FSM1901can output a tenant descriptor1103. A hash function1104can use the tenant descriptor1103to produce a tenant ID1107that is output by the line rate classification circuit1910. An ingress queue map1108can be used to determine an ingress queue indicator806for the tenant ID1107. FIG.20is a high-level flow diagram illustrating a method2000that uses a line rate packet classifier for presorting network packets onto ingress queues. The packets are presorted by placing them on the ingress queues before they are processed by the sub line rate packet processing circuit. At block2001, the process can receive a network packet at an input port operating at a line rate. At block2002, the process can process the network packet with a line rate classifier circuit that selects an ingress queue that is included in a plurality of ingress queues. At block2003, the process can place the network packet on the ingress queue. At block2004, the process can store the network packet in a packet buffer as one of a plurality of buffered network packets. At block2005, the process can use a sub line rate packet processing circuit to process the network packet after the network packet is stored in the packet buffer, wherein the sub line rate packet processing circuit is configured to process the buffered network packets that are selected from the ingress queues, the line rate classifier circuit and the sub line rate packet processing circuit operate concurrently, and the line rate classifier circuit is configured to process the network packets at the line rate. Aspects described above can be ultimately implemented in a network appliance that includes physical circuits that implement digital data processing, storage, and communications. The network appliance can include processing circuits, ROM, RAM, CAM, and at least one interface (interface(s)). The CPU cores described above are implemented in processing circuits and memory that is integrated into the same integrated circuit (IC) device as ASIC circuits and memory that are used to implement the programmable packet processing pipeline. For example, the CPU cores and ASIC circuits are fabricated on the same semiconductor substrate to form a System-on-Chip (SoC). The network appliance may be embodied as a single IC device (e.g., fabricated on a single substrate) or the network appliance may be embodied as a system that includes multiple IC devices connected by, for example, a printed circuit board (PCB). The interfaces may include network interfaces (e.g., Ethernet interfaces and/or InfiniBand interfaces) and/or PCI Express (PCIe) interfaces. The interfaces may also include other management and control interfaces such as I2C, general purpose IOs, USB, UART, SPI, and eMMC. 
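Referring back to the exemplary process 1700 of FIG. 17, the reconciliation of the per tenant ingress queue map into a new ingress queue map may be sketched as follows; the data structures and the convention that a lower indicator value denotes a higher priority ingress queue are assumptions made only for this illustration.

    def build_new_ingress_queue_map(per_tenant_map, num_entries=32,
                                    lowest_priority_queue=3):
        # per_tenant_map: rows of (tenant name, tenant ID, ingress queue indicator).
        # A lower queue indicator value is assumed to denote a higher priority queue.
        new_map = [None] * num_entries               # null entries (block 1701)
        for _name, tenant_id, queue_indicator in per_tenant_map:
            current = new_map[tenant_id]
            # Write the indicator when the entry is null (blocks 1703 and 1704), or when
            # this tenant's queue has a higher priority than the entry already written
            # (blocks 1705 and 1704).
            if current is None or queue_indicator < current:
                new_map[tenant_id] = queue_indicator
        # Set remaining null entries to the lowest priority ingress queue (block 1708).
        return [lowest_priority_queue if q is None else q for q in new_map]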
As used herein the terms “packet” and “frame” may be used interchangeably to refer to a protocol data unit (PDU) that includes a header portion and a payload portion and that is communicated via a network protocol or protocols. A PDU may be referred to as a “frame” in the context of Layer 2 (the data link layer) and as a “packet” in the context of Layer 3 (the network layer). For reference, according to the P4 specification: a network packet is a formatted unit of data carried by a packet-switched network; a packet header is formatted data at the beginning of a packet in which a given packet may contain a sequence of packet headers representing different network protocols; a packet payload is packet data that follows the packet headers; a packet-processing system is a data-processing system designed for processing network packets, which, in general, implement control plane and data plane algorithms; and a target is a packet-processing system capable of executing a P4 program. Although the operations of the method(s) herein are shown and described in a particular order, the order of the operations of each method may be altered so that certain operations may be performed in an inverse order or so that certain operations may be performed, at least in part, concurrently with other operations. Instructions or sub-operations of distinct operations may be implemented in an intermittent and/or alternating manner. It should also be noted that at least some of the operations for the methods described herein may be implemented using software instructions stored on a computer usable storage medium for execution by a computer. As an example, an embodiment of a computer program product includes a computer usable storage medium to store a computer readable program. The computer-usable or computer-readable storage medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device). Examples of non-transitory computer-usable and computer-readable storage media include a semiconductor or solid-state memory, magnetic tape, a removable computer diskette, a random-access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, and an optical disk. Current examples of optical disks include a compact disk with read only memory (CD-ROM), a compact disk with read/write (CD-R/W), and a digital video disk (DVD). Although specific embodiments of the invention have been described and illustrated, the invention is not to be limited to the specific forms or arrangements of parts so described and illustrated. The scope of the invention is to be defined by the claims appended hereto and their equivalents.
11863468
DETAILED DESCRIPTION OF EMBODIMENTS Embodiments that are described herein provide improved methods and systems for Ethernet communication, in which an Ethernet PHY device controls a GPIO port in a peer Ethernet PHY device via the exchange of management frames, e.g., Operations, Administration and Maintenance (OAM) frames. In some embodiments, an Ethernet PHY device communicates with a peer PHY device over a physical link, e.g., a twisted-pair link. Among other functions, the PHY device is configured to generate layer-1 frames that encode layer-2 Ethernet frames in a way that is suitable for transmission over the physical link (physical medium) to the peer PHY device. The peer PHY device (also referred to as a “link partner”) is associated with a General-Purpose Input-Output (GPIO) port (e.g., comprises an integral GPIO port or is connected locally to a host that comprises a GPIO port). Typically, encoding of layer-2 frames to produce a stream of layer-1 frames is not a one-to-one translation process. The encoding process is typically performed in accordance with a suitable algorithm specified for the applicable link-layer technology. In assembling the stream of layer-1 frames, the PHY device is configured to encode, among the layer-2 Ethernet frames, one or more Ethernet OAM frames that are configured to control the GPIO port associated with the peer PHY device (sometimes referred to as a link partner). The PHY device is configured to transmit the layer-1 frames, which comprise the layer-2 frames and the OAM frames, to the peer PHY device over the physical link. Ethernet OAM, as specified, for example, in the IEEE 802.3bp standard, cited above, supports a built-in acknowledgement mechanism that uses OAM frames in the opposite direction. In some embodiments, the PHY device is configured to receive, from the peer PHY device, one or more OAM verifications acknowledging that the one or more OAM frames were received successfully at the peer Ethernet PHY device. In this manner, a highly reliable mechanism for controlling remote GPIO ports is implemented entirely within the PHY link layer, i.e., within OSI layer-1. It is possible in principle to control a remote GPIO port using conventional Ethernet frames that are dedicated for GPIO control. This solution, however, affects the Medium Access Control (MAC) layer and possibly upper layers, is complicated to implement, and incurs a large delay. The disclosed techniques, on the other hand, are implemented entirely within the PHY device without involving upper layers, are simpler to implement and incur minimal latency overhead. Moreover, controlling a remote GPIO port using dedicated Ethernet frames typically requires the use of additional microcontrollers and software stacks, and additional scheduling and interleaving of such Ethernet frames among other data frames. This added complexity introduces unreliability and risk of missed delivery. Furthermore, the disclosed techniques provide a suitable replacement for Controller Area Networks (CAN), which are used in some conventional automotive networks as redundant communication means for safety messages between Electronic Control Units (ECUs). As will be explained and demonstrated below, the reliability of the disclosed techniques is not dependent on other systems or controllers; in some typical implementations, the disclosed solution is fully self-contained in a single Integrated Circuit (IC).
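As a purely conceptual illustration of the transmit-side behavior described above, the following C sketch interleaves a pending GPIO-control OAM frame among layer-2 Ethernet frames while producing the layer-1 stream, and relies on the peer's OAM acknowledgement for reliability. The types and helper functions are assumptions made for the example and do not correspond to any particular PHY device API; a real device would implement this in hardware state machines rather than software.

    /* Conceptual sketch only: interleave a pending GPIO-control OAM frame
     * among layer-2 Ethernet frames while producing the layer-1 stream, and
     * rely on the peer's OAM acknowledgement for reliability. The types and
     * helpers below are assumptions for illustration, not a real PHY API. */
    #include <stdbool.h>
    #include <stddef.h>

    struct l2_frame;    /* layer-2 Ethernet frame awaiting transmission   */
    struct oam_frame;   /* management (OAM) frame carrying a GPIO command */

    /* Hypothetical low-level helpers assumed to exist for the example. */
    extern bool phy_encode_l2(const struct l2_frame *f);    /* -> layer-1 symbols */
    extern bool phy_encode_oam(const struct oam_frame *f);  /* -> layer-1 symbols */
    extern bool phy_oam_ack_received(void);                 /* peer acknowledged  */

    /* Transmit the next layer-2 frame; when a GPIO OAM frame is pending,
     * encode it into the layer-1 stream between two layer-2 frames. */
    static bool phy_tx_step(const struct l2_frame *next_l2,
                            const struct oam_frame *pending_gpio_oam)
    {
        if (pending_gpio_oam != NULL) {
            if (!phy_encode_oam(pending_gpio_oam))
                return false;
            while (!phy_oam_ack_received())
                ;   /* hardware would track this in a state machine, not a busy wait */
        }
        return phy_encode_l2(next_l2);
    }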
Some of the embodiments disclosed herein are described in the context of automotive applications, e.g., systems in which various automotive peripherals communicate with Electronic Control Units (ECUs), and ECUs communicate with one another. Other example embodiments are described in the context of avionic communication systems. Some example use-cases pertain to safety control features in automotive and avionic communication systems. These embodiments and use-cases, however, are provided solely by way of example. The disclosed techniques are equally applicable in other applications, for example in industrial and/or smart-home networks. For the sake of clarity, the embodiments described herein refer mainly to Ethernet OAM frames. Generally, however, the disclosed techniques are in no way limited to OAM frames, and can be implemented using any suitable type of management frames. In some embodiments, the OAM frames are composed by a host coupled to the PHY device. In these embodiments the PHY device receives the OAM frames from the host, and encodes them as part of the stream of layer-1 frames. In alternative embodiments, the OAM frames are composed internally in the PHY device. In an example embodiment, the PHY device receives, e.g., via a register, parameters for controlling the remote GPIO, and composes an OAM frame based on the received parameters. FIG.1is a block diagram that schematically illustrates an automotive communication system20, in accordance with an embodiment that is described herein. System20is typically installed in a vehicle, e.g., a passenger car. System20comprises one or more automotive Electronic Control Units (ECUs)24and a main automotive ECU28, which control a plurality of Automotive peripherals32using Ethernet communications. Among other tasks, ECUs24and main ECU28verify the safety status of the various system components by sending safety status messages embedded in Ethernet OAM frames, and controlling remote GPIO ports based on the embedded safety status messages, as will be explained in detail below. In the present example, system20operates in accordance with the IEEE 802.3bp standard, cited above. Alternatively, however, any other suitable Ethernet standard, or other suitable network communication protocol, can be used. The disclosed techniques can be used for modifying any suitable physical layer to carry extra information for the purpose of remote GPIO control. In various embodiments, automotive peripherals32may comprise, for example, lidar, radar, camera, stereo amplifier, telematics radio, body ECU, powertrain, and/or any other suitable type of peripheral. For purposes of communication and control, each peripheral32comprises a host controller, also referred to as a system-on-Chip (SoC)36, and a PHY device40. Each peripheral32is connected to one of ECUs24by a respective physical link. Each ECU24comprises multiple PHY devices44that communicate with peripherals32. Each PHY device44comprises Physical Coding Sublayer (PCS) circuitry48that carries out PCS functions of the PHY device, and Physical Media Attachment sublayer (PMA) circuitry52that carries out PMA functions of the PHY device. In an embodiment, each ECU24further comprises an automotive Ethernet network switch56that connects peripherals32to main ECU28, Switch56comprises multiple ports57. Each PHY device48is connected to a respective port57of switch56. 
Each ECU24further comprises a safety microcontroller (μC)68that verifies the safety status of ECU24and/or of the associated peripherals32, and sends corresponding safety messages to main ECU28. Safety μC68comprises a status output circuit72that outputs the safety status messages as one or more logic signals (denoted “STATUS” in the figure). In the present example, each ECU24comprises a PHY device58, which is responsible for communication with main ECU28. PHY device58is connected to one of ports57of switch56, and to a physical link62that connects ECU24with main ECU28. In the present example, link62comprises a twisted-pair link in accordance with IEEE 802.3bp. As part of the communication with main ECU28, PHY device58transmits conventional layer-2 Ethernet frames, and also transmits the safety status messages generated by safety μC68embedded in Ethernet OAM frames. Typically, PHY device58combines the layer-2 Ethernet frames with the OAM frames in accordance with the applicable link-layer algorithm, to form layer-1 frames for transmission over the physical medium. In the case of IEEE 802.3bp, the physical medium comprises an Unshielded Twisted Pair (UTP). PHY device58of ECU24comprises PCS circuitry48, PMA circuitry60, and GPIO control circuitry64. GPIO control circuitry64receives the (one or more) logic signals that carry the safety status messages from safety μC68, using (one or more) respective GPIO input ports. PHY device58embeds the safety status messages in Ethernet OAM frames, and inserts the OAM frames into the stream of Ethernet layer-1 frames that are transmitted over link62to main ECU28. This functionality, and the internal structure of PHY device58, are addressed in greater detail inFIG.2below. Main ECU28comprises a network switch56, multiple PHY devices58connected to ports57of network switch56, and a main safety μC76. Each PHY device58is connected to a respective ECU24over a respective physical link62. In each PHY device58of main ECU28, PMA circuitry60and PCS circuitry48transfer conventional Ethernet traffic between ECU24and switch56of main ECU28. In addition, in each PHY device58of main ECU28, PCS circuitry48extracts the OAM frames from the stream of layer-1 frames received from ECU24, extracts the safety status messages from the OAM frames, and sends the safety status messages to GPIO control circuitry64. GPIO control circuitry64sets one or more GPIO outputs according to the safety status messages. Main safety μC76comprises a status input circuit80that receives the GPIO outputs from PHY devices58. Main safety μC76may initiate any suitable safety-related action in response to the safety status messages received from the various safety μCs68of ECUs24. FIG.2is a block diagram that schematically illustrates the internal structure of Ethernet PHY device58of system20, in accordance with an embodiment that is described herein. Typically, PHY devices58in ECUs24and in main ECU28have a similar internal structure. In the present example, PCS circuitry48comprises a PCS transmission (TX) circuit100, a PCS reception (RX) circuit104, and a PCS OAM circuit108. PCS TX circuit100receives Ethernet layer-2 frames for transmission (e.g., from switch56), encodes them so as to produce a stream of layer-1 frames, and provides the layer-1 frames to PMA circuit60. PMA circuit60, which is also referred to herein as a PHY interface, transmits the layer-1 frames over physical link62. In the opposite direction, PMA circuit60receives layer-1 frames over physical link62and delivers them to PCS RX circuit104.
PCS RX circuit104decodes the layer-1 frames so as to produce layer-2 frames, and outputs the layer-2 frames (e.g., to switch56). In some embodiments, PHY device58further comprises a PHY controller112, a link monitor116, and GPIO control circuitry64. PHY controller112is connected to switch56(e.g., to a Medium Access Control (MAC) device in switch56) over a Serial Management Interface (SMI), e.g., for receiving control and configuration commands from the switch. Switch56may, for example, initialize and monitor PHY device58via the SMI. Link monitor116is configured to monitor the quality of communication over physical link62. In some embodiments, PHY controller112is also responsible for transmitting and receiving safety status messages. When PHY device58operates in one of ECUs24, PHY controller112receives (from safety μC68, via GPIO control circuitry64) logic values that are indicative of safety status messages to be reported to main ECU28. PHY controller112sends the logic values to PCS OAM circuit108. PCS OAM circuit108constructs one or more OAM frames that carry the safety status messages, and includes the OAM frames in the stream of layer-1 frames that are produced by PCS TX circuit100. The layer-1 frames, which have been encoded from the layer-2 Ethernet frames and from the OAM frames, are transmitted by PMA circuitry60over link62. When PHY device58operates in main ECU28, PMA circuitry60receives a stream of layer-1 frames over link62, which encode layer-2 Ethernet frames and also one or more OAM frames that carry safety status messages, from one of ECUs24. PMA circuitry60delivers the stream of layer-1 frames to PCS RX circuit104. PCS OAM circuit108extracts the OAM frames from the stream, extracts the safety status messages from the OAM frames, and sends the corresponding logic values to PHY controller112. PHY controller112sends the logic values to GPIO control circuitry64, which in turn sets the appropriate GPIO outputs. The GPIO outputs are provided in this manner as inputs to status input circuit80of main safety μC76. In the embodiments described herein, the logic values (which are exchanged between safety μC68and main safety μC76and are indicative of safety status messages) are static, discrete logic values. The disclosed techniques, however, are not limited to this sort of implementation. In alternative embodiments, any of the logic value may change over time, and thus, for example, form temporal patterns of logic values that carry multiple bits of information. By varying a logic value on a certain GPIO port (pin) over time, it is possible to implement a serial data link that transfers data between PHY devices in accordance with any suitable protocol. By varying multiple logic values corresponding to multiple respective GPIO ports (pins), the disclosed techniques can be used to implement a parallel data link between the PHY devices. Additionally or alternatively, the logic values need not necessarily be discrete, digital value, and may comprise analog waveforms that carry information in any suitable way. In the present context, PCS circuitry48(including PCS TX circuit100, PCS RX circuit104and PCS OAM circuit108), PHY controller112, GPIO control circuit64and link monitor116are referred to jointly as “PHY circuitry.” In alternative embodiments, the PHY circuitry may be implemented in any other suitable manner. FIG.3is a diagram showing an OAM frame that carries a GPIO command, e.g., a safety status message, in accordance with an embodiment that is described herein. 
In an embodiment, a PHY device (e.g., PHY device58ofFIGS.1and2) uses OAM frames of this sort for controlling remote GPIO ports of a link partner (e.g., a remote PHY device58). The structure of the example OAM frame ofFIG.3is compliant with section 97.3.8 of the IEEE 802.3bp standard, cited above. The disclosed techniques, however, are in no way limited to any specific format, and any other suitable format can be used. Moreover, as noted above, the disclosed techniques can be implemented using other suitable kinds of management frames, not necessarily OAM frames. As seen in the figure, the OAM frame comprises twelve symbols (denoted Symbol_0through Symbol_11), each symbol comprising nine bits (denoted D0through D8). Symbol_0, bits D4-D7of Symbol_1, and Symbol_10and Symbol_11, as well as bit D8of each symbol, are defined as in IEEE 802.3bp. In some embodiments, a “Message number” field120, which occupies bits D0-D3of Symbol_1, is set to a four-bit value that is dedicated to indicate a GPIO message. In various embodiments, any or all of Symbol_2through Symbol_10may serve as a space124for the actual GPIO command or commands. In one embodiment, each symbol is used for specifying GPIO commands for a respective different GPIO port (input or output) of the remote PHY device. In one embodiment, “Message number” field120in Symbol_1of the OAM frame is set to 4b′1111, although any other suitable value can be used. The GPIO command is embedded in Symbol_8and Symbol_9of the OAM frame, as follows:

TABLE 1
Example GPIO command format (in OAM frame)

              D7           D7, D6               D5-D3                  D2-D0
Symbol_8      Odd Parity   GPIO Command<2:0>    GPIO Direction<2:0>    GPIO Value<2:0>
Symbol_9      Odd Parity   8b′00000001 (Remote GPIO)

As seen, Symbol_9is set to 0x01 to indicate that the OAM frame carries a remote GPIO command. The GPIO command in Symbol_8comprises the following fields:
GPIO command code (0x00 indicating GPIO read request, 0x01 indicating GPIO write request, 0x02 indicating GPIO read response, i.e., acknowledgement, and 0x03 a reserved value).
GPIO direction bit (“1”=output, “0”=input).
GPIO value (“1”=high, “0”=low).
Alternatively, any other suitable format and/or any other suitable numerical values can be used for embedding a GPIO command in an OAM frame. In an embodiment, when the disclosed remote GPIO feature is enabled in both link partners (PHY devices), the feature begins working upon link-up. During operation, the pin status and value of each GPIO port (pin) is typically mirrored both to GPIO control circuit64(of the remote PHY device) and to the remote GPIO register. In an embodiment, following the example of Table 1 above, the GPIO register at the remote PHY device has the following format:

TABLE 2
Example GPIO register format

D15        D14   D13   D12   D11   D10   D9   D8
Enable     Reserved

D7         D6    D5-D3                  D2-D0
Execute    R/W   GPIO Direction<2:0>    GPIO Value<2:0>

In this example, the bits of the GPIO register are defined as follows:
Bit<15>: Enable Remote GPIO Control Feature.
Bit<14:8>: Reserved and set to all 0s.
Bit<7>: Set to 1 to execute GPIO Read/Write. When bit 6 (defined below) indicates GPIO write, the PHY device automatically sends a “GPIO Write Request” to the link partner with the GPIO direction and value in bit <5:0>. When bit 6 indicates GPIO read, the PHY device automatically sends a “GPIO Read Request” and updates bit <5:0> with the received GPIO direction and value in the subsequently received “GPIO Read response” OAM frame.
This bit is automatically cleared when the GPIO write is acknowledged by the link partner, or when a GPIO read response is received for a GPIO read request.
Bit<6>: 0=Read, 1=Write
Bit<5:3>: GPIO Direction<2:0>
Bit<2:0>: GPIO Value<2:0>
In an embodiment, the update rate of the GPIO information is set to match the OAM period (3.6 μs*12=43.2 μs). When the disclosed remote GPIO feature is enabled, upon receiving a new OAM frame that carries a GPIO command, the remote PHY device interprets the “GPIO Command” field to determine the appropriate action. If the GPIO command is a “GPIO Write Request,” the remote PHY device updates its local GPIO register and GPIO pin according to the direction and value indicated in the OAM frame, and marks the OAM frame as read. Upon servicing the GPIO command, the mr_rx_lp_valid state variable (specified in IEEE 802.3bp) is cleared to “0” to indicate that the PHY device is ready for the next OAM frame. When the GPIO command is a “GPIO Read Request,” the remote PHY device transmits back to the requesting PHY device a “Remote GPIO OAM” frame. In this “Remote GPIO OAM” frame, the GPIO Command is set to “GPIO Read Response” and the “GPIO Direction” and “GPIO Value” fields reflect the current status of the local GPIO register. Upon servicing the GPIO command, the mr_rx_lp_valid is cleared to “0” to indicate that the PHY device is ready for the next OAM frame. When the GPIO command is “GPIO Read Response,” the recipient PHY device stores the received remote GPIO direction and value in its remote GPIO register. Upon servicing the command, if the read request originated from the remote GPIO register, then mr_rx_lp_valid is cleared to “0” to be ready for the next OAM frame. Otherwise, mr_rx_lp_valid is retained at “1” for a management entity to handle through the OAM message register. In some practical scenarios, it may occur that an existing OAM frame is in-flight (i.e., transmitted but not yet acknowledged), and at the same time another user OAM frame is awaiting transmission. In such a case, the PHY device typically transmits the remote GPIO OAM frame immediately after the existing OAM transmission is acknowledged, but before transmitting the next user OAM message. The management entity would typically detect and handle the scenario where an OAM frame cannot be transmitted for an extended period of time due to the link partner's management entity not processing OAM, as in normal OAM operations. In an embodiment, e.g., when using “open drain” mirroring, a single GPIO port (pin) can be used for bi-directional signaling, i.e., as both an input and an output. This sort of implementation is useful, for example, to transfer a two-wire serial interface bi-directionally for initialization, monitoring, or management of a remote system. In an example embodiment, after initialization or other management task is completed, a mode switch can be performed to use the GPIO pin as a single-direction pin. FIG.4is a flow chart that schematically illustrates a method, which is carried out by an Ethernet PHY device58for controlling a GPIO port in a peer Ethernet PHY device58, in accordance with an embodiment that is described herein. The method begins with the host of the PHY device (e.g., a MAC device in network switch56) composing an OAM frame having a GPIO command embedded therein, at an OAM frame composition operation130. The host writes the OAM frame to the host's OAM registers. At a writing operation134, the host writes the OAM frame from the OAM registers to the PHY device over the SMI.
At an insertion operation138, PHY controller112and PCS OAM circuit108of the PHY device include the OAM frame in the stream of layer-1 frames that PCS TX circuit100produces from layer-2 Ethernet frames, e.g., between two successive layer-2 Ethernet frames. At a transmission operation142, PMA circuitry60of the PHY device transmits the layer-1 frames to the peer PHY device using the Media-Dependent Interface (MDI), e.g., over link62. The layer-1 frames convey the layer-2 Ethernet frames and the OAM frame. In the peer PHY device, PMA circuitry60receives the stream of layer-1 frames, and PCS OAM circuit108extracts the OAM frame from the stream, at a reception and extraction operation146. At a GPIO command processing operation150, PHY controller112identifies the OAM frame as carrying a GPIO command, and controls GPIO control circuit64accordingly. For example, if the GPIO command is a write command, PHY controller112writes the value specified in the command to the GPIO port specified in the command. If the GPIO command is a read command, PHY controller112reads a value from the GPIO port specified in the command. At an acknowledgement operation154, PHY controller112of the peer PHY device composes and returns an acknowledgement message to the PHY device that initiated the GPIO command. The acknowledgement message, as specified in section 97.3.8.2.8 of IEEE 802.3bp, is itself an OAM frame sent in the opposite direction. The method ofFIG.4is an example method, which is depicted solely for the sake of clarity. In alternative embodiments, the PHY device and the peer PHY device may carry out the disclosed techniques using any other suitable flow of operations. For example, in an alternative embodiment the PHY device, not the host, composes the OAM frame that carries the GPIO command. In this embodiment, the host typically sends the command parameters (e.g., command code, GPIO port number, and value to be written in case of a write command), and PHY controller112composes the OAM frame using these parameters. In various embodiments, the PHY device may use the disclosed techniques to apply any suitable action using the GPIO port or ports of the remote PHY device. In one example embodiment, a GPIO port of the remote PHY device is connected to a reset pin of a remote host, and the PHY device uses the disclosed technique for resetting the remote host remotely, over the physical Ethernet link. In another example embodiment, a “status output” port of a remote host is connected to a GPIO port of the remote PHY device, and the PHY device uses the disclosed technique for querying the status of the remote host remotely, over the physical Ethernet link. In an embodiment, “status output” ports of two or more remote hosts are combined using suitable logic (e.g., an OR or NOR gate) and the output of the logic is connected to a GPIO port of the remote PHY device. In this manner, the PHY device is able to query a combined status of multiple remote hosts (e.g., whether any of the hosts has failed) using a single GPIO port. The description above illustrates an example use-case of the disclosed remote GPIO control technique, for exchanging safety status messages between safety microcontrollers in an automotive communication system. The description below provides two additional example use-cases, one relating to simplifying software validation in an avionic system, and the other relating to transparent GPIO-based communication between controllers.
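To make the command format of Table 1 and the handling rules described above more tangible, the following C sketch packs a GPIO command into Symbol_8/Symbol_9 and dispatches a received command at the peer: a write request updates the local register and pins, a read request triggers a read response, and a read response is stored in the remote GPIO register. The bit positions follow one reading of Table 1, and all names are assumptions for illustration rather than the embodiments' actual interfaces.

    /* Illustrative sketch of packing and dispatching the GPIO command carried
     * in Symbol_8 and Symbol_9, following one reading of Table 1 and the
     * handling rules above. Field positions, names, and the callback are
     * assumptions made for the example. */
    #include <stdint.h>

    enum gpio_cmd {
        GPIO_READ_REQUEST  = 0x0,
        GPIO_WRITE_REQUEST = 0x1,
        GPIO_READ_RESPONSE = 0x2,   /* acknowledgement */
        GPIO_RESERVED      = 0x3,
    };

    #define SYMBOL9_REMOTE_GPIO 0x01  /* marks the frame as a remote GPIO command */

    /* Pack command code, 3 direction bits and 3 value bits into Symbol_8;
     * the parity bit is left to the PHY hardware in this sketch. */
    static uint16_t pack_symbol8(enum gpio_cmd cmd, uint8_t dir3, uint8_t val3)
    {
        return (uint16_t)(((cmd  & 0x3) << 6) |
                          ((dir3 & 0x7) << 3) |
                          ((val3 & 0x7) << 0));
    }

    struct gpio_regs { uint8_t dir3, val3; };

    /* Receive-side dispatch, mirroring the behavior described above. */
    static void handle_gpio_command(uint16_t sym8, uint16_t sym9,
                                    struct gpio_regs *local,
                                    struct gpio_regs *remote_copy,
                                    void (*send_read_response)(uint16_t sym8))
    {
        if ((sym9 & 0xFF) != SYMBOL9_REMOTE_GPIO)
            return;                                  /* not a remote GPIO frame */

        enum gpio_cmd cmd = (enum gpio_cmd)((sym8 >> 6) & 0x3);
        switch (cmd) {
        case GPIO_WRITE_REQUEST:                     /* update register and pins */
            local->dir3 = (sym8 >> 3) & 0x7;
            local->val3 = (sym8 >> 0) & 0x7;
            break;
        case GPIO_READ_REQUEST:                      /* answer with current state */
            send_read_response(pack_symbol8(GPIO_READ_RESPONSE,
                                            local->dir3, local->val3));
            break;
        case GPIO_READ_RESPONSE:                     /* store the peer's state */
            remote_copy->dir3 = (sym8 >> 3) & 0x7;
            remote_copy->val3 = (sym8 >> 0) & 0x7;
            break;
        default:
            break;                                   /* reserved */
        }
        /* mr_rx_lp_valid would be cleared here to accept the next OAM frame. */
    }

In an actual device, the parity bits and the mr_rx_lp_valid handshake would be handled by the PCS OAM circuit rather than by software.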
FIG.5is a block diagram that schematically illustrates an avionic communication system, in accordance with an embodiment that is described herein. In the system ofFIG.5, a main system microcontroller (μC)160is coupled to an Ethernet PHY device168, and a remote μC164is coupled to an Ethernet PHY device172. PHY devices168and172are link partners, and communicate with one another using a MDI over a suitable physical link, e.g., a twisted-pair link. For data transmission, main μC160sends the data in layer-2 frames to PHY device168over a MAC interface, PHY device168sends the data to PHY device172in layer-1 frames using the MDI, and PHY device172sends the data to remote μC164in layer-2 frames over the MAC interface. In addition to the data transfer between μCs160and164, in the present example it is required to transfer safety messages over the same physical link (MDI). Reusing the same physical link for transferal of safety messages is useful, for example, for reducing the number of wires and simplifying the wiring process. In one example use case, a safety state having four possible values (denoted State_0through State_3) is to be transferred to an aircraft entertainment system, e.g., in order to pause the entertainment system during Public Announcements (PA). It is desirable for the safety messages to be transferred independently of main μC160and remote μC164, in order to simplify software validation. In the embodiment ofFIG.5, the safety messages (safety-state values) are transferred independently of the rest of the system using a safety control μC176, a 4-to-2 encoder180and a 2-to-four decoder184. Additionally, PHY device168has two GPIO input ports denoted GPIO1and GPIO2, and PHY device172has two GPIO output ports denoted GPIO1and GPIO2. The two PHY devices are configured to transfer the input values of GPIO1and GPIO2of PHY device168to PHY device172, and to set the GPIO1and GPIO2outputs of PHY device172to these values. The PHY devices transfer the two GPIO values by embedding them as GPIO commands in OAM frames, as described above. In the present example, the value of the safety state is provided (on the left-hand side of the figure) over four inputs corresponding to State_0through State_3. A single one of these inputs is set to “1” to indicate the current safety state, and the other inputs are set to “0”. 4-to-2 encoder180converts the four inputs into a two-bit value, and provides the two bits as inputs to ports GPIO1and GPIO2of PHY device168. PHY device168transfers the values of GPIO1and GPIO2to its link partner (also referred to as “peer PHY device”), PHY device172, using the disclosed techniques. PHY device172sets its GPIO1and GPIO2outputs to these values. 2-to-4 decoder184decodes the two-bit value carried by GPIO1and GPIO2, so as to reconstruct and output (on the right-hand side of the figure) the four original values State_0through State_3. In this embodiment, the transfer of the safety messages is managed by safety control μC176, independently of the other microcontrollers in the system. In the present example, the Serial Management Interface (SMI) between main system μC160and PHY device168passes through safety control μC176. This configuration is useful, for example, for ensuring that safety aspects of the system cannot be reconfigured at a later time by main μC160(e.g., by malware that has compromised the main μC). Passing the SMI via safety control μC176is, however, not mandatory. 
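The 4-to-2 encoding and 2-to-4 decoding of the safety state in the avionic example above reduce to a few lines of logic; a hedged software sketch is shown below, with hypothetical helper names, purely to illustrate the mapping between the one-hot State_0 through State_3 signals and the two GPIO lines.

    /* Small sketch of the 4-to-2 encoding and 2-to-4 decoding of the safety
     * state carried over the two GPIO lines. The helper names are
     * illustrative; a real system would implement this in simple logic. */
    #include <stdint.h>

    /* One-hot State_0..State_3 inputs -> two-bit value for GPIO1/GPIO2. */
    static uint8_t encode_4_to_2(uint8_t state_onehot /* bit i == State_i */)
    {
        for (uint8_t i = 0; i < 4; i++)
            if (state_onehot & (1u << i))
                return i;              /* bit 0 -> GPIO1, bit 1 -> GPIO2 */
        return 0;                      /* default when no state is asserted */
    }

    /* Two-bit value received on GPIO1/GPIO2 -> one-hot State_0..State_3. */
    static uint8_t decode_2_to_4(uint8_t gpio2bits)
    {
        return (uint8_t)(1u << (gpio2bits & 0x3));
    }

With two GPIO lines the four safety states map onto the values 0 through 3; a third line would allow eight states in the same manner.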
During normal operation, safety control μC176enables SMI traffic to pass through transparently between main system μC160and PHY device168. In order to transfer safety messages, safety control μC176overrides the SMI, i.e., prevents SMI traffic of main system μC160from passing to PHY device168, and controls PHY device168itself. Safety control μC176typically also initializes PHY devices168and172upon power-up. Typically, safety control μC176is small and runs only a small amount of simple software code. As such, and since safety control μC176is separate from μCs160and164, validation of safety-related software is simplified considerably. For example, there is no need to re-validate the safety-related software when updating software in μC160or164. FIG.6is a block diagram that schematically illustrates a communication system, in accordance with another embodiment that is described herein. The embodiment ofFIG.6demonstrates a configuration in which the GPIO ports are part of the hosts, not of the PHY devices. In the context of the present disclosure and in the claims, a GPIO port is referred to as being associated with a PHY device. The term “associated with” refers to implementations in which the GPIO port is part of the PHY device (as in the example ofFIGS.1,2and5), and to implementations in which the GPIO port is part of a host coupled to the PHY device (as in the example ofFIG.6). In the system ofFIG.6, main system μC160is coupled to an Ethernet PHY device188, and remote μC164is coupled to an Ethernet PHY device192. PHY devices188and192are link partners, and communicate with one another using a MDI over a suitable physical link, e.g., a twisted-pair link. For data transmission from main μC160to remote μC164, main μC160sends the data in layer-2 frames to PHY device188over a MAC interface, PHY device188sends the data to PHY device192in layer-1 frames using the MDI, and PHY device192sends the data to remote μC164in layer-2 frames over the MAC interface. In addition, the system ofFIG.6comprises a pair of auxiliary μCs196and200that communicate with one another over the same MDI, using the same pair of PHY devices188and192. In the present example, μC196sends to μC200three binary values denoted GPIO1, GPIO2and GPIO3. The communication between auxiliary μCs196and200is independent of the communication between μCs160and164. Unlike the previously described embodiments, in the present example the GPIO inputs and outputs are part of the hosts (auxiliary μCs in this example), not part of the PHY devices. In the present embodiment, the SMI interface between main system μC160and PHY device188passes through auxiliary μC196. Similarly, the SMI interface between remote system μC164and PHY device192passes through auxiliary μC200. During communication between main system μC160and remote system μC164, auxiliary μC196enables SMI traffic to pass through transparently between main system μC160and PHY device188, and auxiliary μC200enables SMI traffic to pass through transparently between remote system μC164and PHY device192. In order for auxiliary μCs196and200to communicate with one another, each of auxiliary μCs196and200overrides the respective SMI and controls the respective PHY device itself. Overriding the SMI allows auxiliary μCs196and200to control the registers in PHY devices188and192that are used for inserting or reading OAM frames.
Additionally, overriding the SMI allows auxiliary μCs196and200to prevent main μC160and remote μC164from interrupting the communication channel between the auxiliary μCs (e.g., as a result of malware). In the embodiment ofFIG.6, auxiliary μC196transmits the values of GPIO1, GPIO2and GPIO3to auxiliary μC200by composing an OAM frame that carries these values as GPIO commands, and writing the OAM frame to PHY device188. PHY device188transmits the OAM frame to PHY device192as part of the stream of layer-1 frames, using the disclosed techniques. PHY device192extracts the OAM frame from the stream and writes the OAM frame to auxiliary μC200, which in turn sets its GPIO1, GPIO2and GPIO3outputs accordingly. The configurations of the communication systems shown inFIGS.1,5and6, and of their components, such as the Ethernet PHY device shown inFIG.2and the various ECUs and microcontrollers, are example configurations that are depicted solely for the sake of clarity. In alternative embodiments, any other suitable configurations can be used. For example, as noted above the logic values (transferred between PHY devices to control GPIO ports) may be discrete or analog, static or time-varying. The logic values may form temporal patterns that carry multiple bits of information. By varying one or more logic values over time, the disclosed techniques can be used for implementing serial or parallel data links between PHY devices. As another example, a GPIO port can be used for bidirectional signal or communication, e.g., using an open-drain configuration. Any combination of such features can be used, and any feature can be reconfigured at any desired time. In an example embodiment, the disclosed technique is used for implementing a protocol interface between PHY devices. Such an interface can be used, for example, for configuring a PHY device remotely. Elements that are not mandatory for understanding of the disclosed techniques have been omitted from the figures for the sake of clarity. The different elements of the disclosed communication systems, and their components, may be implemented using dedicated hardware or firmware, such as using hard-wired or programmable logic, e.g., in one or more Application-Specific Integrated Circuits (ASICs) or Field-Programmable Gate Arrays (FPGAs). Additionally or alternatively, some functions, e.g., functions of any of microcontrollers68and76(FIG.1), PHY controller112(FIG.2), microcontrollers160,164and176(FIG.5) and/or microcontrollers196and200(FIG.6), may be implemented in software and/or using a combination of hardware and software elements. In some embodiments, any of microcontrollers68and76(FIG.1), PHY controller112(FIG.2), microcontrollers160,164and176(FIG.5) and/or microcontrollers196and200(FIG.6) may comprise a programmable processor, which is programmed in software to carry out the functions described herein. The software may be downloaded to the processor in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on non-transitory tangible media, such as magnetic, optical, or electronic memory. As noted above, although the embodiments described herein mainly address automotive and avionic applications, the methods and systems described herein can also be used in other applications that involve control of GPIO ports, such as in industrial or smart-home communication systems. 
It is noted that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and sub-combinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art. Documents incorporated by reference in the present patent application are to be considered an integral part of the application except that to the extent any terms are defined in these incorporated documents in a manner that conflicts with the definitions made explicitly or implicitly in the present specification, only the definitions in the present specification should be considered.
11863469
DETAILED DESCRIPTION OF THE DRAWINGS As a preliminary matter, cache coherence, also referred to as memory coherence, is an issue that affects the design of computer systems in which two or more processors or cores share a common area of memory. In a single processor system, there is only one processing element doing all the work and, therefore, only one processing element that can read from or write to a given memory location. As a result, when a value is changed, all subsequent read operations of the corresponding memory location will see the updated value, even if it is cached. Conversely, in multiprocessor (or multicore) systems, there are two or more processing elements working at the same time, and so it is possible that they simultaneously access the same memory location. Provided none of the processors changes the data in this location, the processors can share the data indefinitely and cache the data as they please. But, as soon as a processor updates the location, the other processors might work on an out-of-date copy that may reside in their local caches. Consequently, some scheme is required to notify all the processing elements of changes to shared values; such a scheme is known as a “cache coherence protocol,” and if such a protocol is employed, the system is said to have “cache coherence.” The exact nature and meaning of the cache coherency is determined by the consistency model that the coherence protocol implements. A cache coherency protocol typically defines a set of cache states stored in association with cached copies of memory blocks, as well as the events triggering transitions between the cache states and the cache states to which transitions are made. Thus, in order to maintain coherency of data across the system, a cache coherency protocol is used, such as, for example, a directory-based protocol, a snoop-based protocol, a combination thereof, or other variations, so as to ensure at least a minimum required level of coherence among the various processor cores' “views” of the contents of system memory. In addition, modern day computing systems, with various system buses, system intraconnects and interconnects between various applications and local or adjacent systems, have various protocols for transferring data and sharing memory between various components. For example, computing systems aim to provide increased performance using cache coherence to enable coherent interconnections between general-purpose processors and acceleration devices for heterogeneous computing, which attempt to avoid the bandwidth limitations or latency that is inherent in some connections such as, for example, a PCI Express (PCIe) bus (where PCIe is a multilane, point-to-point interconnect that can scale from one lane to many). That is, computing systems attempt to provide increased computing efficiency while maintaining cache coherency when providing data access across memory spaces of various types of processors. For example, the open coherent accelerator processor interface (CAPI) is an interface between processors and accelerators that increases bandwidth and provides lower latency. As another example, cache coherent interconnect for accelerators (“CCIX”) may be used, which is built on PCI Express (PCIe) to provide a chip-to-chip interconnect for high-speed hardware accelerators and targets certain applications.
However, even within these modern computing systems, transferring, communicating, and/or receiving data between various applications and local or adjacent systems still experiences network latency along the network path. For example, in the context of tightly integrated high performance computer (“HPC”) systems with few switching layers, delivering data to the target application includes a network path delay (once inside a destination device) that may be several nanoseconds (“nsecs”) higher than that of transferring the data among servers (e.g., at least within a scale of a few co-located racks, especially if a network stack needs to be traversed). Thus, a need exists to provide a cache-coherent interconnect system to maintain cache coherency, increase bandwidth, and reduce/eliminate the network access latency path in HPC/heterogeneous computing systems, where the network access latency may start from the time data signals arrive at a network interface and ends with a shared memory having the data (e.g., ends with an actual data copy of the data to the destination memory/memory cells) to enable receiving applications to use the data. Thus, various embodiments, as described herein, provide an enhanced network architecture that leverages the cache-coherent attachment of processors, which enables direct mastering of local system bus architecture-agnostic load and store operations to system memory by off-chip peripherals. This way, memory transactions get decoupled from a specific bus architecture and the same unmodified coherently-attached logic (of accelerators, network interfaces, etc.) can be interfaced to different SoC architectures. In one aspect, the various embodiments improve the network access latency path and provide for sharing memory between one or more applications and one or more network interfaces while bypassing one or more drivers and the operating system. The mechanisms of the illustrated embodiments of the enhanced network architecture enable off-chip accelerators to be integrated with a system on a chip (“SoC”) and to directly master and cache-coherently load and store to system memory using the same memory access data path (e.g., a hardware data path similar to that of on-chip entities such as processors and coprocessors), with comparable latency and bandwidth. In an additional aspect, direct access to the in-memory data, the generation of interrupts, and the ability to atomically compete for spinlocks with CPUs are also provided by coherently attached ports. This means that if the application and the coherently-attached device agree on a data format, there is no requirement for operating system device driver support or for DMA-programmable hardware to be used for scatter-gather data transfers. By enabling and providing driver-less integration, the programming model and application programming interface (“API”), which appear as thread mutual exclusion (i.e., the network I/O is integrated as a special form of hardware thread), are further simplified, providing increased computing efficiency. Accordingly, the present invention provides for a network stack framework sharing memory buffers (e.g., “network buffers”) between the various applications and the network interface(s). In one aspect, the terms memory buffers and network buffers may be used interchangeably. The network buffers may be allocated on behalf of one or more applications by an operating system (“OS”) and are offered under the control of a library (e.g., a shared library).
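The paragraph above implies a particular kind of data structure: network buffers that live in ordinary, cache-coherent system memory and are owned alternately by an application and by the coherently attached network interface, with no driver in the data path. The C sketch below shows one minimal way such a shared pool could be laid out; the layout, names, sizes, and state values are assumptions made for illustration, not the embodiments' actual format.

    /* Hedged sketch of a shared network-buffer pool in cache-coherent system
     * memory, addressed directly by both the application and the coherently
     * attached network interface. All names and sizes are assumptions. */
    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdint.h>

    #define NET_BUF_SIZE   2048
    #define NET_BUF_COUNT  256

    enum buf_state { BUF_FREE = 0, BUF_FILLED_BY_NIC = 1, BUF_OWNED_BY_APP = 2 };

    struct net_buffer {
        _Atomic int state;               /* ownership handshake, coherent view */
        uint32_t    length;              /* valid payload bytes                */
        uint8_t     data[NET_BUF_SIZE];  /* payload visible to app and NIC     */
    };

    /* The whole pool lives in a shared, cache-coherent region; a shared
     * library would hand out pointers into it on behalf of each application. */
    struct net_buffer_pool {
        struct net_buffer bufs[NET_BUF_COUNT];
    };

    /* Application-side receive: claim a buffer the interface has filled. */
    static struct net_buffer *app_poll_rx(struct net_buffer_pool *pool)
    {
        for (size_t i = 0; i < NET_BUF_COUNT; i++) {
            int expected = BUF_FILLED_BY_NIC;
            if (atomic_compare_exchange_strong(&pool->bufs[i].state,
                                               &expected, BUF_OWNED_BY_APP))
                return &pool->bufs[i];   /* in-place access, no copy, no driver */
        }
        return NULL;
    }

    /* Application-side release: return the buffer to the interface. */
    static void app_release(struct net_buffer *b)
    {
        atomic_store(&b->state, BUF_FREE);
    }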
In one aspect, a shared address space, between all participating applications that is established over shared memory mechanism, may be provided and used for input/output (“I/O”) control (i.e., exchange of pointers and spinlock handling). Then, each application may have a memory management unit (“MMU”) protected shared access with the accelerator on a common address space which contains the application network buffers. In this way, the present invention enables applications to seamlessly exchange in-memory data over a network by only handling pointers and spinlocks to further simplify a remote direct memory access (“RDMA”)-style network communication by using coherently attached port technology to achieve unprecedented latency (e.g., current RDMA round trip latency is 1.2 microseconds “usec” with one switching layer, whereas Coherently attached interfaces may reduce this latency down to 600-700 nanoseconds “nsec”) for data delivery within a “black box” and enable the network media to leverage ultra-high bursts (i.e., a single hardware-level burst for the whole application-level message size is now possible). The illustrated embodiments of the network stack framework system is a framework that is agnostic to any network Medium Access Protocols (“MAC”) or Link Layer Control protocols (“LLC”), and thus, can be potentially integrated with any packet or circuit network technology. It is understood in advance that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein are not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models. Characteristics are as follows: On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider. Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. 
Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported providing transparency for both the provider and consumer of the utilized service. Service Models are as follows: Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls). Deployment Models are as follows: Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises. Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises. Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure comprising a network of interconnected nodes. Referring now toFIG.1, a schematic of an example of a cloud computing node is shown. Cloud computing node10is only one example of a suitable cloud computing node and is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the invention described herein. 
Regardless, cloud computing node10is capable of being implemented and/or performing any of the functionality set forth hereinabove. In cloud computing node10there is a computer system/server12, which is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system/server12include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like. Computer system/server12may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computer system/server12may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices. As shown inFIG.1, computer system/server12in cloud computing node10is shown in the form of a general-purpose computing device. The components of computer system/server12may include, but are not limited to, one or more processors or processing units16, a system memory28, and a bus18that couples various system components including system memory28to processor16. Bus18represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus. Computer system/server12typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server12, and it includes both volatile and non-volatile media, removable and non-removable media. System memory28can include computer system readable media in the form of volatile memory, such as random access memory (RAM)30and/or cache memory32. Computer system/server12may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system34can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. 
In such instances, each can be connected to bus18by one or more data media interfaces. As will be further depicted and described below, system memory28may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention. Program/utility40, having a set (at least one) of program modules42, may be stored in system memory28by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Program modules42generally carry out the functions and/or methodologies of embodiments of the invention as described herein. Computer system/server12may also communicate with one or more external devices14such as a keyboard, a pointing device, a display24, etc.; one or more devices that enable a user to interact with computer system/server12; and/or any devices (e.g., network card, modem, etc.) that enable computer system/server12to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces22. Still yet, computer system/server12can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter20. As depicted, network adapter20communicates with the other components of computer system/server12via bus18. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system/server12. Examples, include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc. In the context of the present invention, and as one of skill in the art will appreciate, various components depicted inFIG.1may be located in a moving vehicle. For example, some of the processing and data storage capabilities associated with mechanisms of the illustrated embodiments may take place locally via local processing components, while the same components are connected via a network to remotely located, distributed computing data processing and storage components to accomplish various purposes of the present invention. Again, as will be appreciated by one of ordinary skill in the art, the present illustration is intended to convey only a subset of what may be an entire connected network of distributed computing components that accomplish various inventive aspects collectively. Referring now toFIG.2, illustrative cloud computing environment50is depicted. As shown, cloud computing environment50comprises one or more cloud computing nodes10with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone54A, desktop computer54B, laptop computer54C, and/or automobile computer system54N may communicate. Nodes10may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. 
This allows cloud computing environment50to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices54A-N shown inFIG.2are intended to be illustrative only and that computing nodes10and cloud computing environment50can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). Referring now toFIG.3, a set of functional abstraction layers provided by cloud computing environment50(FIG.2) is shown. It should be understood in advance that the components, layers, and functions shown inFIG.3are intended to be illustrative only and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided: Device layer55includes physical and/or virtual devices, embedded with and/or standalone electronics, sensors, actuators, and other objects to perform various tasks in a cloud computing environment50. Each of the devices in the device layer55incorporates networking capability to other functional abstraction layers such that information obtained from the devices may be provided thereto, and/or information from the other abstraction layers may be provided to the devices. In one embodiment, the various devices inclusive of the device layer55may incorporate a network of entities collectively known as the “internet of things” (IoT). Such a network of entities allows for intercommunication, collection, and dissemination of data to accomplish a great variety of purposes, as one of ordinary skill in the art will appreciate. Device layer55as shown includes sensor52, actuator53, “learning” thermostat56with integrated processing, sensor, and networking electronics, camera57, controllable household outlet/receptacle58, and controllable electrical switch59as shown. Other possible devices may include, but are not limited to various additional sensor devices, networking devices, electronics devices (such as a remote control device), additional actuator devices, so called “smart” appliances such as a refrigerator or washer/dryer, and a wide variety of other possible interconnected objects. Hardware and software layer60includes hardware and software components. Examples of hardware components include: mainframes61; RISC (Reduced Instruction Set Computer) architecture based servers62; servers63; blade servers64; storage devices65; and networks and networking components66. In some embodiments, software components include network application server software67and database software68. Virtualization layer70provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers71; virtual storage72; virtual networks73, including virtual private networks; virtual applications and operating systems74; and virtual clients75. In one example, management layer80may provide the functions described below. Resource provisioning81provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing82provides cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may comprise application software licenses. 
Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal83provides access to the cloud computing environment for consumers and system administrators. Service level management84provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment85provides pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer90provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation91; software development and lifecycle management92; virtual classroom education delivery93; data analytics processing94; transaction processing95; and, in the context of the illustrated embodiments of the present invention, various workloads and functions96for utilizing coherently attached interfaces in a network stack framework. In addition, workloads and functions96for utilizing coherently attached interfaces in a network stack framework may include such operations as data analysis. One of ordinary skill in the art will appreciate that the workloads and functions96for utilizing coherently attached interfaces in a network stack framework may also work in conjunction with other portions of the various abstraction layers, such as those in hardware and software60, virtualization70, management80, and other workloads90(such as data analytics processing94, for example) to accomplish the various purposes of the illustrated embodiments of the present invention. Turning now toFIG.4, a block diagram depicting exemplary functional components400according to various mechanisms of the illustrated embodiments is shown.FIG.4illustrates system400for utilizing coherently attached interfaces in a network stack framework. As will be seen, many of the functional blocks may also be considered “modules” or “components” of functionality, in the same descriptive sense as has been previously described inFIGS.1-3. With the foregoing in mind, the module/component blocks400may also be incorporated into various hardware and software components of a system for utilizing coherently attached interfaces in a network stack framework in accordance with the present invention. Many of the functional blocks400may execute as background processes on various components, either in distributed computing components, or on the user device, or elsewhere. As illustrated inFIG.4, network stack sharing service410is shown, incorporating processing unit420(“processor”) and memory430, which may also be the processing unit16(“processor”) and memory28ofFIG.1, to perform various computational, data processing and other functionality in accordance with various aspects of the present invention. The processing unit420may be in communication with memory430. The network stack sharing service410may be provided by the computer system/server12ofFIG.1. As one of ordinary skill in the art will appreciate, the depiction of the various functional units in the network stack sharing service410is for purposes of illustration, as the functional units may be located within the network stack sharing service410or elsewhere within and/or between distributed computing components.
The network stack sharing service410may include a sharing component440, an application buffer component450, a circular buffer component460, and a queuing and pooling component470. Thus, the network stack sharing service410enables coherent attachment of network interfaces to system memory entirely bypassing drivers and an OS. In one embodiment, by way of example only, the sharing component440(and/or in association with the application buffer component450, the circular buffer component460, the queuing and pooling component470, or a combination thereof) may share a plurality of network buffers coherently attached between one or more applications and a network interface while bypassing one or more drivers and an operating system using an application buffer, a circular buffer and a queuing and pooling operation. The sharing component440may include and/or be associated with a shared library (see also shared library504ofFIGS.5A-5B) that controls the plurality of network buffers. The sharing component440may share one or more address spaces of the plurality of network buffers between the one or more applications using the network interface, wherein the plurality of network buffers are used for input/output (I/O) control. The application buffer component450may safely share one or more application virtual address spaces of multiple network buffers with the coherently attached network interface (see coherently attached network interface512ofFIG.5). The circular buffer component460may exchange memory pointers with one or more coherently attached devices. The queuing and pooling component470may execute the queuing and pooling operation for the plurality of network buffers for network buffer transmission, reception, and manipulation. The queuing and pooling component470may move, assign, or reassign one of the plurality of network buffers from one or more queues and one or more pools for executing the queuing and pooling operation. That is, the queuing and pooling component470may provide for network buffer transmission, reception, and manipulation and may share buffers that belong to different application virtual address spaces with a network interface in a coherent domain. The sharing component440may establish a shared memory region and a private memory region using the plurality of network buffers. Turning now toFIGS.5A-5B, diagrams depicting a schematic of a network stack framework500and515for utilizing coherently attached interfaces are shown. As will be seen, many of the functional blocks may also be considered “modules” or “components” of functionality, in the same descriptive sense as has been previously described inFIGS.1-4. Also, one or more of the operations and steps ofFIGS.1-4may also be included in one or more operations or actions ofFIGS.5A-5B. Repetitive description of like elements, components, modules, services, applications, and/or functions employed in other embodiments described herein is omitted for sake of brevity. As depicted, the network stack framework500includes and/or is associated with one or more applications (e.g., App1or “Application502A” and App N or “Application502N”). Also, the one or more applications502A-502N may be in communication with a shared library504. The network stack framework500may also include one or more network buffers such as, for example, network buffers510A-510C.
In one aspect, the network buffers510A-510C may be shared and/or restricted as “private.” For example, network buffer510A may be a shared network buffer while network buffers510B and510C may be private network buffers. The network buffers510A-510C may be in communication/association with a coherently attached network interface512. Thus, the network buffers510A-510C may be coherently attached between one or more applications (e.g., the applications502A-502N) and a network interface (e.g., the coherently attached network interface512) while bypassing one or more drivers and an operating system using an application buffer, a circular buffer (see alsoFIGS.6A-6B) and a queuing and pooling operation (see alsoFIGS.6A-6B). In one aspect, by way of example only, the arrangement of the address spaces for the network buffers510A-510C is depicted for N example applications such as, for example, applications502A-502N. The shared library504may establish a common region over shared memory (e.g., “APP_SHARED” for sharing access for an application in network buffer510A) and a private region (e.g., “APP_PRIVATE” for providing private access for an application in network buffers510B-510C). Also, as described herein, the various constructs that comprise the network buffer stack will also refer to the address space used, and coherently attached network interface(s)512may be associated with distinct application address spaces concurrently by leveraging hardware level support such as, for example, PCI PASID (peripheral component interconnect (“PCI”) process address space identifier (“PASID”)) and the like. In one aspect, as part of connection establishment with one or more remote counterparts, an application (e.g., applications502A-502N, which may be a user application) may reserve a number of associated network buffers for communication, which reside at a private region (e.g., “APP_PRIVATE” for providing private access for an application in network buffers510B-510C). In the context of each application, a network buffer (e.g., one of the network buffers510A-510C) may belong at any given point in time to only one of the following 6 constructs, which may be separately maintained by the shared library504for each application. Also, each of the queues and pools, as described herein, may use the circular buffers620A-620N ofFIGS.6A-6Bfor execution and performing the various functions of each of the described queues and pools. In a first construct or “first pool,” a global software (“s/w”) free pool (e.g., default state) may be a common pool for all connections across all applications, and all network buffers belong here during initialization until action is taken. The pointers to all these network buffers are maintained by the shared library504at the “APP_SHARED” region in the network buffer510A, are classified per active application, and contain only the network buffer pointers. In a second construct or “second pool,” a global hardware (“H/w”) free pool may proactively hold a number of free network buffers that are pushed to this global hardware free pool; the free network buffers may be moved from the global software (“s/w”) free pool. Each application may have contributed buffers to this pool (e.g., the global software (“s/w”) free pool) and should always replenish the global software (“s/w”) free pool as network buffers are consumed for data reception.
If the global software (“s/w”) free pool becomes empty for a given application, the global software (“s/w”) free pool will stop accepting network buffers from remote nodes destined to that application. The pointers for this global software (“s/w”) free pool reside in the “APP_SHARED” region in the network buffer510A and contain only the network buffer pointers. In a third construct or “third pool,” a processing pool(s) may include network buffers that are being actively modified by local processors, graphics processing units (“GPUs”), accelerators, etc., which is a “per application” pool and is maintained for garbage collection purposes. In case the network buffers are not returned, the network buffers are garbage collected by the shared library504upon owner application exit. The processing pool(s) may reside at the “APP_SHARED” region in the network buffer510A and contain only the network buffer pointers. In a fourth construct or “first queue,” a receive queue(s) may include network buffers that are updated with contents sent from the remote host with which the communication is established, where one receive queue is created at the “APP_SHARED” region in the network buffer510A per remote connection (e.g., hosts only the buffer pointers). The receive queue(s) may be a first-in-first-out (“FIFO”) queue. In a fifth construct or “second queue,” a global send queue may include network buffers that are marked for sending, and the global send queue is shared among all connections across all applications so that hardware can access all network buffers and perform a transmission (e.g., the global send queue hosts only the network buffer pointers). The global send queue is a FIFO queue that contains pointers and is at the “APP_SHARED” region in the network buffer510A. In a sixth construct or “third queue,” a sent queue(s) may include network buffers for which sending is complete; these network buffers can be returned back to the owner application via this queue. The sent queue(s) may be a FIFO queue with pointers residing at the “APP_SHARED” region in the network buffer510A (which are protected network buffer space). In an additional aspect, as depicted inFIG.5B, all queues and pools may be maintained, for example, at the “APP_SHARED” region in the network buffer510A and may contain virtual address pointers that point to various application buffers (e.g., APP1buffers and APPN buffers) that reside in each application address space (of the “APP_PRIVATE” region in the network buffers510B-510C). The application buffers (e.g., APP1buffers and APPN buffers) may be transferred/moved between the various pools and queues (as described above) as a response to one or more application programming interface (“API”) calls and/or hardware events. That is, applications may move network buffers from the global S/w free pool to their processing pool. Each network buffer that belongs to an application processing pool may be moved by the owner application to the global send queue. An application may reuse one or more network buffers from its sent queue by transferring/moving the one or more network buffers back to the processing pool. The sent queue may have a fixed size for each application, so if the application ignores its sent queue and the sent queue becomes full, the sent queue may start returning the buffers to the global S/w free pool. Every network buffer in the receive queue may be removed to the processing pool upon read by the shared library504.
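A minimal C++ sketch of how these pools and queues might be laid out per application is shown below. It is offered for illustration only: the type and function names (BufferPtr, ApplicationBufferState, claim_for_processing, and so on) are assumptions rather than anything defined in the disclosure, and a real implementation would place these structures in the shared “APP_SHARED” region rather than in ordinary process-local containers.

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>
#include <unordered_map>
#include <vector>

// Each entry is only a pointer into an application's buffer space; the pools
// and queues themselves would live in the shared APP_SHARED region.
using BufferPtr = std::uintptr_t;  // virtual address of a network buffer

struct ConnectionQueues {
    std::deque<BufferPtr> receive_queue;  // FIFO: buffers filled by the remote host
    std::deque<BufferPtr> sent_queue;     // FIFO: buffers whose transmission completed
};

struct ApplicationBufferState {
    std::vector<BufferPtr> sw_free_pool;  // default state after initialization
    std::vector<BufferPtr> hw_free_pool;  // free buffers handed to the hardware for reception
    std::vector<BufferPtr> processing;    // buffers being modified by CPUs/GPUs/accelerators
    std::unordered_map<int, ConnectionQueues> per_connection;  // keyed by connection identifier
};

// Single host-wide send queue shared by all applications.
std::deque<BufferPtr> global_send_queue;

// Example movement: an application claims a free buffer for processing, and
// later marks it for transmission by moving it to the global send queue.
bool claim_for_processing(ApplicationBufferState& app) {
    if (app.sw_free_pool.empty()) return false;
    app.processing.push_back(app.sw_free_pool.back());
    app.sw_free_pool.pop_back();
    return true;
}

void mark_for_send(ApplicationBufferState& app, std::size_t index) {
    global_send_queue.push_back(app.processing.at(index));
    app.processing.erase(app.processing.begin() + static_cast<std::ptrdiff_t>(index));
}
```

A real system would add the per-application classification and the garbage-collection bookkeeping described above; the sketch only shows the buffer-pointer movement between constructs.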
For every network buffer returned to the receive queue, the shared library504moves a network buffer from the global S/w free pool to the global H/w free pool so the hardware may continue receiving data from the remote host. In an additional aspect, for network addressing and logical connections (i.e., established communication between a local application that runs on the local host and a remote application that runs on a remote host), the shared library504(which may be a software library) maintains a connection stack that has all the active connections of local-to-remote applications. Connection tuples (i.e., tuples that hold all the required routing and identifier information to establish bidirectional communication between a local application that runs on the local host and a remote application that runs on a remote host) may be of fixed size and are accessible by unique identifiers, which also act as offsets on a dedicated stack where they get stored. Each connection tuple may feature 1) a destination connection identifier (“ID”) and network identifier (that may be acquired during connection establishment and specific to the underlying network architecture/technology), 2) local identifiers to access the various queues, 3) authentication credentials for the various queues the connection tuple has access to, and/or 4) internal port information so that network buffers can be delivered to an application. FIGS.6A-6Bare diagrams600and615depicting use of a circular buffer for utilizing coherently attached interfaces in a network stack framework in a computing environment. As will be seen, many of the functional blocks may also be considered “modules” or “components” of functionality, in the same descriptive sense as has been previously described inFIGS.1-5A-5B. Also, one or more of the operations and steps ofFIGS.1-5A-5Bmay also be included in one or more operations or actions ofFIGS.6A-6B. Repetitive description of like elements, components, modules, services, applications, and/or functions employed in other embodiments described herein is omitted for sake of brevity. As depicted, a circular buffer620(e.g., circular buffer620ofFIG.6Aand circular buffers620A-620N ofFIG.6B) may reside in system memory. In one aspect, given the ability of a hardware component to compete atomically for spinlocks, a hardware-software co-design may be used for circular buffer communication with the circular buffer620. In one aspect, the circular buffer620may resemble/mirror operations similar to the behavior of a processor (e.g., how CPUs compete), but an all-hardware thread may be implemented by the network interface (e.g., network interface hardware630) and may be required to “push/pull” data from system memory towards/from the network interface (e.g., network interface hardware630). A CPU thread may also be required for delivery to applications and is spawned by a library610. For example, a particular atomic command (e.g., atomic built-in commands of a specific processor architecture that can atomically assert and de-assert bits in a register) may be used to implement shared access to the circular buffer620. Each circular buffer instance features a single register which represents which fields of the circular buffer have valid entries. More specifically, each bit position indicates that the corresponding circular buffer entry has valid data if the bit position value equals logical one, and that it does not contain data if the bit position value equals zero.
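As an illustration of the single valid-bit register just described, the following C++ sketch tracks occupancy of a 64-entry pointer ring in one atomic word, asserting a bit when a slot is published and de-asserting it when the slot is consumed. It assumes a single producer and a single consumer (for example, the software library on one side and the coherently attached interface on the other), and it is a sketch under those assumptions rather than the patented hardware design.

```cpp
#include <atomic>
#include <bit>
#include <cstdint>
#include <optional>

// Hypothetical 64-slot ring shared between the software library and a
// coherently attached device. Bit i of `valid` is 1 when slots[i] holds data.
struct SharedRing {
    std::atomic<std::uint64_t> valid{0};
    std::uintptr_t slots[64]{};

    // Producer: write the pointer first, then assert the bit with release
    // ordering so the consumer never observes the bit before the data.
    bool push(std::uintptr_t buffer_ptr) {
        std::uint64_t occupied = valid.load(std::memory_order_acquire);
        if (occupied == ~std::uint64_t{0}) return false;        // ring full
        unsigned slot = std::countr_zero(~occupied);             // lowest free slot
        slots[slot] = buffer_ptr;
        valid.fetch_or(std::uint64_t{1} << slot, std::memory_order_release);
        return true;
    }

    // Consumer: read the lowest valid entry, then atomically de-assert its bit.
    std::optional<std::uintptr_t> pop() {
        std::uint64_t occupied = valid.load(std::memory_order_acquire);
        if (occupied == 0) return std::nullopt;                  // ring empty
        unsigned slot = std::countr_zero(occupied);              // lowest valid slot
        std::uintptr_t ptr = slots[slot];
        valid.fetch_and(~(std::uint64_t{1} << slot), std::memory_order_acq_rel);
        return ptr;
    }
};
```

In software the acquire/release orderings play the role that the atomic assert and de-assert commands play on the hardware side; only pointers, never buffer contents, travel through the ring.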
Taking advantage of the atomic operations on the aforementioned register that indicates which entries are valid in the circular buffer620, the coherently attached network interface can safely share the buffer with the corresponding CPU (on which the software library runs) and exchange data. As more clearly depicted inFIG.6B, the circular buffers620A-620N depict an end-to-end connection between the shared library610(e.g., software library or “software side”) and the network interface hardware630(e.g., hardware buffer, network buffer, network buffer dispatcher or “hardware side”). The circular buffers620A-620N may be single-direction hardware-software shared circular buffers (e.g., the software library610and the network interface hardware630share the circular buffer) and may be used for interactions (e.g., used for pointer exchange). The circular buffers620A-620N may be single direction, and different instances need to be used for different pool/queue types such as, for example, the global software (“s/w”) free pool602A, the global hardware (“H/w”) free pool602B, the processing pool(s) (i.e., pools of buffers that are currently being manipulated by application code that runs on the local host), the receive queue(s)608A-608B (e.g., receive queues per connection), the global send queue (i.e., a single host-wide queue shared between all applications on a host that contains all the buffers that need to be transmitted by the coherently attached network interface but have not been transmitted yet)604A and604B, and/or the global sent queue(s)606A-606B (i.e., one sent queue per connection, which contains the buffers that have been sent to the remote host by the coherently attached network interface and therefore the application can reuse them). In one aspect, on the software side, a dispatcher thread640may be used and, on the hardware side, a priority/arbiter dispatch640(e.g., a hardware buffer dispatcher) may be used; both may be used by the circular buffers620A-620N for pushing and/or pulling data between system memory and the network interface. Thus, each of the circular buffers620A-620N may be used for both software functions and hardware functions. Thus, by way of example only, each of the circular buffers620A-620N may poll for spinlocks from the hardware side (e.g., the network interface hardware630), and hardware threads may mutually exclude themselves from software threads to obtain the data. In one aspect, the circular buffers620A-620N may be only used for immediate transfer, so both the software (e.g., application) as well as the hardware (network/memory) side may have first-in-first-out (“FIFO”) operations to support asynchronous operations. Thus, the circular buffers620A-620N may retrieve pointers to an address space and use the various queues and pools to remove, move, and/or transfer network buffers from within applications and the network. In operation, by way of example, each application has a state, and the state may be the state of the queues and pools indicating where the network buffers are in the queues and pools (e.g., the global software (“s/w”) free pool602A, the global hardware (“H/w”) free pool602B, the processing pool(s) (these are virtual pools, as they refer to buffers that do not belong to any other pool and therefore are being manipulated by applications), the receive queue(s)608A-608B, the global send queue604A and604B, and/or the global sent queue(s)606A-606B).
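Continuing the sketch, and reusing the hypothetical SharedRing type from the previous example, the single-direction pointer exchange driven by a software dispatcher thread might look roughly as follows. The per-type ring instances and the run_dispatcher loop are illustrative assumptions only, not the disclosed implementation.

```cpp
#include <atomic>
#include <cstdint>
#include <deque>

// One single-direction ring instance per pool/queue type, as described for
// FIGS.6A-6B. SharedRing is the hypothetical type sketched above.
struct HostHardwareChannel {
    SharedRing hw_free_ring;   // software -> hardware: buffers the hardware may fill
    SharedRing send_ring;      // software -> hardware: buffers marked for transmission
    SharedRing receive_ring;   // hardware -> software: buffers received from the remote host
    SharedRing sent_ring;      // hardware -> software: buffers whose transmission completed
};

// Software-side dispatcher: mirrors pool/queue state by exchanging only
// pointers with the coherently attached interface.
void run_dispatcher(HostHardwareChannel& ch,
                    std::deque<std::uintptr_t>& sw_free_pool,
                    std::deque<std::uintptr_t>& global_send_queue,
                    std::atomic<bool>& running) {
    while (running.load(std::memory_order_relaxed)) {
        // Replenish the hardware free pool from the software free pool.
        if (!sw_free_pool.empty() && ch.hw_free_ring.push(sw_free_pool.front()))
            sw_free_pool.pop_front();

        // Hand buffers marked for sending over to the hardware dispatcher.
        if (!global_send_queue.empty() && ch.send_ring.push(global_send_queue.front()))
            global_send_queue.pop_front();

        // Drain completions coming back from the hardware side.
        while (auto ptr = ch.receive_ring.pop()) {
            (void)*ptr;  // deliver the received buffer to its owning application
        }
        while (auto ptr = ch.sent_ring.pop()) {
            (void)*ptr;  // return the buffer to the owner's sent queue for reuse
        }
    }
}
```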
Using the dispatcher thread640, each state of the queues and pools is mirrored by only exchanging pointers from the software side (S/W) to the hardware side (H/W). Mirroring the state of the queues and pools enables awareness of any changes and any updates. The hardware side now has the same view as the software library610and can perform actions that may be offloaded to the hardware. In this way, both the software side (S/W) and the hardware side (H/W) understand which of the network buffers may be sent out, transferred/received, and/or free by exchanging pointers from the software side to the hardware side and using the pointers to the queues and pools. Thus, for example, the hardware may execute a decision, perform an operation, and retrieve data from the application and push back the results to the applications transparently. FIG.7is a block diagram depicting an exemplary end-to-end driverless connection system700for utilizing coherently attached interfaces in a network stack framework in a computing environment. As will be seen, many of the functional blocks may also be considered “modules” or “components” of functionality, in the same descriptive sense as has been previously described inFIGS.1-6A-6B. Also, one or more of the operations and steps ofFIGS.1-6A-6Bmay also be included in one or more operations or actions ofFIG.7. Repetitive description of like elements, components, modules, services, applications, and/or functions employed in other embodiments described herein is omitted for sake of brevity. As depicted, the system700for utilizing coherently attached interfaces in a network stack framework may include one or more system interconnects710A and710B. The system interconnect710A (having coherence domain1) and710B (having coherence domain2) connects one or more application buffers710C to network buffer hardware710A and710B (e.g., the hardware side), which may be enabled via network interface hardware730that is also connected to the network switching layers712. Also, local interface712may also be used to connect each network buffer hardware710A and710B to the network switching layers712(e.g., local network interface). In short, the network interface hardware730enables the applications702A-N (e.g., App1, . . . , AppN, which are the software side), the library704and the buffers710to be directly connected end-to-end via the network interface hardware730. Thus, the system700enables coherent attachment of network interface hardware730via the system interconnects710A-710B bypassing drivers and the OS entirely. The application buffers get directly copied to the network interface hardware output buffers without any intervention from the Operating System. FIG.8is a flowchart diagram depicting an exemplary method800for utilizing coherently attached interfaces in a network stack framework in a computing environment, in which various aspects of the illustrated embodiments may be implemented. The functionality800may be implemented as a method executed as instructions on a machine, where the instructions are included on at least one computer readable storage medium or one non-transitory machine-readable storage medium. The functionality800may start in block802. One or more network buffers may be coherently attached between one or more applications and a network interface, as in block804.
Network buffers coherently attached between one or more applications and a network interface may be shared while bypassing one or more drivers and an operating system using an application buffer, a circular buffer and a queuing and pooling operation for network communication, as in block806. The functionality800may end in block808. In one aspect, in conjunction with and/or as part of at least one block ofFIG.8, the operations of method800may include one or more of each of the following. The operations of method800may control the plurality of network buffers by a shared library. The operations of method800may share one or more address spaces of the plurality of network buffers between the one or more applications using the network interface. The plurality of network buffers may be used for input/output (I/O) control. A circular buffer may exchange memory pointers with coherently attached devices. The operations of method800may execute the queuing and pooling operation for the plurality of network buffers for network buffer transmission, reception, and manipulation. The operations of method800may move, assign, or reassign one of the plurality of network buffers from one or more queues and one or more pools for executing the queuing and pooling operation. The operations of method800may establish a shared memory region and a private memory region using the plurality of network buffers. The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowcharts and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowcharts and/or block diagram block or blocks. 
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowcharts and/or block diagram block or blocks. The flowcharts and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowcharts or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
DETAILED DESCRIPTION For purposes of illustrating the present innovation, it might be useful to understand phenomena relevant to various implementations of the disclosure. The following foundational information can be viewed as a basis from which the present disclosure can be explained. Such information is offered for purposes of explanation only and, accordingly, should not be construed in any way to limit the scope of the present disclosure and its potential applications. Conventionally, a security artifact is any information (e.g., a log or other document) that can be used to trace back a security vulnerability in a customer network. For example, a security artifact might list threat events (e.g., a malware detection) that occurred at an end-user device. In another example, a security artifact might list information of a logged-in user. As another example, a security artifact is a document that was sent in violation of a data loss prevention policy. When security artifacts are transmitted over a network, they are sent in the form of a message payload. The present disclosure uses the phrase “security payload” to refer to those message payloads. FIG.1illustrates a functional block diagram of a delivery pipeline, according to an implementation of the present disclosure. The delivery pipeline includes delivery system100and message receivers160,170,180. The delivery system100can include a message producer110, a delivery processor120, and a queuing service130. The message producer110includes a security payload140. The queuing service130includes partitions P1, P2, and P3. In various implementations, the delivery system100implements message queue partitioning to retry delivery of a security context if a receiving service (e.g., cloud partner, such as message receivers160,170,180) is temporarily offline. As a result, some implementations of the delivery system100can ensure that push delivery of the security context to the receiving service is successful. In various implementations, security artifacts are gathered by the message producer110to produce a security payload140. The message producer110feeds the security payload140to the delivery processor120. The delivery processor120delivers security payloads to one or more designated endpoints as a service. In many implementations, an entity controlling the delivery processor120offers that service in a business relationship between the entity and another entity controlling one or more of the message receivers160,170,180. Thus, the delivery processor120can look up the destination(s) (e.g., the designated endpoints) for the security payload140, based on such a relationship. The delivery processor120attempts to connect to the one or more designated endpoints (e.g., REpresentational State Transfer (REST) endpoints). In the implementation illustrated inFIG.1, the one or more designated endpoints are message receivers160,170,180. FIG.1illustrates that the message receivers170and180are offline. According to an example implementation, the message producer110, on an initial delivery failure to the message receivers170and180, generates security payloads destined for the queueing service130. The message producer110can label the security payload140, at least in part based on a timestamp and the identity of the offline message receivers (e.g., RN1-SP1-Timestamp1-MR170 and RN1-SP1-Timestamp2-MR180). The message producer110can insert the security payload into a partition P1, P2, P3 of the queueing service130.
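Purely as an illustration of the labeling scheme mentioned above (the function name and the timestamp encoding below are assumptions, not the disclosed format), a producer might build such a label as follows:

```cpp
#include <chrono>
#include <string>

// Hypothetical label for a payload that failed delivery: retry number,
// payload identity, a timestamp, and the offline message receiver.
std::string make_retry_label(int retry_number,
                             const std::string& payload_id,
                             const std::string& receiver_id) {
    const auto now = std::chrono::system_clock::now().time_since_epoch();
    const auto seconds = std::chrono::duration_cast<std::chrono::seconds>(now).count();
    return "RN" + std::to_string(retry_number) + "-" + payload_id + "-" +
           std::to_string(seconds) + "-" + receiver_id;
}

// Example: make_retry_label(1, "SP1", "MR170") yields something like
// "RN1-SP1-1700000000-MR170", which is then inserted into partition P1.
```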
In various implementations, the queueing service130is partitioned based on a number of retries (e.g., RN1-RNx) for delivery of the security payload140to the offline message receivers170,180. According to an example implementation, security payloads that enter the queueing service130(i.e., have had at least one delivery failure) are queued independently per message receiver destination. In various implementations, the message producer110can determine retry intervals for the queued security payloads as follows:

Retry Number    Retry Interval
1               20 seconds
2               10 minutes
3               90 minutes
4               8 hours

These retry intervals are exemplary and do not limit the scope of the present innovation. Many message receivers are offline for a short period of time, and few message receivers are offline for long periods of time. An example of a message receiver160,170,180in the context of the present innovation is a data center. Such a data center might have estimated uptime of 99.9%, meaning the data center experiences downtime of 0.1% each year. That is, the data center expects to have cumulative downtime of 1.6 hours or less each year. Thus, in this context, a short time is a few seconds, and several minutes is a fairly long time. In some implementations, queue partitions representing fewer retries advantageously store security payloads in high numbers for a short period of time. Further, queue partitions representing a greater number of retries store fewer security payloads for longer periods of time. Thus, the queuing service130can distribute the security payloads across its memory, based on the retry number. In other words, the queuing service130partitions the security payloads by the number of retries, thus distributing the load to promote balance across the partitions of the queuing service130. By balancing the storage of the security payloads across the partitions based on retry numbers with increasing retry intervals, some implementations can balance the queue memory and disk footprint. Additional implementations can balance usage of the delivery processor120. In various implementations, the queuing service130can balance the partitions programmatically by altering the size of the retry intervals to achieve balance. In various implementations, on a delivery success or failure, the queuing service130deletes the security payload. In further implementations, on a delivery failure, the delivery processor120increments the number of retries and records the current time. In two particular examples, the delivery processor120can store the number of retries and current time in a table (as later described) or in the name of the security payload140. Other examples are possible, such as storing the number of retries and the current time in the security payload140itself, such as in a header or footer. In select implementations, the header or footer can also include the identity of the message receiver. The header or footer also can include the number of retries and the time of a most recent failed delivery to the message receiver. Then, the delivery processor120can insert the security payload140into the partition of the queuing service130corresponding to the next number of retries. In various implementations, the delivery processor120, when polling a query to the queuing service130, determines whether the current time has exceeded a threshold for the next retry attempt. As an example, the delivery processor120can determine the threshold by adding the recorded time and the retry interval for the current number of retries.
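Assuming the exemplary intervals above, one way to express the per-retry interval and the resulting not-before threshold is sketched below. The RetryEntry record loosely mirrors the tracking table discussed next; the names and types are assumptions made for illustration, not the disclosed implementation.

```cpp
#include <chrono>
#include <string>

using Clock = std::chrono::system_clock;

// One row of a tracking table: retry number, message receiver, timestamp of
// the last (failed) attempt, and the identity of the security payload.
struct RetryEntry {
    int retry_number;
    std::string receiver;
    Clock::time_point timestamp;
    std::string payload_id;
};

// Exemplary retry intervals keyed by retry number (1-based), as in the table above.
std::chrono::seconds retry_interval(int retry_number) {
    switch (retry_number) {
        case 1:  return std::chrono::seconds{20};
        case 2:  return std::chrono::minutes{10};
        case 3:  return std::chrono::minutes{90};
        default: return std::chrono::hours{8};
    }
}

// A queued payload becomes eligible for its next attempt only after the
// recorded time plus the interval for its current retry number has passed.
bool next_attempt_due(const RetryEntry& entry, Clock::time_point now = Clock::now()) {
    return now >= entry.timestamp + retry_interval(entry.retry_number);
}
```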
In some implementations, the recorded time is retrieved from a portion of the name of the security payload140or from a header or footer of the payload. This determination can prevent reading records from the queueing service130that have not aged sufficiently for the given retry number. In other words, according to example implementations, the queue partitions in the queueing service130act as not-before, as opposed to between, times. FIG.2illustrates a table200used by the delivery system, according to an implementation of the present disclosure. In various implementations, the table200includes a retry number column210, a message receiver column220, a timestamp column230, and a security payload identity column240. As discussed above, the queuing service130stores a security payload in a partition defined by a number of retries of sending the security payload. Thus, in many implementations, the queuing service130tracks the partition of the queuing service130storing the security payload by way of the number of retries. The retry number column210includes a number of retries for sending a security payload to a particular message receiver. In some implementations, the table200can include a column for a partition of the queuing service130in addition or as an alternative to the number of retries. Similarly, in some implementations, the table200can include a column for a retry period in addition or as an alternative to the number of retries. The message receiver column220includes the identity of the particular message receiver to which the delivery processor120will attempt a retry transmission of the security payload. In some implementations, the message receiver column can identify an Internet Protocol (IP) address, a name, a domain name, a customer ID, or other identifying information. The timestamp column230includes a time at which the particular security payload was placed in the partition of the queuing service130. In other implementations, the timestamp column230includes a time at which the delivery processor120performed the last attempt to transmit the security payload. In further implementations, the table200can include a column for a time at which the delivery processor120performs the next retry attempt for the particular security payload in addition or as an alternative to the timestamp column230. The security payload identity column240includes the identity of the particular security payload of which the transmission failed. In some implementations, the security payload identity column240can identify the security payload by a name or a checksum. In the example ofFIG.2, the table200includes three entries. The top entry of the table200indicates a failure of an attempt to transmit security payload SP2 to message receiver MR2. The queuing service130placed security payload SP2 in a partition P1 corresponding to retry number RN1 at time 10:00:00. The middle entry of the table200indicates a failure of an attempt to transmit security payload SP2 to message receiver MR3. The queuing service130placed security payload SP2 in a partition P1 corresponding to retry number RN1 at time 10:05:00. In some implementations, two copies of security payload SP2 are placed in the partition P1 of the queuing service130, which corresponds to retry number RN1. In other implementations, the partition P1 includes one copy of security payload SP2. The bottom entry of the table200indicates a failure of an attempt to retry a transmission of security payload SP1 to message receiver MR2. 
The queuing service130placed security payload SP1 in a partition P2 corresponding to retry number RN2 at time 11:00:00. In various implementations, based on the top entry in the table200, the delivery processor120retries a transmission of the security payload SP2 to the message receiver MR2 at a time of 10:00:20 (e.g., based on the above retry interval for a first retry). Based on the middle entry in the table200, the delivery processor120retries a transmission of the security payload SP2 to the message receiver MR3 at a time of 10:05:20 (e.g., based on the retry interval for the first retry). Based on the bottom entry in the table200, the delivery processor120retries a transmission of the security payload SP1 to the message receiver MR2 at a time of 11:10:00 (e.g., based on the above retry interval for a second retry). As discussed above, the retry interval for the second retry is greater than the retry interval for the first retry. As discussed previously, some implementations modify the name of the security payload, often as a substitute for using the table200. Other implementations modify the security payload itself, such as a header or footer thereof. In these implementations, the queuing service130stores independent copies of the security payloads. FIG.3illustrates an algorithm300performed by a delivery system according to an implementation of the present disclosure. The algorithm300begins at S305and advances to S310. At S310, a processor receives a security payload from the message producer110and instructs an attempt to transmit the security payload to a particular message receiver. A network interface (e.g., of delivery processor120) receives the instruction and performs an initial attempt to transmit the security payload to the particular message receiver. The algorithm300then advances to S315. In S315, the processor determines whether the initial attempt to transmit the security payload to the particular message receiver failed. For example, the processor can determine whether the network interface received an acknowledgement from the particular message receiver within a predetermined period of time (e.g., 10 seconds). If the network interface receives an acknowledgement from the particular message receiver within the predetermined period of time, then the processor determines the initial attempt to transmit the security payload was successful (e.g., did not fail). If the network interface does not receive an acknowledgement from the particular message receiver within the predetermined period of time, then the processor determines the initial attempt to transmit the security payload failed. If the processor determines the initial attempt to transmit the security payload did not fail, then the algorithm300advances to S360. If the processor determines the initial attempt to transmit the security payload failed, then the algorithm300advances to S320to prepare a first retry. In S320, the processor determines a partition of the queuing service130in which to store the security payload, based on the retry number. For an initial retry (i.e., the retry after a failed initial transmission), the processor determines the security payload is to be stored in a first partition of the queuing service130. The processor can instruct the queuing service130to store the security payload in the partition, and the queuing service130does so. The processor also determines a time at which the security payload is stored in the queuing service130. The processor records that time as a timestamp.
In addition, the processor determines a retry interval, based on the retry number. As discussed above, in many implementations, a lower retry number (e.g., one) has a shorter retry interval than a larger retry number (e.g., four). In various implementations, the retry intervals are static values. In other implementations, the processor dynamically determines one or more retry intervals to achieve a better balance of security payloads across the queuing service130. In some implementations, the processor stores the retry number, the identity of the message receiver, the timestamp, and the identity of the security payload in a table, such as table200. In various implementations, the processor renames an instance of the security payload, at least based on the determined time and the message recipient. The name can also include the retry number and the identity of the security payload. For example, the processor can rename the packet RN1-SP2-100000-MR2. In at least one implementation, the processor can modify the header or footer of the security payload itself to identify the determined time and the message recipient. The algorithm300then advances to S330. In S330, the processor determines whether the retry interval determined in S320has elapsed since the time determined in S320. For example, if the retry interval is 10 seconds and the determined time is 10:00:00, then the processor determines whether a current time is later than 10:00:10. For example, the processor can query the entries of the table200. If the processor determines in S330the retry interval has not elapsed, then the algorithm300returns to S330to wait for the retry interval to elapse. If the processor determines in S330the retry interval has elapsed, then the algorithm300advances to S335. In S335, the processor instructs a retrieval of the security payload from a partition of the queuing service130. For example, the processor can determine the identity of the security payload for which the next transmission retry attempt is due, based on the retry number and the time stored for the entry in the table200. In addition, the processor can determine the partition of the queuing service130that stores the security payload, based on the retry number stored in the table200. Further, the processor can determine the message receiver to which the delivery processor120will attempt the next transmission retry, based on the entry in the table200. In some implementations, the processor can identify an instance of the security payload, based on the retry number and a time stored in a name of the security payload. Then, the processor can determine the message receiver, based on the name of the instance of the security payload. The processor can also determine the partition of the queuing service130and/or the retry number, based on the name of the instance of the security payload. The processor optionally deletes the security payload from its current partition in S335. The algorithm300then advances to S340. In S340, the processor instructs performance of an attempt to retry transmission of the security payload to the message receiver determined in S335. For example, the processor instructs the delivery processor120to perform the attempt. In other implementations, the processor instructs a network interface to perform the attempt. The algorithm300then advances to S345. In S345, the processor determines whether the attempt to retry the transmission of the security payload to the message receiver failed.
As before, in one example, the processor can determine whether the network interface received an acknowledgement from the message receiver within a predetermined period of time (e.g., 10 seconds). If the network interface receives an acknowledgement from the message receiver within the predetermined period of time, then the processor determines the retry attempt to transmit the security payload was successful (e.g., did not fail). If the network interface does not receive an acknowledgement from the message receiver within the predetermined period of time, then the processor determines the retry attempt to transmit the security payload failed. If the processor determines the retry attempt to transmit the security payload did not fail, then the algorithm300advances to S360. If the processor determines the retry attempt to transmit the security payload failed, then the algorithm300advances to S350. In S350, the processor determines whether a maximum number of retries has been reached. For example, the maximum number of retries can be determined in view of the unlikelihood that a retry interval is exceeded. For example, a retry interval of more than two hours is unlikely to be exceeded for a message receiver expected to have cumulative downtime of less than 1.6 hours each year. The maximum number of retries can also be determined in view of potential circumstances surrounding a retry interval being exceeded. For example, if a retry attempt to a message receiver is unsuccessful after eight hours, then the message receiver might be in the process of being replaced. In such a situation, the message receiver itself might request (e.g., pull) the security payloads when the message receiver is again stable. Thus, it might not make sense to queue messages for more than eight hours for such a message receiver. The maximum number of retries further can be determined based on storage constraints of the partitions or on any other criterion or criteria. If the processor determines in S350that the maximum number of retries has been reached, then the delivery processor120can delete the security payload. In some implementations, the delivery processor notifies an administrator of the delivery system100that the security payload was undeliverable to the message receiver. The algorithm300then advances to S360. If the processor determines in S350that the maximum number of retries has not been reached, then the algorithm300advances to S355. In S355, the processor increments the retry number for the security payload. For example, the processor can increment the number of retries in the entry for the security payload in the table200. In another implementation, the processor can rename the instance of the security payload, based on the incremented retry number. In suitable implementations, the processor can modify the header or footer of the security payload with the incremented retry number. The algorithm300then returns to S320. In S360, the algorithm300concludes. In the example shown inFIG.3, the processor retrieves the security payload from the current partition in S335. In other implementations, the processor periodically polls the queuing service130with the current time. Thus, the queuing service130or the partition thereof can identify and retrieve a security payload from the partition, based on the current time and the retry interval associated with the partition. The queuing service130or partition then returns the security payload to the processor.
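Pulling the steps ofFIG.3together, a simplified control loop might read as below. This is a sketch under stated assumptions: try_send stands in for the transmission-plus-acknowledgement step (e.g., with a 10-second timeout), retry_interval is the mapping from the earlier sketch, and the loop reflects the reading in which the payload is deleted and an administrator notified once the maximum number of retries is reached.

```cpp
#include <chrono>
#include <functional>
#include <string>
#include <thread>

std::chrono::seconds retry_interval(int retry_number);  // defined in the earlier sketch

// Simplified rendering of S310-S360: attempt delivery, and on failure park the
// payload for increasing intervals until delivery succeeds or a retry cap is hit.
bool deliver_with_retries(const std::string& payload,
                          const std::string& receiver,
                          const std::function<bool(const std::string&,
                                                   const std::string&)>& try_send,
                          int max_retries = 4) {
    if (try_send(payload, receiver))                          // S310/S315: initial attempt
        return true;                                          // success: S360

    for (int retry = 1; retry <= max_retries; ++retry) {      // S320: partition for this retry number
        std::this_thread::sleep_for(retry_interval(retry));   // S330: wait out the retry interval
        if (try_send(payload, receiver))                      // S340/S345: retry the transmission
            return true;                                      // success: S360
    }
    // S350: maximum number of retries reached; the payload can be deleted and
    // an administrator notified that it was undeliverable. S360: conclude.
    return false;
}
```

A production implementation would not block a thread with sleep_for; as described above, it would instead poll the queuing service and act only on entries whose not-before threshold has passed.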
Further, when an offline message receiver170,180returns online, the delivery processor120delivers the queued push notifications in the order in which their retry intervals from their storage time elapse. Consequently, the push notifications are potentially delivered out of order (i.e., not in the order in which their initial transmission was attempted) and not necessarily all at once. As above, the delivery processor120can select security payloads from the queuing service130and can attempt to send the security payloads by the queue polling schedule. Such an implementation of the delivery system100typically delivers one security payload to a particular message receiver at a time. In some implementations, the queuing service130is maintained per message receiver. In an exemplary such implementation, upon the delivery system100receiving an acknowledgement of a receipt of a security payload from the message receiver170, the delivery processor120can retrieve outstanding security payloads from the partitions of the queuing service130for the message receiver170. For example, the queuing service130can determine these security payloads, based on the message receiver column220of entries of the table200. In other implementations, the queuing service130can determine these security payloads, based on the message receiver identified in the names of the security payloads. The delivery processor120can also identify the security payloads by determining whether their header or footer identifies the message receiver. The delivery processor120then can prioritize attempts to transmit the outstanding security payloads to message receiver170. That is, the delivery processor120can recognize a successful delivery of a security payload to a message receiver, query the queuing service130for security payloads intended for this message receiver, and deliver the security payloads to the message receiver. FIG.4illustrates a logical diagram of delivery pipeline400, according to another implementation of the present disclosure. The delivery pipeline includes a delivery queue405, message producers410, a download queue430, and message receivers460,470. As illustrated inFIG.4, the message producer410provides a security payload to the delivery queue405. The delivery queue405attempts to dispatch the security payload to the message receivers460,470. If the dispatch is unsuccessful, then the number of retry attempts to publish the security payload is set at 1, and a retry interval of t1 for the security payload is established. The security payload is then stored in a first partition of the download queue430. After a first retry interval (e.g., t1 hours) has elapsed since storage of the security payload in the first partition, the overall retention period of the security payload is decreased by the first retry interval. The security payload is consumed, and a dispatch is again attempted. In particular, a first retry transmission of the security payload to message receivers460,470is attempted. If the first retry transmission is unsuccessful, then the number of retry attempts to publish the security payload is set at 2, and a retry interval of t2 is established. The retry interval t2 generally differs from, and typically is longer than, the retry interval t1. 
After a second retry interval (e.g., t2 hours) has elapsed since storage of the security payload in the second partition, the overall retention period of the security payload is decreased by the second retry interval, the security payload is again consumed, and a dispatch is again attempted. In the example ofFIG.4, this process repeats until the nth retry transmission. If the nth retry transmission is unsuccessful, then the delivery system determines whether an overall retention period has elapsed. If the overall retention period has not elapsed, then the overall retention period is decreased by the retry interval of the nth retry, and the security payload is again stored in the first partition (with its retry interval of t1). Advantageously, some implementations of the delivery system100do not require additional permanent storage. In various implementations, the delivery system100can keep the queue partitions balanced. The delivery system100illustrated inFIG.1included the message producer110. In many implementations, the message producer110is outside the delivery system100. In the present disclosure, the queued messages were described in the context of security payloads. In other implementations, the queued messages are not security payloads and can include any other type of data. In some implementations, the system can be modified to use the queuing service130for an initial attempt to send a security payload. In this sense, the queuing service130and the table200can accommodate initial tries (e.g., a retry number of 0). Some implementations of the present disclosure use the Apache Kafka® platform. In particular, the queuing service130can be a Kafka® queue. Select implementations use IBM MQ products. In various implementations, messages are delivered through an API (application programming interface) program. In many implementations, partitions within the queuing service130act as distinct logical queues that are backed by a given partition within the queuing service130. In the above description, the queuing service130can contain multiple partitions, and the partitions can be further divided. In several implementations, the partitions of the queuing service130can be generic for a plurality of message receivers; in other implementations, one or more message receivers have independent partitions of the queuing service130. In many implementations, the message producer110publishes security artifacts to a delivery queue for an initial delivery attempt of a security payload. In various implementations, the queuing service130is generic in the sense that it does not contain differentiated partitions. Further, although the delivery processor120attempts to transmit a security payload to multiple message receivers, the delivery queue typically contains only one copy of the security payload. When an attempt to deliver a security payload fails, the delivery processor120inserts or publishes the security payload to the queuing service130. In various implementations, the delivery processor120pushes an independent copy of the security payload into the queuing service130for a failed message receiver. That is, if the delivery processor120fails to deliver the security payload to message receivers170,180, then the queuing service130can contain two copies of the security payload, according to many implementations. In some implementations, there is one generic queuing service130for a plurality of message receivers that is partitioned per retry number value.
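Returning to theFIG.4flow, the interplay between the per-retry intervals and the overall retention period might be sketched as follows; overall_retention, the interval list, and try_dispatch are placeholders introduced only for illustration, not disclosed values or interfaces.

```cpp
#include <chrono>
#include <string>
#include <vector>

// Sketch of the FIG.4 loop: each wait in partition i is charged against an
// overall retention budget, and the payload wraps back to the first partition
// while budget remains. Interval and budget values are placeholders.
bool deliver_within_retention(const std::string& payload,
                              const std::vector<std::chrono::seconds>& intervals,  // t1..tn
                              std::chrono::seconds overall_retention,
                              bool (*try_dispatch)(const std::string&)) {
    if (try_dispatch(payload)) return true;                    // initial dispatch
    while (overall_retention > std::chrono::seconds::zero()) {
        for (const auto& interval : intervals) {               // partitions 1..n in order
            // (a real system would park the payload in partition i for `interval`)
            overall_retention -= interval;                     // budget shrinks by t1, t2, ..., tn
            if (try_dispatch(payload)) return true;            // retry after the interval elapses
            if (overall_retention <= std::chrono::seconds::zero())
                return false;                                  // retention period exhausted
        }
        // nth retry failed and retention remains: wrap back to the first partition.
    }
    return false;
}
```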
In some implementations, the queuing service130further divides the partitions to have one download queue per message receiver. The system may include one or more processors and a memory storing instructions that, when executed by the one or more processors, cause the system to perform the implementations described herein. FIG.5illustrates a computing device500, according to an implementation of the present disclosure. Although illustrated within a single housing, the computing device500can be distributed across plural housings or sub-systems that cooperate in executing program instructions. In some implementations, the computing device500can include one or more blade server devices, standalone server devices, personal computers (including laptop computers and tablet computers), routers, hubs, switches, bridges, firewall devices, intrusion detection devices, mainframe computers, network-attached storage devices, video game systems, smartphones and other mobile telephones, and other computing devices. The computing device500can execute the Windows® operating system in many implementations. The hardware of the computing device500can be configured according to a Symmetric Multi-Processing (SMP) architecture or a Non-Uniform Memory Access (NUMA) architecture. The computing device500can include a network interface510that provides one or more communication connections and/or one or more devices that allow for communication between the computing device500and other computing systems (not shown) over a communication network or collection of networks (not shown) or the air. The network interface can communicate using near-field communications (NFC), Wi-Fi™, Bluetooth, Ethernet, cellular (e.g., 4G, 5G), facsimile, or any other wired or wireless interface. The computing device500can also include a user input interface520that receives inputs from a human. The user input interface520can be or include a mouse, a touchpad, a keyboard, a touchscreen, a trackball, a camera, a microphone, a joystick, a game controller, a scanner, or any other input device. The computing device500can include a memory530, also termed a “storage.” The memory530can include or be one or more computer-readable storage media readable by a processor540and that store software. The memory530can be implemented as one storage device and can also be implemented across multiple co-located or distributed storage devices or sub-systems. The memory530can include additional elements, such as a memory controller, that communicate with the processor540. The memory530can also include storage devices and/or sub-systems on which data and/or instructions are stored. The computing device500can access one or more storage resources to obtain information to carry out any of the processes indicated in this disclosure and, in particular,FIG.3. The memory530can be or include a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), a random-access memory (RAM), a dynamic RAM (DRAM), a static RAM (SRAM), a hard drive, a cache memory, a flash memory, a removable disk, or a tape reel. The memory530can be or include resistive RAM (RRAM) or magneto-resistive RAM (MRAM). Other implementations are possible. A message queuing program560stored in memory530can include routines for at least partially performing at least one of the processes illustrated inFIG.3and can be implemented in program instructions. 
Further, the software, when executed by the computing device500in general or the processor540specifically, can direct, among other functions, the computing device500or the processor540to perform the message queuing as described herein. The computing device500can include a processor540(e.g., a processing unit). The processor540can perform the operations of the message producer110, the delivery processor120, and/or the queuing service130. The processor540can be or include one or more hardware processors and/or other circuitry that retrieve and execute software from the memory530. The processor540can be implemented within one processing device, chip, or package and can also be distributed across multiple processing devices, chips, packages, or sub-systems that cooperate in executing program instructions. In some implementations, the processor540is or includes a Graphics Processing Unit (GPU). The processor540can have any register size, such as a 32-bit register or a 64-bit register, among others. The processor540can include multiple cores. Implementations of the processor540are not limited to any particular number of threads. The processor540can be fabricated by any process technology, such as 14 nm process technology. The computing device500can also include a user output interface550that outputs information to a human user. The user output interface550can be or include a display (e.g., a screen), a touchscreen, speakers, a printer, or a haptic feedback unit. In many implementations, the user output interface550can be combined with the user input interface520to include, for example, a touchscreen or a headset including headphones and a microphone. In implementations including multiple computing devices, a server of the system or, in a serverless implementation, a peer can use one or more communications networks that facilitate communication among the computing devices. For example, the one or more communications networks can include or be a local area network (LAN), a wide area network (WAN), or a metropolitan area network (MAN) that facilitate communication among the computing devices. One or more direct communication links can be included between the computing devices. In addition, in some cases, the computing devices can be installed at geographically distributed locations. In other cases, the multiple computing devices can be installed at one geographic location, such as a server farm or an office. As used herein, the terms “storage media” or “computer-readable storage media” can refer to non-transitory storage media, such as non-limiting examples of a hard drive, a memory chip, and cache memory, and to transitory storage media, such as carrier waves or propagating signals. Aspects of the system for message queuing can be implemented in various manners (e.g., as a method, a system, a computer program product, or one or more computer-readable storage media). Accordingly, aspects of the present disclosure can take the form of a hardware implementation, a software implementation (including firmware, resident software, or micro-code) or an implementation combining software and hardware aspects that can generally be referred to herein as a “circuit,” “module” or “system.” Functions described in this disclosure can be implemented as an algorithm executed by one or more hardware processing units, e.g., one or more microprocessors of one or more computers. 
In various implementations, different operations and portions of the operations of the algorithms described can be performed by different processing units. Furthermore, aspects of the present disclosure can take the form of a computer program product implemented in one or more computer-readable media having computer-readable program code implemented, e.g., encoded or stored, thereon. In various implementations, such a computer program can, for example, be downloaded (or updated) to existing devices and systems or be stored upon manufacture of these devices and systems. The detailed description presents various descriptions of specific implementations. The innovations described can be implemented in a multitude of different ways, for example, as defined and covered by the claims and/or select examples. In the description, reference is made to the drawings where like reference numerals can indicate identical or functionally similar elements. Elements illustrated in the drawings are not necessarily drawn to scale. Additionally, particular implementations can include more elements than illustrated in a drawing and/or a subset of the elements illustrated in a drawing. Further, some implementations can incorporate a suitable combination of features from two or more drawings. The disclosure describes various illustrative implementations and examples for implementing the features and functionality of the present disclosure. The components, arrangements, and/or features are described in connection with various implementations and are merely examples to simplify the present disclosure and are not intended to be limiting. In the development of actual implementations, implementation-specific decisions can be made to achieve the developer's specific goals, including compliance with system, business, and/or legal constraints, which can vary from one implementation to another. Additionally, while such a development effort might be complex and time-consuming, it would be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure. The systems, methods and devices of this disclosure have several innovative aspects, no one of which is solely responsible for the attributes disclosed herein. Some objects or advantages might not be achieved by implementations described herein. Thus, for example, certain implementations can operate in a manner that achieves or optimizes one advantage or group of advantages as taught herein and not other objects or advantages as taught or suggested herein. In one example implementation, electrical circuits of the drawings can be implemented on a board of an electronic device. The board can be a general circuit board that can hold various components of the internal electronic system of the electronic device and, further, provide connectors for other peripherals. More specifically, the board can provide the electrical connections by which other components of the system can communicate electrically. Any processors (inclusive of digital signal processors, microprocessors, and supporting chipsets) and computer-readable memory elements can be coupled to the board based on configurations, processing demands, and computer designs. Other components such as external storage, additional sensors, controllers for audio/video display, and peripheral devices can be attached to the board as plug-in cards, via cables, or integrated into the board itself. 
In various implementations, the functionalities described herein can be implemented in emulation form as software or firmware running within one or more configurable (e.g., programmable) elements arranged in a structure that supports these functions. The software or firmware providing the emulation can be provided on one or more non-transitory, computer-readable storage media including instructions to allow one or more processors to carry out those functionalities. In another example implementation, the electrical circuits of the drawings can be implemented as stand-alone modules (e.g., a device with associated components and circuitry configured to perform a specific application or function) or implemented as plug-in modules into application-specific hardware of electronic devices. Implementations of the present disclosure can be readily included in a system-on-chip (SOC) package. An SOC represents an integrated circuit (IC) that integrates components of a computer or other electronic system into one chip. The SOC can contain digital, analog, mixed-signal, and radio frequency functions on one chip substrate. Other implementations can include a multi-chip-module (MCM) with a plurality of separate ICs located within one electronic package and that interact through the electronic package. In various other implementations, the processors can be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), a programmable logic array (PLA), programmable array logic (PAL), generic array logic (GAL), and other semiconductor chips. The specifications, dimensions, and relationships outlined herein (e.g., the number of processors and logic operations) have been offered for non-limiting purposes of example and teaching. Such information can be varied considerably. For example, various modifications and changes can be made to arrangements of components. The description and drawings are, accordingly, to be regarded in an illustrative sense, not in a restrictive sense. With the numerous examples provided herein, interaction was described in terms of two, three, four, or more electrical components for purposes of clarity and example. The system can be consolidated in any manner. Along similar design alternatives, the illustrated components, modules, and elements of the drawings can be combined in various possible configurations within the scope of this disclosure. In some cases, it is clearer to describe one or more of the functionalities of a given set of flows by referencing a reduced number of electrical elements. The electrical circuits of the drawings and their teachings are readily scalable and can accommodate many components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided do not limit the scope or inhibit the teachings of the electrical circuits as potentially applied to a myriad of other architectures. In this disclosure, references to various features (e.g., elements, structures, modules, components, steps, operations, characteristics, etc.) included in “one implementation,” “example implementation,” “an implementation,” “another implementation,” “some implementations,” “various implementations,” “other implementations,” “alternative implementation,” and the like are intended to mean that any such features are included in one or more implementations of the present disclosure and might not necessarily be combined in the same implementations. 
Some operations can be deleted or omitted where appropriate, or these operations can be modified or changed considerably. In addition, the timing of these operations can be altered considerably. The preceding operational flows have been offered for purposes of example and discussion. Implementations described herein provide flexibility in that any suitable arrangements, chronologies, configurations, and timing mechanisms can be provided. EXAMPLES In Example A1, an apparatus includes a network interface that performs an initial attempt to transmit a security payload; and a processing unit configured to determine a first partition of a queuing service for storing the security payload at a first time, at least in part based on a determination that an initial attempt to transmit the security payload failed, and to instruct a retrieval of the security payload from the first partition to perform a first retry attempt to transmit the security payload, at least in part based on a determination that a first retry interval since the first time has elapsed. Example A2 is the apparatus of Example A1, wherein the processing unit further is configured to determine a second partition of the queuing service for storing the security payload at a second time, at least in part based on a determination that the first retry attempt to transmit the security payload failed. Example A3 is the apparatus of Example A2, wherein the determination that the first retry attempt to transmit the security payload failed is at least in part based on a determination that the network interface did not receive an acknowledgement of the first retry attempt within a second predetermined time. Example A4 is the apparatus of any of Examples A2-A3, wherein the processing unit further is configured to determine a second retry interval, the second retry interval being different than the first retry interval, and to instruct a retrieval of the security payload from the second partition to perform a second retry attempt to transmit the security payload, at least in part based on a determination that the second retry interval since the second time has elapsed. Example A5 is the apparatus of Example A4, wherein the second retry interval is greater than the first retry interval. Example A6 is the apparatus of any of Examples A1-A5, wherein the processing unit further is configured to name the security payload, at least in part based on the first time and a recipient of the security payload. Example A7 is the apparatus of any of Examples A1-A6, wherein the processing unit further is configured to instruct the network interface to perform the first retry attempt to transmit the security payload. In Example C1, a non-transitory, computer-readable medium is encoded with instructions that, when executed by a computer, cause the computer to perform operations comprising: performing an initial attempt to transmit a security payload; determining a first partition of a queuing service for storing the security payload at a first time, at least in part based on a determination that an initial attempt to transmit the security payload failed; and instructing a retrieval of the security payload from the first partition to perform a first retry attempt to transmit the security payload, at least in part based on a determination that a first retry interval since the first time has elapsed. 
Example C2 is the medium of Example C1, the operations further comprising: determining a second partition of the queuing service for storing the security payload at a second time, at least in part based on a determination that the first retry attempt to transmit the security payload failed. Example C3 is the medium of Example C2, wherein the determination that the first retry attempt to transmit the security payload failed is at least in part based on a determination that an acknowledgement of the first retry attempt was not received within a second predetermined time. Example C4 is the medium of any of Examples C2-C3, the operations further comprising: determining a second retry interval, the second retry interval being different than the first retry interval; and instructing a retrieval of the security payload from the second partition to perform a second retry attempt to transmit the security payload, at least in part based on a determination that the second retry interval since the second time has elapsed. Example C5 is the medium of Example C4, wherein the second retry interval is greater than the first retry interval. Example C6 is the medium of any of Examples C1-C5, the operations further comprising: naming the security payload, at least in part based on the first time and a recipient of the security payload. Example C7 is the medium of any of Examples C1-C6, the operations further comprising: instructing a performance of the first retry attempt to transmit the security payload. In Example M1, a method includes: performing an initial attempt to transmit a security payload; determining a first partition of a queuing service for storing the security payload at a first time, at least in part based on a determination that the initial attempt to transmit the security payload failed; and instructing a retrieval of the security payload from the first partition to perform a first retry attempt to transmit the security payload, at least in part based on a determination that a first retry interval since the first time has elapsed. Example M2 is the method of Example M1, further comprising: determining a second partition of the queuing service for storing the security payload at a second time, at least in part based on a determination that the first retry attempt to transmit the security payload failed. Example M3 is the method of Example M2, wherein the determination that the first retry attempt to transmit the security payload failed is at least in part based on a determination that an acknowledgement of the first retry attempt was not received within a second predetermined time. Example M4 is the method of any of Examples M2-M3, further comprising: determining a second retry interval, the second retry interval being different than the first retry interval; and instructing a retrieval of the security payload from the second partition to perform a second retry attempt to transmit the security payload, at least in part based on a determination that the second retry interval since the second time has elapsed. Example M5 is the method of Example M4, further comprising: naming the security payload at least in part based on the first time and a recipient of the security payload. Example M6 is the method of any of Examples M1-M5, wherein the second retry interval is greater than the first retry interval. Example M7 is the method of any of Examples M1-M6, further comprising: instructing a transmission of the first retry attempt of the security payload.
DESCRIPTION OF EMBODIMENTS In a current LTE system, in case that a PUCCH sending mode of a PUCCH format 3 is configured, and assuming that five FDD downlink carriers are configured, data scheduling and a PUCCH channel resource indication manner are specifically as follows: If UE receives only a PDCCH for scheduling a PDSCH on a primary component carrier, the UE feeds back an ACK/NACK by using a PUCCH format 1a/1b, and a channel resource of the PUCCH format 1a/1b is implicitly indicated by using a control channel element (CCE) number of the PDCCH. In case that UE receives at least one PDCCH for scheduling a PDSCH on a secondary component carrier, the UE feeds back an ACK/NACK by using a PUCCH format 3. A channel resource of the PUCCH format 3 is explicitly indicated by using a 2-bit field on the PDCCH for scheduling the PDSCH on the secondary component carrier. The 2-bit field may be referred to as a channel resource indication field. Specifically, a base station allocates four PUCCH format 3 channel resources to the UE by using radio resource control (RRC) signaling in advance, and a specific channel resource among the four channel resources that is used for each time of scheduling is indicated by using the 2-bit field on the PDCCH for scheduling the PDSCH on the secondary component carrier. An ACK/NACK feedback of a TDD single carrier is further supported in a current PUCCH format 3 mode. A specific procedure is as follows: If UE receives only a PDCCH for scheduling a PDSCH on a primary component carrier, and a value indicated by a downlink assignment index (DAI) field on the PDCCH is ‘1’, the UE feeds back an ACK/NACK by using a PUCCH format 1a/1b, and a channel resource of the PUCCH format 1a/1b is implicitly indicated by using a CCE number of the PDCCH. If UE receives a PDCCH for scheduling a PDSCH on a primary component carrier, and a value indicated by a DAI field on the PDCCH is greater than ‘1’, the UE feeds back an ACK/NACK by using a PUCCH format 3, and a channel resource of the PUCCH format 3 is explicitly indicated by using a 2-bit field on the PDCCH. The foregoing ACK/NACK is transmitted by using the PUCCH format 1a/1b, so as to reduce overheads of the PUCCH format 3. Because code division multiplexing can be performed for only five UEs in one RB by using the PUCCH format 3, whereas code division multiplexing can be performed for a maximum of 36 UEs in one RB by using the PUCCH format 1a/1b, resource overheads of the PUCCH format 3 are reduced as much as possible. The following describes the technical solutions in the embodiments of the present disclosure with reference to the accompanying drawings in the embodiments of the present disclosure. Apparently, the described embodiments are a part rather than all of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative efforts shall fall within the protection scope of the present disclosure. It should be noted that provided that no conflict occurs, the embodiments of the present disclosure and features in the embodiments may be mutually combined. FIG.1shows an application scenario according to an embodiment of the present disclosure. InFIG.1, an LTE system is used as an example for description, but this embodiment of the present disclosure is not limited to the LTE system. As shown inFIG.1, an LTE communications system includes an access network device and user equipment. 
The access network device may configure multiple carriers for UE to increase a data rate of the UE, so as to implement CA, or may configure only one carrier for UE. The carrier herein refers to a carrier group inFIG.1, and one carrier group includes one uplink carrier and one downlink carrier. For example, inFIG.1, the access network device configures two carriers for user equipment2, and configures one carrier for user equipment1. It should be understood that in the embodiments of the present disclosure, the user equipment may also be referred to as a terminal, terminal equipment, a mobile station (MS), a mobile terminal, or the like. The user equipment may communicate with one or more core networks by using a radio access network (RAN). For example, the user equipment may be a mobile phone (or referred to as a cellular phone), or a computer with a mobile terminal. For example, the user equipment may be a portable, pocket-sized, handheld, computer built-in, or in-vehicle mobile apparatus, and exchanges voice and/or data with the radio access network. In the embodiments of the present disclosure, the access network device may be a base station, an enhanced base station, a relay with a scheduling function, or the like. The base station may be an evolved NodeB (eNB or e-NodeB) in the LTE system, or may be a base station in another system, such as an evolved system of the LTE system. This is not limited in the embodiments of the present disclosure. The subsequent embodiments are described by using the base station as an example, but it does not indicate that the embodiments of the present disclosure are limited only to the base station. It should be noted that, for specific functions performed by the user equipment and the access network device included in the system in this embodiment, refer to descriptions of the subsequent embodiments. FIG.2shows a schematic structural diagram of user equipment100according to an embodiment of the present disclosure. As shown inFIG.2, the user equipment100includes a receiving module110, a processing module120, and a sending module130. When the user equipment100feeds back an ACK/NACK by using a PUCCH format 3, a DFT-S-OFDM transmission manner may be used. A channel structure for feeding back the ACK/NACK by using the PUCCH format 3 is shown inFIG.3, and the channel structure may be implemented by the processing module120. Specifically, Reed Muller (RM) channel encoding is performed on original ACK/NACK bits such as 20 bits to generate 48 bits, the encoded bits are scrambled, and then the scrambled bits are modulated into 24 quadrature phase shift keying (QPSK) symbols, which are equally placed in two timeslots of one subframe. In this way, there are 12 QPSK symbols in each timeslot, and the 12 QPSK symbols are placed on 12 consecutive subcarriers of one time-domain symbol in one timeslot, that is, the 12 QPSK symbols occupy 12 subcarriers of one time-domain symbol on one resource block (RB). Then, for each timeslot, spectrum spread is performed in a time domain by using a length-5 orthogonal cover code (OCC) sequence w. The OCC occupies five time-domain symbols on one RB in one timeslot, code division multiplexing may be performed for different UEs on one RB by using different OCCs, and the other two symbols are used for carrying a reference signal (RS). Then, DFT precoding and inverse fast Fourier transform (IFFT) are performed on a signal obtained by means of spectrum spread. 
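As a simplified, non-limiting numerical sketch of the per-slot processing chain described above, the following code models the QPSK modulation, the length-5 OCC spreading, and the DFT precoding for the five data time-domain symbols of one timeslot. The random bits stand in for the RM-encoded and scrambled ACK/NACK bits, the Gray mapping and the DFT-based cover code are illustrative assumptions, and the two reference-signal symbols and the final IFFT/subcarrier mapping are omitted; this is not the standardized PUCCH format 3 implementation.

```python
import numpy as np

def qpsk_modulate(bits):
    # Map pairs of bits to QPSK symbols (illustrative Gray mapping and normalization).
    b = np.asarray(bits, dtype=float).reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def pucch_format3_slot(coded_bits_slot, occ):
    """Sketch of one timeslot of the PUCCH format 3 processing chain.

    coded_bits_slot: the 24 scrambled, encoded bits carried in this slot
                     (half of the 48 encoded bits of the subframe).
    occ: length-5 orthogonal cover code, one chip per data SC-FDMA symbol.
    Returns a 5 x 12 array: five data symbols, each precoded onto the
    12 subcarriers of one RB (the two reference-signal symbols are omitted).
    """
    data = qpsk_modulate(coded_bits_slot)          # 12 QPSK symbols
    assert data.size == 12 and len(occ) == 5
    spread = np.outer(occ, data)                   # time-domain spreading, 5 x 12
    # DFT precoding (12-point) per data symbol before IFFT/subcarrier mapping
    return np.fft.fft(spread, axis=1) / np.sqrt(12)

# Example: 48 encoded bits split evenly across the two slots of one subframe,
# spread with one length-5 cover code so that different UEs can share the RB.
encoded_bits = np.random.randint(0, 2, 48)          # stands in for RM-coded ACK/NACK bits
occ_ue = np.exp(2j * np.pi * 1 * np.arange(5) / 5)  # one possible length-5 cover code
slot0 = pucch_format3_slot(encoded_bits[:24], occ_ue)
slot1 = pucch_format3_slot(encoded_bits[24:], occ_ue)
```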
To support transmission of more than 20 ACK/NACK bits, a method is to expand a current capacity of the PUCCH format 3, for example, from one RB to multiple RBs. Specifically, a dual-RB PUCCH format 3 is used as an example. In this way, in the foregoing channel format, there are 40 original ACK/NACK bits, and 12 subcarriers occupied in each timeslot only need to be expanded into 24 subcarriers occupied in each timeslot without time-domain OCC spectrum spread. In this way, the dual-RB PUCCH format 3 can proportionally support a feedback of the 40 ACK/NACK bits, and can further support CA of more carriers (such as ten carriers). A case in which one RB is expanded into three RBs or more RBs is similar, and expansion needs to be performed only in a frequency domain. However, due to a limited multiplexing capability of a single-RB PUCCH format 3, overheads of the single-RB PUCCH format 3 are higher than those of a PUCCH format 1a/1b. When the single-RB PUCCH format 3 is expanded into a multi-RB PUCCH format 3, overheads may be even higher because a multiplexing capability of the multi-RB PUCCH format 3 is the same as that of the single-RB PUCCH format 3, but occupied resources double with RB expansion. The dual-RB PUCCH format 3 is used as an example. Assuming that CA of ten carriers is supported, that is, if ten carriers are scheduled, the dual-RB PUCCH format 3 is used. However, after ten carriers are configured for UE, not all of the ten carriers in each subframe are scheduled; instead, with multiple factors considered, several specific carriers in the ten carriers are scheduled for the UE for data transmission. Specifically, a quantity of to-be-scheduled carriers may be determined according to current service load. However, even if there is a scheduling requirement, a capacity of a PDCCH resource area further needs to be considered. If a PDCCH capacity of the UE is already insufficient for scheduling, data on a corresponding carrier cannot be scheduled. Therefore, even if ten carriers are configured for the UE, only some carriers in a subframe may need to be scheduled for data transmission. In addition, not all downlink subframes on one carrier are scheduled for the UE in actual scheduling. Therefore, optimization of overheads of the PUCCH format 3 may be considered, to reduce the overheads as much as possible. Based on a solution in which a single-RB PUCCH format 3 is expanded into a PUCCH format 3 of at least two RBs, in the solution of this embodiment of the present disclosure, overheads of a PUCCH format 3 are optimized according to a dynamically scheduled downlink subframe and/or carrier. A specific solution is as follows: User equipment UE receives downlink control information sent by an access network device. The UE receives a data channel scheduled by using the downlink control information. The UE determines an uplink subframe used for sending feedback information corresponding to the data channel. A first downlink subframe set associated with the uplink subframe includes a first subset and a second subset, the first subset includes at least two downlink subframes, and the first subset is a proper subset of the second subset. The UE determines a channel resource. The channel resource is a first uplink channel resource or a second uplink channel resource, the first uplink channel resource corresponds to the first subset, and the second uplink channel resource corresponds to the second subset. 
The UE sends the feedback information on the channel resource in the uplink subframe by using a channel format. The channel format is a p-codebook-size channel format, and the UE sends the feedback information on the first uplink channel resource by using the p-codebook-size channel format, or the channel format is a q-codebook-size channel format, and the UE sends the feedback information on the second uplink channel resource by using the q-codebook-size channel format; and p and q are natural numbers, and q>p. The p-codebook-size channel format means that the channel format can support a feedback of an ACK/NACK with a maximum codebook size of p, and the q-codebook-size channel format means that the channel format can support a feedback of an ACK/NACK with a maximum codebook size of q. A codebook size refers to a quantity of original unencoded ACK/NACK bits. Specifically, the codebook size p corresponds to the first subset, and the codebook size q corresponds to the second subset. That is, p is determined according to a quantity of downlink subframes in the first subset, and q is determined according to a quantity of downlink subframes in the second subset. Optionally, a channel resource occupied by the p-codebook-size channel format includes n resource elements, a channel resource occupied by the q-codebook-size channel format includes m resource elements, m and n are natural numbers, and m is greater than or equal to n. In this way, the p-codebook-size channel format may also be considered as an n-resource-element channel format, and the q-codebook-size channel format may also be considered as an m-resource-element channel format. When m is greater than n, the following embodiment based on an m-resource-element channel format and an n-resource-element channel format is completely applicable to this solution of p codebook sizes and q codebook sizes. In this case, in descriptions of the following embodiment, the n-resource-element channel format may be directly replaced with the p-codebook-size channel format, and the m-resource-element channel format may be directly replaced with the q-codebook-size channel format. In addition, the p-codebook-size channel format and the q-codebook-size channel format are further applicable to a case in which m is equal to n. In this case, descriptions of the following embodiment are also applicable to a case in which the n-resource-element channel format is directly replaced with the p-codebook-size channel format, and the m-resource-element channel format is directly replaced with the q-codebook-size channel format, but in this case, m=n instead of m>n. Therefore, optionally, in an embodiment, the p-codebook-size channel format and the q-codebook-size channel format occupy a same quantity of resource elements, and a length of an orthogonal code used by the p-codebook-size channel format is greater than a length of an orthogonal code used by the q-codebook-size channel format. In the subsequent embodiments, the n-resource-element channel format and the m-resource-element channel format (where m>n) are used as an example for description. However, it should be noted that the embodiments of the present disclosure are not limited thereto. The embodiments of the present disclosure may be applied to a case of the p-codebook-size channel format and the q-codebook-size channel format (where q>p) and a case of the n-resource-element channel format and the m-resource-element channel format (where m=n). 
In the user equipment100in this embodiment, the receiving module110and the sending module130are coupled to the processing module120. The user equipment100may further include a storage module and another component. The receiving module110is configured to: receive downlink control information sent by an access network device, and receive a data channel scheduled by using the downlink control information. The processing module120is configured to: determine an uplink subframe used for sending feedback information corresponding to the data channel that is received by the receiving module110, and determine a channel resource. A first downlink subframe set associated with the uplink subframe includes a first subset and a second subset, the first subset includes at least two downlink subframes, the first subset is a proper subset of the second subset, the channel resource is a first uplink channel resource or a second uplink channel resource, the first uplink channel resource corresponds to the first subset, and the second uplink channel resource corresponds to the second subset. The sending module130is configured to: under control of the processing module120, send the feedback information on the channel resource in the uplink subframe by using a channel format. The channel format is an n-resource-element channel format, and the first uplink channel resource carries feedback information of the n-resource-element channel format, or the channel format is an m-resource-element channel format, and the second uplink channel resource carries feedback information of the m-resource-element channel format; and m and n are natural numbers, and m>n. The receiving module110receives the downlink control information by using a downlink control channel. For example, the receiving module110receives the downlink control information by using a PDCCH, or receives the downlink control information by using an enhanced PDCCH (ePDCCH). The control channel is a control channel for scheduling a data channel on a secondary component carrier, and/or the control channel is a control channel for scheduling a data channel on a primary component carrier. The feedback information in this embodiment of the present disclosure may be an ACK/NACK. Certainly, the feedback information may be other feedback information, and the feedback information can indicate whether data carried on the data channel is received. The uplink subframe determined by the processing module120is determined according to a preconfiguration. For example, the access network device sends an uplink-downlink subframe configuration to the UE in advance. The processing module120can determine, according to the uplink-downlink subframe configuration preconfigured by the access network device, the uplink subframe used for sending the feedback information. Therefore, the UE in this embodiment of the present disclosure further includes the storage module, configured to store the preconfiguration sent by the access network device to the UE. ACKs/NACKs corresponding to scheduled data channels in downlink subframes associated with the uplink subframe need to be fed back in the uplink subframe. These downlink subframes are determined according to a preconfigured time sequence or timing correspondence between a downlink subframe and an uplink subframe, that is, according to the preconfigured uplink-downlink subframe configuration. For example, a downlink subframe associated with an uplink subframe may be determined according to Table 2. 
In this embodiment of the present disclosure, all downlink subframes that are associated with the uplink subframe used for feeding back ACKs/NACKs are referred to as the first downlink subframe set, and the first downlink subframe set includes at least two subsets, that is, the first subset and the second subset. All the downlink subframes are all downlink subframes that are on all carriers configured for the UE and for which an ACK/NACK feedback is configured in the uplink subframe. For example, if 15 carriers are configured by the access network device for the UE, and a same uplink-downlink subframe configuration 2 (for details, refer to Table 1 and Table 2) is configured for the 15 carriers, the uplink subframe is a subframe 2, and all the downlink subframes associated with the uplink subframe, that is, the first downlink subframe set, include downlink subframes 4, 5, 6, and 8 on the 15 carriers. This embodiment is described by using an example in which the first downlink subframe set includes two subsets, but is not limited to two sets. The first subset includes at least two downlink subframes, and the first subset is a proper subset of the second subset, that is, the first subset includes some downlink subframes in the first downlink subframe set. The second subset may include all downlink subframes in the first downlink subframe set, or may include only some downlink subframes in the first downlink subframe set. For a downlink subframe that is in the first downlink subframe set but does not belong to the second subset, refer to a method for determining the first subset and the second subset in this embodiment of the present disclosure. It should be noted that this embodiment of the present disclosure is not limited to the foregoing two subsets, and there may be more than two subsets. For example, if 15 carriers are configured for the UE, downlink subframes corresponding to these carriers may be grouped into three subsets or four subsets. Certainly, there may be more sets. It should be further noted that a subset in this embodiment of the present disclosure may be a part of a universal set, or may be a universal set. For example, in this embodiment of the present disclosure, if A is a subset of B, A may include some elements in B, or may include all elements in B. However, in this embodiment, if A is a proper subset of B, A includes only some elements in B. Further, the UE may determine the first subset and the second subset according to a preconfiguration. For example, in TDD CA, it is assumed herein that subframes with a same subframe number on different carriers are different downlink subframes, and a TDD special subframe may be classified as a downlink subframe because downlink data can be transmitted in the special subframe but uplink data cannot be transmitted in the special subframe. For example, if the first subset includes downlink subframes 4, 5, 6, and 8 on carriers 1 to 5, and the second subset includes downlink subframes 4, 5, 6, and 8 on carriers 1 to 10, it can be learned that the second subset completely includes the first subset. In this embodiment, there may further be a third subset, which specifically includes downlink subframes 4, 5, 6, and 8 on carriers 1 to 15, that is, the third subset includes all preconfigured downlink subframes that are associated with ACKs/NACKs fed back in an uplink subframe and that are configured for the UE. That is, the third subset is a universal set, that is, the foregoing first downlink subframe set. 
However, it can be learned that a relationship between the first subset and the second subset is structurally similar to both a relationship between the second subset and the third subset and a relationship between the first subset and the third subset. Therefore, the solution in this embodiment of the present disclosure may be directly extended to the second subset and the third subset, and the first subset and the third subset. Certainly, there may be another manner in this embodiment of the present disclosure, and details are not described herein. Optionally, the UE may determine the first subset and the second subset by using a preconfigured rule. There may be multiple preconfigured rules, and this is not limited in this embodiment of the present disclosure. For example, the preconfigured rule may be a manner in which the first subset and the second subset are determined according to an ACK/NACK bit quantity threshold (such as 20 bits, 21 bits, or 22 bits) and at least one of a carrier number or a subframe number. In this way, the UE determines that the first subset includes downlink subframes 4, 5, 6, and 8 on carriers 1 to 5, and the second subset includes downlink subframes 4, 5, 6, and 8 on carriers 1 to 10. In a method for selecting the first subset, all downlink subframes on a carrier 1 are first selected according to a sequence of time-domain subframe numbers, and then based on a frequency-domain carrier number, downlink subframes on a carrier 2 are selected, and selection continues until a quantity, limited by the threshold, of downlink subframes is reached. A manner of selecting the second subset is similar to that of selecting the first subset. For another example, it is assumed that the foregoing threshold is 10. Still in an example in which there are five carriers and each carrier has a subframe configuration 2, a set division manner is as follows: The first subset includes downlink subframes 4, 5, 6, and 8 on carriers 1 and 2 and downlink subframes 4 and 5 on a carrier 3. In addition to all downlink subframes in the first subset, the second subset includes downlink subframes 6 and 8 on the carrier 3, and downlink subframes 4, 5, 6, and 8 on carriers 4 and 5. In this case, different subframes on one carrier may be grouped into different downlink subframe sets. It can be learned that in this example, the first subset and the second subset are still selected first according to a time-domain subframe number and then according to a frequency-domain carrier number. For another example, the preconfigured rule may be as follows: Downlink subframes on a maximum quantity of carriers are determined as a set according to a carrier number, a subframe number, and a threshold, where the maximum quantity does not exceed the threshold. In this rule, because different subframes on a same carrier cannot be grouped into multiple sets that do not completely intersect, a quantity of downlink subframes in a set may be less than the foregoing threshold. For example, the first subset includes downlink subframes 4, 5, 6, and 8 on carriers 1 and 2. In addition to all downlink subframes in the first subset, the second subset includes downlink subframes 4, 5, 6, and 8 on carriers 3 to 5. For another example, the preconfigured rule may be: with reference to a threshold, the first subset and the second subset are selected first according to a frequency-domain carrier number and then according to a time-domain subframe number. 
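By way of a further non-limiting illustration of the threshold-based rule in which downlink subframes are selected first according to a time-domain subframe number and then according to a frequency-domain carrier number, the following sketch reproduces the five-carrier example above. The function name build_subsets and the representation of downlink subframes as (carrier, subframe) pairs are hypothetical, and the choice of the second subset as the next block of entries beyond the first subset is only one possible configuration.

```python
def build_subsets(carriers, dl_subframes, threshold):
    """Sketch of one preconfigured rule described above (hypothetical names).

    Downlink subframes are taken first along the time domain (subframe number)
    within a carrier, then along the frequency domain (carrier number); the
    first subset holds the first `threshold` entries, and the second subset
    additionally holds the next `threshold` entries, so that the first subset
    is a proper subset of the second subset.
    """
    ordered = [(c, sf) for c in carriers for sf in dl_subframes]
    first_subset = set(ordered[:threshold])
    second_subset = set(ordered[:2 * threshold])   # proper superset of the first subset
    return first_subset, second_subset

# Example from the description: five carriers, uplink-downlink configuration 2,
# downlink subframes 4, 5, 6, and 8 associated with the uplink subframe, threshold 10.
first, second = build_subsets(carriers=[1, 2, 3, 4, 5],
                              dl_subframes=[4, 5, 6, 8],
                              threshold=10)
# first  -> subframes 4, 5, 6, 8 on carriers 1 and 2, plus subframes 4 and 5 on carrier 3
# second -> the first subset plus the remaining downlink subframes on carriers 3 to 5
```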
It can be learned that there may be multiple preconfigured rules. Any preconfigured rule that can achieve an objective in this embodiment of the present disclosure can be used in this embodiment of the present disclosure, and details are not described herein. Optionally, the UE may determine the first subset and the second subset by using signaling sent by the access network device. The access network device may notify the UE of a division rule by using signaling, or may directly notify the UE of the first subset and the second subset. Certainly, the UE may determine the first subset and the second subset in another manner. Further, in this embodiment of the present disclosure, the first subset corresponds to the first uplink channel resource, and the sending module130adds the feedback information of the n-resource-element channel format to the first uplink channel resource by using the n-resource-element channel format; the second subset corresponds to the second uplink channel resource, and the sending module130adds the feedback information of the m-resource-element channel format to the second uplink channel resource by using the m-resource-element channel format; and m and n are natural numbers, and m>n. During each feedback, feedback information is sent on a corresponding channel resource by using only one channel format. That is, for a large subset, feedback information is sent by using a large resource format, and for a small subset, feedback information is sent by using a small resource format. A resource element in the m-resource-element channel format and the n-resource-element channel format may include any one of a resource block (RB), a resource block pair, a sub resource block, or a sub resource block pair. For example, if the resource element is an RB, there are m RBs and n RBs, where n may be 1, and m is a natural number greater than 1. A sub resource block is a part of a resource block. A frequency-domain width of a sub RB may be less than a frequency-domain width of an RB. For example, a sub RB occupies four subcarriers, and occupies one timeslot or one subframe in a time domain. Alternatively, a time-domain width of a sub RB may be less than a timeslot. For example, a sub RB occupies three time-domain symbols, and occupies 12 subcarriers in a frequency domain, that is, a frequency-domain width of one RB. Alternatively, a sub RB occupies a smaller frequency-domain width and a smaller time-domain width than a current RB in both the time domain and the frequency domain. A sub resource block pair is a pair of sub resource blocks. When m is greater than n, the foregoing embodiment based on an m-resource-element channel format and an n-resource-element channel format, including descriptions about a first downlink subframe set, a first subset, a second subset, and the like, is completely applicable to this solution of p codebook sizes and q codebook sizes. In addition, the p-codebook-size channel format and the q-codebook-size channel format are further applicable to a case in which m is equal to n. Details are as follows: When m is equal to n, that is, time-frequency resources occupied by the two channel formats have a same quantity of resource elements, such as one RB, or time-frequency resources occupied by the two channel formats completely overlap. 
In this case, the q-codebook-size channel format or the m-resource-element channel format may feed back a larger ACK/NACK codebook than the p-codebook-size channel format or the n-resource-element channel format because a length of an orthogonal code used by the former is less than a length of an orthogonal code used by the latter. A codebook size is increased by reducing multiplexing efficiency of a same time-frequency resource. In an example of a PUCCH format 3, it is assumed that both formats occupy the time-frequency resources of the PUCCH format 3 on an RB, and that a length of a time-domain orthogonal code of the p-codebook-size channel format is 5, that is, p-codebook-size channel formats of a maximum of five UEs can be multiplexed on the RB. It is assumed that lengths of time-domain orthogonal codes of the q-codebook-size channel format are 2 and 3, that is, spectrum spread is performed on the first two ACK/NACK symbols by using a length-2 time-domain orthogonal code, and spectrum spread is performed on the last three ACK/NACK symbols by using a length-3 time-domain orthogonal code. Assuming that the other two symbols in this timeslot are used for transmission of an uplink demodulation pilot, q-codebook-size channel formats of two UEs can be accommodated on the RB in this case. However, an ACK/NACK codebook size supported by the q-codebook-size channel format is two times an ACK/NACK codebook size supported by the p-codebook-size channel format because spectrum spread is performed by using two groups of time-domain orthogonal codes in the q-codebook-size channel format. A code length of each group of time-domain orthogonal codes is less than the length 5 of the time-domain orthogonal code used by the p-codebook-size channel format, and a multiplexing capability is determined by a time-domain orthogonal code having a shorter length in the two groups of time-domain orthogonal codes. For a length of an orthogonal code, assuming that a group of orthogonal codes are {(1, 1), (1, −1)}, a length of the orthogonal code is 2 in this case, and there are a maximum of two orthogonal codes in this group of orthogonal codes having a code length 2. Alternatively, assuming that another group of orthogonal codes are {(1, 1, 1, 1), (1, 1, −1, −1), (1, −1, −1, 1), (1, −1, 1, −1)}, a code length of the orthogonal code is 4 in this case, and there are a maximum of four orthogonal codes in this group of orthogonal codes having a code length 4. According to the foregoing embodiment, when more carriers are configured for UE, a maximum quantity of ACK/NACK bits exceeds a current bearer capability of a single-RB PUCCH format 3. In this embodiment of the present disclosure, a downlink subframe set corresponding to an uplink subframe is divided into at least two subsets, a first subset is a proper subset of a second subset, and corresponding uplink channel resources are configured for the two subsets. For a large subset, feedback information is sent by using a large resource format, and for a small subset, feedback information is sent by using a small resource format, thereby resolving a problem of how to send feedback information when more carriers are configured. In addition, when there is a small quantity of instantaneously scheduled carriers, feedback information may be sent by falling back to the small resource format. Therefore, in this embodiment of the present disclosure, resource overheads can be reduced when an ACK/NACK is fed back. 
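The relationship among cover-code length, multiplexing capability, and supported payload in the preceding example can be summarized by the following non-limiting sketch. The helper name format_capacity and its parameters are hypothetical, and the figures assume five data time-domain symbols per timeslot, 12 subcarriers per RB, and two timeslots per subframe, as in the channel structure described earlier.

```python
def format_capacity(occ_lengths, data_symbols_per_slot=5, subcarriers=12, slots=2):
    """Illustrative capacity comparison for the example above (hypothetical helper).

    occ_lengths: lengths of the time-domain orthogonal code groups applied within
                 one slot, e.g. [5] for the p-codebook-size channel format or
                 [2, 3] for the q-codebook-size channel format.
    Returns (multiplexed_ues, coded_bits_per_subframe): how many UEs can share
    the RB and how many encoded (QPSK) bits one UE can carry per subframe.
    """
    assert sum(occ_lengths) == data_symbols_per_slot
    multiplexed_ues = min(occ_lengths)              # limited by the shortest code group
    qpsk_symbols = len(occ_lengths) * subcarriers * slots
    return multiplexed_ues, 2 * qpsk_symbols        # 2 encoded bits per QPSK symbol

print(format_capacity([5]))     # -> (5, 48): five UEs per RB, 48 encoded bits each
print(format_capacity([2, 3]))  # -> (2, 96): two UEs per RB, twice the encoded bits
```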
The following uses a single-RB PUCCH format 3 and a dual-RB PUCCH format 3 as an example for description, that is, n is 1 and m is 2. Certainly, this embodiment may be further applied to a PUCCH format 3 of more RBs. In addition, this solution may be extended to a PUCCH format of another resource element, such as a dual-RB PUCCH format and a quad-RB PUCCH format, or PUCCH formats of different quantities of sub RBs. Therefore, the single-RB PUCCH format 3 and the dual-RB PUCCH format 3 in this embodiment may be extended to an n-resource-element PUCCH format and an m-resource-element PUCCH format, respectively. Both m and n are natural numbers, and m>n. Therefore, in this embodiment of the present disclosure, a module for channel encoding may encode original bits in feedback information of a small resource format, such as 20 ACK/NACK bits; or may encode original bits in feedback information of a large resource format, such as 40 ACK/NACK bits or 60 ACK/NACK bits. Encoding may be implemented by one or more channel encoders. The channel encoder may be a unit in a processor, or may be an independent channel encoder. Further, it is configured that the receiving module110receives, in the following manner, the data channel scheduled by using the downlink control information: receiving, in a downlink subframe included in a second downlink subframe set, the data channel scheduled by using the downlink control information. In this embodiment, the UE receives the downlink control information and then determines a to-be-scheduled downlink subframe according to the downlink control information. The to-be-scheduled downlink subframe constitutes the second downlink subframe set, and the second downlink subframe set is a subset of the first downlink subframe set. There may be one or more to-be-scheduled downlink subframes. When there is one to-be-scheduled downlink subframe, the downlink subframe may be a downlink subframe on a secondary component carrier, or may be a downlink subframe that is on a primary component carrier and that is scheduled by using a control channel with a downlink assignment index (DAI) field value greater than 1, but not a downlink subframe that is on a primary component carrier and whose PDSCH is scheduled by using a PDCCH with a DAI field value of ‘1’. Optionally, the second downlink subframe set may be a subframe set associated with the foregoing uplink subframe, and downlink subframes on all carriers currently activated for the UE constitute the subframe set. A downlink subframe actually scheduled for the UE belongs to a subset of the activated second downlink subframe set. It can be understood that the first downlink subframe set is a subframe set associated with the uplink subframe used for sending the feedback information, and the first downlink subframe set is configured for the UE by using radio resource control (RRC) signaling. The activated second downlink subframe set is a subset of the first downlink subframe set, and is configured for the UE by using Media Access Control (MAC) signaling. The downlink subframe actually scheduled for the UE is a downlink subframe in the second downlink subframe set. However, this embodiment of the present disclosure is described by using an example in which the second downlink subframe set is an actually scheduled downlink subframe set, but may also be applicable to a case in which the second downlink subframe set is the foregoing activated downlink subframe set. 
In this embodiment, although 15 carriers are configured for the UE and a maximum of 60 ACK/NACK bits need to be fed back in an uplink subframe 2, a quantity of carriers or downlink subframes scheduled for the UE in a subframe may be less than the foregoing maximum value, such as 60 subframes on the 15 carriers in this embodiment. The quantity of carriers or subframes scheduled for the UE is specifically related to multiple factors such as instantaneous service load of the UE and a capacity of a control channel. After determining the uplink subframe, the processing module120further determines the channel resource. When the relationships between the second downlink subframe set and each of the first subset, the second subset, and the first downlink subframe set are different, the determined channel resources are different, and the used PUCCH formats may also be different. When m is greater than n, the foregoing embodiment based on an m-resource-element channel format and an n-resource-element channel format, including descriptions about a second downlink subframe set and the like, is completely applicable to this solution of p codebook sizes and q codebook sizes. In addition, the p-codebook-size channel format and the q-codebook-size channel format are further applicable to a case in which m is equal to n. Further, the downlink control information includes resource indication information. It is configured that the processing module120determines the channel resource in the following manner: determining, according to the resource indication information, the channel resource used for carrying the feedback information. It should be noted that a DAI in the downlink control information including the resource indication information is not 1. For example, the DAI is greater than 1. Alternatively, in an FDD CA system, because downlink control information does not have a DAI field, downlink control information used for scheduling a primary component carrier does not have the foregoing resource indication information, and only downlink control information used for scheduling a secondary component carrier has the foregoing resource indication information. Optionally, the resource indication information may be an explicit bit in the downlink control information. For example, at least one bit on a control channel is used as the resource indication information, and different states of the at least one bit instruct to use different PUCCH channel resources. Optionally, the resource indication information may be provided in an implicit indication manner. For example, different scrambling codes on a control channel indicate different channel resources. Specifically, the resource indication information may be an ACK/NACK resource indicator (ARI). When m is greater than n, the following embodiment based on an m-resource-element channel format and an n-resource-element channel format, including a method for indicating a channel resource by using resource indication information, is completely applicable to this solution of p codebook sizes and q codebook sizes. In addition, the p-codebook-size channel format and the q-codebook-size channel format are further applicable to a case in which m is equal to n. For the channel resource in this embodiment of the present disclosure, two specific solutions are provided below. 
Solution 1

In case that the second downlink subframe set is a subset of the first subset, the channel resource determined by the processing module 120 is the first uplink channel resource; or in case that the second downlink subframe set includes only a downlink subframe that is in the second subset but does not belong to the first subset, the channel resource determined by the processing module 120 is the second uplink channel resource; or in case that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset, the channel resource determined by the processing module 120 is the second uplink channel resource. This embodiment is described by using an example in which the n-resource-element channel format is a single-RB PUCCH format and the m-resource-element channel format is a dual-RB PUCCH format. The first uplink channel resource corresponds to the single-RB PUCCH format, and the second uplink channel resource corresponds to the dual-RB PUCCH format. In this solution, in case that the second downlink subframe set is a subset of the first subset (case 1 for short), the channel resource determined by the processing module 120 is the first uplink channel resource. For example, the second downlink subframe set includes subframes 4, 5, 6, and 8 on a carrier 1, subframes 4, 5, and 6 on a carrier 2, and subframes 4 and 5 on a carrier 3. The carrier 1 is a primary component carrier. In this embodiment, it is assumed that the first subset consists of downlink subframes 4, 5, 6, and 8 on carriers 1 to 5. Therefore, the second downlink subframe set is a subset of the first subset. In this case, the first uplink channel resource corresponding to the first subset is used in this embodiment. That is, feedback information is sent by using a small resource format. In case that the second downlink subframe set includes only a downlink subframe that is in the second subset but does not belong to the first subset (case 2 for short), the channel resource determined by the processing module 120 is the second uplink channel resource. For example, the first subset consists of downlink subframes 4, 5, 6, and 8 on carriers 1 to 5, the second subset consists of downlink subframes 4, 5, 6, and 8 on carriers 1 to 10, and the second downlink subframe set consists of only subframes 4, 5, 6, and 8 on a carrier 6, subframes 4, 5, and 6 on a carrier 7, and subframes 4 and 5 on a carrier 8. That is, the second downlink subframe set includes only a downlink subframe that is in the second subset but does not belong to the first subset. In this case, the second uplink channel resource corresponding to the second subset is used in this embodiment. That is, feedback information is sent by using a large resource format. In case that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset (case 3 for short), the channel resource determined by the processing module 120 is the second uplink channel resource.
For example, the first subset consists of downlink subframes 4, 5, 6, and 8 on carriers 1 to 5, the second subset consists of downlink subframes 4, 5, 6, and 8 on carriers 1 to 10, and the second downlink subframe set consists of subframes 4, 5, 6, and 8 on a carrier 1, subframes 4, 5, and 6 on a carrier 3, and subframes 4 and 5 on a carrier 6. That is, the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset. In this case, the second uplink channel resource corresponding to the second subset is used in this embodiment. That is, feedback information is sent by using a large resource format. In the subsequent descriptions, an example in which the n-resource-element channel format is a single-RB PUCCH format and the m-resource-element channel format is a dual-RB PUCCH format is used for description. The first uplink channel resource corresponds to the single-RB PUCCH format, and the second uplink channel resource corresponds to the dual-RB PUCCH format. Further, when determining, according to the resource indication information, the channel resource used for carrying the feedback information, the processing module120selects the channel resource from a resource set preconfigured by the access network device for the UE. Further, the access network device preconfigures a correspondence between a state of the resource indication information and a channel resource in the resource set for the UE. When determining, according to the resource indication information, the channel resource used for carrying the feedback information, the processing module120selects, according to the state of the resource indication information, the channel resource from the resource set preconfigured by the access network device for the UE. The access network device may preconfigure the resource set for the UE in the following three implementation manners: First Implementation Manner: The access network device preconfigures a second uplink channel resource set for the UE. The processing module120is further configured to: before determining the channel resource, obtain the second uplink channel resource set configured by the access network device. The second uplink channel resource is an uplink channel resource in the second uplink channel resource set, a part of each uplink channel resource in multiple uplink channel resources included in the second uplink channel resource set constitutes a first uplink channel resource set, and the first uplink channel resource is an uplink channel resource in the first uplink channel resource set. In this solution, in terms of time-frequency resource, a channel resource of an m-resource-element channel format includes a channel resource of a fallback n-resource-element channel format corresponding to the m-resource-element channel format. In this way, the channel resource of the n-resource-element channel format and the channel resource of the m-resource-element channel format that are orthogonal in terms of time-frequency resource may not need to be separately reserved, so that a base station does not need to perform blind detection on the channel resource of the n-resource-element channel format and the channel resource of the m-resource-element channel format that are orthogonal in terms of time-frequency resource. 
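Solution 1's three cases reduce to a simple set comparison between the actually scheduled downlink subframes and the two configured subsets. The following Python sketch is illustrative only; the set representation as (carrier, subframe) pairs and all names are assumptions made for the example, not part of the specification.

```python
def pick_resource_solution_1(scheduled, first_subset, second_subset):
    """Classify the actually scheduled set per Solution 1 and pick a resource.

    Returns "first" for the first uplink channel resource (small, single-RB
    format) or "second" for the second uplink channel resource (large,
    dual-RB format).  Each set contains (carrier, subframe) pairs; this is
    only a sketch of the three cases described above.
    """
    scheduled, first_subset, second_subset = map(set, (scheduled, first_subset, second_subset))
    only_in_second = second_subset - first_subset

    if scheduled <= first_subset:        # case 1: everything lies in the first subset
        return "first"
    if scheduled <= only_in_second:      # case 2: nothing lies in the first subset
        return "second"
    if scheduled <= second_subset:       # case 3: a mix of both parts
        return "second"
    raise ValueError("scheduled subframes outside the configured subsets are not covered here")


# Example values from the description: subframes 4, 5, 6 and 8 on carriers
# 1-5 form the first subset and on carriers 1-10 the second subset.
first = {(c, s) for c in range(1, 6) for s in (4, 5, 6, 8)}
second = {(c, s) for c in range(1, 11) for s in (4, 5, 6, 8)}
print(pick_resource_solution_1({(1, 4), (2, 4), (3, 5)}, first, second))  # first  (case 1)
print(pick_resource_solution_1({(6, 4), (7, 5)}, first, second))          # second (case 2)
print(pick_resource_solution_1({(1, 4), (6, 4)}, first, second))          # second (case 3)
```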
This reduces resource overheads of an uplink control channel, such as a PUCCH. As shown inFIG.4, the second uplink channel resource set preconfigured by the access network device for the UE includes four large resources inFIG.4, that is, channel resources of four dual-RBs inFIG.4. It should be noted that the four resources in this embodiment are merely an example, but are not used to limit the scope of this embodiment of the present disclosure. A person skilled in the art should understand that the access network device may configure more or fewer resources according to a requirement. A part of each element in the second uplink channel resource set constitutes the first resource set. For example, some resources of the four dual-RBs inFIG.4, that is, one RB in each of the four dual-RBs, constitute the first uplink channel resource set. The second uplink channel resource is selected from the second uplink channel resource set. That is, the second uplink channel resource is one of the channel resources of the four dual-RBs inFIG.4. The first uplink channel resource is selected from the first uplink channel resource set. For example, the first uplink channel resource is a single-RB channel resource in a dual-RB channel resource inFIG.4. For example, the first uplink channel resource may be a channel resource of an upper RB part of the second dual-RB channel resource inFIG.4. In this case, different states of the resource indication information may indicate different channel resources in the second uplink channel resource set. As shown inFIG.4, 00 indicates the first dual-RB in the second uplink channel resource set, 01 indicates the second dual-RB, 10 indicates the third dual-RB, and 11 indicates the fourth dual-RB. In the foregoing case 2 or case 3, it is configured that the processing module120specifically determines the second uplink channel resource from the second uplink channel resource set according to the channel resource indicator. For example, in case that the channel resource indicator is 10, the processing module120determines that the third dual-RB in the second uplink channel resource set is the second uplink channel resource. In the foregoing case 1, it is configured that the processing module120specifically determines the first uplink channel resource from the first uplink channel resource set according to the channel resource indicator. The first uplink channel resource is a part of an uplink channel resource indicated by the channel resource indicator, and the uplink channel resource is included in the second uplink channel resource set. Alternatively, it is configured that the processing module120specifically determines the first uplink channel resource according to the resource indication information. The first uplink channel resource is a part of an uplink channel resource that is in the second uplink channel resource set and that is indicated by the resource indication information. Which part of the uplink channel resource indicated by the resource indication information the first uplink channel resource is may be directly determined according to an identifier of the UE, preconfigured information, or a default setting. Optionally, which part of the uplink channel resource indicated by the channel resource indicator the processing module120uses may be determined according to the default setting. For example, an upper RB is used by default, or a lower RB is used by default. 
Alternatively, which part of the uplink channel resource indicated by the channel resource indicator the processing module120uses may be determined according to identification information of the user equipment or the preconfigured information. For example, the preconfigured information may indicate an upper half resource or a lower half resource of an uplink channel resource in the foregoing uplink channel resource set preconfigured by using RRC signaling, or may be preconfigured indication information used to indicate that an upper half resource or a lower half resource of an uplink channel resource in the foregoing uplink channel resource set is used as the first uplink channel resource. The processing module120may determine the upper half resource or the lower half resource as the first uplink channel resource according to the preconfigured information. For another example, when the processing module120determines the first uplink channel resource according to the identification information of the user equipment, if the identification information of the user equipment is an odd number, the processing module120determines an upper half part of the uplink channel resource indicated by the channel resource indicator as the first uplink channel resource. If the identification information of the user equipment is an even number, the processing module120determines a lower half part of the uplink channel resource indicated by the channel resource indicator as the first uplink channel resource; or vice versa. Certainly, the processing module120may determine the first uplink channel resource in another manner. This implementation manner is further described by using the following example. In this embodiment, 2-bit resource indication information is used as an example, and a channel format is a PUCCH format 3. A parsing situation of four states of the two bits needs to be preconfigured for the UE. For example, by receiving RRC dedicated signaling, the UE obtains the four states {00, 01, 10, 11} of the two bits in advance, which respectively indicate {dual-RB PUCCH format 3 channel resource 1, dual-RB PUCCH format 3 channel resource 2, dual-RB PUCCH format 3 channel resource 3, dual-RB PUCCH format 3 channel resource 4}, that is, the four dual-RB channel resources inFIG.4. Based on such preconfigured information, in this embodiment, if the UE receives data scheduled in a downlink subframe that is not included in the first subset (for example, the foregoing case 2 or case 3), the processing unit120feeds back an ACK/NACK by using the dual-RB PUCCH format 3 channel resource 2 indicated by the current state 01. In case that the second downlink subframe set is a subset of the first subset (for example, the foregoing case 1), and the two bits received by the UE indicate the state 01 in this case, the UE feeds back an ACK/NACK by using a single-RB PUCCH format 3 channel resource in the dual-RB PUCCH format 3 channel resource indicated by the state 01. Details are shown inFIG.4. Which specific single-RB PUCCH format 3 channel resource in the dual-RB PUCCH format 3 channel resource is to be used may be determined according to the UE identification information of the UE. The identification information may be a cell radio network temporary identifier (C-RNTI) of the UE. A specific determining manner may be a modulo operation. For example, which single-RB PUCCH format 3 channel resource is to be used may be determined according to C-RNTI mod 2=0 or 1. 
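A compact way to read the first implementation manner is: the 2-bit resource indication state always points at one dual-RB resource in the preconfigured second uplink channel resource set, and in case 1 the UE falls back to one RB inside that resource, for example by a C-RNTI modulo-2 rule. The sketch below is illustrative; the resource labels, the mapping of the four states, and the modulo rule for choosing the upper or lower RB are assumptions consistent with the example above, not normative behaviour.

```python
from typing import List, Tuple

# Hypothetical labels for the four dual-RB PUCCH format 3 resources of FIG. 4,
# each modelled as a pair (upper RB, lower RB).
DUAL_RB_SET: List[Tuple[str, str]] = [
    ("dualRB1_upper", "dualRB1_lower"),
    ("dualRB2_upper", "dualRB2_lower"),
    ("dualRB3_upper", "dualRB3_lower"),
    ("dualRB4_upper", "dualRB4_lower"),
]


def resource_first_manner(ari_bits: str, fall_back_to_single_rb: bool, c_rnti: int):
    """First implementation manner (sketch).

    The 2-bit state ("00".."11") selects a dual-RB resource from the second
    uplink channel resource set.  In case 2 or case 3 the whole dual-RB
    resource is used; in case 1 (fall_back_to_single_rb=True) only one RB of
    that resource is used, chosen here by C-RNTI modulo 2.
    """
    dual_rb = DUAL_RB_SET[int(ari_bits, 2)]
    if not fall_back_to_single_rb:
        return dual_rb                    # dual-RB PUCCH format 3 channel resource
    return dual_rb[c_rnti % 2]            # upper RB for an even C-RNTI, lower RB otherwise


print(resource_first_manner("10", fall_back_to_single_rb=False, c_rnti=61))  # third dual-RB resource
print(resource_first_manner("01", fall_back_to_single_rb=True,  c_rnti=61))  # 'dualRB2_lower'
```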
Alternatively, a single-RB PUCCH format 3 channel resource that is in the dual-RB PUCCH format 3 channel resource and that is preconfigured, for example, by using RRC dedicated signaling, is directly used. In the foregoing manner of determining a single-RB PUCCH format 3, the base station can make two UEs separately use different single-RB PUCCH format 3 channel resources in one dual-RB PUCCH format 3 channel resource, thereby improving resource utilization efficiency. Second Implementation Manner: The access network device preconfigures two uplink channel resource sets, that is, a first uplink channel resource set and a second uplink channel resource set, for the UE. The processing module120is further configured to: before determining the channel resource, obtain the first uplink channel resource set and the second uplink channel set that are preconfigured by the access network device. The first uplink channel resource is an uplink channel resource in the first uplink channel resource set, and the second uplink channel resource is an uplink channel resource in the second uplink channel resource set. As shown inFIG.5, the first uplink channel resource set includes channel resources of two single-RBs, that is, a single-RB 1 and a single-RB 2 inFIG.5, and the second uplink channel resource set includes channel resources of two dual-RBs, that is, a dual-RB 1 and a dual-RB 2 inFIG.5. In such a resource set configuration, states of the resource indication information may be grouped into two sets, that is, a first state set and a second state set. The first state set indicates an uplink channel resource in the first uplink channel resource set, the second state set of the resource indication information indicates an uplink channel resource in the second uplink channel resource set, and the first state set does not intersect the second state set. An example shown inFIG.5is used for description. The first state set includes 00 and 01, which respectively indicate the single-RB 1 and the single-RB 2 in the first uplink channel resource set; and the second state set includes 10 and 11, which respectively indicate the dual-RB 1 and the dual-RB 2 in the second uplink channel resource set. Further, a correspondence between a state in the first state set and a channel resource in the first uplink channel resource set may be preconfigured by the access network device for the UE. Likewise, a correspondence between a state in the second state set and a channel resource in the second uplink channel resource set may be preconfigured by the access network device for the UE. The access network device may configure the two correspondences for the UE at a time, or may separately configure the two correspondences for the UE. Specifically, the access network device may configure the two correspondences for the UE by using radio resource control (RRC) dedicated signaling. In the foregoing case 1, the channel resource determined by the processing module120is the first uplink channel resource. When the state of the resource indication information is a state in the first state set, it is configured that the processing module120determines the channel resource in the following manner: determining the first uplink channel resource from the first uplink channel set according to the state in the first state set of the resource indication information. Alternatively, in the foregoing case 2 or case 3, the channel resource determined by the processing module120is the second uplink channel resource. 
When the state of the resource indication information is a state in the second state set, it is configured that the processing module 120 determines the channel resource in the following manner: determining the second uplink channel resource from the second uplink channel resource set according to the state in the second state set of the resource indication information. The following further describes this implementation manner by using the example shown in FIG. 5. In this embodiment, 2-bit resource indication information is used as an example. A parsing situation of four states of the two bits needs to be preconfigured for the UE. For example, by receiving RRC dedicated signaling, the UE obtains the four states {00, 01, 10, 11} of the two bits in advance, which respectively indicate {single-RB PUCCH format 3 channel resource 1, single-RB PUCCH format 3 channel resource 2, dual-RB PUCCH format 3 channel resource 1, dual-RB PUCCH format 3 channel resource 2}. Based on such preconfigured information, as shown in FIG. 5, if the two bits received by the UE indicate the state 01 in this case, it is determined that an ACK/NACK is sent on the single-RB PUCCH format 3 channel resource 2 by using a single-RB PUCCH format. In this implementation manner, in case that the second downlink subframe set is a subset of the first subset, and a channel resource indicated by the resource indication information is an uplink channel resource corresponding to an m-resource-element channel format, the processing module 120 still determines to send the feedback information by using the second uplink channel resource. Specifically, the second downlink subframe set includes only a downlink subframe in the first subset in this case. However, if a state of two bits of the resource indication information received by the UE is 10, it means that an ACK/NACK is fed back by using the dual-RB PUCCH channel resource 1. Therefore, the UE determines to send the ACK/NACK on the dual-RB PUCCH format 3 channel resource 1 by using a dual-RB PUCCH format. In this case, the UE can also find that it has missed detecting a PDCCH in a downlink subframe that is not included in the first subset. Considering that the second downlink subframe set includes only a downlink subframe in the first subset, if there is no missed detection, the base station instructs the UE to use a single-RB PUCCH channel resource. On the contrary, if the UE finds that the second downlink subframe set is a subset of the first subset and therefore uses the single-RB PUCCH format 3, in a case of missed detection the base station expects the UE to feed back an ACK/NACK by using the dual-RB PUCCH format 3, but the UE actually feeds back the ACK/NACK by using the single-RB PUCCH format 3. Therefore, misunderstanding between the UE and the base station is caused, and further, the base station fails to decode the ACK/NACK. In addition, if the base station allocates, to another UE, a channel resource that it considers not to be allocated to this UE, but this UE currently performs an ACK/NACK feedback by using the single-RB PUCCH format 3 channel resource that has been allocated to the other UE, interference to the PUCCH format 3 of the other UE is further caused.
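The second implementation manner removes this ambiguity by tying the PUCCH format to the resource indication state itself rather than to what the UE believes was scheduled. The following Python sketch is illustrative only; the state-to-resource mapping mirrors the FIG. 5 example and all names are hypothetical.

```python
# Hypothetical mapping following the FIG. 5 example: states 00 and 01 point
# into the single-RB resource set, states 10 and 11 into the dual-RB set.
FIRST_SET = ["single_RB_resource_1", "single_RB_resource_2"]    # first uplink channel resource set
SECOND_SET = ["dual_RB_resource_1", "dual_RB_resource_2"]       # second uplink channel resource set


def resource_second_manner(ari_bits: str):
    """Return (PUCCH format, resource) purely from the 2-bit state.

    Because the format is keyed to the state, a UE that missed a PDCCH still
    ends up on the format the base station expects (sketch of the behaviour
    described above).
    """
    state = int(ari_bits, 2)
    if state < 2:
        return "single-RB PUCCH format 3", FIRST_SET[state]
    return "dual-RB PUCCH format 3", SECOND_SET[state - 2]


print(resource_second_manner("01"))  # ('single-RB PUCCH format 3', 'single_RB_resource_2')
print(resource_second_manner("10"))  # ('dual-RB PUCCH format 3', 'dual_RB_resource_1')
```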
Therefore, in this embodiment, different states of the resource indication information are used to instruct to use the single-RB PUCCH format 3 or the dual-RB PUCCH format 3, so that the UE feeds back an ACK/NACK by using the dual-RB PUCCH format 3 provided that the UE determines that the resource indication information instructs to use the dual-RB PUCCH format 3. This resolves the foregoing problem of PUCCH channel resource ambiguity caused by missed detection on a control channel.

Third Implementation Manner: This implementation manner is similar to the second implementation manner. The access network device preconfigures two uplink channel resource sets, that is, a first uplink channel resource set and a second uplink channel resource set, for the UE. As shown in FIG. 6, a difference lies in that the first uplink channel resource set includes channel resources of four single-RBs, that is, a single-RB 1, a single-RB 2, a single-RB 3, and a single-RB 4 in FIG. 6, and the second uplink channel resource set includes channel resources of four dual-RBs, that is, a dual-RB 1, a dual-RB 2, a dual-RB 3, and a dual-RB 4 in FIG. 6. In such a resource set configuration, four states of the resource indication information indicate channel resources corresponding to the first uplink channel resource set and/or the second uplink channel resource set. Therefore, it is configured that the processing module 120 may determine the channel resource in the following two manners:

Manner 1: The processing module 120 determines, from the first uplink channel resource set according to the resource indication information, a third uplink channel resource indicated by the resource indication information, and determines, from the second uplink channel resource set according to the resource indication information, a fourth uplink channel resource indicated by the resource indication information; determines that the second downlink subframe set is a subset of the first subset; and determines that the third uplink channel resource is the first uplink channel resource; or the processing module 120 determines, from the first uplink channel resource set according to the resource indication information, a fifth uplink channel resource indicated by the resource indication information, and determines, from the second uplink channel resource set according to the resource indication information, a sixth uplink channel resource indicated by the resource indication information; determines that the second downlink subframe set includes only a downlink subframe that is in the second subset but does not belong to the first subset; and determines that the sixth uplink channel resource is the second uplink channel resource; or the processing module 120 determines, from the first uplink channel resource set according to the resource indication information, a fifth uplink channel resource indicated by the resource indication information, and determines, from the second uplink channel resource set according to the resource indication information, a sixth uplink channel resource indicated by the resource indication information; determines that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset; and determines that the sixth uplink channel resource is the second uplink channel resource.
Manner 2: The processing module 120 determines that the second downlink subframe set is a subset of the first subset; and determines, from the first uplink channel resource set according to the resource indication information, the first uplink channel resource indicated by the resource indication information; or the processing module 120 determines that the second downlink subframe set includes only a downlink subframe that is in the second subset but does not belong to the first subset; and determines, from the second uplink channel resource set according to the state of the resource indication information, the second uplink channel resource indicated by the resource indication information; or the processing module 120 determines that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset; and determines, from the second uplink channel resource set according to the resource indication information, the second uplink channel resource indicated by the resource indication information. Specifically, in this implementation manner, if the second downlink subframe set includes only a downlink subframe in the first subset, the UE may determine to use a single-RB PUCCH channel resource. However, a specific single-RB PUCCH channel resource needs to be determined according to a state of two bits of the resource indication information. Alternatively, according to the state of the resource indication information, the UE first determines a single-RB PUCCH channel resource from a first channel resource set and then determines a dual-RB PUCCH channel resource from a second channel resource set. As for which of the two channel resources is finally used, because the second downlink subframe set includes only downlink subframes in the first subset, it may be determined that the foregoing single-RB PUCCH channel resource is used to send an ACK/NACK. In this solution, a correspondence between a state of the resource indication information and a channel resource set also needs to be preconfigured for the UE. For example, by receiving RRC dedicated signaling, the UE obtains four states {00, 01, 10, 11} of the two bits in advance, which respectively indicate channel resources in a single-RB PUCCH channel resource set {single-RB PUCCH format 3 channel resource 1, single-RB PUCCH format 3 channel resource 2, single-RB PUCCH format 3 channel resource 3, single-RB PUCCH format 3 channel resource 4} or in a dual-RB PUCCH channel resource set {dual-RB PUCCH format 3 channel resource 1, dual-RB PUCCH format 3 channel resource 2, dual-RB PUCCH format 3 channel resource 3, dual-RB PUCCH format 3 channel resource 4}. Details are shown in FIG. 6. When m is greater than n, the foregoing embodiment based on an m-resource-element channel format and an n-resource-element channel format, including solution 1 about an indication manner of resource indication information, is completely applicable to this solution of p codebook sizes and q codebook sizes. In addition, the p-codebook-size channel format and the q-codebook-size channel format are further applicable to a case in which m is equal to n. That is, a solution based on a codebook-size channel format is a superordinate solution of a resource-element channel format.
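In the third implementation manner the same 2-bit state indexes both preconfigured sets, and the subset relation decides which of the two indexed resources is actually used. The sketch below follows Manner 2 (decide the relation first, then index one set); Manner 1 (index both sets, then discard one candidate) yields the same result. Names and labels are hypothetical.

```python
# Hypothetical per-state resource lists matching the FIG. 6 example: the same
# 2-bit state indexes the single-RB set and the dual-RB set in parallel.
SINGLE_RB_SET = ["single_RB_1", "single_RB_2", "single_RB_3", "single_RB_4"]
DUAL_RB_SET = ["dual_RB_1", "dual_RB_2", "dual_RB_3", "dual_RB_4"]


def resource_third_manner(ari_bits: str, scheduled_only_in_first_subset: bool):
    """Manner 2 of the third implementation manner (illustrative only).

    First decide from the relation between the second downlink subframe set
    and the first subset whether the small or the large format applies, then
    use the resource indication state to index the corresponding set.
    """
    index = int(ari_bits, 2)
    if scheduled_only_in_first_subset:                       # case 1
        return "single-RB PUCCH format 3", SINGLE_RB_SET[index]
    return "dual-RB PUCCH format 3", DUAL_RB_SET[index]      # case 2 or case 3


print(resource_third_manner("11", scheduled_only_in_first_subset=True))   # single_RB_4
print(resource_third_manner("11", scheduled_only_in_first_subset=False))  # dual_RB_4
```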
Solution 2

In case that the second downlink subframe set is a subset of the first subset, the channel resource determined by the processing module 120 is the first uplink channel resource; or in case that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset, the channel resource determined by the processing module 120 is the second uplink channel resource. A difference between this solution and solution 1 lies in that in case that the second downlink subframe set includes only a downlink subframe that is in the second subset but does not belong to the first subset (that is, case 2), the channel resource determined by the processing module 120 is the first uplink channel resource. For example, the first subset consists of downlink subframes 4, 5, 6, and 8 on carriers 1 to 5, the second subset consists of downlink subframes 4, 5, 6, and 8 on carriers 1 to 10, and the second downlink subframe set consists of only subframes 4, 5, 6, and 8 on a carrier 6, subframes 4, 5, and 6 on a carrier 7, and subframes 4 and 5 on a carrier 8. In this case, without loss of generality, the first subset may be considered as a type of first subset. If the first subset consists of downlink subframes 4, 5, 6, and 8 on carriers 6 to 10, feedback information is also sent by using a fallback resource format in case 2 in this embodiment. Certainly, it should be noted that different channel resource sets or a same channel resource set may be configured for a first subset consisting of downlink subframes 4, 5, 6, and 8 on carriers 1 to 5 and a subset consisting of downlink subframes 4, 5, 6, and 8 on carriers 6 to 10. However, a channel resource set corresponding to a small channel format is configured in both cases. In solution 2, for a resource set configuration manner and various channel resource determining manners, refer to descriptions of solution 1. For brevity, details are not repeatedly described in this specification. It should be noted that in case that the second downlink subframe set is a subset of the first subset, the channel resource determined by the processing module 120 is the first uplink channel resource, and reference may be made to descriptions of the foregoing case 1 in solution 1. In case that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset, reference may be made to descriptions of the foregoing case 2 and case 3. Optionally, in case that the second downlink subframe set includes only a downlink subframe that is in the second subset but does not belong to the first subset, the UE determines the first uplink channel resource, and then sends an ACK/NACK on the first uplink channel resource by using a single-RB PUCCH format 3. For example, the second downlink subframe set includes subframes 4, 5, 6, and 8 on a carrier 6, subframes 4, 5, and 6 on a carrier 7, and subframes 4 and 5 on a carrier 8. It can be learned that all downlink subframes in the second downlink subframe set are subframes on a secondary component carrier. In this case, this solution is similar to case 1.
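Solution 2 differs from Solution 1 only in how case 2 is handled, so the earlier classification sketch can be reused with one branch changed. As before, this is an illustrative Python fragment with assumed names and a (carrier, subframe) set representation, not a normative procedure.

```python
def pick_resource_solution_2(scheduled, first_subset, second_subset):
    """Solution 2 counterpart of the Solution 1 sketch shown earlier.

    The only difference is case 2: when every actually scheduled downlink
    subframe lies in the second subset but outside the first subset, the
    fallback first uplink channel resource (small format) is used instead of
    the second uplink channel resource.
    """
    scheduled, first_subset, second_subset = map(set, (scheduled, first_subset, second_subset))
    only_in_second = second_subset - first_subset

    if scheduled <= first_subset:        # case 1 -> small format
        return "first"
    if scheduled <= only_in_second:      # case 2 -> small (fallback) format in Solution 2
        return "first"
    if scheduled <= second_subset:       # case 3 -> large format
        return "second"
    raise ValueError("scheduled subframes outside the configured subsets are not covered here")


first = {(c, s) for c in range(1, 6) for s in (4, 5, 6, 8)}
second = {(c, s) for c in range(1, 11) for s in (4, 5, 6, 8)}
print(pick_resource_solution_2({(6, 4), (7, 5), (8, 4)}, first, second))  # first (case 2)
```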
In this embodiment, before the UE receives a scheduled data channel in the second downlink subframe set, solution 2 further includes: the UE determines the first uplink channel resource set corresponding to the first subset and the second uplink channel resource set that is in the second subset other than the first subset. The first uplink channel resource set includes at least one single-RB PUCCH format 3 PUCCH channel resource, and the second uplink channel resource set includes at least one single-RB PUCCH format 3 PUCCH channel resource. Preferably, a single-RB PUCCH format 3 channel resource included in the first uplink channel resource set and a single-RB PUCCH format 3 channel resource included in the second uplink channel resource set may be completely different or partly the same, that is, may be independently configured. Certainly, in this embodiment, an independent configuration further includes: a PUCCH channel resource included in the first uplink channel resource set may be the same as a single-RB PUCCH format 3 channel resource included in the second uplink channel resource set. Optionally, the UE may obtain the first uplink channel resource set and the second uplink channel resource set by receiving RRC signaling sent by the base station. For example, the first uplink channel resource set and the second uplink channel resource set each include two single-RB PUCCH format 3 channel resources, which are respectively {channel 11, channel 12} and {channel 21, channel 22}. Two states of two bits of the foregoing resource indication information on a control channel for scheduling an actually scheduled downlink subframe in the first subset are used to respectively indicate the channel 11 and the channel 12, and two states of two bits of the resource indication information on a control channel for scheduling an actually scheduled downlink subframe that is in the second subset other than the first subset are used to respectively indicate the channel 21 and the channel 22. Herein, {channel 11, channel 12} and {channel 21, channel 22} are not completely the same. That is, {channel 11, channel 12} and {channel 21, channel 22} are either completely different, that is, completely independently configured; or partly the same. For example, the channel 11 and the channel 21 are a same channel, but the channel 12 and the channel 22 are different. Therefore, flexible scheduling can be implemented when multiple UEs perform statistical multiplexing on PUCCH format 3 channel resources. For example, in the first subset, because {channel 11, channel 12} are completely occupied by another UE, {channel 11, channel 12} cannot be used for feeding back an ACK/NACK corresponding to a downlink subframe in the first subset, and consequently, a subframe in the first subset cannot be scheduled. However, {channel 21, channel 22} and {channel 11, channel 12} are not completely the same, and {channel 21, channel 22} may be not occupied by another UE, that is, channels in the set {channel 21, channel 22} are available. Therefore, a subframe that is in the second subset but not included in the first subset can be scheduled in this case. 
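The independently configured sets {channel 11, channel 12} and {channel 21, channel 22} just described can be pictured as two small lookup tables keyed by the resource indication state, selected according to whether the scheduling control channel belongs to the first subset or to the remainder of the second subset. The sketch below is illustrative; the channel labels come from the example above and the table structure is an assumption.

```python
# Hypothetical lookup tables for the single-RB PUCCH format 3 channels in the
# example above; the two tables may be fully independent or partly the same.
FIRST_SUBSET_CHANNELS = {"00": "channel 11", "01": "channel 12"}
OTHER_SUBSET_CHANNELS = {"00": "channel 21", "01": "channel 22"}


def single_rb_channel(ari_bits: str, pdcch_schedules_first_subset: bool) -> str:
    """Pick the single-RB PUCCH format 3 channel in Solution 2 (sketch).

    The table is chosen by whether the control channel carrying the resource
    indication information schedules a subframe in the first subset or a
    subframe that is in the second subset but not in the first subset.
    """
    table = FIRST_SUBSET_CHANNELS if pdcch_schedules_first_subset else OTHER_SUBSET_CHANNELS
    return table[ari_bits]


# A channel occupied for the first subset does not necessarily block the
# remainder of the second subset, because the two tables need not coincide.
print(single_rb_channel("01", pdcch_schedules_first_subset=True))   # channel 12
print(single_rb_channel("01", pdcch_schedules_first_subset=False))  # channel 22
```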
Dual-RB PUCCH channel resources 13 and 14 indicated by the other two states of resource indication information corresponding to the first subset need to be completely the same, and dual-RB PUCCH channel resources 23 and 24 indicated by the other two states of resource indication information corresponding to a subframe that is in the second subset but not included in the first subset need to be completely the same, that is, the dual-RB PUCCH format 3 channel resources 13 and 14 are the same, and the dual-RB PUCCH format 3 channel resources 23 and 24 are the same. An ACK/NACK codebook and an ACK/NACK codebook size in this case are similar to those in case 1. The UE determines an ACK/NACK codebook size corresponding to a subframe that is in the second subset but not included in the first subset, and encodes an ACK/NACK codebook according to the codebook size. Optionally, for the resource configuration manner in the foregoing second implementation manner, to avoid missed detection by the UE, if the second downlink subframe set includes only a downlink subframe that is in the second subset but does not belong to the first subset, and the resource indication information indicates a channel resource of an m-resource-element format, the processing module120determines the second uplink channel resource, and then sends the feedback information on the second uplink channel resource by using the m-resource-element format such as a dual-RB PUCCH format 3. For details, refer to descriptions of the foregoing second implementation manner. An ACK/NACK codebook and an ACK/NACK codebook size in this case are similar to those in case 2. The UE determines an ACK/NACK codebook size corresponding to the second subset, and encodes an ACK/NACK codebook according to the codebook size. In addition, this embodiment (including all implementation manners) is described by using two subsets as an example. However, the solution in this embodiment may also be applied to a case of multiple subsets. Corresponding uplink channel resource sets are separately configured for different subsets. Optionally, based on the foregoing embodiment (including all implementation manners), in an embodiment, some time-frequency resources of the first uplink channel resource corresponding to the n-resource-element channel format overlap some time-frequency resources of the second uplink channel resource corresponding to the m-resource-element channel format; or time-frequency resources of the first uplink channel resource corresponding to the n-resource-element channel format are some time-frequency resources of the second uplink channel resource corresponding to the m-resource-element channel format. The n-resource-element channel format and the m-resource-element channel format whose time-frequency resources overlap are distinguished by using an orthogonal code. Therefore, resource overheads of a PUCCH can be reduced, that is, the n-resource-element channel format and the m-resource-element channel format that are orthogonal in time and frequency do not need to be separately reserved. Specifically, the UE determines an ACK/NACK codebook size corresponding to the first subset, and encodes an ACK/NACK codebook according to the codebook size. 
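Before the worked example that follows, the codebook construction described here (a codebook size fixed by the subset, bits ordered by carrier and subframe number, zero filling for subframes that were not actually scheduled, and spatial bundling of two codewords by a logical AND) can be sketched as follows. The helper and its data layout are assumptions for illustration only.

```python
from typing import Dict, List, Tuple


def build_ack_nack_codebook(subset: List[Tuple[int, int]],
                            results: Dict[Tuple[int, int], List[int]],
                            dual_codeword: bool = False) -> List[int]:
    """Build the original (unencoded) ACK/NACK bits for one subset (sketch).

    The codebook size is fixed by the subset (one bit per downlink subframe,
    or one bit after spatial bundling of two codewords), bits are ordered by
    carrier number and then subframe number, and positions of subframes that
    were not actually scheduled are zero-filled.  `results` maps a scheduled
    (carrier, subframe) pair to its per-codeword ACK(1)/NACK(0) bits.
    """
    codebook = []
    for position in sorted(subset):              # carrier number first, then subframe number
        bits = results.get(position)
        if bits is None:
            codebook.append(0)                   # zero filling: subframe not actually scheduled
        elif dual_codeword:
            codebook.append(bits[0] & bits[1])   # spatial bundling: logical AND of two codewords
        else:
            codebook.append(bits[0])
    return codebook


# First subset: subframes 4, 5, 6 and 8 on carriers 1-5, i.e. 20 positions.
first_subset = [(c, s) for c in range(1, 6) for s in (4, 5, 6, 8)]
results = {(1, 4): [1, 1], (1, 5): [1, 0], (2, 4): [1, 1]}   # only three subframes scheduled here
codebook = build_ack_nack_codebook(first_subset, results, dual_codeword=True)
print(len(codebook), codebook[:5])   # 20 [1, 0, 0, 0, 1] -> the codebook size stays 20
```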
For example, if the resource indication information instructs to use a single-RB PUCCH format 3 channel resource for an ACK/NACK feedback, the PUCCH channel is used to feed back an ACK/NACK corresponding to an actually scheduled downlink subframe in the second downlink subframe set, but a quantity of original unencoded ACK/NACK bits that need to be fed back, that is, a codebook size, needs to be determined according to all downlink subframes in the first subset. In this example, there are 20 downlink subframes included in the first subset in total, and there are nine actually scheduled downlink subframes in the second downlink subframe set. In this case, when the ACK/NACK is fed back, a quantity of original unencoded bits, that is, an ACK/NACK codebook, needs to be determined according to the maximum quantity of 20 bits, that is, the codebook size in this case, and ACK/NACK bits are arranged according to a carrier number and a subframe number. Zero filling may be performed for occupation at a location of an ACK/NACK corresponding to a downlink subframe that is not actually scheduled currently. In this example, if scheduling is performed by using a single codeword, one downlink subframe corresponds to one ACK/NACK bit. If scheduling is performed by using a dual codeword, one downlink subframe corresponds to two ACK/NACK bits, but spatial bundling, that is, a logical AND operation, may be performed on the two ACK/NACK bits corresponding to the subframe, to compress the two ACK/NACK bits into one bit. In all embodiments of the present disclosure, the control channel that carries the resource indication information is a control channel for scheduling the first downlink subframe in the second downlink subframe set, and the first downlink subframe is a subframe on a secondary component carrier, or a downlink subframe that is on a primary component carrier and that is scheduled by using a control channel with a downlink assignment index (DAI) field value greater than 1. Specifically, a sequence of scheduling subframes on a carrier determines an ascending order of DAI values of control channels. For example, DAI values of four control channels used for scheduling subframes 4, 5, 6, and 8 on a primary component carrier are respectively 1, 2, 3, and 4. In this case, the first downlink subframes include subframes 5, 6, and 8 on a carrier 1, subframes 4, 5, and 6 on a carrier 2, and subframes 4 and 5 on a carrier 3; that is, only the subframe 4 that is on the primary component carrier and that is scheduled by using a control channel with a DAI value 1 is excluded. In addition, states of resource indication information on these control channels for scheduling the first downlink subframes need to be the same. For example, the states are all 01. This avoids a case in which the UE cannot determine a PUCCH format 3 channel resource because of different states indicated by different received resource indication information. When m is greater than n, the foregoing embodiment based on an m-resource-element channel format and an n-resource-element channel format, including descriptions about solution 2 about an indication manner of resource indication information and the like, is applicable to this solution of p codebook sizes and q codebook sizes. In addition, the p-codebook-size channel format and the q-codebook-size channel format are further applicable to a case in which m is equal to n.
That is, a solution based on a codebook-size channel format is a superordinate solution of a resource-element channel format. As shown in FIG. 7, another embodiment of the present disclosure provides an access network device 700, including a sending module 710, a processing module 720, and a receiving module 730. The sending module 710 is configured to: under control of the processing module 720, send downlink control information to the UE, and send a data channel scheduled by using the downlink control information to the UE. The processing module 720 is configured to: control the sending module 710 to send the downlink control information to the UE, control the sending module 710 to send the data channel scheduled by using the downlink control information to the UE, determine an uplink subframe used for receiving feedback information corresponding to the data channel, and determine a channel resource. A first downlink subframe set associated with the uplink subframe includes a first subset and a second subset, the first subset includes at least two downlink subframes, the first subset is a proper subset of the second subset, the channel resource is a first uplink channel resource or a second uplink channel resource, the first uplink channel resource corresponds to the first subset, and the second uplink channel resource corresponds to the second subset. The receiving module 730 is configured to: receive, on the channel resource in the uplink subframe determined by the processing module 720, the feedback information sent by using a channel format. The channel format is an n-resource-element channel format, and the first uplink channel resource carries feedback information of the n-resource-element channel format, or the channel format is an m-resource-element channel format, and the second uplink channel resource carries feedback information of the m-resource-element channel format; and m and n are natural numbers, and m>n. The present disclosure further provides an embodiment: An access network device sends downlink control information to user equipment UE. The access network device sends a data channel scheduled by using the downlink control information to the UE. The access network device determines an uplink subframe used for receiving feedback information corresponding to the data channel. A first downlink subframe set associated with the uplink subframe includes a first subset and a second subset, the first subset includes at least two downlink subframes, and the first subset is a proper subset of the second subset. The access network device determines a channel resource. The channel resource is a first uplink channel resource or a second uplink channel resource, the first uplink channel resource corresponds to the first subset, and the second uplink channel resource corresponds to the second subset. The access network device receives, on the channel resource in the uplink subframe, the feedback information sent by using a channel format. The channel format is a p-codebook-size channel format, and the first uplink channel resource carries feedback information of the p-codebook-size channel format, or the channel format is a q-codebook-size channel format, and the second uplink channel resource carries feedback information of the q-codebook-size channel format; and p and q are natural numbers, and q>p. The p-codebook-size channel format or the q-codebook-size channel format means that the channel format can support a feedback of an ACK/NACK of a maximum of p or q codebook sizes, respectively.
A codebook size refers to a quantity of original unencoded ACK/NACK bits. Specifically, the p codebook sizes correspond to the first subset, and the q codebook sizes correspond to the second subset. That is, the p codebook sizes are determined according to a quantity of downlink subframes in the first subset, and the q codebook sizes are determined according to a quantity of downlink subframes in the second subset. Optionally, a channel resource occupied by the p-codebook-size channel format includes n resource elements, a channel resource occupied by the q-codebook-size channel format includes m resource elements, m and n are natural numbers, and m is greater than or equal to n. In this way, the p-codebook-size channel format may also be considered as an n-resource-element channel format, and the q-codebook-size channel format may also be considered as an m-resource-element channel format. When m is greater than n, the following embodiment based on an m-resource-element channel format and an n-resource-element channel format is completely applicable to this solution of p codebook sizes and q codebook sizes. In addition, the p-codebook-size channel format and the q-codebook-size channel format are further applicable to a case in which m is equal to n. That is, a solution based on a codebook-size channel format is a superordinate solution of a resource-element channel format. It should be noted that for brevity, for content that is in this embodiment and that is the same as that in the foregoing embodiment, refer to descriptions of the foregoing embodiment. Details are not repeatedly described herein. Further, the sending module710in this embodiment of the present disclosure is further configured to send subframe configuration information to the UE. The subframe configuration information is used to determine the uplink subframe associated with the first downlink subframe set. The subframe configuration information is sent to the UE in advance. The subframe configuration information may be an uplink-downlink subframe configuration in Table 2. Therefore, the UE can determine the uplink subframe according to the preconfigured subframe configuration information. Optionally, the sending module710in this embodiment of the present disclosure is further configured to send a division rule to the UE. The division rule is used to determine the first subset and the second subset that are included in the first downlink subframe set. The division rule may be a preconfigured rule in the foregoing embodiment. Certainly, the sending module710in this embodiment may not send the division rule. Both the access network device and the UE determine the first set and the second set according to a default rule. Alternatively, the sending module710may send the first subset and the second subset to the UE. It should be noted that for a relationship between the first set and the second set, and the first downlink subframe set, refer to descriptions of the foregoing embodiment. Details are not repeatedly described herein. Same as the foregoing embodiment, in this embodiment, the first subset corresponds to the first uplink channel resource, and the receiving module730is configured to receive the feedback information that is of the n-resource-element channel format and that is carried on the first uplink channel resource. 
The second subset corresponds to the second uplink channel resource, and the receiving module730is configured to receive the feedback information that is of the m-resource-element channel format and that is carried on the second uplink channel resource; and m and n are natural numbers, and m>n. Feedback information is received on a corresponding channel resource by using only one channel format each time. That is, for a large subset, the UE sends feedback information by using a large resource format, and for a small subset, the UE sends feedback information by using a small resource format. When m is greater than n, the foregoing embodiment based on an m-resource-element channel format and an n-resource-element channel format, including descriptions about a first downlink subframe set, a first subset, a second subset, and the like, is completely applicable to this solution of p codebook sizes and q codebook sizes. In addition, the p-codebook-size channel format and the q-codebook-size channel format are further applicable to a case in which m is equal to n. Details are as follows: When m is equal to n, that is, time-frequency resources occupied by the two channel formats have a same quantity of resource elements, such as one RB, or time-frequency resources occupied by the two channel formats completely overlap. For details, refer to descriptions of the foregoing embodiment. According to the foregoing embodiment, a downlink subframe set corresponding to an uplink subframe is divided into at least two subsets, a first subset is a proper subset of a second subset, and corresponding uplink channel resources are configured for the two subsets. For a large subset, feedback information is sent by using a large resource format, and for a small subset, feedback information is sent by using a small resource format, thereby resolving a problem of how to send feedback information when more carriers are configured. In addition, when there is a small quantity of instantaneously scheduled carriers, feedback information may be sent by using the fallback small resource format. Therefore, in this embodiment of the present disclosure, resource overheads can be reduced when feedback information such as an ACK/NACK is fed back. Further, it is configured that the sending module710sends, in the following manner, the data channel scheduled by using the downlink control information to the UE: sending, in a downlink subframe included in a second downlink subframe set, the data channel scheduled by using the downlink control information. For the second downlink subframe set, refer to descriptions of the foregoing embodiment. Details are not repeatedly described herein. When m is greater than n, the foregoing embodiment based on an m-resource-element channel format and an n-resource-element channel format, including descriptions about a second downlink subframe set and the like, is completely applicable to this solution of p codebook sizes and q codebook sizes. In addition, the p-codebook-size channel format and the q-codebook-size channel format are further applicable to a case in which m is equal to n. When the processing module720determines the channel resource, and the second downlink subframe set is different from the first subset, the second subset, and the first downlink subframe set, determined channel resources may be different, and used PUCCH formats may also be different. 
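Tying the p- and q-codebook-size description above to the subsets, the two codebook sizes simply follow from the number of downlink subframes in each subset (one bit per subframe, or two without spatial bundling). The short sketch below is illustrative; the helper name and the per-subframe bit count are assumptions.

```python
def codebook_sizes(first_subset, second_subset, bits_per_subframe: int = 1):
    """Return (p, q): the codebook sizes tied to the first and second subsets.

    Each codebook size is the number of original unencoded ACK/NACK bits,
    assumed here to be bits_per_subframe for every downlink subframe in the
    corresponding subset (sketch only).
    """
    p = len(set(first_subset)) * bits_per_subframe
    q = len(set(second_subset)) * bits_per_subframe
    return p, q


first = {(c, s) for c in range(1, 6) for s in (4, 5, 6, 8)}     # 20 downlink subframes
second = {(c, s) for c in range(1, 11) for s in (4, 5, 6, 8)}   # 40 downlink subframes
print(codebook_sizes(first, second))       # (20, 40)
print(codebook_sizes(first, second, 2))    # (40, 80) with two codewords and no spatial bundling
```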
When m is greater than n, the following embodiment based on an m-resource-element channel format and an n-resource-element channel format, including a method for indicating a channel resource by using resource indication information, is completely applicable to this solution of p codebook sizes and q codebook sizes. In addition, the p-codebook-size channel format and the q-codebook-size channel format are further applicable to a case in which m is equal to n. In case that the second downlink subframe set is a subset of the first subset (case 1), the channel resource determined by the processing module 720 is the first uplink channel resource; or in case that the second downlink subframe set includes only a downlink subframe that is in the second subset but does not belong to the first subset (case 2), the channel resource determined by the processing module 720 is the second uplink channel resource; or in case that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset (case 3), the channel resource determined by the processing module 720 is the second uplink channel resource. Likewise, this embodiment is still described by using an example in which the n-resource-element channel format is a single-RB PUCCH format and the m-resource-element channel format is a dual-RB PUCCH format. The first uplink channel resource corresponds to the single-RB PUCCH format, and the second uplink channel resource corresponds to the dual-RB PUCCH format. In case 1, when the second downlink subframe set includes only a downlink subframe in the first subset, the feedback information is sent by using a small resource format. In case 2 and case 3, that is, in a scenario in which the second downlink subframe set includes a downlink subframe that is in the second subset but not included in the first subset, the feedback information is sent by using a large resource format, that is, the second uplink channel resource corresponding to the second subset. Further, the downlink control information sent by the sending module 710 includes resource indication information, and the resource indication information is used to indicate the first uplink channel resource or the second uplink channel resource used for carrying the feedback information. Specifically, when the channel resource determined by the processing module 720 is the first uplink channel resource, the processing module 720 controls the sending module 710 to send the downlink control information. The resource indication information included in the downlink control information indicates the first uplink channel resource. When the channel resource determined by the processing module 720 is the second uplink channel resource, the processing module 720 controls the sending module 710 to send the downlink control information. The resource indication information included in the downlink control information indicates the second uplink channel resource. Further, the processing module 720 is further configured to preconfigure a resource set for the UE by using the sending module 710, and the processing module 720 selects a channel resource from the resource set.
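From the network side, the same classification drives which resource the access network device expects the feedback on and which resource indication state it places in the scheduling control channels; as noted earlier, the states carried by the control channels that schedule the first downlink subframes must be identical. The sketch below is illustrative only, with hypothetical names, and for simplicity it attaches the same state to every scheduling DCI.

```python
def base_station_side(scheduled, first_subset, second_subset, ari_bits: str):
    """Network-side counterpart of the Solution 1 classification (sketch).

    The access network device classifies the subframes it actually schedules,
    decides which uplink channel resource it expects the feedback on, and
    attaches the same resource indication state to every scheduling control
    channel (in the description, the primary-carrier DCI with DAI 1 carries
    no such state; that detail is omitted here for simplicity).
    """
    scheduled, first_subset, second_subset = map(set, (scheduled, first_subset, second_subset))
    assert scheduled <= second_subset, "only subframes inside the configured subsets are handled"
    expected_resource = "first" if scheduled <= first_subset else "second"
    dci_ari = {subframe: ari_bits for subframe in scheduled}   # one DCI per scheduled subframe
    return expected_resource, dci_ari


first = {(1, 4), (1, 5), (2, 4)}
second = first | {(6, 4), (7, 4)}
print(base_station_side({(1, 4), (6, 4)}, first, second, "01"))
# expects the second uplink channel resource; both DCIs carry the state '01'
```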
Further, the processing module720is further configured to preconfigure a correspondence between a state of the resource indication information and a channel resource in the resource set for the UE by using the sending module710. In this way, after determining the channel resource, the processing module720further sends the resource indication information by using the sending module710. The state of the resource indication information corresponds to the determined channel resource. Same as the foregoing embodiment, the access network device may preconfigure the resource set for the UE in the following three implementation manners: Like the first implementation manner in the foregoing solution 1: The processing module720is further configured to send a second uplink channel resource set to the UE by using the sending module710. The second uplink channel resource is an uplink channel resource in the second uplink channel resource set, a part of each uplink channel resource in multiple uplink channel resources in the second uplink channel resource set constitutes a first uplink channel resource set, and the first uplink channel resource is an uplink channel resource in the first uplink channel resource set. It should be noted that in this embodiment of the present disclosure, the second uplink channel resource may be independently configured for different uplink subframes, thereby improving scheduling flexibility. Certainly, the same second uplink channel resource may be configured for the different uplink subframes. For example, a type of second uplink channel resource may be configured for an uplink subframe 2 in an uplink-downlink subframe configuration 2, and another type of second uplink channel resource may be configured for an uplink subframe 7 in the uplink-downlink subframe configuration 2. Same as the foregoing embodiment, in this solution, in terms of time-frequency resource, a channel resource of an m-resource-element channel format includes a channel resource of a fallback n-resource-element channel format corresponding to the m-resource-element channel format. In this way, the channel resource of the n-resource-element channel format and the channel resource of the m-resource-element channel format that are orthogonal in terms of time-frequency resource may not need to be separately reserved, so that a base station does not need to perform blind detection on the channel resource of the n-resource-element channel format and the channel resource of the m-resource-element channel format that are orthogonal in terms of time-frequency resource. This reduces resource overheads of an uplink control channel, such as a PUCCH. Details may be shown inFIG.4and are not repeatedly described herein. Optionally, the channel resource determined by the processing module720is the second uplink channel resource, and the resource indication information indicates the second uplink channel resource in the second uplink channel resource set. Optionally, the channel resource determined by the processing module720is the first uplink channel resource, and the resource indication information indicates an uplink channel resource that is in the second uplink channel resource set and that includes the first uplink channel resource; or the resource indication information indicates the first uplink channel resource in the first uplink channel resource set. 
Further, the resource indication information indicates the uplink channel resource that is in the second uplink channel resource set and that includes the first uplink channel resource, and the processing module720is configured to determine the channel resource in the following manner: determining, according to identification information of the UE or preconfigured information, the first uplink channel resource in the uplink channel resource indicated by the resource indication information. For specific implementation, refer to descriptions of the foregoing embodiment. Considering that the foregoing solution in this embodiment may be affected by missed detection on a PDCCH by the UE, this embodiment further provides a solution for handling a case in which the UE misses detecting a PDCCH. In case that the channel resource is the second uplink channel resource, the receiving module730is configured to receive the feedback information in the following manner: receiving, on the second uplink channel resource indicated by the resource indication information in the uplink subframe, the feedback information sent by using the m-resource-element channel format. However, when the UE misses detecting a PDCCH, the UE may send the feedback information by using the n-resource-element channel format on some uplink channel resources of the second uplink channel resource indicated by the resource indication information. In this case, the receiving module730cannot receive, on the second uplink channel resource indicated by the resource indication information in the uplink subframe, the feedback information sent by the UE by using the m-resource-element channel format. Therefore, the processing module720is further configured to: determine, according to the identification information of the UE or the preconfigured information, a first uplink channel resource that is in the second uplink channel resource and that is indicated by the resource indication information, and control the receiving module730to receive, on the first uplink channel resource, the feedback information sent by using the n-resource-element channel format. Specifically, for example, the second downlink subframe set that is actually scheduled by the base station for the UE includes downlink subframes 4 on carriers 1 to 6. However, because the UE misses detecting a PDCCH in the downlink subframe 4 on the carrier 6, the UE determines that the second downlink subframe set is a subset of the first subset. In this case, the UE feeds back an ACK/NACK by using a fallback single-RB PUCCH format 3, but the base station expects the UE to feed back the ACK/NACK by using a dual-RB PUCCH format 3. To resolve the foregoing problem that the UE misses detecting a PDCCH, the base station may perform blind detection on a single-RB PUCCH format 3 channel resource and a dual-RB PUCCH format 3 channel resource. That is, the base station needs to detect a dual-RB PUCCH format 3 channel resource 2 indicated by the resource indication information, and further needs to detect a single-RB PUCCH format 3 channel resource 2 in the dual-RB PUCCH format 3 channel resource 2. The single-RB PUCCH format 3 channel resource 2 is a fallback PUCCH channel resource of the UE.
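The missed-detection handling just described may be sketched as follows: the base station monitors the indicated dual-RB resource for the m-resource-element format and, in addition, a single-RB part of that same resource for the fallback n-resource-element format. Selecting the part by UE identity modulo 2 is an assumption made for illustration; the embodiment only states that UE identification information or preconfigured information is used.

def fallback_part(indicated_dual_rb, ue_id=None, preconfigured_index=None):
    # Return the single-RB part of the indicated dual-RB resource on which the UE
    # would transmit if it fell back to the n-resource-element channel format.
    if preconfigured_index is not None:
        return indicated_dual_rb[preconfigured_index]
    return indicated_dual_rb[ue_id % 2]   # hypothetical identity-based rule

indicated_dual_rb = (12, 42)  # e.g. dual-RB PUCCH format 3 channel resource 2 (illustrative RB indices)
resources_to_detect = {
    "m-resource-element format": indicated_dual_rb,
    "n-resource-element fallback": fallback_part(indicated_dual_rb, ue_id=17),
}
print(resources_to_detect)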
The processing module720may control the receiving module730to perform blind detection on a part of feedback information sequence of the n-resource-element channel format and the m-resource-element channel format, and/or perform blind detection on a part of reference signal sequence of the n-resource-element channel format and the m-resource-element channel format. For example, the base station may perform blind detection on a part of ACK/NACK sequence of the single-RB PUCCH format 3 and the dual-RB PUCCH format 3, and/or perform blind detection on a part of reference signal sequence of the single-RB PUCCH format 3 and the dual-RB PUCCH format 3. Optionally, in the foregoing blind detection, the n-resource-element channel format and the m-resource-element channel format may use a same feedback information sequence, or may use different feedback information sequences, and the feedback information sequence may be a time-domain orthogonal code and/or a frequency-domain cyclic shift code. For example, the dual-RB PUCCH format 3 and a single-RB PUCCH format 3 in the dual-RB PUCCH format 3 may use a same ACK/NACK sequence, or may use different ACK/NACK sequences, and the ACK/NACK sequence may be a time-domain orthogonal code and/or a frequency-domain cyclic shift code; and/or the dual-RB PUCCH format 3 and a single-RB PUCCH format 3 in the dual-RB PUCCH format 3 may use a same reference signal sequence, or may use different reference signal sequences, and the reference signal sequence may be a time-domain orthogonal code and/or a frequency-domain cyclic shift code. In addition, in this solution, in terms of time-frequency resource, a dual-RB PUCCH format 3 channel resource includes a fallback single-RB PUCCH format 3 channel resource that corresponds to the dual-RB PUCCH format 3. In this way, the single-RB PUCCH format 3 channel resource and the dual-RB PUCCH format 3 channel resource that are orthogonal in terms of time-frequency resource may not need to be separately reserved, so that the base station does not need to perform blind detection on the channel resources of the single-RB PUCCH format 3 and the dual-RB PUCCH format 3 channel resource that are orthogonal in terms of time-frequency resource. This reduces resource overheads of a PUCCH. Like the second implementation manner in the foregoing solution 1: The processing module720is further configured to: before determining the channel resource, configure a first uplink channel resource set and a second uplink channel set for the UE by using the sending module710. The first uplink channel resource is an uplink channel resource in the first uplink channel resource set, and the second uplink channel resource is an uplink channel resource in the second uplink channel resource set. It should be noted that in this embodiment of the present disclosure, the first uplink channel resource and the second uplink channel resource may be independently configured for different subframes. For example, a type of second uplink channel resource may be configured for an uplink subframe 2 in an uplink-downlink subframe configuration 2, and another type of second uplink channel resource may be configured for an uplink subframe 7 in the uplink-downlink subframe configuration 2. 
Further, a first state set of the resource indication information indicates an uplink channel resource in the first uplink channel resource set, a second state set of the resource indication information indicates an uplink channel resource in the second uplink channel resource set, and the first state set does not intersect the second state set. The processing module720is further configured to preconfigure a correspondence between a state in the first state set and a channel resource in the first uplink channel resource set for the UE by using the sending module710. For details, refer to descriptions of the foregoing embodiment andFIG.5. In this implementation manner, different states of the resource indication information can indicate different channel resources. Therefore, in case that the second downlink subframe set is a subset of the first subset, and a channel resource indicated by the resource indication information is an uplink channel resource corresponding to an m-resource-element channel format, the UE still determines to send the feedback information by using the second uplink channel resource. Specifically, the second downlink subframe set includes only a downlink subframe in the first subset in this case. However, if a state of two bits of the resource indication information received by the UE is 10, it means that an ACK/NACK is fed back by using the dual-RB PUCCH format 3 channel resource 1. Therefore, the UE determines to send the ACK/NACK on the dual-RB PUCCH format 3 channel resource 1 by using a dual-RB PUCCH format. In this case, the UE can also determine that the UE misses detecting a PDCCH in a downlink subframe that is not included in the first subset. If the second downlink subframe set actually includes only a downlink subframe in the first subset, that is, if there is no missed detection, the base station instructs the UE to use a single-RB PUCCH channel resource. On the contrary, in a case of missed detection, the UE finds that the second downlink subframe set is a subset of the first subset and would use the single-RB PUCCH format 3, but the base station expects the UE to feed back an ACK/NACK by using a dual-RB PUCCH format 3. In this embodiment, different states of the resource indication information are used to instruct to use the single-RB PUCCH format 3 or the dual-RB PUCCH format 3, so that the UE feeds back an ACK/NACK by using the dual-RB PUCCH format 3 provided that the UE determines that the resource indication information instructs to use the dual-RB PUCCH format 3. Therefore, this resolves the foregoing problem of PUCCH channel resource ambiguity caused due to missed detection on a control channel. Like the third implementation manner in the foregoing solution 1: Same as the foregoing embodiment, the processing module720of the access network device is further configured to preconfigure two uplink channel resource sets, that is, a first uplink channel resource set and a second uplink channel resource set, for the UE by using the sending module710. As shown inFIG.6, a difference lies in that the first uplink channel resource set includes channel resources of four single-RBs, that is, a single-RB 1, a single-RB 2, a single-RB 3, and a single-RB 4 inFIG.6, and the second uplink channel resource set includes channel resources of four dual-RBs, that is, a dual-RB 1, a dual-RB 2, a dual-RB 3, and a dual-RB 4 inFIG.6.
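The FIG. 6 style configuration just mentioned may be sketched as follows: four single-RB resources and four dual-RB resources, where one 2-bit state of the resource indication information addresses the resource with the same index in each set; which candidate is actually used depends on which case applies, as elaborated in the passage that follows. The state values and resource labels are illustrative assumptions.

first_set = {0b00: "single-RB 1", 0b01: "single-RB 2", 0b10: "single-RB 3", 0b11: "single-RB 4"}
second_set = {0b00: "dual-RB 1", 0b01: "dual-RB 2", 0b10: "dual-RB 3", 0b11: "dual-RB 4"}

def resources_for_state(state):
    # The same state points at a candidate in each set; the case 1 versus case 2/3
    # distinction decides which candidate carries the feedback information.
    return first_set[state], second_set[state]

print(resources_for_state(0b01))  # -> ('single-RB 2', 'dual-RB 2')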
In such a resource set configuration, four states of the resource indication information indicate channel resources corresponding to the first uplink channel resource set and/or the second uplink channel resource set. For example, in case 1, the resource indication information indicates the first uplink channel resource in the first uplink channel resource set, and in case 2 and case 3, the resource indication information indicates the second uplink channel resource in the second uplink channel resource set. Similar to the foregoing implementation manner, the first uplink channel resource and the second uplink channel resource may be independently configured for different subframes. In this solution, the UE may miss detecting a PDCCH in case 1. To avoid this case, the access network device needs to perform blind detection on a channel resource corresponding to the n-resource-element channel format and a channel resource corresponding to the m-resource-element channel format. The processing module720is further configured to: if the receiving module730fails to receive, on the first uplink channel resource indicated by the resource indication information in the uplink subframe, the feedback information sent by using the n-resource-element channel format, determine a second uplink channel resource in the second uplink channel resource set indicated by the resource indication information; and control the receiving module730to receive, on the second uplink channel resource, the feedback information sent by using the m-resource-element channel format. For example, a base station may perform blind detection on a part of ACK/NACK sequence of a single-RB PUCCH format 3 and a dual-RB PUCCH format 3, and/or perform blind detection on a part of reference signal sequence of the single-RB PUCCH format 3 and the dual-RB PUCCH format 3. For details, refer to descriptions of the foregoing implementation manner. Details are not repeatedly described herein. When m is greater than n, the foregoing embodiment based on an m-resource-element channel format and an n-resource-element channel format, including different implementation manners in the solution about an indication manner of resource indication information, is completely applicable to this solution of p codebook sizes and q codebook sizes. In addition, the p-codebook-size channel format and the q-codebook-size channel format are further applicable to a case in which m is equal to n. Similar to the foregoing embodiment, in this embodiment (including all implementation manners), the processing module720determines the first subset and the second subset according to a preconfiguration. The preconfiguration may be independently performed for different uplink subframes. In addition, this embodiment is described by using two subsets as an example. However, the solution in this embodiment may also be applied to a case of multiple subsets. Corresponding uplink channel resource sets are separately configured for different subsets. It should be noted that all the foregoing embodiments are described by using a TDD uplink-downlink subframe configuration 2 as an example. Different uplink subframes (a subframe 2 and a subframe 7) in the uplink-downlink subframe configuration are associated with a same quantity of downlink subframes. This embodiment of the present disclosure may be further applied to another TDD uplink-downlink subframe configuration, such as a TDD uplink-downlink subframe configuration 1. 
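The blind detection order described in this implementation manner may be sketched as follows: the access network device first attempts reception with the n-resource-element format on the first-set resource indicated by the resource indication information and, if that fails, attempts the m-resource-element format on the second-set resource addressed by the same state. The try_decode() callable is a placeholder for the actual receiver processing and is not defined by the embodiment.

def receive_feedback(state, first_set, second_set, try_decode):
    # First attempt: n-resource-element format on the indicated first-set resource.
    feedback = try_decode(first_set[state], "n-resource-element")
    if feedback is not None:
        return feedback
    # Otherwise: m-resource-element format on the second-set resource with the same index.
    return try_decode(second_set[state], "m-resource-element")

# Toy stand-in for the receiver: pretend only an m-resource-element transmission is present.
demo = lambda resource, fmt: ("ACK/NACK payload" if fmt == "m-resource-element" else None)
print(receive_feedback(0b10, {0b10: "single-RB 3"}, {0b10: "dual-RB 3"}, demo))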
In the TDD uplink-downlink configuration 1, an uplink subframe 2 and an uplink subframe 3 are separately associated with different quantities of downlink subframes. In carrier aggregation configuration, different uplink subframes need to support different maximum quantities of ACK/NACK bits. Assuming that carrier aggregation is performed on 20 carriers in the TDD uplink-downlink subframe configuration 1, a maximum of 40 ACK/NACK bits need to be fed back in the uplink subframe 2, and a maximum of 20 ACK/NACK bits need to be fed back in the uplink subframe 3. Therefore, for the uplink subframe 2, division needs to be performed to obtain the first subset and the second subset, but for the uplink subframe 3, the foregoing subset division is not necessary, and an ACK/NACK is directly carried by using a single-RB PUCCH format 3. Therefore, the solutions provided in this embodiment of the present disclosure are separately performed for different uplink subframes. Specifically, obtaining of the first subset and the second subset through division, configuration of channel resource sets of the m-resource-element channel format and the n-resource-element channel format, and indication of the resource indication information may be separately performed for the different uplink subframes. Optionally, in an embodiment, some time-frequency resources of the first uplink channel resource corresponding to the n-resource-element channel format overlap some time-frequency resources of the second uplink channel resource corresponding to the m-resource-element channel format; or time-frequency resources of the first uplink channel resource corresponding to the n-resource-element channel format are some time-frequency resources of the second uplink channel resource corresponding to the m-resource-element channel format. The n-resource-element channel format and the m-resource-element channel format whose time-frequency resources overlap use an orthogonal code. For details, refer to descriptions of the foregoing embodiment. In this way, a multiplexing capability of a channel resource of a channel format such as a PUCCH format 3 can be improved, and resource reservation overheads can be reduced. For example, a single-RB PUCCH format 3 channel resource may overlap a dual-RB PUCCH format 3 channel resource in terms of time-frequency resource. For example, the single-RB PUCCH format 3 channel resource may partly overlap the dual-RB PUCCH format 3 channel resource in terms of time-frequency resource, or time-frequency resources corresponding to the dual-RB PUCCH format 3 include time-frequency resources corresponding to the single-RB PUCCH format 3, as shown inFIG.8. In each timeslot, the single-RB PUCCH format 3 overlaps the dual-RB PUCCH format 3 on two frequency-domain RBs, and frequency division multiplexing may be performed on different channels of the single-RB PUCCH format 3 by using different RBs, such as an RB consisting of 12 upper subcarriers and an RB consisting of 12 lower subcarriers. Time division multiplexing is performed on channels of the single-RB PUCCH format 3 and the dual-RB PUCCH format 3 by using time-domain OCCs, such as an OCC 0/1/2, an OCC 3/4, an OCC 0/3/1, and an OCC 4/2 inFIG.8. In addition, the single-RB PUCCH format 3 and the dual-RB PUCCH format 3 use different reference signal sequences, and the reference signal sequence includes a time-domain orthogonal code and/or a frequency-domain cyclic shift code. 
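The per-uplink-subframe handling described at the start of this passage may be sketched as follows for the TDD uplink-downlink subframe configuration 1 with 20 aggregated carriers: uplink subframe 2 carries twice as many ACK/NACK bits as uplink subframe 3, so only subframe 2 needs the first/second subset division. The association counts follow the 40-bit and 20-bit figures given in the text, assuming one bit per downlink subframe; the single-RB capacity value is an assumed nominal figure used only for the comparison.

SINGLE_RB_PUCCH3_CAPACITY_BITS = 22   # assumption: approximate bearer capability of a single-RB PUCCH format 3

associated_dl_subframes = {2: 2, 3: 1}  # downlink subframes per carrier, consistent with the example in the text
carriers = 20

for ul_sf, dl_per_carrier in associated_dl_subframes.items():
    ack_nack_bits = carriers * dl_per_carrier          # one ACK/NACK bit per downlink subframe assumed
    needs_division = ack_nack_bits > SINGLE_RB_PUCCH3_CAPACITY_BITS
    print(f"uplink subframe {ul_sf}: {ack_nack_bits} bits, subset division needed: {needs_division}")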
The foregoing embodiment is described by using a single-RB PUCCH format 3 and a dual-RB PUCCH format 3 as an example. Certainly, this embodiment may be further applied to a PUCCH format 3 of more RBs. In addition, this solution may be extended to a PUCCH format of another resource element, such as a dual-RB PUCCH format and a quad-RB PUCCH format, or PUCCH formats of different quantities of sub RBs. Herein, a frequency-domain width of a sub RB may be less than a frequency-domain width of an RB. For example, a sub RB occupies four subcarriers, and occupies one timeslot or one subframe in a time domain. Alternatively, a time-domain width of a sub RB may be less than a timeslot. For example, a sub RB occupies three time-domain symbols, and occupies 12 subcarriers in a frequency domain, that is, a frequency-domain width of one RB. Alternatively, a sub RB occupies a smaller frequency-domain width and a smaller time-domain width than a current RB in both the time domain and the frequency domain. Therefore, the single-RB PUCCH format 3 and the dual-RB PUCCH format 3 in this embodiment may be extended to an m-resource-element PUCCH format and an n-resource-element PUCCH format. Both m and n are natural numbers, and m>n. Optionally, the first subset may partly overlap the second subset. For example, it is assumed that eight TDD carriers are configured for the UE, and each carrier corresponds to a TDD uplink-downlink subframe configuration 2. If the foregoing division manner in which carriers are equally divided and sets do not overlap is used, downlink subframes 4, 5, 6, and 8 on carriers 1 to 4 constitute the first subset, and downlink subframes 4, 5, 6, and 8 on carriers 5 to 8 constitute the second subset. In this way, if only the first subset is scheduled, an ACK/NACK is encoded by using 16 original ACK/NACK bits. If downlink subframes in the first subset and the second subset are scheduled, an ACK/NACK is encoded by using 32 original ACK/NACK bits. Alternatively, a division manner in which sets partly overlap may be used, that is, the first subset consists of downlink subframes 4, 5, 6, and 8 on carriers 1 to 5, and the second subset consists of downlink subframes 4, 5, 6, and 8 on carriers 4 to 8. In this case, if downlink subframes in only one set are scheduled, an ACK/NACK is encoded by using 20 original ACK/NACK bits, that is, a codebook and a codebook size are determined according to the scheduled first subset or the scheduled second subset. If downlink subframes in an overlapping part are scheduled, it may be predefined that a codebook and a codebook size are determined by using the first subset. If both sets are scheduled, an ACK/NACK is encoded by using 32 original ACK/NACK bits, that is, a codebook and a codebook size are determined according to a union set of the first subset and the second subset. Because more original ACK/NACK bits can be covered by single-set scheduling, higher ACK/NACK transmission efficiency can be achieved by using partly-overlapping sets obtained through division. Otherwise, if carriers 1 to 5 or carriers 4 to 8 are scheduled by using non-overlapped sets obtained through division, encoding needs to be performed by using 32 original ACK/NACK bits. This embodiment of the present disclosure further provides an ACK/NACK transmission solution including set division in which three subsets overlap and there are multiple levels of fallback. This embodiment may be combined with the foregoing two embodiments.
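The codebook-size rule for the partly overlapping division above may be sketched as follows for the eight-carrier example (first subset on carriers 1 to 5, second subset on carriers 4 to 8, downlink subframes 4, 5, 6, and 8), assuming one original ACK/NACK bit per downlink subframe. The function name and set model are illustrative.

first_subset = {(c, sf) for c in range(1, 6) for sf in (4, 5, 6, 8)}    # 20 downlink subframes
second_subset = {(c, sf) for c in range(4, 9) for sf in (4, 5, 6, 8)}   # 20 downlink subframes

def codebook_size(scheduled):
    only_overlap = scheduled <= (first_subset & second_subset)
    if only_overlap or not (scheduled - first_subset):
        return len(first_subset)                      # 20 bits: first subset (overlap-only is predefined to use the first subset)
    if not (scheduled - second_subset):
        return len(second_subset)                     # 20 bits: second subset only
    return len(first_subset | second_subset)          # 32 bits: union of both subsets

print(codebook_size({(2, 4)}))            # only the first subset scheduled  -> 20
print(codebook_size({(7, 5)}))            # only the second subset scheduled -> 20
print(codebook_size({(4, 6)}))            # overlapping part scheduled       -> 20
print(codebook_size({(2, 4), (7, 5)}))    # both subsets scheduled           -> 32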
Specifically, the processing module120of the UE or the processing module720of the access network device may control a corresponding receiving module and a corresponding sending module to perform related operations. The foregoing 15 carriers in the TDD uplink-downlink configuration 2 are still used as an example, and the first downlink subframe set is divided into three subsets. A first subset includes downlink subframes 4, 5, 6, and 8 on carriers 1 to 5, a second subset includes downlink subframes 4, 5, 6, and 8 on carriers 1 to 10, and a third subset includes downlink subframes 4, 5, 6, and 8 on carriers 1 to 15. It can be learned that the second subset includes the first subset, and the third subset includes the second subset. Certainly, other division based on an incompletely inclusive relationship is not excluded. A specific method may be directly obtained by extending the foregoing division method in which two sets are obtained. A channel format corresponding to the third subset is a k-resource-element channel format, and k>m. In this embodiment, an n-resource-element format is a single-RB format, an m-resource-element format is a dual-RB format, and a k-resource-element format is a quad-RB format. The UE may determine, according to the resource indication information, to feed back an ACK/NACK by using a specific PUCCH channel resource. Specifically, four states included in 2-bit resource indication information may separately correspond to different subsets. For example, a state 00 corresponds to a single-RB PUCCH format 3 channel resource in the first subset, a state 01 and a state 10 separately correspond to dual-RB PUCCH format 3 channel resources in the second subset, and a state 11 corresponds to a tri-RB PUCCH format 3 channel resource in the third subset. For ease of description, it is assumed that a downlink subframe included in the first subset is represented by a subframe i; downlink subframes included in the second subset are represented by subframes i and j, and the subframe j does not belong to the first subset; and downlink subframes included in the third subset are represented by subframes i, j, and k, and the subframe k does not belong to the first subset or the second subset. In this case, if the base station schedules only a downlink subframe i for the UE, the resource indication information indicates a state 00, and the UE determines, according to the state, to feed back a corresponding ACK/NACK by using a single-RB PUCCH format 3 channel resource indicated by the state, and in this case, an ACK/NACK codebook size is determined according to a quantity of downlink subframes i included in the first subset. If the base station schedules a downlink subframe j or downlink subframes i and j for the UE, but does not schedule a downlink subframe k, the resource indication information indicates a state 01 or a state 10, the UE determines, according to the state, to feed back a corresponding ACK/NACK by using a dual-RB PUCCH format 3 channel resource indicated by the state, and in this case, an ACK/NACK codebook size is determined according to quantities of downlink subframes i and downlink subframes j included in the second subset.
If the base station schedules a downlink subframe k, downlink subframes i and k, downlink subframes j and k, or downlink subframes i, j, and k for the UE, the resource indication information indicates a state 11, the UE determines, according to the state, to feed back a corresponding ACK/NACK by using a tri-RB PUCCH format 3 channel resource indicated by the state, and in this case, an ACK/NACK codebook size is determined according to quantities of downlink subframes i, downlink subframes j, and downlink subframes k included in the third subset. Only an embodiment in which the resource indication information indicates a PUCCH format 3 channel resource of different quantities of RBs is provided herein, and the foregoing solution using the resource indication information is also applicable to another embodiment. For example, the foregoing solution is also applicable to the following embodiment: The resource indication information indicates four tri-RB PUCCH format 3 channel resources, and the UE performs, according to a received relationship between the first downlink subframe set and three subsets, fallback transmission on some resources of a tri-RB PUCCH format 3 channel resource indicated by the resource indication information, such as dual-RB PUCCH format 3 or single-RB PUCCH format 3 fallback transmission. When m is greater than n, the foregoing embodiment based on an m-resource-element channel format and an n-resource-element channel format, including an independent configuration for different uplink subframes, channel resource overlapping, set overlapping, three levels of fallback, and the like, is completely applicable to this solution of p codebook sizes and q codebook sizes. In addition, the p-codebook-size channel format and the q-codebook-size channel format are further applicable to a case in which m is equal to n. It should be noted that all embodiments of the present disclosure are described by using TDD CA as an example. In addition to TDD CA, the solutions in the embodiments of the present disclosure may be further applied to FDD CA and FDD+TDD CA. Solutions in FDD CA and FDD+TDD CA are similar to those in TDD CA. FIG.9shows a feedback information sending method according to an embodiment of the present disclosure. The method corresponds to the embodiment of the foregoing user equipment, and the foregoing user equipment can execute the method in this embodiment. Therefore, for same content, refer to descriptions of the foregoing embodiments. Details are not repeatedly described herein. This embodiment includes the following steps: Step901: User equipment UE receives downlink control information sent by an access network device. Step902: The UE receives a data channel scheduled by using the downlink control information. Step903: The UE determines an uplink subframe used for sending feedback information corresponding to the data channel, where a first downlink subframe set associated with the uplink subframe includes a first subset and a second subset, the first subset includes at least two downlink subframes, and the first subset is a proper subset of the second subset. Step904: The UE determines a channel resource, where the channel resource is a first uplink channel resource or a second uplink channel resource, the first uplink channel resource corresponds to the first subset, and the second uplink channel resource corresponds to the second subset. 
Step905: The UE sends the feedback information on the channel resource in the uplink subframe by using a channel format, where the channel format is an n-resource-element channel format, and the UE sends the feedback information on the first uplink channel resource by using the n-resource-element channel format, or the channel format is an m-resource-element channel format, and the UE sends the feedback information on the second uplink channel resource by using the m-resource-element channel format; and m and n are natural numbers, and m>n. The present disclosure further provides the following embodiment: User equipment UE receives downlink control information sent by an access network device. The UE receives a data channel scheduled by using the downlink control information. The UE determines an uplink subframe used for sending feedback information corresponding to the data channel. A first downlink subframe set associated with the uplink subframe includes a first subset and a second subset, the first subset includes at least two downlink subframes, and the first subset is a proper subset of the second subset. The UE determines a channel resource. The channel resource is a first uplink channel resource or a second uplink channel resource, the first uplink channel resource corresponds to the first subset, and the second uplink channel resource corresponds to the second subset. The UE sends the feedback information on the channel resource in the uplink subframe by using a channel format. The channel format is a p-codebook-size channel format, and the UE sends the feedback information on the first uplink channel resource by using the p-codebook-size channel format, or the channel format is a q-codebook-size channel format, and the UE sends the feedback information on the second uplink channel resource by using the q-codebook-size channel format; and p and q are natural numbers, and p>q. For specific descriptions of the p-codebook-size channel format and the q-codebook-size channel format, refer to descriptions of the foregoing embodiment. In descriptions of the following embodiment, the n-resource-element channel format may be directly replaced with the p-codebook-size channel format, the m-resource-element channel format may be directly replaced with the q-codebook-size channel format, and m may be greater than or equal to n. According to the foregoing embodiment, when more carriers are configured for UE, a maximum quantity of ACK/NACK bits exceeds a current bearer capability of a single-RB PUCCH format 3. In this embodiment of the present disclosure, a downlink subframe set corresponding to an uplink subframe is divided into at least two subsets, a first subset is a proper subset of a second subset, and corresponding uplink channel resources are configured for the two subsets. For a large subset, feedback information is sent by using a large resource format, and for a small subset, feedback information is sent by using a small resource format, thereby resolving a problem of how to send feedback information when more carriers are configured. In addition, when there is a small quantity of instantaneously scheduled carriers, feedback information may be sent by using the fallback small resource format. Therefore, in this embodiment of the present disclosure, resource overheads can be reduced when an ACK/NACK is fed back. 
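A compact sketch of steps 901 to 905 from the UE side is given below: the UE determines the uplink subframe associated with the scheduled data channel, chooses between the first and second uplink channel resource, and transmits the feedback with the matching channel format. The associated uplink subframe value, the simplified case rule, and the transmit callable are assumptions used only to keep the sketch self-contained.

ASSOCIATED_UL_SUBFRAME = 2   # assumption: the scheduled downlink subframes feed back in uplink subframe 2

def send_feedback(scheduled_dl, first_subset, second_subset, transmit):
    ul_subframe = ASSOCIATED_UL_SUBFRAME                              # step 903 (association assumed)
    assert scheduled_dl <= second_subset                              # scheduling stays inside the second subset in this sketch
    if scheduled_dl <= first_subset:                                  # step 904: case 1
        transmit(ul_subframe, "first uplink channel resource", "n-resource-element format")   # step 905
    else:                                                             # step 904: case 2 or case 3
        transmit(ul_subframe, "second uplink channel resource", "m-resource-element format")  # step 905

first_subset = {(c, sf) for c in range(1, 6) for sf in (4, 5, 6, 8)}
second_subset = {(c, sf) for c in range(1, 11) for sf in (4, 5, 6, 8)}
send_feedback({(2, 4), (7, 4)}, first_subset, second_subset, lambda *args: print(*args))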
Further, that the UE receives a data channel scheduled by using the downlink control information includes: the UE receives, on a downlink subframe included in a second downlink subframe set, the data channel scheduled by using the downlink control information. In an optional embodiment, in case that the second downlink subframe set is a subset of the first subset, the channel resource determined by the UE is the first uplink channel resource; orin case that the second downlink subframe set includes only a downlink subframe that is in the second subset but does not belong to the first subset, the channel resource determined by the UE is the second uplink channel resource; orin case that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset, the channel resource determined by the UE is the second uplink channel resource. In another optional embodiment, in case that the second downlink subframe set is a subset of the first subset, the channel resource determined by the UE is the first uplink channel resource; orin case that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset, the channel resource determined by the UE is the second uplink channel resource. In the solution in this embodiment, when the second downlink subframe set includes only a downlink subframe that is in the second subset but does not belong to the first subset, similar to a case in which the second downlink subframe set is a subset of the first subset, the second downlink subframe set may be considered as a subset. The feedback information is sent by using a channel resource of the n-resource-element channel format, that is, the feedback information is sent by using the fallback n-resource-element channel format. Optionally, the downlink control information includes resource indication information; and that the UE determines a channel resource includes: the UE determines, according to the resource indication information, a channel resource used for carrying the feedback information. Optionally, before the UE determines the channel resource, the method further includes:the UE obtains a second uplink channel resource set that is configured by the access network device, where the second uplink channel resource is an uplink channel resource in the second uplink channel resource set, a part of each uplink channel resource in multiple uplink channel resources included in the second uplink channel resource set constitutes a first uplink channel resource set, and the first uplink channel resource is an uplink channel resource in the first uplink channel resource set. Further, the channel resource determined by the UE is the first uplink channel resource. 
That the UE determines a channel resource includes:the UE determines, from the second uplink channel resource set according to the resource indication information, an uplink channel resource indicated by the resource indication information; and the UE determines the first uplink channel resource from the uplink channel resource indicated by the resource indication information; orthe UE determines the first uplink channel resource from the first uplink channel resource set according to the resource indication information; orthe UE determines the first uplink channel resource according to the resource indication information, where the first uplink channel resource is a part of an uplink channel resource that is in the second uplink channel resource set and that is indicated by the resource indication information. In case that the channel resource determined by the UE is the second uplink channel resource, the UE determines the second uplink channel resource from the second uplink channel resource set according to the resource indication information. That the UE determines a channel resource includes:the UE determines, according to identification information of the UE or preconfigured information, the first uplink channel resource in the uplink channel resource indicated by the resource indication information, where the uplink channel resource indicated by the resource indication information is an uplink channel resource in the second uplink channel resource set. Optionally, before the UE determines the channel resource, the method further includes:the UE obtains a first uplink channel resource set and a second uplink channel set that are preconfigured by the access network device, where the first uplink channel resource is an uplink channel resource in the first uplink channel resource set, and the second uplink channel resource is an uplink channel resource in the second uplink channel resource set. In an embodiment, a first state set of the resource indication information indicates an uplink channel resource in the first uplink channel resource set, a second state set of the resource indication information indicates an uplink channel resource in the second uplink channel resource set, and the first state set does not intersect the second state set. Further, the channel resource determined by the UE is the first uplink channel resource, and that the UE determines a channel resource includes: the UE determines the first uplink channel resource from the first uplink channel set according to a state in the first state set of the resource indication information; orthe channel resource determined by the UE is the second uplink channel resource, and that the UE determines a channel resource includes: the UE determines the second uplink channel resource for the feedback information from the second uplink channel set according to a state in the second state set of the resource indication information. In another embodiment, a state of the resource indication information indicates an uplink channel resource in the first uplink channel resource set and/or an uplink channel resource in the second uplink channel resource set. 
Further, the channel resource determined by the UE is the first uplink channel resource, and that the UE determines a channel resource includes:the UE determines, from the first uplink channel resource set according to the resource indication information, a third uplink channel resource indicated by the resource indication information, and determines, from the second uplink channel resource set according to the resource indication information, a fourth uplink channel resource indicated by the resource indication information; the UE determines that the second downlink subframe set is a subset of the first subset; and the UE determines that the third uplink channel resource is the first uplink channel resource; orthe UE determines that the second downlink subframe set is a subset of the first subset; and the UE determines, from the first uplink channel resource set according to the resource indication information, the first uplink channel resource indicated by the resource indication information; orthe channel resource determined by the UE is the second uplink channel resource, and that the UE determines a channel resource includes:the UE determines, from the first uplink channel resource set according to the resource indication information, a fifth uplink channel resource indicated by the resource indication information, and determines, from the second uplink channel resource set according to the resource indication information, a sixth uplink channel resource indicated by the resource indication information; the UE determines that the second downlink subframe set includes only a downlink subframe that is in the second subset but does not belong to the first subset; and the UE determines that the sixth uplink channel resource is the second uplink channel resource; orthe UE determines that the second downlink subframe set includes only a downlink subframe that is in the second subset but does not belong to the first subset; and the UE determines, from the second uplink channel resource set according to the state of the resource indication information, the second uplink channel resource indicated by the resource indication information; orthe UE determines, from the first uplink channel resource set according to the resource indication information, a fifth uplink channel resource indicated by the resource indication information, and determines, from the second uplink channel resource set according to the resource indication information, a sixth uplink channel resource indicated by the resource indication information; the UE determines that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset; and the UE determines that the sixth uplink channel resource is the second uplink channel resource; orthe UE determines that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset; and the UE determines, from the second uplink channel resource set according to the resource indication information, the second uplink channel resource indicated by the resource indication information. 
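The embodiment above, in which one state of the resource indication information addresses an uplink channel resource in each set, may be sketched as follows from the UE side: the UE reads a candidate from the first set (the third or fifth uplink channel resource) and a candidate from the second set (the fourth or sixth uplink channel resource), then keeps the one matching the relation between the scheduled downlink subframes and the subsets. Function names, state values, and resource labels are illustrative assumptions.

def determine_resource(state, first_set, second_set, scheduled, first_subset, second_subset):
    candidate_from_first = first_set[state]     # e.g. the third or fifth uplink channel resource
    candidate_from_second = second_set[state]   # e.g. the fourth or sixth uplink channel resource
    if scheduled <= first_subset:                     # second downlink subframe set is a subset of the first subset
        return candidate_from_first                   # used as the first uplink channel resource
    if scheduled <= second_subset:                    # otherwise within the second subset (case 2 or case 3)
        return candidate_from_second                  # used as the second uplink channel resource
    raise ValueError("scheduling outside both subsets is not covered by this sketch")

first_subset = {(c, sf) for c in range(1, 6) for sf in (4, 5, 6, 8)}
second_subset = {(c, sf) for c in range(1, 11) for sf in (4, 5, 6, 8)}
first_set = {0b00: "single-RB 1", 0b01: "single-RB 2", 0b10: "single-RB 3", 0b11: "single-RB 4"}
second_set = {0b00: "dual-RB 1", 0b01: "dual-RB 2", 0b10: "dual-RB 3", 0b11: "dual-RB 4"}
print(determine_resource(0b10, first_set, second_set, {(2, 4)}, first_subset, second_subset))          # -> single-RB 3
print(determine_resource(0b10, first_set, second_set, {(2, 4), (8, 5)}, first_subset, second_subset))  # -> dual-RB 3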
Optionally, before the UE determines the channel resource, the method further includes:the UE obtains a first uplink channel resource set and a second uplink channel set that are preconfigured by the access network device, where the first uplink channel resource is an uplink channel resource in the first uplink channel resource set, and the second uplink channel resource is an uplink channel resource in the second uplink channel resource set. The downlink control information includes resource indication information, states of the resource indication information include a first state set and a second state set, the first state set indicates an uplink channel resource in the first uplink channel resource set, the second state set indicates an uplink channel resource in the second uplink channel resource set, and the first state set does not intersect the second state set. That the UE determines a channel resource includes:in case that the second downlink subframe set is a subset of the first subset, the UE determines, from the second uplink channel resource set according to the resource indication information, the second uplink channel resource indicated by the resource indication information. In this case, the UE determines the channel resource according to the resource indication information. When the channel resource indicated by the resource indication information is the second uplink channel resource, the second downlink subframe set that is used for sending a PDCCH and that is detected by the UE is a subset of the first subset. In this case, it indicates that missed detection occurs in the UE, and the UE still sends the feedback information by using the second uplink channel resource indicated by the resource indication information. Optionally, some time-frequency resources of the first uplink channel resource corresponding to the n-resource-element channel format overlap some time-frequency resources of the second uplink channel resource corresponding to the m-resource-element channel format; or time-frequency resources of the first uplink channel resource corresponding to the n-resource-element channel format are some time-frequency resources of the second uplink channel resource corresponding to the m-resource-element channel format, where the n-resource-element channel format and the m-resource-element channel format whose time-frequency resources overlap use an orthogonal code. Further, before the UE determines the channel resource, the method further includes: the UE determines the first subset and the second subset according to a preconfiguration. The preconfiguration may be independently performed for different uplink subframes. It should be noted that the foregoing solution in this embodiment may be used as a separate embodiment, independently of steps901to905. FIG.10shows a feedback information sending method according to an embodiment. The method corresponds to the embodiment of the foregoing user equipment, and the foregoing user equipment can execute the method in this embodiment. Therefore, for same content, refer to descriptions of the foregoing embodiments. Details are not repeatedly described herein. In addition, this embodiment is a separate solution of the foregoing embodiment corresponding toFIG.9. For details, refer to descriptions of the foregoing embodiment. This embodiment includes the following steps: Step1001: UE receives downlink control information sent by an access network device. 
Step1002: The UE receives a data channel scheduled by using the downlink control information. Step1003: The UE determines an uplink subframe used for sending feedback information corresponding to the data channel, where a first downlink subframe set associated with the uplink subframe includes a first subset and a second subset, the first subset includes at least two downlink subframes, and the first subset is a proper subset of the second subset. Step1004: The UE determines a channel resource, where in case that the second downlink subframe set is a subset of the first subset, the channel resource determined by the UE is the first uplink channel resource; or in case that the second downlink subframe set includes only a downlink subframe that is in the second subset but does not belong to the first subset, the channel resource determined by the UE is the second uplink channel resource; or in case that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset, the channel resource determined by the UE is the second uplink channel resource. Step1005: The UE sends the feedback information on the channel resource in the uplink subframe by using a channel format, where the channel format is an n-resource-element channel format, and the UE sends the feedback information on the first uplink channel resource by using the n-resource-element channel format, or the channel format is an m-resource-element channel format, and the UE sends the feedback information on the second uplink channel resource by using the m-resource-element channel format; and m and n are natural numbers, and m>n. The present disclosure further provides the following embodiment: UE receives downlink control information sent by an access network device. The UE receives a data channel scheduled by using the downlink control information. The UE determines an uplink subframe used for sending feedback information corresponding to the data channel. A first downlink subframe set associated with the uplink subframe includes a first subset and a second subset, the first subset includes at least two downlink subframes, and the first subset is a proper subset of the second subset. The UE determines a channel resource, where in case that the second downlink subframe set is a subset of the first subset, the channel resource determined by the UE is the first uplink channel resource; or in case that the second downlink subframe set includes only a downlink subframe that is in the second subset but does not belong to the first subset, the channel resource determined by the UE is the second uplink channel resource; or in case that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset, the channel resource determined by the UE is the second uplink channel resource. The UE sends the feedback information on the channel resource in the uplink subframe by using a channel format. 
The channel format is a p-codebook-size channel format, and the UE sends the feedback information on the first uplink channel resource by using the p-codebook-size channel format, or the channel format is a q-codebook-size channel format, and the UE sends the feedback information on the second uplink channel resource by using the q-codebook-size channel format; and p and q are natural numbers, and p>q. For specific descriptions of the p-codebook-size channel format and the q-codebook-size channel format, refer to descriptions of the foregoing embodiment. In descriptions of the following embodiment, the n-resource-element channel format may be directly replaced with the p-codebook-size channel format, the m-resource-element channel format may be directly replaced with the q-codebook-size channel format, and m may be greater than or equal to n. According to the foregoing embodiment, when more carriers are configured for UE, a maximum quantity of ACK/NACK bits exceeds a current bearer capability of a single-RB PUCCH format 3. In this embodiment of the present disclosure, a downlink subframe set corresponding to an uplink subframe is divided into at least two subsets, a first subset is a proper subset of a second subset, and corresponding uplink channel resources are configured for the two subsets. For a large subset, feedback information is sent by using a large resource format, and for a small subset, feedback information is sent by using a small resource format, thereby resolving a problem of how to send feedback information when more carriers are configured. In addition, when there is a small quantity of instantaneously scheduled carriers, feedback information may be sent by using the fallback small resource format. Therefore, in this embodiment of the present disclosure, resource overheads can be reduced when an ACK/NACK is fed back. FIG.11shows a feedback information sending method according to an embodiment. The method corresponds to the embodiment of the foregoing user equipment, and the foregoing user equipment can execute the method in this embodiment. Therefore, for same content, refer to descriptions of the foregoing embodiments. Details are not repeatedly described herein. In addition, this embodiment is a separate solution of the foregoing embodiment corresponding toFIG.9. For details, refer to descriptions of the foregoing embodiment. This embodiment includes the following steps: Step1101: UE receives downlink control information sent by an access network device. Step1102: The UE receives a data channel scheduled by using the downlink control information. Step1103: The UE determines an uplink subframe used for sending feedback information corresponding to the data channel, where a first downlink subframe set associated with the uplink subframe includes a first subset and a second subset, the first subset includes at least two downlink subframes, and the first subset is a proper subset of the second subset. 
Step1104: The UE determines a channel resource, where in case that the second downlink subframe set is a subset of the first subset, the channel resource determined by the UE is the first uplink channel resource; or in case that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset, the channel resource determined by the UE is the second uplink channel resource. Step1105: The UE sends the feedback information on the channel resource in the uplink subframe by using a channel format, where the channel format is an n-resource-element channel format, and the UE sends the feedback information on the first uplink channel resource by using the n-resource-element channel format, or the channel format is an m-resource-element channel format, and the UE sends the feedback information on the second uplink channel resource by using the m-resource-element channel format; and m and n are natural numbers, and m>n. The present disclosure further provides the following embodiment: UE receives downlink control information sent by an access network device. The UE receives a data channel scheduled by using the downlink control information. The UE determines an uplink subframe used for sending feedback information corresponding to the data channel. A first downlink subframe set associated with the uplink subframe includes a first subset and a second subset, the first subset includes at least two downlink subframes, and the first subset is a proper subset of the second subset. The UE determines a channel resource. In case that the second downlink subframe set is a subset of the first subset, the channel resource determined by the UE is the first uplink channel resource; or in case that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset, the channel resource determined by the UE is the second uplink channel resource. The UE sends the feedback information on the channel resource in the uplink subframe by using a channel format. The channel format is a p-codebook-size channel format, and the UE sends the feedback information on the first uplink channel resource by using the p-codebook-size channel format, or the channel format is a q-codebook-size channel format, and the UE sends the feedback information on the second uplink channel resource by using the q-codebook-size channel format; and p and q are natural numbers, and p>q. For specific descriptions of the p-codebook-size channel format and the q-codebook-size channel format, refer to descriptions of the foregoing embodiment. In descriptions of the following embodiment, the n-resource-element channel format may be directly replaced with the p-codebook-size channel format, the m-resource-element channel format may be directly replaced with the q-codebook-size channel format, and m may be greater than or equal to n. 
In case that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset, the channel resource determined by the UE is the second uplink channel resource. In the solution in this embodiment, when the second downlink subframe set includes only a downlink subframe that is in the second subset but does not belong to the first subset, similar to a case in which the second downlink subframe set is a subset of the first subset, the second downlink subframe set may be considered as a subset. The feedback information is sent by using a channel resource of the n-resource-element channel format, that is, the feedback information is sent by using the fallback n-resource-element channel format. FIG.12shows a feedback information receiving method according to an embodiment of the present disclosure. The method corresponds to the embodiment of the foregoing access network device, and the foregoing access network device can execute the method in this embodiment. Therefore, for same content, refer to descriptions of the foregoing embodiments. Details are not repeatedly described herein. This embodiment includes the following steps: Step1201: An access network device sends downlink control information to user equipment UE. Step1202: The access network device sends a data channel scheduled by using the downlink control information to the UE. Step1203: The access network device determines an uplink subframe used for receiving feedback information corresponding to the data channel, where a first downlink subframe set associated with the uplink subframe includes a first subset and a second subset, the first subset includes at least two downlink subframes, and the first subset is a proper subset of the second subset. Step1204: The access network device determines a channel resource, where the channel resource is a first uplink channel resource or a second uplink channel resource, the first uplink channel resource corresponds to the first subset, and the second uplink channel resource corresponds to the second subset. Step1205: The access network device receives, on the channel resource in the uplink subframe, the feedback information sent by using a channel format, where the channel format is an n-resource-element channel format, and the first uplink channel resource carries feedback information of the n-resource-element channel format, or the channel format is an m-resource-element channel format, and the second uplink channel resource carries feedback information of the m-resource-element channel format; and m and n are natural numbers, and m>n. The present disclosure further provides the following embodiment: An access network device sends downlink control information to user equipment UE. The access network device sends a data channel scheduled by using the downlink control information to the UE. The access network device determines an uplink subframe used for receiving feedback information corresponding to the data channel. A first downlink subframe set associated with the uplink subframe includes a first subset and a second subset, the first subset includes at least two downlink subframes, and the first subset is a proper subset of the second subset. The access network device determines a channel resource. 
The channel resource is a first uplink channel resource or a second uplink channel resource, the first uplink channel resource corresponds to the first subset, and the second uplink channel resource corresponds to the second subset. The access network device receives, on the channel resource in the uplink subframe, the feedback information sent by using a channel format. The channel format is a p-codebook-size channel format, and the first uplink channel resource carries feedback information of the p-codebook-size channel format, or the channel format is a q-codebook-size channel format, and the second uplink channel resource carries feedback information of the q-codebook-size channel format; and p and q are natural numbers, and p>q. For specific descriptions of the p-codebook-size channel format and the q-codebook-size channel format, refer to descriptions of the foregoing embodiment. In descriptions of the following embodiment, the n-resource-element channel format may be directly replaced with the p-codebook-size channel format, the m-resource-element channel format may be directly replaced with the q-codebook-size channel format, and m may be greater than or equal to n. According to the foregoing embodiment, when more carriers are configured for UE, a maximum quantity of ACK/NACK bits exceeds a current bearer capability of a single-RB PUCCH format 3. In this embodiment of the present disclosure, a downlink subframe set corresponding to an uplink subframe is divided into at least two subsets, a first subset is a proper subset of a second subset, and corresponding uplink channel resources are configured for the two subsets. For a large subset, feedback information is sent by using a large resource format, and for a small subset, feedback information is sent by using a small resource format, thereby resolving a problem of how to send feedback information when more carriers are configured. In addition, when there is a small quantity of instantaneously scheduled carriers, feedback information may be sent by using the fallback small resource format. Therefore, in this embodiment of the present disclosure, resource overheads can be reduced when an ACK/NACK is fed back. Further, that the access network device sends a data channel scheduled by using the downlink control information to the UE includes: the access network device sends, in a downlink subframe included in a second downlink subframe set, the data channel scheduled by using the downlink control information. In an optional solution, in case that the second downlink subframe set is a subset of the first subset, the channel resource determined by the access network device is the first uplink channel resource; orin case that the second downlink subframe set includes only a downlink subframe that is in the second subset but does not belong to the first subset, the channel resource determined by the access network device is the second uplink channel resource; orin case that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset, the channel resource determined by the access network device is the second uplink channel resource. 
In another optional solution, in case that the second downlink subframe set is a subset of the first subset, the channel resource determined by the access network device is the first uplink channel resource; orin case that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset, the channel resource determined by the access network device is the second uplink channel resource. Further, the downlink control information includes resource indication information, and the resource indication information is used to indicate the first uplink channel resource or the second uplink channel resource used for carrying the feedback information. Optionally, before the access network device determines the channel resource, the method further includes:the access network device sends information about a second uplink channel resource set to the UE, where the second uplink channel resource is an uplink channel resource in the second uplink channel resource set, a part of each uplink channel resource in multiple uplink channel resources in the second uplink channel resource set constitutes a first uplink channel resource set, and the first uplink channel resource is an uplink channel resource in the first uplink channel resource set. Further, the channel resource determined by the access network device is the first uplink channel resource, and the resource indication information indicates an uplink channel resource that is in the second uplink channel resource set and that includes the first uplink channel resource; or the resource indication information indicates the first uplink channel resource in the first uplink channel resource set; orthe channel resource determined by the access network device is the second uplink channel resource, and the resource indication information indicates the second uplink channel resource in the second uplink channel resource set. Further, the resource indication information indicates the uplink channel resource that is in the second uplink channel resource set and that includes the first uplink channel resource. That the access network device determines a channel resource includes:the access network device determines, according to identification information of the UE or preconfigured information, the first uplink channel resource in the uplink channel resource indicated by the resource indication information. Optionally, before the access network device determines the channel resource, the method further includes:the access network device sends information about a first uplink channel resource set and information about a second uplink channel set to the UE, where the first uplink channel resource is an uplink channel resource in the first uplink channel resource set, and the second uplink channel resource is an uplink channel resource in the second uplink channel resource set. Further, a first state set of the resource indication information indicates an uplink channel resource in the first uplink channel resource set, a second state set of the resource indication information indicates an uplink channel resource in the second uplink channel resource set, and the first state set does not intersect the second state set. 
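As an aside, the disjoint state sets described above can be illustrated with a small sketch. The sketch below is not part of the disclosed method; the two-bit field width, the set sizes, and the resource names are hypothetical values chosen only to show how each state of the resource indication information could map either to the first uplink channel resource set or to the second uplink channel resource set.

```python
# Hypothetical illustration: partitioning the states of a resource indication
# field between two uplink channel resource sets, as described above.
# The field width (2 bits) and the resource identifiers are assumed values.

FIRST_RESOURCE_SET = ["PUCCH-small-0", "PUCCH-small-1"]    # corresponds to the first subset
SECOND_RESOURCE_SET = ["PUCCH-large-0", "PUCCH-large-1"]   # corresponds to the second subset

# First state set {0, 1} -> first resource set; second state set {2, 3} -> second
# resource set.  The two state sets are disjoint, as required above.
STATE_TABLE = {
    0: ("first", FIRST_RESOURCE_SET[0]),
    1: ("first", FIRST_RESOURCE_SET[1]),
    2: ("second", SECOND_RESOURCE_SET[0]),
    3: ("second", SECOND_RESOURCE_SET[1]),
}

def resolve_resource(resource_indication_state: int):
    """Return (resource_set_name, resource) for a resource indication state."""
    try:
        return STATE_TABLE[resource_indication_state]
    except KeyError:
        raise ValueError(f"unsupported state: {resource_indication_state}")

if __name__ == "__main__":
    for state in range(4):
        print(state, resolve_resource(state))
```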
Optionally, a state of the resource indication information indicates an uplink channel resource in the first uplink channel resource set and/or an uplink channel resource in the second uplink channel resource set. Further, that the access network device receives, on the channel resource in the uplink subframe, the feedback information sent by using a channel format includes: the access network device receives, on the second uplink channel resource indicated by the resource indication information in the uplink subframe, the feedback information sent by using the m-resource-element channel format. Further, after the access network device receives, on the channel resource in the uplink subframe, the feedback information sent by using the channel format, the method further includes:the access network device determines a first uplink channel resource in the second uplink channel resource according to the identification information of the UE or the preconfigured information, and receives, on the first uplink channel resource, the feedback information sent by using the n-resource-element channel format; orthe access network device determines a first uplink channel resource in the first uplink channel resource according to the resource indication information, and receives, on the first uplink channel resource, the feedback information sent by using the n-resource-element channel format. Further, some time-frequency resources of the first uplink channel resource corresponding to the n-resource-element channel format overlap some time-frequency resources of the second uplink channel resource corresponding to the m-resource-element channel format; or time-frequency resources of the first uplink channel resource corresponding to the n-resource-element channel format are some time-frequency resources of the second uplink channel resource corresponding to the m-resource-element channel format. The n-resource-element channel format and the m-resource-element channel format whose time-frequency resources overlap use an orthogonal code. Further, before the access network device determines the channel resource, the method further includes: the access network device determines the first subset and the second subset according to a preconfiguration. The preconfiguration may be independently performed for different uplink subframes. It should be noted that the foregoing solution in this embodiment may be used as a separate embodiment, independently of steps1201to1205. FIG.13shows a feedback information receiving method according to an embodiment. The method corresponds to the embodiment of the foregoing access network device, and the foregoing access network device can execute the method in this embodiment. Therefore, for same content, refer to descriptions of the foregoing embodiments. Details are not repeatedly described herein. In addition, this embodiment is a separate solution of the foregoing embodiment corresponding toFIG.12. For details, refer to descriptions of the foregoing embodiment. This embodiment includes the following steps: Step1301: An access network device sends downlink control information to UE. Step1302: The access network device sends, in a downlink subframe included in a second downlink subframe set, the data channel scheduled by using the downlink control information. 
Step1303: The access network device determines an uplink subframe used for receiving feedback information corresponding to the data channel, where a first downlink subframe set associated with the uplink subframe includes a first subset and a second subset, the first subset includes at least two downlink subframes, and the first subset is a proper subset of the second subset. Step1304: The access network device determines a channel resource, where in case that the second downlink subframe set is a subset of the first subset, the channel resource determined by the access network device is the first uplink channel resource; or in case that the second downlink subframe set includes only a downlink subframe that is in the second subset but does not belong to the first subset, the channel resource determined by the access network device is the second uplink channel resource; or in case that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset, the channel resource determined by the access network device is the second uplink channel resource. Step1305: The access network device receives, on the channel resource in the uplink subframe, the feedback information sent by using a channel format, where the channel format is an n-resource-element channel format, and the first uplink channel resource carries feedback information of the n-resource-element channel format, or the channel format is an m-resource-element channel format, and the second uplink channel resource carries feedback information of the m-resource-element channel format; and m and n are natural numbers, and m>n. The present disclosure further provides the following embodiment: An access network device sends downlink control information to UE. The access network device sends, in a downlink subframe included in a second downlink subframe set, the data channel scheduled by using the downlink control information. The access network device determines an uplink subframe used for receiving feedback information corresponding to the data channel. A first downlink subframe set associated with the uplink subframe includes a first subset and a second subset, the first subset includes at least two downlink subframes, and the first subset is a proper subset of the second subset. The access network device determines a channel resource. In case that the second downlink subframe set is a subset of the first subset, the channel resource determined by the access network device is the first uplink channel resource; or in case that the second downlink subframe set includes only a downlink subframe that is in the second subset but does not belong to the first subset, the channel resource determined by the access network device is the second uplink channel resource; or in case that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset, the channel resource determined by the access network device is the second uplink channel resource. The access network device receives, on the channel resource in the uplink subframe, the feedback information sent by using a channel format. 
The channel format is a p-codebook-size channel format, and the first uplink channel resource carries feedback information of the p-codebook-size channel format, or the channel format is a q-codebook-size channel format, and the second uplink channel resource carries feedback information of the q-codebook-size channel format; and p and q are natural numbers, and p>q. For specific descriptions of the p-codebook-size channel format and the q-codebook-size channel format, refer to descriptions of the foregoing embodiment. In descriptions of the following embodiment, the n-resource-element channel format may be directly replaced with the p-codebook-size channel format, the m-resource-element channel format may be directly replaced with the q-codebook-size channel format, and m may be greater than or equal to n. According to the foregoing embodiment, when more carriers are configured for UE, a maximum quantity of ACK/NACK bits exceeds a current bearer capability of a single-RB PUCCH format 3. In this embodiment of the present disclosure, a downlink subframe set corresponding to an uplink subframe is divided into at least two subsets, a first subset is a proper subset of a second subset, and corresponding uplink channel resources are configured for the two subsets. For a large subset, feedback information is sent by using a large resource format, and for a small subset, feedback information is sent by using a small resource format, thereby resolving a problem of how to send feedback information when more carriers are configured. In addition, when there is a small quantity of instantaneously scheduled carriers, feedback information may be sent by using the fallback small resource format. Therefore, in this embodiment of the present disclosure, resource overheads can be reduced when an ACK/NACK is fed back. Further, that the access network device sends a data channel scheduled by using the downlink control information to the UE includes: the access network device sends, in a downlink subframe included in a second downlink subframe set, the data channel scheduled by using the downlink control information. FIG.14shows a feedback information receiving method according to an embodiment. The method corresponds to the embodiment of the foregoing access network device, and the foregoing access network device can execute the method in this embodiment. Therefore, for same content, refer to descriptions of the foregoing embodiments. Details are not repeatedly described herein. In addition, this embodiment is a separate solution of the foregoing embodiment corresponding toFIG.12. For details, refer to descriptions of the foregoing embodiment. This embodiment includes the following steps: Step1401: An access network device sends downlink control information to UE. Step1402: The access network device sends, in a downlink subframe included in a second downlink subframe set, the data channel scheduled by using the downlink control information. Step1403: The access network device determines an uplink subframe used for receiving feedback information corresponding to the data channel, where a first downlink subframe set associated with the uplink subframe includes a first subset and a second subset, the first subset includes at least two downlink subframes, and the first subset is a proper subset of the second subset. 
Step1404: The access network device determines a channel resource, where in case that the second downlink subframe set is a subset of the first subset, the channel resource determined by the access network device is the first uplink channel resource; or in case that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset, the channel resource determined by the access network device is the second uplink channel resource. Step1405: The access network device receives, on the channel resource in the uplink subframe, the feedback information sent by using a channel format, where the channel format is an n-resource-element channel format, and the first uplink channel resource carries feedback information of the n-resource-element channel format, or the channel format is an m-resource-element channel format, and the second uplink channel resource carries feedback information of the m-resource-element channel format; and m and n are natural numbers, and m>n. The present disclosure further provides the following embodiment: An access network device sends downlink control information to UE. The access network device sends, in a downlink subframe included in a second downlink subframe set, the data channel scheduled by using the downlink control information. The access network device determines an uplink subframe used for receiving feedback information corresponding to the data channel. A first downlink subframe set associated with the uplink subframe includes a first subset and a second subset, the first subset includes at least two downlink subframes, and the first subset is a proper subset of the second subset. The access network device determines a channel resource. In case that the second downlink subframe set is a subset of the first subset, the channel resource determined by the access network device is the first uplink channel resource; or in case that the second downlink subframe set includes a downlink subframe in the first subset and a downlink subframe that is in the second subset but does not belong to the first subset, but does not include a downlink subframe that is not included in the first subset or the second subset, the channel resource determined by the access network device is the second uplink channel resource. The access network device receives, on the channel resource in the uplink subframe, the feedback information sent by using a channel format. The channel format is a p-codebook-size channel format, and the first uplink channel resource carries feedback information of the p-codebook-size channel format, or the channel format is a q-codebook-size channel format, and the second uplink channel resource carries feedback information of the q-codebook-size channel format; and p and q are natural numbers, and p>q. For specific descriptions of the p-codebook-size channel format and the q-codebook-size channel format, refer to descriptions of the foregoing embodiment. In descriptions of the following embodiment, the n-resource-element channel format may be directly replaced with the p-codebook-size channel format, the m-resource-element channel format may be directly replaced with the q-codebook-size channel format, and m may be greater than or equal to n. 
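Before turning to the summary of effects, the subset-based selection rule that recurs in the embodiments corresponding to FIG. 12 to FIG. 14 can be restated compactly in code. The following sketch is only an illustration of the rule as described above (the three-case variant is shown, downlink subframes are modeled as plain integers, and the function name is hypothetical); it is not an implementation of the disclosure.

```python
# Illustrative sketch of the subset-based channel resource selection rule
# described above.  Downlink subframes are modeled as integers; the return
# value names which uplink channel resource (and channel format) applies.

def select_channel_resource(first_subset: set, second_subset: set, scheduled: set) -> str:
    """Apply the three-case rule: 'first' -> first uplink channel resource
    (n-resource-element / fallback format), 'second' -> second uplink channel
    resource (m-resource-element format)."""
    if not first_subset <= second_subset or first_subset == second_subset:
        raise ValueError("the first subset must be a proper subset of the second subset")
    if not scheduled <= second_subset:
        raise ValueError("scheduled subframes outside the first downlink subframe set")

    if scheduled <= first_subset:
        # The second downlink subframe set is a subset of the first subset.
        return "first"
    if scheduled.isdisjoint(first_subset):
        # Only subframes that are in the second subset but not in the first subset.
        return "second"
    # Mixed case: subframes from the first subset and from (second \ first).
    return "second"

if __name__ == "__main__":
    first = {0, 1}
    second = {0, 1, 2, 3}
    print(select_channel_resource(first, second, {0}))       # -> first
    print(select_channel_resource(first, second, {2, 3}))    # -> second
    print(select_channel_resource(first, second, {0, 2}))    # -> second
```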
According to the foregoing embodiment, when more carriers are configured for UE, a maximum quantity of ACK/NACK bits exceeds a current bearer capability of a single-RB PUCCH format 3. In this embodiment of the present disclosure, a downlink subframe set corresponding to an uplink subframe is divided into at least two subsets, a first subset is a proper subset of a second subset, and corresponding uplink channel resources are configured for the two subsets. For a large subset, feedback information is sent by using a large resource format, and for a small subset, feedback information is sent by using a small resource format, thereby resolving a problem of how to send feedback information when more carriers are configured. In addition, when there is a small quantity of instantaneously scheduled carriers, feedback information may be sent by using the fallback small resource format. Therefore, in this embodiment of the present disclosure, resource overheads can be reduced when an ACK/NACK is fed back. It should be noted that the processing module in the foregoing all embodiments of the present disclosure may be implemented by at least one processor. The processor herein may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), another programmable logic device, a discrete gate, a transistor logic device, a discrete hardware component, or the like. The sending module may be implemented by a transmitter or a transceiver. The receiving module may be implemented by a receiver or a transceiver. In addition, the access network device and the user equipment in the foregoing embodiments of the present disclosure may further include a component such as a memory. The memory herein may include a read-only memory and a random access memory, and provides an instruction and data for a processor. A part of the memory may further include a nonvolatile random access memory. For example, the memory may further store information about a device type. The processor invokes instruction code in the memory, so as to control other modules of the network device and the user equipment in the embodiments of the present disclosure to execute the foregoing operations. It should be understood that “one embodiment” or “an embodiment” mentioned throughout this specification means that specific features, structures, or characteristics related to the embodiment are included in at least one embodiment of the present disclosure. Therefore, “in one embodiment” or “in an embodiment” throughout this specification does not necessarily refer to a same embodiment. In addition, these specific features, structures, or characteristics may be combined in one or more embodiments in any proper manner. It should be understood that sequence numbers of the foregoing processes do not mean execution sequences in various embodiments of the present disclosure. The execution sequences of the processes should be determined according to functions and internal logic of the processes, and should not be construed as any limitation on the implementation processes of the embodiments of the present disclosure. In addition, the terms “system” and “network” may be used interchangeably in this specification. The term “and/or” in this specification describes only an association relationship for describing associated objects and represents that three relationships may exist. 
For example, A and/or B may represent the following three cases: Only A exists, both A and B exist, and only B exists. In addition, the character “/” in this specification generally indicates an “or” relationship between the associated objects. It should be understood that in the embodiments of this application, “B corresponding to A” indicates that B is associated with A, and B may be determined according to A. However, it should further be understood that determining B according to A does not mean that B is determined according to A only; that is, B may also be determined according to A and/or other information. A person of ordinary skill in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware, computer software, or a combination thereof. To clearly describe the interchangeability between the hardware and the software, the foregoing has generally described compositions and steps of each example according to functions. Whether the functions are performed by hardware or software depends on particular applications and design constraint conditions of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of the present disclosure. In several embodiments provided in this application, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented by using some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. When the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, the integrated unit may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present disclosure essentially, or the part contributing to the prior art, or some of the technical solutions may be implemented in a form of a software product. The software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in the embodiments of the present disclosure. The foregoing storage medium includes: any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc. The foregoing descriptions are merely specific implementation manners of the present disclosure, but are not intended to limit the protection scope of the present disclosure. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present disclosure shall fall within the protection scope of the present disclosure. 
Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
180,512
11863472
BEST MODE FOR CARRYING OUT THE INVENTION Terms used in the specification adopt general terms which are currently widely used as possible by considering functions in the present disclosure, but the terms may be changed depending on an intention of those skilled in the art, customs, and emergence of new technology. Further, in a specific case, there is a term arbitrarily selected by an applicant and in this case, a meaning thereof will be described in a corresponding description part of the present disclosure. Accordingly, it intends to be revealed that a term used in the specification should be analyzed based on not just a name of the term but a substantial meaning of the term and contents throughout the specification. Throughout this specification and the claims that follow, when it is described that an element is “connected” to another element, the element may be “directly connected” to the other element or “electrically connected” to the other element through a third element. Further, unless explicitly described to the contrary, the word “comprise” will be understood to imply the inclusion of stated elements but not the exclusion of any other elements unless otherwise stated. Moreover, limitations such as “more than or equal to” or “less than or equal to” based on a specific threshold may be appropriately substituted with “more than” or “less than”, respectively, in some exemplary embodiments. The following technology may be used in various wireless access systems, such as code division multiple access (CDMA), frequency division multiple access (FDMA), time division multiple access (TDMA), orthogonal frequency division multiple access (OFDMA), single carrier-FDMA (SC-FDMA), and the like. The CDMA may be implemented by a wireless technology such as universal terrestrial radio access (UTRA) or CDMA2000. The TDMA may be implemented by a wireless technology such as global system for mobile communications (GSM)/general packet radio service (GPRS)/enhanced data rates for GSM evolution (EDGE). The OFDMA may be implemented by a wireless technology such as IEEE 802.11(Wi-Fi), IEEE 802.16(WiMAX), IEEE 802-20, evolved UTRA (E-UTRA), and the like. The UTRA is a part of a universal mobile telecommunication system (UMTS). 3rd generation partnership project (3GPP) long term evolution (LTE) is a part of an evolved UMTS (E-UMTS) using evolved-UMTS terrestrial radio access (E-UTRA) and LTE-advanced (A) is an evolved version of the 3GPP LTE. 3GPP new radio (NR) is a system designed separately from LTE/LTE-A, and is a system for supporting enhanced mobile broadband (eMBB), ultra-reliable and low latency communication (URLLC), and massive machine type communication (mMTC) services, which are requirements of IMT-2020. For the clear description, 3GPP NR is mainly described, but the technical idea of the present disclosure is not limited thereto. Unless otherwise specified in this specification, a base station may refer to a next generation node B (gNB) as defined in 3GPP NR. Furthermore, unless otherwise specified, a terminal may refer to a user equipment (UE). Hereinafter, in order to facilitate understanding of the description, each content is separately divided into embodiments and described, but each of the embodiments may be used in combination with each other. In the present disclosure, the configuration of the UE may indicate configuration by the base station. 
Specifically, the base station may transmit a channel or signal to the UE to configure an operation of the UE or a parameter value used in a wireless communication system.

FIG. 1 illustrates an example of a wireless frame structure used in a wireless communication system. Referring to FIG. 1, the wireless frame (or radio frame) used in the 3GPP NR system may have a length of 10 ms ((Δf_max·N_f/100)·T_c). In addition, the wireless frame includes 10 subframes (SFs) having equal sizes. Herein, Δf_max = 480*10^3 Hz, N_f = 4096, T_c = 1/(Δf_ref·N_f,ref), Δf_ref = 15*10^3 Hz, and N_f,ref = 2048. Numbers from 0 to 9 may be respectively allocated to the 10 subframes within one wireless frame. Each subframe has a length of 1 ms and may include one or more slots according to a subcarrier spacing. More specifically, in the 3GPP NR system, the subcarrier spacing that may be used is 15*2^μ kHz, where the subcarrier spacing configuration μ can have a value of 0, 1, 2, 3, or 4. That is, 15 kHz, 30 kHz, 60 kHz, 120 kHz, and 240 kHz may be used for the subcarrier spacing. One subframe having a length of 1 ms may include 2^μ slots. In this case, the length of each slot is 2^−μ ms. Numbers from 0 to 2^μ−1 may be respectively allocated to the 2^μ slots within one subframe. In addition, numbers from 0 to 10*2^μ−1 may be respectively allocated to the slots within one wireless frame. The time resource may be distinguished by at least one of a wireless frame number (also referred to as a wireless frame index), a subframe number (also referred to as a subframe index), and a slot number (or a slot index).

FIG. 2 illustrates an example of a downlink (DL)/uplink (UL) slot structure in a wireless communication system. In particular, FIG. 2 shows the structure of the resource grid of the 3GPP NR system. There is one resource grid per antenna port. Referring to FIG. 2, a slot includes a plurality of orthogonal frequency division multiplexing (OFDM) symbols in a time domain and includes a plurality of resource blocks (RBs) in a frequency domain. An OFDM symbol also means one symbol section. Unless otherwise specified, OFDM symbols may be referred to simply as symbols. One RB includes 12 consecutive subcarriers in the frequency domain. Referring to FIG. 2, a signal transmitted from each slot may be represented by a resource grid including N^{size,μ}_{grid,x}·N^{RB}_{sc} subcarriers and N^{slot}_{symb} OFDM symbols. Here, x = DL when the signal is a DL signal, and x = UL when the signal is a UL signal. N^{size,μ}_{grid,x} represents the number of resource blocks (RBs) according to the subcarrier spacing configuration μ (x is DL or UL), and N^{slot}_{symb} represents the number of OFDM symbols in a slot. N^{RB}_{sc} is the number of subcarriers constituting one RB, and N^{RB}_{sc} = 12. An OFDM symbol may be referred to as a cyclic prefix OFDM (CP-OFDM) symbol or a discrete Fourier transform spread OFDM (DFT-s-OFDM) symbol according to a multiple access scheme. The number of OFDM symbols included in one slot may vary according to the length of a cyclic prefix (CP). For example, in the case of a normal CP, one slot includes 14 OFDM symbols, but in the case of an extended CP, one slot may include 12 OFDM symbols. In a specific embodiment, the extended CP can only be used at the 60 kHz subcarrier spacing. In FIG. 2, for convenience of description, one slot is configured with 14 OFDM symbols by way of example, but embodiments of the present disclosure may be applied in a similar manner to a slot having a different number of OFDM symbols. Referring to FIG. 2, each OFDM symbol includes N^{size,μ}_{grid,x}·N^{RB}_{sc} subcarriers in the frequency domain.
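To make the numerology arithmetic above concrete, the short sketch below computes, for each subcarrier spacing configuration μ = 0 to 4, the subcarrier spacing, the number of slots per subframe and per radio frame, and the slot length. It merely restates the relationships given above; the function name is illustrative only.

```python
# Numerology relationships described above: subcarrier spacing 15 * 2^mu kHz,
# 2^mu slots per 1 ms subframe (slot length 2^-mu ms), 10 * 2^mu slots per 10 ms frame.

def numerology(mu: int) -> dict:
    if mu not in (0, 1, 2, 3, 4):
        raise ValueError("subcarrier spacing configuration mu must be 0..4")
    return {
        "mu": mu,
        "subcarrier_spacing_kHz": 15 * 2 ** mu,
        "slots_per_subframe": 2 ** mu,
        "slots_per_frame": 10 * 2 ** mu,
        "slot_length_ms": 2 ** -mu,
    }

if __name__ == "__main__":
    for mu in range(5):
        print(numerology(mu))
```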
The type of subcarrier may be divided into a data subcarrier for data transmission, a reference signal subcarrier for transmission of a reference signal, and a guard band. The carrier frequency is also referred to as the center frequency (fc). One RB may be defined by N^{RB}_{sc} (e.g., 12) consecutive subcarriers in the frequency domain. For reference, a resource configured with one OFDM symbol and one subcarrier may be referred to as a resource element (RE) or a tone. Therefore, one RB can be configured with N^{slot}_{symb}·N^{RB}_{sc} resource elements. Each resource element in the resource grid can be uniquely defined by a pair of indexes (k, l) in one slot. k may be an index assigned from 0 to N^{size,μ}_{grid,x}·N^{RB}_{sc}−1 in the frequency domain, and l may be an index assigned from 0 to N^{slot}_{symb}−1 in the time domain.

In order for the UE to receive a signal from the base station or to transmit a signal to the base station, the time/frequency of the UE may be synchronized with the time/frequency of the base station. This is because when the base station and the UE are synchronized, the UE can determine the time and frequency parameters necessary for demodulating the DL signal and transmitting the UL signal at the correct time.

Each symbol of a radio frame used in a time division duplex (TDD) or an unpaired spectrum may be configured as any one of a DL symbol, a UL symbol, and a flexible symbol. A radio frame used as a DL carrier in a frequency division duplex (FDD) or a paired spectrum may be configured with DL symbols or flexible symbols, and a radio frame used as a UL carrier may be configured with UL symbols or flexible symbols. In a DL symbol, DL transmission is possible, but UL transmission is impossible. In a UL symbol, UL transmission is possible, but DL transmission is impossible. A flexible symbol may be determined to be used as a DL symbol or a UL symbol according to a signal.

Information on the type of each symbol, i.e., information representing any one of DL symbols, UL symbols, and flexible symbols, may be configured with a cell-specific or common radio resource control (RRC) signal. In addition, information on the type of each symbol may additionally be configured with a UE-specific or dedicated RRC signal. The base station informs, by using cell-specific RRC signals, i) the period of the cell-specific slot configuration, ii) the number of slots with only DL symbols from the beginning of the period of the cell-specific slot configuration, iii) the number of DL symbols from the first symbol of the slot immediately following the slot(s) with only DL symbols, iv) the number of slots with only UL symbols from the end of the period of the cell-specific slot configuration, and v) the number of UL symbols from the last symbol of the slot immediately before the slot(s) with only UL symbols. Here, symbols not configured as either a UL symbol or a DL symbol are flexible symbols.

When the information on the symbol type is configured with the UE-specific RRC signal, the base station may signal, by using the UE-specific RRC signal, whether a flexible symbol configured with the cell-specific RRC signal is a DL symbol or a UL symbol. In this case, the UE-specific RRC signal cannot change a DL symbol or a UL symbol configured with the cell-specific RRC signal into another symbol type. The UE-specific RRC signal may signal, for each slot, the number of DL symbols among the N^{slot}_{symb} symbols of the corresponding slot and the number of UL symbols among the N^{slot}_{symb} symbols of the corresponding slot.
In this case, the DL symbols of the slot may be continuously configured from the first symbol to the i-th symbol of the slot. In addition, the UL symbols of the slot may be continuously configured from the j-th symbol to the last symbol of the slot (where i<j). In the slot, symbols not configured as either a UL symbol or a DL symbol are flexible symbols. The type of symbol configured with the above RRC signals may be referred to as a semi-static DL/UL configuration. In the semi-static DL/UL configuration previously configured with RRC signals, a flexible symbol may be indicated as a DL symbol, a UL symbol, or a flexible symbol through dynamic slot format information (SFI) transmitted on a physical DL control channel (PDCCH). In this case, the DL symbol or UL symbol configured with the RRC signal is not changed to another symbol type. Table 1 exemplifies the dynamic SFI that the base station can indicate to the UE.

TABLE 1
Format index | Symbol number in a slot (symbols 0 to 13)
0 | DDDDDDDDDDDDDD
1 | UUUUUUUUUUUUUU
2 | XXXXXXXXXXXXXX
3 | DDDDDDDDDDDDDX
4 | DDDDDDDDDDDDXX
5 | DDDDDDDDDDDXXX
6 | DDDDDDDDDDXXXX
7 | DDDDDDDDDXXXXX
8 | XXXXXXXXXXXXXU
9 | XXXXXXXXXXXXUU
10 | XUUUUUUUUUUUUU
11 | XXUUUUUUUUUUUU
12 | XXXUUUUUUUUUUU
13 | XXXXUUUUUUUUUU
14 | XXXXXUUUUUUUUU
15 | XXXXXXUUUUUUUU
16 | DXXXXXXXXXXXXX
17 | DDXXXXXXXXXXXX
18 | DDDXXXXXXXXXXX
19 | DXXXXXXXXXXXXU
20 | DDXXXXXXXXXXXU
21 | DDDXXXXXXXXXXU
22 | DXXXXXXXXXXXUU
23 | DDXXXXXXXXXXUU
24 | DDDXXXXXXXXXUU
25 | DXXXXXXXXXXUUU
26 | DDXXXXXXXXXUUU
27 | DDDXXXXXXXXUUU
28 | DDDDDDDDDDDDXU
29 | DDDDDDDDDDDXXU
30 | DDDDDDDDDDXXXU
31 | DDDDDDDDDDDXUU
32 | DDDDDDDDDDXXUU
33 | DDDDDDDDDXXXUU
34 | DXUUUUUUUUUUUU
35 | DDXUUUUUUUUUUU
36 | DDDXUUUUUUUUUU
37 | DXXUUUUUUUUUUU
38 | DDXXUUUUUUUUUU
39 | DDDXXUUUUUUUUU
40 | DXXXUUUUUUUUUU
41 | DDXXXUUUUUUUUU
42 | DDDXXXUUUUUUUU
43 | DDDDDDDDDXXXXU
44 | DDDDDDDXXXXXXU
45 | DDDDDDXXUUUUUU
46 | DDDDDXUDDDDDXU
47 | DDXUUUUDDXUUUU
48 | DXUUUUUDXUUUUU
49 | DDDDXXUDDDDXXU
50 | DDXXUUUDDXXUUU
51 | DXXUUUUDXXUUUU
52 | DXXXXXUDXXXXXU
53 | DDXXXXUDDXXXXU
54 | XXXXXXXDDDDDDD
55 | DDXXXUUUDDDDDD
56~256 | Reserved

In Table 1, D denotes a DL symbol, U denotes a UL symbol, and X denotes a flexible symbol. As shown in Table 1, up to two DL/UL switching points are allowed in one slot.

FIG. 3 is a diagram for explaining a physical channel used in a 3GPP system (e.g., NR) and a typical signal transmission method using the physical channel. If the power of the UE is turned on or the UE camps on a new cell, the UE performs an initial cell search (S101). Specifically, the UE may synchronize with the BS in the initial cell search. For this, the UE may receive a primary synchronization signal (PSS) and a secondary synchronization signal (SSS) from the base station to synchronize with the base station, and obtain information such as a cell ID. Thereafter, the UE can receive the physical broadcast channel from the base station and obtain the broadcast information in the cell. Upon completion of the initial cell search, the UE receives a physical downlink shared channel (PDSCH) according to the physical downlink control channel (PDCCH) and information in the PDCCH, so that the UE can obtain more specific system information than the system information obtained through the initial cell search (S102). Here, the system information received by the UE is cell-common system information for the UE to properly operate at the physical layer in Radio Resource Control (RRC), and is referred to as remaining minimum system information (RMSI) or system information block (SIB) 1.
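Referring back to Table 1, the interaction between the semi-static DL/UL configuration and the dynamic SFI described above can be illustrated with the following sketch. Only a handful of the slot formats are reproduced, the variable names are hypothetical, and the resolver is an illustration of the stated rule (an SFI entry may reassign only flexible symbols), not an implementation of the 3GPP procedure.

```python
# Illustrative resolver combining a semi-static DL/UL configuration with a
# dynamic SFI entry (excerpt of Table 1).  'D' = downlink, 'U' = uplink,
# 'X' = flexible.  A dynamic SFI may reassign only the flexible symbols;
# semi-statically configured D/U symbols must not change.

SFI_TABLE = {            # small excerpt of Table 1 above
    0: "DDDDDDDDDDDDDD",
    1: "UUUUUUUUUUUUUU",
    2: "XXXXXXXXXXXXXX",
    28: "DDDDDDDDDDDDXU",
    46: "DDDDDXUDDDDDXU",
}

def resolve_slot(semi_static: str, sfi_index: int) -> str:
    dynamic = SFI_TABLE[sfi_index]
    if len(semi_static) != 14 or len(dynamic) != 14:
        raise ValueError("a slot is assumed to have 14 symbols here")
    resolved = []
    for rrc_symbol, sfi_symbol in zip(semi_static, dynamic):
        if rrc_symbol in ("D", "U"):
            if sfi_symbol not in (rrc_symbol, "X"):
                raise ValueError("SFI may not override a semi-static D/U symbol")
            resolved.append(rrc_symbol)
        else:                       # semi-statically flexible symbol
            resolved.append(sfi_symbol)
    return "".join(resolved)

if __name__ == "__main__":
    semi_static = "DDXXXXXXXXXXXU"   # example semi-static configuration
    print(resolve_slot(semi_static, 28))   # flexible symbols follow SFI format 28
```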
When the UE initially accesses the base station or does not have radio resources for signal transmission (when the UE is in RRC_IDLE mode), the UE may perform a random access procedure on the base station (operations S103to S106). First, the UE can transmit a preamble through a physical random access channel (PRACH) (S103) and receive a response message for the preamble from the base station through the PDCCH and the corresponding PDSCH (S104). When a valid random access response message is received by the UE, the UE transmits data including the identifier of the UE and the like to the base station through a physical uplink shared channel (PUSCH) indicated by the UL grant transmitted through the PDCCH from the base station (S105). Next, the UE waits for reception of the PDCCH as an indication of the base station for collision resolution. If the UE successfully receives the PDCCH through the identifier of the UE (S106), the random access process is terminated. During the random access process, the UE may obtain UE-specific system information necessary for the UE to properly operate at the physical layer in the RRC layer. When the UE obtains UE-specific system information from the RRC layer, the UE enters the RRC_CONNECTED mode. The RRC layer is used for message generation and management for control between a UE and a radio access network (RAN). More specifically, in the RRC layer, the base station and the UE may perform broadcasting of cell system information, delivery management of paging messages, mobility management and handover, measurement report and control thereof, UE capability management, and storage management including existing management necessary for all UEs in the cell. In general, since the update of the signal (hereinafter, referred to as RRC signal) transmitted from the RRC layer is longer than the transmission/reception period (i.e., transmission time interval, TTI) in the physical layer, the RRC signal may be maintained unchanged for a long period. After the above-described procedure, the UE receives PDCCH/PDSCH (S107) and transmits a physical uplink shared channel (PUSCH)/physical uplink control channel (PUCCH) (S108) as a general UL/DL signal transmission procedure. In particular, the UE may receive downlink control information (DCI) through the PDCCH. The DCI may include control information such as resource allocation information for the UE. Also, the format of the DCI may vary depending on the intended use. The uplink control information (UCI) that the UE transmits to the base station through UL includes a DL/UL ACK/NACK signal, a channel quality indicator (CQI), a precoding matrix index (PMI), a rank indicator (RI), and the like. Here, the CQI, PMI, and RI may be included in channel state information (CSI). In the 3GPP NR system, the UE may transmit control information such as HARQ-ACK and CSI described above through the PUSCH and/or PUCCH. FIGS.4aand4billustrate an SS/PBCH block for initial cell access in a 3GPP NR system. When the power is turned on or wanting to access a new cell, the UE may obtain time and frequency synchronization with the cell and perform an initial cell search procedure. The UE may detect a physical cell identity NcellID of the cell during a cell search procedure. For this, the UE may receive a synchronization signal, for example, a primary synchronization signal (PSS) and a secondary synchronization signal (SSS), from a base station, and synchronize with the base station. 
In this case, the UE can obtain information such as a cell identity (ID). Referring to FIG. 4a, a synchronization signal (SS) will be described in more detail. The synchronization signal can be classified into the PSS and the SSS. The PSS may be used to obtain time domain synchronization and/or frequency domain synchronization, such as OFDM symbol synchronization and slot synchronization. The SSS can be used to obtain frame synchronization and the cell group ID. Referring to FIG. 4a and Table 2, the SS/PBCH block can be configured with 20 consecutive RBs (=240 subcarriers) in the frequency axis, and can be configured with 4 consecutive OFDM symbols in the time axis. In this case, in the SS/PBCH block, the PSS is transmitted in the first OFDM symbol and the SSS is transmitted in the third OFDM symbol through the 56th to 182nd subcarriers. Here, the lowest subcarrier index of the SS/PBCH block is numbered from 0. In the first OFDM symbol in which the PSS is transmitted, the base station does not transmit a signal through the remaining subcarriers, i.e., the 0th to 55th and 183rd to 239th subcarriers. In addition, in the third OFDM symbol in which the SSS is transmitted, the base station does not transmit a signal through the 48th to 55th and 183rd to 191st subcarriers. The base station transmits a physical broadcast channel (PBCH) through the remaining REs in the SS/PBCH block except for the above signals.

TABLE 2
Channel or signal | OFDM symbol number l relative to the start of an SS/PBCH block | Subcarrier number k relative to the start of an SS/PBCH block
PSS | 0 | 56, 57, . . . , 182
SSS | 2 | 56, 57, . . . , 182
Set to 0 | 0 | 0, 1, . . . , 55, 183, 184, . . . , 239
Set to 0 | 2 | 48, 49, . . . , 55, 183, 184, . . . , 191
PBCH | 1, 3 | 0, 1, . . . , 239
PBCH | 2 | 0, 1, . . . , 47, 192, 193, . . . , 239
DM-RS for PBCH | 1, 3 | 0+v, 4+v, 8+v, . . . , 236+v
DM-RS for PBCH | 2 | 0+v, 4+v, 8+v, . . . , 44+v, 192+v, 196+v, . . . , 236+v

The SS allows a total of 1008 unique physical layer cell IDs to be grouped into 336 physical-layer cell-identifier groups, each group including three unique identifiers, through a combination of three PSSs and the SSSs, such that each physical layer cell ID is a part of only one physical-layer cell-identifier group. Therefore, the physical layer cell ID N^{cell}_{ID} = 3N^{(1)}_{ID} + N^{(2)}_{ID} can be uniquely defined by the index N^{(1)}_{ID}, ranging from 0 to 335, indicating a physical-layer cell-identifier group and the index N^{(2)}_{ID}, ranging from 0 to 2, indicating a physical-layer identifier in the physical-layer cell-identifier group. The UE may detect the PSS and identify one of the three unique physical-layer identifiers. In addition, the UE can detect the SSS and identify one of the 336 physical-layer cell-identifier groups associated with the physical-layer identifier.

In this case, the sequence d_PSS(n) of the PSS is as follows:
d_PSS(n) = 1 − 2x(m)
m = (n + 43N^{(2)}_{ID}) mod 127
0 ≤ n < 127
Here, x(i+7) = (x(i+4) + x(i)) mod 2, and the initial values are given as
[x(6) x(5) x(4) x(3) x(2) x(1) x(0)] = [1 1 1 0 1 1 0].

Further, the sequence d_SSS(n) of the SSS is as follows:
d_SSS(n) = [1 − 2x_0((n + m_0) mod 127)][1 − 2x_1((n + m_1) mod 127)]
m_0 = 15⌊N^{(1)}_{ID}/112⌋ + 5N^{(2)}_{ID}
m_1 = N^{(1)}_{ID} mod 112
0 ≤ n < 127
Here, x_0(i+7) = (x_0(i+4) + x_0(i)) mod 2 and x_1(i+7) = (x_1(i+1) + x_1(i)) mod 2, and the initial values are given as
[x_0(6) x_0(5) x_0(4) x_0(3) x_0(2) x_0(1) x_0(0)] = [0 0 0 0 0 0 1]
[x_1(6) x_1(5) x_1(4) x_1(3) x_1(2) x_1(1) x_1(0)] = [0 0 0 0 0 0 1].

A radio frame with a 10 ms length may be divided into two half frames with a 5 ms length. Referring to FIG. 4b, a description will be made of a slot in which SS/PBCH blocks are transmitted in each half frame.
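Before describing the SS/PBCH block positions, the PSS and SSS constructions above can be made concrete with the following sketch, which generates the length-127 sequences d_PSS(n) and d_SSS(n) for a given physical layer cell ID exactly as in the formulas above. It is a plain restatement of those formulas; the helper names are illustrative, and no transmitter-side processing (resource mapping, scaling) is included.

```python
# Generation of the PSS and SSS sequences from the formulas above, for a
# physical layer cell ID N_ID_cell = 3 * N_ID1 + N_ID2 (N_ID1 in 0..335, N_ID2 in 0..2).

def _m_sequence(initial, taps, length=127):
    """Binary sequence with recurrence x(i+7) = (x(i+taps[0]) + x(i+taps[1])) mod 2."""
    x = list(initial)                      # x(0) .. x(6)
    while len(x) < length + 7:
        x.append((x[-7 + taps[0]] + x[-7 + taps[1]]) % 2)
    return x

def pss_sequence(n_id_2: int):
    # [x(0)..x(6)] is the reverse of the given [x(6)..x(0)] = [1 1 1 0 1 1 0]
    x = _m_sequence([0, 1, 1, 0, 1, 1, 1], taps=(4, 0))
    return [1 - 2 * x[(n + 43 * n_id_2) % 127] for n in range(127)]

def sss_sequence(n_id_1: int, n_id_2: int):
    # [x0(0)..x0(6)] and [x1(0)..x1(6)] are the reverse of [0 0 0 0 0 0 1]
    x0 = _m_sequence([1, 0, 0, 0, 0, 0, 0], taps=(4, 0))
    x1 = _m_sequence([1, 0, 0, 0, 0, 0, 0], taps=(1, 0))
    m0 = 15 * (n_id_1 // 112) + 5 * n_id_2
    m1 = n_id_1 % 112
    return [(1 - 2 * x0[(n + m0) % 127]) * (1 - 2 * x1[(n + m1) % 127]) for n in range(127)]

if __name__ == "__main__":
    n_id_1, n_id_2 = 100, 1
    print("cell ID:", 3 * n_id_1 + n_id_2)
    print("first PSS samples:", pss_sequence(n_id_2)[:8])
    print("first SSS samples:", sss_sequence(n_id_1, n_id_2)[:8])
```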
A slot in which the SS/PBCH block is transmitted may be any one of the cases A, B, C, D, and E. In the case A, the subcarrier spacing is 15 kHz and the starting time point of the SS/PBCH block is the ({2, 8}+14*n)-th symbol. In this case, n=0 or 1 at a carrier frequency of 3 GHz or less. In addition, it may be n=0, 1, 2, 3 at carrier frequencies above 3 GHz and below 6 GHz. In the case B, the subcarrier spacing is 30 kHz and the starting time point of the SS/PBCH block is {4, 8, 16, 20}+28*n. In this case, n=0 at a carrier frequency of 3 GHz or less. In addition, it may be n=0, 1 at carrier frequencies above 3 GHz and below 6 GHz. In the case C, the subcarrier spacing is 30 kHz and the starting time point of the SS/PBCH block is the ({2, 8}+14*n)-th symbol. In this case, n=0 or 1 at a carrier frequency of 3 GHz or less. In addition, it may be n=0, 1, 2, 3 at carrier frequencies above 3 GHz and below 6 GHz. In the case D, the subcarrier spacing is 120 kHz and the starting time point of the SS/PBCH block is the ({4, 8, 16, 20}+28*n)-th symbol. In this case, at a carrier frequency of 6 GHz or more, n=0, 1, 2, 3, 5, 6, 7, 8, 10, 11, 12, 13, 15, 16, 17, 18. In the case E, the subcarrier spacing is 240 kHz and the starting time point of the SS/PBCH block is the ({8, 12, 16, 20, 32, 36, 40, 44}+56*n)-th symbol. In this case, at a carrier frequency of 6 GHz or more, n=0, 1, 2, 3, 5, 6, 7, 8. FIGS.5aand5billustrate a procedure for transmitting control information and a control channel in a 3GPP NR system. Referring toFIG.5a, the base station may add a cyclic redundancy check (CRC) masked (e.g., an XOR operation) with a radio network temporary identifier (RNTI) to control information (e.g., downlink control information (DCI)) (S202). The base station may scramble the CRC with an RNTI value determined according to the purpose/target of each control information. The common RNTI used by one or more UEs can include at least one of a system information RNTI (SI-RNTI), a paging RNTI (P-RNTI), a random access RNTI (RA-RNTI), and a transmit power control RNTI (TPC-RNTI). In addition, the UE-specific RNTI may include at least one of a cell temporary RNTI (C-RNTI), and the CS-RNTI. Thereafter, the base station may perform rate-matching (S206) according to the amount of resource(s) used for PDCCH transmission after performing channel encoding (e.g., polar coding) (S204). Thereafter, the base station may multiplex the DCI(s) based on the control channel element (CCE) based PDCCH structure (S208). In addition, the base station may apply an additional process (S210) such as scrambling, modulation (e.g., QPSK), interleaving, and the like to the multiplexed DCI(s), and then map the DCI(s) to the resource to be transmitted. The CCE is a basic resource unit for the PDCCH, and one CCE may include a plurality (e.g., six) of resource element groups (REGs). One REG may be configured with a plurality (e.g., 12) of REs. The number of CCEs used for one PDCCH may be defined as an aggregation level. In the 3GPP NR system, an aggregation level of 1, 2, 4, 8, or 16 may be used.FIG.5bis a diagram related to a CCE aggregation level and the multiplexing of a PDCCH and illustrates the type of a CCE aggregation level used for one PDCCH and CCE(s) transmitted in the control area according thereto. FIG.6illustrates a control resource set (CORESET) in which a physical downlink control channel (PDCCH) may be transmitted in a 3GPP NR system. 
The CORESET is a time-frequency resource in which a PDCCH, that is, a control signal for the UE, is transmitted. In addition, a search space to be described later may be mapped to one CORESET. Therefore, the UE may monitor the time-frequency domain designated as the CORESET instead of monitoring all frequency bands for PDCCH reception, and decode the PDCCH mapped to the CORESET. The base station may configure one or more CORESETs for each cell to the UE. The CORESET may be configured with up to three consecutive symbols on the time axis. In addition, the CORESET may be configured in units of six consecutive PRBs on the frequency axis. In the embodiment of FIG. 6, CORESET #1 is configured with consecutive PRBs, and CORESET #2 and CORESET #3 are configured with discontinuous PRBs. The CORESET can be located in any symbol in the slot. For example, in the embodiment of FIG. 6, CORESET #1 starts at the first symbol of the slot, CORESET #2 starts at the fifth symbol of the slot, and CORESET #3 starts at the ninth symbol of the slot.

FIG. 7 illustrates a method for setting a PDCCH search space in a 3GPP NR system. In order to transmit the PDCCH to the UE, each CORESET may have at least one search space. In the embodiment of the present disclosure, the search space is a set of all time-frequency resources (hereinafter, PDCCH candidates) through which the PDCCH of the UE is capable of being transmitted. The search space may include a common search space that the UEs of the 3GPP NR system are required to commonly search and a terminal-specific or UE-specific search space that a specific UE is required to search. In the common search space, the UE may monitor the PDCCH that is set so that all UEs in the cell belonging to the same base station commonly search for it. In addition, the UE-specific search space may be set for each UE so that each UE monitors the PDCCH allocated to it at a search space position that differs according to the UE. In the case of the UE-specific search space, the search spaces of different UEs may be partially overlapped and allocated due to the limited control area in which the PDCCH may be allocated. Monitoring the PDCCH includes blind decoding of PDCCH candidates in the search space. When the blind decoding is successful, it may be expressed that the PDCCH is (successfully) detected/received, and when the blind decoding fails, it may be expressed that the PDCCH is not detected/not received, or is not successfully detected/received.

For convenience of explanation, a PDCCH scrambled with a group common (GC) RNTI previously known to one or more UEs, so as to transmit DL control information to the one or more UEs, is referred to as a group common (GC) PDCCH or a common PDCCH. In addition, a PDCCH scrambled with a UE-specific RNTI that a specific UE already knows, so as to transmit UL scheduling information or DL scheduling information to the specific UE, is referred to as a UE-specific PDCCH. The common PDCCH may be included in a common search space, and the UE-specific PDCCH may be included in a common search space or a UE-specific search space.

The base station may signal each UE or UE group, through a PDCCH, information (i.e., a DL grant) related to resource allocation of a paging channel (PCH) and a downlink-shared channel (DL-SCH), which are transport channels, or information (i.e., a UL grant) related to resource allocation of an uplink-shared channel (UL-SCH) and a hybrid automatic repeat request (HARQ). The base station may transmit the PCH transport block and the DL-SCH transport block through the PDSCH.
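As an aside, the RNTI-based monitoring described above, in which the UE blindly decodes PDCCH candidates and keeps only those whose CRC checks out against its own RNTI, can be sketched as follows. The sketch uses a generic 16-bit CRC and a toy candidate model purely for illustration; the actual NR procedure attaches a longer CRC to polar-coded DCI, and none of the names below come from the disclosure.

```python
# Simplified illustration of RNTI-masked CRC attachment and blind detection.
# A generic 16-bit CRC and a toy candidate model are assumed for clarity only.

CRC16_POLY = 0x1021  # CRC-16-CCITT polynomial, used here only as an example

def crc16(bits):
    reg = 0
    for bit in bits:
        reg ^= bit << 15
        reg = ((reg << 1) ^ CRC16_POLY) & 0xFFFF if reg & 0x8000 else (reg << 1) & 0xFFFF
    return reg

def attach_masked_crc(payload_bits, rnti):
    """Base station side: append the CRC XORed (masked) with the target RNTI."""
    crc = crc16(payload_bits) ^ rnti
    return payload_bits + [(crc >> i) & 1 for i in range(15, -1, -1)]

def blind_decode(candidate_bits, my_rnti):
    """UE side: strip the CRC bits, unmask with the UE's RNTI, and check them."""
    payload, crc_bits = candidate_bits[:-16], candidate_bits[-16:]
    received_crc = 0
    for b in crc_bits:
        received_crc = (received_crc << 1) | b
    return payload if (received_crc ^ my_rnti) == crc16(payload) else None

if __name__ == "__main__":
    dci = [1, 0, 1, 1, 0, 0, 1, 0]                   # toy DCI payload
    candidate = attach_masked_crc(dci, rnti=0x4A3B)
    print(blind_decode(candidate, my_rnti=0x4A3B))   # -> payload recovered
    print(blind_decode(candidate, my_rnti=0x1234))   # -> None (CRC does not check)
```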
The base station may transmit data excluding specific control information or specific service data through the PDSCH. In addition, the UE may receive data excluding specific control information or specific service data through the PDSCH. The base station may include, in the PDCCH, information on to which UE (one or a plurality of UEs) the PDSCH data is transmitted and how the PDSCH data is to be received and decoded by the corresponding UE, and transmit the PDCCH. For example, it is assumed that the DCI transmitted on a specific PDCCH is CRC masked with an RNTI of "A", and the DCI indicates that the PDSCH is allocated to a radio resource (e.g., frequency location) of "B" and indicates transmission format information (e.g., transport block size, modulation scheme, coding information, etc.) of "C". The UE monitors the PDCCH using the RNTI information that the UE has. In this case, if there is a UE which performs blind decoding of the PDCCH using the "A" RNTI, the UE receives the PDCCH, and receives the PDSCH indicated by "B" and "C" through the received PDCCH information.

Table 3 shows an embodiment of a physical uplink control channel (PUCCH) used in a wireless communication system.

TABLE 3
PUCCH format | Length in OFDM symbols | Number of bits
0 | 1-2 | ≤2
1 | 4-14 | ≤2
2 | 1-2 | >2
3 | 4-14 | >2
4 | 4-14 | >2

PUCCH may be used to transmit the following UL control information (UCI).
- Scheduling Request (SR): Information used for requesting a UL-SCH resource.
- HARQ-ACK: A response to a PDCCH (indicating DL SPS release) and/or a response to a DL transport block (TB) on the PDSCH. HARQ-ACK indicates whether information transmitted on the PDCCH or PDSCH is received. The HARQ-ACK response includes positive ACK (simply ACK), negative ACK (hereinafter NACK), Discontinuous Transmission (DTX), or NACK/DTX. Here, the term HARQ-ACK is used interchangeably with HARQ-ACK/NACK and ACK/NACK. In general, ACK may be represented by bit value 1 and NACK may be represented by bit value 0.
- Channel State Information (CSI): Feedback information on the DL channel. The UE generates it based on the CSI-Reference Signal (RS) transmitted by the base station. Multiple Input Multiple Output (MIMO)-related feedback information includes a Rank Indicator (RI) and a Precoding Matrix Indicator (PMI). CSI can be divided into CSI part 1 and CSI part 2 according to the information indicated by the CSI.

In the 3GPP NR system, five PUCCH formats may be used to support various service scenarios, various channel environments, and frame structures. PUCCH format 0 is a format capable of transmitting 1-bit or 2-bit HARQ-ACK information or an SR. PUCCH format 0 can be transmitted through one or two OFDM symbols on the time axis and one PRB on the frequency axis. When PUCCH format 0 is transmitted in two OFDM symbols, the same sequence may be transmitted in the two symbols through different RBs. In this case, the sequence may be a cyclic shift (CS) sequence obtained from the base sequence used for PUCCH format 0. Through this, the UE can obtain a frequency diversity gain. Specifically, the UE may determine a cyclic shift (CS) value m_cs according to the M_bit-bit UCI (M_bit = 1 or 2). In addition, a sequence in which a base sequence of length 12 is cyclically shifted based on a predetermined CS value m_cs may be mapped to 1 OFDM symbol and the 12 REs of 1 RB and transmitted. When the number of cyclic shifts available to the UE is 12 and M_bit = 1, the 1-bit UCI values 0 and 1 may be mapped to two cyclic-shifted sequences whose cyclic shift values differ by 6, respectively.
In addition, when M_bit = 2, the 2-bit UCI values 00, 01, 11, and 10 may be mapped to four cyclic-shifted sequences whose cyclic shift values differ by 3, respectively. PUCCH format 1 may deliver 1-bit or 2-bit HARQ-ACK information or an SR. PUCCH format 1 may be transmitted through consecutive OFDM symbols on the time axis and one PRB on the frequency axis. Here, the number of OFDM symbols occupied by PUCCH format 1 may be one of 4 to 14. More specifically, UCI with M_bit = 1 may be BPSK-modulated. The UE may modulate UCI with M_bit = 2 with quadrature phase shift keying (QPSK). A signal is obtained by multiplying a modulated complex-valued symbol d(0) by a sequence of length 12. In this case, the sequence may be the base sequence used for PUCCH format 0. The UE spreads the even-numbered OFDM symbols to which PUCCH format 1 is allocated with a time-axis orthogonal cover code (OCC) and transmits the obtained signal. PUCCH format 1 determines the maximum number of different UEs multiplexed in the one RB according to the length of the OCC to be used. A demodulation reference signal (DMRS) may be spread with the OCC and mapped to the odd-numbered OFDM symbols of PUCCH format 1.

PUCCH format 2 may deliver UCI exceeding 2 bits. PUCCH format 2 may be transmitted through one or two OFDM symbols on the time axis and one or a plurality of RBs on the frequency axis. When PUCCH format 2 is transmitted in two OFDM symbols, the sequences transmitted in different RBs through the two OFDM symbols may be the same as each other. Here, the sequence may be a plurality of modulated complex-valued symbols d(0), . . . , d(M_symbol−1). Here, M_symbol may be M_bit/2. Through this, the UE may obtain a frequency diversity gain. More specifically, the M_bit-bit UCI (M_bit > 2) is bit-level scrambled, QPSK modulated, and mapped to the RB(s) of one or two OFDM symbol(s). Here, the number of RBs may be one of 1 to 16.

PUCCH format 3 or PUCCH format 4 may deliver UCI exceeding 2 bits. PUCCH format 3 or PUCCH format 4 may be transmitted through consecutive OFDM symbols on the time axis and one PRB on the frequency axis. The number of OFDM symbols occupied by PUCCH format 3 or PUCCH format 4 may be one of 4 to 14. Specifically, the UE modulates the M_bit-bit UCI (M_bit > 2) with π/2-Binary Phase Shift Keying (BPSK) or QPSK to generate complex-valued symbols d(0) to d(M_symb−1). Here, when using π/2-BPSK, M_symb = M_bit, and when using QPSK, M_symb = M_bit/2. The UE may not apply block-unit spreading to PUCCH format 3. However, the UE may apply block-unit spreading to one RB (i.e., 12 subcarriers) using a PreDFT-OCC of a length of 12 such that PUCCH format 4 may have a multiplexing capacity of two or four. The UE performs transmit precoding (or DFT precoding) on the spread signal and maps it to each RE to transmit the spread signal.

In this case, the number of RBs occupied by PUCCH format 2, PUCCH format 3, or PUCCH format 4 may be determined according to the length and maximum code rate of the UCI transmitted by the UE. When the UE uses PUCCH format 2, the UE may transmit HARQ-ACK information and CSI information together through the PUCCH. When the number of RBs that the UE would need to transmit is greater than the maximum number of RBs that PUCCH format 2, PUCCH format 3, or PUCCH format 4 may use, the UE may transmit only the remaining UCI information without transmitting some UCI information according to the priority of the UCI information.
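As a small illustrative aid, the constraints of Table 3 above can be used to narrow down the candidate PUCCH formats for a given transmission. The helper below encodes only the symbol-length ranges and the 2-bit payload boundary from Table 3; the function name and any selection logic beyond that table are assumptions for illustration.

```python
# Candidate PUCCH formats by UCI payload size and PUCCH duration, per Table 3 above.
# Each entry: (format, min symbols, max symbols, carries more than 2 bits?)
PUCCH_FORMATS = [
    (0, 1, 2, False),
    (1, 4, 14, False),
    (2, 1, 2, True),
    (3, 4, 14, True),
    (4, 4, 14, True),
]

def candidate_formats(num_uci_bits: int, num_symbols: int):
    more_than_two = num_uci_bits > 2
    return [fmt for fmt, lo, hi, large in PUCCH_FORMATS
            if lo <= num_symbols <= hi and large == more_than_two]

if __name__ == "__main__":
    print(candidate_formats(2, 2))    # -> [0]
    print(candidate_formats(2, 14))   # -> [1]
    print(candidate_formats(11, 4))   # -> [3, 4]
```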
PUCCH format 1, PUCCH format 3, or PUCCH format 4 may be configured through the RRC signal to indicate frequency hopping in a slot. When frequency hopping is configured, the index of the RB to be frequency hopped may be configured with an RRC signal. When PUCCH format 1, PUCCH format 3, or PUCCH format 4 is transmitted through N OFDM symbols on the time axis, the first hop may have floor (N/2) OFDM symbols and the second hop may have ceiling(N/2) OFDM symbols. PUCCH format 1, PUCCH format 3, or PUCCH format 4 may be configured to be repeatedly transmitted in a plurality of slots. In this case, the number K of slots in which the PUCCH is repeatedly transmitted may be configured by the RRC signal. The repeatedly transmitted PUCCHs must start at an OFDM symbol of the constant position in each slot, and have the constant length. When one OFDM symbol among OFDM symbols of a slot in which a UE should transmit a PUCCH is indicated as a DL symbol by an RRC signal, the UE may not transmit the PUCCH in a corresponding slot and delay the transmission of the PUCCH to the next slot to transmit the PUCCH. Meanwhile, in the 3GPP NR system, the UE may perform transmission/reception using a bandwidth less than or equal to the bandwidth of the carrier (or cell). To this end, the UE may be configured with a bandwidth part (BWP) consisting of a continuous bandwidth of a portion of the bandwidth of the carrier. A UE operating according to TDD or operating in an unpaired spectrum may receive up to four DL/UL BWP pairs for one carrier (or cell). In addition, the UE may activate one DL/UL BWP pair. A UE operating according to FDD or operating in a paired spectrum may receive up to 4 DL BWPs on a downlink carrier (or cell) and up to 4 UL BWPs on an uplink carrier (or cell). The UE may activate one DL BWP and UL BWP for each carrier (or cell). The UE may not receive or transmit in time-frequency resources other than the activated BWP. The activated BWP may be referred to as an active BWP. The base station may indicate an activated BWP among the BWPs configured by the UE through downlink control information (DCI). The BWP indicated through DCI is activated, and other configured BWP(s) are deactivated. In a carrier (or cell) operating in TDD, the base station may include a bandwidth part indicator (BPI) indicating the BWP activated in the DCI scheduling the PDSCH or PUSCH to change the DL/UL BWP pair of the UE. The UE may receive a DCI scheduling a PDSCH or a PUSCH and may identify a DL/UL BWP pair activated based on the BPI. In the case of a downlink carrier (or cell) operating in FDD, the base station may include a BPI indicating the activated BWP in the DCI scheduling the PDSCH to change the DL BWP of the UE. In the case of an uplink carrier (or cell) operating in FDD, the base station may include a BPI indicating the activated BWP in the DCI scheduling the PUSCH to change the UL BWP of the UE. FIG.8is a conceptual diagram illustrating carrier aggregation. The carrier aggregation is a method in which the UE uses a plurality of frequency blocks or cells (in the logical sense) configured with UL resources (or component carriers) and/or DL resources (or component carriers) as one large logical frequency band in order for a wireless communication system to use a wider frequency band. One component carrier may also be referred to as a term called a Primary cell (PCell) or a Secondary cell (SCell), or a Primary SCell (PScell). However, hereinafter, for convenience of description, the term “component carrier” is used. 
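As a simple illustration of the BWP switching behavior described above, the following Python sketch activates the BWP indicated by the bandwidth part indicator (BPI) carried in the scheduling DCI and deactivates the other configured BWPs. It is a minimal sketch; the data structures and function name are illustrative and not taken from the specification.

```python
def apply_bpi(configured_bwp_ids, bpi):
    """Apply a bandwidth part indicator (BPI) received in a scheduling DCI.

    configured_bwp_ids: ids of the (up to four) BWPs configured for the UE.
    bpi:                the BWP id indicated in the DCI.
    Returns (active_bwp, deactivated_bwps): the indicated BWP is activated and
    every other configured BWP is deactivated, as described above.
    """
    if bpi not in configured_bwp_ids:
        raise ValueError("BPI does not match any configured BWP")
    deactivated = [b for b in configured_bwp_ids if b != bpi]
    return bpi, deactivated


# Example: four configured DL BWPs, DCI indicates BWP 3.
active, deactivated = apply_bpi([0, 1, 2, 3], bpi=3)
assert active == 3 and deactivated == [0, 1, 2]
```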
Referring toFIG.8, as an example of a 3GPP NR system, the entire system band may include up to 16 component carriers, and each component carrier may have a bandwidth of up to 400 MHz. The component carrier may include one or more physically consecutive subcarriers. Although it is shown inFIG.8that each of the component carriers has the same bandwidth, this is merely an example, and each component carrier may have a different bandwidth. Also, although each component carrier is shown as being adjacent to each other in the frequency axis, the drawings are shown in a logical concept, and each component carrier may be physically adjacent to one another, or may be spaced apart. Different center frequencies may be used for each component carrier. Also, one common center frequency may be used in physically adjacent component carriers. Assuming that all the component carriers are physically adjacent in the embodiment ofFIG.8, center frequency A may be used in all the component carriers. Further, assuming that the respective component carriers are not physically adjacent to each other, center frequency A and center frequency B can be used in each of the component carriers.

When the total system band is extended by carrier aggregation, the frequency band used for communication with each UE can be defined in units of a component carrier. UE A may use 100 MHz, which is the total system band, and perform communication using all five component carriers. UEs B1˜B5can use only a 20 MHz bandwidth and perform communication using one component carrier. UEs C1and C2may use a 40 MHz bandwidth and perform communication using two component carriers, respectively. The two component carriers may be logically/physically adjacent or non-adjacent. UE C1represents the case of using two non-adjacent component carriers, and UE C2represents the case of using two adjacent component carriers.

FIG.9is a drawing for explaining single carrier communication and multiple carrier communication. Particularly,FIG.9(a)shows a single carrier subframe structure andFIG.9(b)shows a multi-carrier subframe structure.

Referring toFIG.9(a), in an FDD mode, a general wireless communication system may perform data transmission or reception through one DL band and one UL band corresponding thereto. In another specific embodiment, in a TDD mode, the wireless communication system may divide a radio frame into a UL time unit and a DL time unit in a time domain, and perform data transmission or reception through a UL/DL time unit. Referring toFIG.9(b), three 20 MHz component carriers (CCs) can be aggregated into each of UL and DL, so that a bandwidth of 60 MHz can be supported. Each CC may be adjacent or non-adjacent to one another in the frequency domain.FIG.9(b)shows a case where the bandwidth of the UL CC and the bandwidth of the DL CC are the same and symmetric, but the bandwidth of each CC can be determined independently. In addition, asymmetric carrier aggregation with different numbers of UL CCs and DL CCs is possible. A DL/UL CC allocated/configured to a specific UE through RRC may be called a serving DL/UL CC of the specific UE. The base station may perform communication with the UE by activating some or all of the serving CCs of the UE or deactivating some CCs. The base station can change the CC to be activated/deactivated, and change the number of CCs to be activated/deactivated.
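A tiny Python sketch of the carrier-aggregation arithmetic in the FIG.8 example above. It assumes, as in that example, that each component carrier is 20 MHz wide; the constant and function name are illustrative.

```python
CC_BANDWIDTH_MHZ = 20  # per-component-carrier bandwidth assumed in the FIG. 8 example


def aggregated_bandwidth_mhz(num_component_carriers: int) -> int:
    """Total bandwidth available to a UE aggregating the given number of CCs."""
    return num_component_carriers * CC_BANDWIDTH_MHZ


assert aggregated_bandwidth_mhz(5) == 100  # UE A: all five CCs
assert aggregated_bandwidth_mhz(1) == 20   # UEs B1..B5: one CC each
assert aggregated_bandwidth_mhz(2) == 40   # UEs C1 and C2: two CCs each
```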
If the base station allocates a CC available for the UE as to be cell-specific or UE-specific, at least one of the allocated CCs can be deactivated, unless the CC allocation for the UE is completely reconfigured or the UE is handed over. One CC that is not deactivated by the UE is called as a Primary CC (PCC) or a primary cell (PCell), and a CC that the base station can freely activate/deactivate is called as a Secondary CC (SCC) or a secondary cell (SCell). Meanwhile, 3GPP NR uses the concept of a cell to manage radio resources. A cell is defined as a combination of DL resources and UL resources, that is, a combination of DL CC and UL CC. A cell may be configured with DL resources alone, or a combination of DL resources and UL resources. When the carrier aggregation is supported, the linkage between the carrier frequency of the DL resource (or DL CC) and the carrier frequency of the UL resource (or UL CC) may be indicated by system information. The carrier frequency refers to the center frequency of each cell or CC. A cell corresponding to the PCC is referred to as a PCell, and a cell corresponding to the SCC is referred to as an SCell. The carrier corresponding to the PCell in the DL is the DL PCC, and the carrier corresponding to the PCell in the UL is the UL PCC. Similarly, the carrier corresponding to the SCell in the DL is the DL SCC and the carrier corresponding to the SCell in the UL is the UL SCC. According to UE capability, the serving cell(s) may be configured with one PCell and zero or more SCells. In the case of UEs that are in the RRC_CONNECTED state but not configured for carrier aggregation or that do not support carrier aggregation, there is only one serving cell configured only with PCell. As mentioned above, the term “cell” used in carrier aggregation is distinguished from the term “cell” which refers to a certain geographical area in which a communication service is provided by one base station or one antenna group. That is, one component carrier may also be referred to as a scheduling cell, a scheduled cell, a primary cell (PCell), a secondary cell (SCell), or a primary SCell (PScell). However, in order to distinguish between a cell referring to a certain geographical area and a cell of carrier aggregation, in the present disclosure, a cell of a carrier aggregation is referred to as a CC, and a cell of a geographical area is referred to as a cell. FIG.10is a diagram showing an example in which a cross carrier scheduling technique is applied. When cross carrier scheduling is set, the control channel transmitted through the first CC may schedule a data channel transmitted through the first CC or the second CC using a carrier indicator field (CIF). The CIF is included in the DCI. In other words, a scheduling cell is set, and the DL grant/UL grant transmitted in the PDCCH area of the scheduling cell schedules the PDSCH/PUSCH of the scheduled cell. That is, a search area for the plurality of component carriers exists in the PDCCH area of the scheduling cell. A PCell may be basically a scheduling cell, and a specific SCell may be designated as a scheduling cell by an upper layer. In the embodiment ofFIG.10, it is assumed that three DL CCs are merged. Here, it is assumed that DL component carrier #0is DL PCC (or PCell), and DL component carrier #1and DL component carrier #2are DL SCCs (or SCell). In addition, it is assumed that the DL PCC is set to the PDCCH monitoring CC. 
When cross-carrier scheduling is not configured by UE-specific (or UE-group-specific or cell-specific) higher layer signaling, a CIF is disabled, and each DL CC can transmit only a PDCCH for scheduling its PDSCH without the CIF according to an NR PDCCH rule (non-cross-carrier scheduling, self-carrier scheduling). Meanwhile, if cross-carrier scheduling is configured by UE-specific (or UE-group-specific or cell-specific) higher layer signaling, a CIF is enabled, and a specific CC (e.g., DL PCC) may transmit not only the PDCCH for scheduling the PDSCH of the DL CC A using the CIF but also the PDCCH for scheduling the PDSCH of another CC (cross-carrier scheduling). On the other hand, a PDCCH is not transmitted in another DL CC. Accordingly, the UE monitors the PDCCH not including the CIF to receive a self-carrier scheduled PDSCH depending on whether the cross-carrier scheduling is configured for the UE, or monitors the PDCCH including the CIF to receive the cross-carrier scheduled PDSCH. On the other hand,FIGS.9and10illustrate the subframe structure of the 3GPP LTE-A system, and the same or similar configuration may be applied to the 3GPP NR system. However, in the 3GPP NR system, the subframes ofFIGS.9and10may be replaced with slots. FIG.11is a block diagram showing the configurations of a UE and a base station according to an embodiment of the present disclosure. In an embodiment of the present disclosure, the UE may be implemented with various types of wireless communication devices or computing devices that are guaranteed to be portable and mobile. The UE may be referred to as a User Equipment (UE), a Station (STA), a Mobile Subscriber (MS), or the like. In addition, in an embodiment of the present disclosure, the base station controls and manages a cell (e.g., a macro cell, a femto cell, a pico cell, etc.) corresponding to a service area, and performs functions of a signal transmission, a channel designation, a channel monitoring, a self diagnosis, a relay, or the like. The base station may be referred to as next Generation NodeB (gNB) or Access Point (AP). As shown in the drawing, a UE100according to an embodiment of the present disclosure may include a processor110, a communication module120, a memory130, a user interface140, and a display unit150. First, the processor110may execute various instructions or programs and process data within the UE100. In addition, the processor110may control the entire operation including each unit of the UE100, and may control the transmission/reception of data between the units. Here, the processor110may be configured to perform an operation according to the embodiments described in the present disclosure. For example, the processor110may receive slot configuration information, determine a slot configuration based on the slot configuration information, and perform communication according to the determined slot configuration. Next, the communication module120may be an integrated module that performs wireless communication using a wireless communication network and a wireless LAN access using a wireless LAN. For this, the communication module120may include a plurality of network interface cards (NICs) such as cellular communication interface cards121and122and an unlicensed band communication interface card123in an internal or external form. In the drawing, the communication module120is shown as an integral integration module, but unlike the drawing, each network interface card can be independently arranged according to a circuit configuration or usage. 
The cellular communication interface card121may transmit or receive a radio signal with at least one of the base station200, an external device, and a server by using a mobile communication network and provide a cellular communication service in a first frequency band based on the instructions from the processor110. According to an embodiment, the cellular communication interface card121may include at least one NIC module using a frequency band of less than 6 GHz. At least one NIC module of the cellular communication interface card121may independently perform cellular communication with at least one of the base station200, an external device, and a server in accordance with cellular communication standards or protocols in the frequency bands below 6 GHz supported by the corresponding NIC module. The cellular communication interface card122may transmit or receive a radio signal with at least one of the base station200, an external device, and a server by using a mobile communication network and provide a cellular communication service in a second frequency band based on the instructions from the processor110. According to an embodiment, the cellular communication interface card122may include at least one NIC module using a frequency band of more than 6 GHz. At least one NIC module of the cellular communication interface card122may independently perform cellular communication with at least one of the base station200, an external device, and a server in accordance with cellular communication standards or protocols in the frequency bands of 6 GHz or more supported by the corresponding NIC module. The unlicensed band communication interface card123transmits or receives a radio signal with at least one of the base station200, an external device, and a server by using a third frequency band which is an unlicensed band, and provides an unlicensed band communication service based on the instructions from the processor110. The unlicensed band communication interface card123may include at least one NIC module using an unlicensed band. For example, the unlicensed band may be a band of 2.4 GHz, 5 GHz, 6 GHz, 7 GHz, or above 52.6 GHz. At least one NIC module of the unlicensed band communication interface card123may independently or dependently perform wireless communication with at least one of the base station200, an external device, and a server according to the unlicensed band communication standard or protocol of the frequency band supported by the corresponding NIC module. The memory130stores a control program used in the UE100and various kinds of data therefor. Such a control program may include a prescribed program required for performing wireless communication with at least one among the base station200, an external device, and a server. Next, the user interface140includes various kinds of input/output means provided in the UE100. In other words, the user interface140may receive a user input using various input means, and the processor110may control the UE100based on the received user input. In addition, the user interface140may perform an output based on instructions from the processor110using various kinds of output means. Next, the display unit150outputs various images on a display screen. The display unit150may output various display objects such as content executed by the processor110or a user interface based on control instructions from the processor110. 
In addition, the base station200according to an embodiment of the present disclosure may include a processor210, a communication module220, and a memory230. First, the processor210may execute various instructions or programs, and process internal data of the base station200. In addition, the processor210may control the entire operations of units in the base station200, and control data transmission and reception between the units. Here, the processor210may be configured to perform operations according to embodiments described in the present disclosure. For example, the processor210may signal slot configuration and perform communication according to the signaled slot configuration.

Next, the communication module220may be an integrated module that performs wireless communication using a wireless communication network and a wireless LAN access using a wireless LAN. For this, the communication module220may include a plurality of network interface cards such as cellular communication interface cards221and222and an unlicensed band communication interface card223in an internal or external form. In the drawing, the communication module220is shown as an integral integration module, but unlike the drawing, each network interface card can be independently arranged according to a circuit configuration or usage.

The cellular communication interface card221may transmit or receive a radio signal with at least one of the UE100, an external device, and a server by using a mobile communication network and provide a cellular communication service in the first frequency band based on the instructions from the processor210. According to an embodiment, the cellular communication interface card221may include at least one NIC module using a frequency band of less than 6 GHz. The at least one NIC module of the cellular communication interface card221may independently perform cellular communication with at least one of the UE100, an external device, and a server in accordance with the cellular communication standards or protocols in the frequency bands less than 6 GHz supported by the corresponding NIC module.

The cellular communication interface card222may transmit or receive a radio signal with at least one of the UE100, an external device, and a server by using a mobile communication network and provide a cellular communication service in the second frequency band based on the instructions from the processor210. According to an embodiment, the cellular communication interface card222may include at least one NIC module using a frequency band of 6 GHz or more. The at least one NIC module of the cellular communication interface card222may independently perform cellular communication with at least one of the UE100, an external device, and a server in accordance with the cellular communication standards or protocols in the frequency bands of 6 GHz or more supported by the corresponding NIC module.

The unlicensed band communication interface card223transmits or receives a radio signal with at least one of the UE100, an external device, and a server by using the third frequency band which is an unlicensed band, and provides an unlicensed band communication service based on the instructions from the processor210. The unlicensed band communication interface card223may include at least one NIC module using an unlicensed band. For example, the unlicensed band may be a band of 2.4 GHz, 5 GHz, 6 GHz, 7 GHz, or above 52.6 GHz.
At least one NIC module of the unlicensed band communication interface card223may independently or dependently perform wireless communication with at least one of the UE100, an external device, and a server according to the unlicensed band communication standards or protocols of the frequency band supported by the corresponding NIC module.

FIG.11is a block diagram illustrating the UE100and the base station200according to an embodiment of the present disclosure, and blocks separately shown are logically divided elements of a device. Accordingly, the aforementioned elements of the device may be mounted in a single chip or a plurality of chips according to the design of the device. In addition, a part of the configuration of the UE100, for example, the user interface140, the display unit150and the like may be selectively provided in the UE100. In addition, the user interface140, the display unit150and the like may be additionally provided in the base station200, if necessary.

FIG.12illustrates a method of scheduling a physical uplink shared channel in a time domain according to an embodiment of the present disclosure. A terminal may transmit uplink data to a base station through a PUSCH. The base station may schedule (PUSCH scheduling), for the terminal, to transmit uplink data through the PUSCH. i) In a dynamic grant (DG) method, the base station may perform PUSCH scheduling via DCI included in a PDCCH. Alternatively, ii) in a configured grant (CG) method, the terminal may transmit uplink data to the base station through a PUSCH according to a resource and a transmission method preconfigured for the terminal by the base station. In this case, DCI included in a PDCCH may include PUSCH scheduling information. For example, the DCI may include time domain information (time-domain resource assignment (TDRA)) and frequency domain information (frequency-domain resource assignment (FDRA)). The terminal may receive DCI transmitted in a control resource set and a search space, and may perform operations (e.g., uplink data transmission through the PUSCH) indicated via the DCI. In this case, a DCI format for PUSCH scheduling may be DCI format 0_0, 0_1, or 0_2. DCI of DCI formats 0_0, 0_1, and 0_2 may include a TDRA field including time domain information of the PUSCH. In this case, the time domain information may include K2, which is an offset value between a slot in which the PDCCH is transmitted from the base station and a slot in which the terminal transmits the PUSCH. In addition, the DCI may include a start and length indication value (SLIV) which is a joint-coded value of a starting symbol index (S) of the PUSCH and a symbol length (L, the number of symbols) of the PUSCH in the slot indicated by K2. If the terminal receives the DCI in slot n, the slot in which the PUSCH is scheduled may be slot floor(n·2^μ_PUSCH/2^μ_PDCCH)+K2. Here, μ_PUSCH and μ_PDCCH refer to the subcarrier spacing (SCS) configurations of the cell in which the PUSCH is scheduled and the cell in which the terminal receives the PDCCH, respectively. floor(x) is a function that returns the largest integer among integers equal to or smaller than x. In the present specification, slot n may refer to a slot indexed with index n.

Referring toFIG.12(a), a subcarrier spacing of a cell in which the terminal receives a PDCCH and a cell in which a PUSCH is scheduled may be the same. In this case, if the terminal receives the PDCCH in slot n and is indicated that K2 is 4, a slot in which the PUSCH is scheduled may be slot n+K2, that is, slot n+4.
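The slot determination described above can be illustrated with a short Python sketch. It is a minimal sketch of the floor(n·2^μ_PUSCH/2^μ_PDCCH)+K2 rule under the assumption that μ denotes the numerology exponent of the respective cell; the function name is illustrative.

```python
import math


def pusch_slot(pdcch_slot: int, k2: int, mu_pusch: int, mu_pdcch: int) -> int:
    """Slot in which the scheduled PUSCH is transmitted.

    pdcch_slot: slot n in which the scheduling DCI was received.
    k2:         slot offset signaled in the TDRA field.
    mu_pusch / mu_pdcch: numerology exponents of the PUSCH and PDCCH cells.
    """
    return math.floor(pdcch_slot * 2 ** mu_pusch / 2 ** mu_pdcch) + k2


# Same SCS on both cells (as in FIG. 12(a)): PDCCH in slot n, K2 = 4 -> slot n + 4.
assert pusch_slot(pdcch_slot=10, k2=4, mu_pusch=1, mu_pdcch=1) == 14
```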
As for a PUSCH scheduling type, there may be two mapping types of PUSCH mapping type A and PUSCH mapping type B. Depending on the PUSCH mapping type, the range of possible values for a starting symbol index and an SLIV of the PUSCH may vary. In PUSCH mapping type A, only resource allocation including a DMRS symbol is possible, and the DMRS symbol may be located in a third or fourth symbol of a slot according to a value indicated by a higher layer. That is, in the case of PUSCH mapping type A, an index (S) of a starting symbol of the PUSCH may be 0, and a length (L) of the PUSCH may have one of values from 4 to 14 (12 for an extended CP) according to a DMRS symbol position. In PUSCH mapping type B, a first symbol of the PUSCH may be a DMRS symbol. Accordingly, S may have a value from 0 to 13 (11 for an extended CP), and L may have one of values from 1 to 14 (12 for an extended CP). In addition, since one PUSCH cannot cross a slot boundary, the sum of S and L should be smaller than or equal to 14 (12 for an extended CP). Referring toFIG.12(b), the base station may schedule PUSCH mapping type A in which a third symbol is a DMRS symbol, an index (S) of a starting symbol is 0, and a length (L) is 7, may schedule PUSCH mapping type A in which a fourth symbol is a DMRS symbol, an index (S) of a starting symbol is 0, and a length (L) is 7, and may schedule PUSCH mapping type B in which a first symbol is a DMRS symbol, an index (S) of a starting symbol is 5, and a length (L) is 5. In this case, frequency domain information of the PUSCH indicated in the FDRA field of DCI format 0_0, 0_1, or 0_2 may be divided into two types according to frequency resource allocation types.

FIG.13illustrates a method of scheduling a physical uplink shared channel in a frequency domain according to an embodiment of the present disclosure. Hereinafter, a frequency resource allocation type will be described with reference toFIG.13. i) Frequency resource allocation type 0, which is a first type, may be a type in which an RBG is configured by bundling a certain number of PRBs according to the number of RBs included in a BWP configured (set) for a terminal, and whether to use each RBG is indicated via a bitmap in units of RBGs. That is, the terminal may determine whether to use a corresponding RBG via a bitmap transmitted from a base station. The number of PRBs included in one RBG may be set (configured) from a higher layer, and the larger the number of RBs included in the BWP set (configured) for the terminal, the more PRBs may be included in one RBG.

Referring toFIG.13(a), a BWP size set (configured) for the terminal may be 72 PRBs, and one RBG may include 4 PRBs. In this case, the terminal may determine four PRBs as one RBG in ascending order from PRB0, and each RBG may be indexed from 0. That is, an RBG including PRB0to PRB3may be indexed as RBG0, and an RBG including PRB4to PRB7may be indexed as RBG1. Up to RBG17may be indexed in the same manner, wherein the base station may transmit 1 bit (0 or 1) per RBG, i.e., a total of 18 bits, to the terminal, and the terminal may determine, based on the received 18 bits, whether to use the PRBs constituting a corresponding RBG. In this case, if a bit value is 0, the terminal may determine that a PUSCH is not scheduled for any PRB among the PRBs constituting the corresponding RBG. If the bit value is 1, the terminal may determine that a PUSCH is scheduled for all PRBs in the corresponding RBG. In this case, the bit value may be applied in reverse.
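The resource allocation type 0 bitmap interpretation described above can be sketched as follows. This is a minimal Python sketch assuming the FIG.13(a) numbers (72-PRB BWP, 4 PRBs per RBG, an 18-bit bitmap, bit value 1 meaning allocated); the helper name is illustrative.

```python
def type0_allocated_prbs(rbg_bitmap, bwp_size_prbs=72, rbg_size=4):
    """Return the PRB indices allocated by a resource allocation type 0 bitmap.

    rbg_bitmap: iterable of 0/1 values, one bit per RBG (1 = RBG allocated).
    """
    prbs = []
    for rbg_index, bit in enumerate(rbg_bitmap):
        if bit == 1:
            start = rbg_index * rbg_size
            # The last RBG may be smaller if the BWP size is not a multiple of rbg_size.
            end = min(start + rbg_size, bwp_size_prbs)
            prbs.extend(range(start, end))
    return prbs


# Example: only RBG0 and RBG2 allocated -> PRB0-PRB3 and PRB8-PRB11.
bitmap = [1, 0, 1] + [0] * 15  # 18 bits for a 72-PRB BWP with 4-PRB RBGs
assert type0_allocated_prbs(bitmap) == [0, 1, 2, 3, 8, 9, 10, 11]
```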
ii) Frequency resource allocation type 1, which is a second type, may be a type indicating information on consecutive PRBs allocated according to a size of an active BWP or an initial BWP of the terminal. The information on consecutive PRBs may be a resource indication value (RIV) in which a start index (S) and a length (L) of the consecutive PRBs are jointly coded. Referring toFIG.13(b), when a BWP size is 50 PRBs, and a PUSCH is scheduled for the terminal from PRB2to PRB11among the 50 PRBs, a start index of consecutive PRBs may be 2 and a length may be 10. That is, the terminal may determine the start index and the length of consecutive PRBs in which the PUSCH is scheduled, based on an RIV value received from the base station. Specifically, the RIV may be calculated as N_BWP^size*(L−1)+S, where N_BWP^size is the size of the BWP configured for the terminal. For example, if the RIV value received by the terminal is 452, since 452=50*(10−1)+2, the terminal may determine that the start index of consecutive PRBs in which the PUSCH is scheduled is 2 and the length is 10.

Via DCI of DCI format 0_1 or 0_2 for scheduling of the PUSCH, the terminal may be configured, from a higher layer, to use only one of the aforementioned two frequency resource allocation types or to dynamically use both types. If the terminal is configured to dynamically use the two types, the terminal may determine the type to be used via 1 bit of the most significant bit (MSB) of the FDRA field of the DCI.

There may be an uplink shared channel transmission method based on a configured grant for URLLC transmission, etc. The uplink shared channel transmission method based on a configured grant may be described as grant-free transmission. The uplink shared channel transmission method based on a configured grant may be a method in which, if the base station configures, for the terminal, available resources for uplink transmission via a higher layer (i.e., RRC signaling), the terminal may transmit an uplink shared channel by using the configured resources. The uplink shared channel transmission method based on a configured grant may be classified into two types depending on whether DCI indicates activation and release. i) Type 1 of the uplink shared channel transmission method based on a configured grant may be a method of configuring a transmission method and resources in advance via a higher layer. ii) Type 2 of the uplink shared channel transmission method based on a configured grant may be a method of configuring configured grant-based transmission via a higher layer, and configuring, via DCI, a method and resources for actual transmission.

The uplink transmission method based on a configured grant may support URLLC transmission. Accordingly, uplink transmission may be repeatedly performed on multiple slots to ensure high reliability. In this case, a redundancy version (RV) sequence may be one of {0, 0, 0, 0}, {0, 2, 3, 1}, and {0, 3, 0, 3}, and an RV corresponding to a (mod(n−1, 4)+1)-th value may be used in an n-th repeated transmission. That is, an RV corresponding to a value obtained by adding 1 to the remainder of dividing n−1 by 4 may be used. In addition, the terminal configured to repeatedly transmit an uplink channel may start repeated transmission only in a slot having an RV value of 0. However, if the RV sequence is {0, 0, 0, 0} and the uplink channel is configured to be repeatedly transmitted in 8 slots, the terminal may not start repeated transmission in the 8th slot.
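A short Python sketch of the RIV coding discussed above. It is minimal and assumes the simple form RIV = N_BWP^size·(L−1)+S shown in the text, without any alternative encoding for other start/length combinations; the function names are illustrative.

```python
def encode_riv(start: int, length: int, bwp_size: int) -> int:
    """Jointly code the start index S and length L of consecutive PRBs."""
    return bwp_size * (length - 1) + start


def decode_riv(riv: int, bwp_size: int) -> tuple[int, int]:
    """Recover (start S, length L) from an RIV, inverting encode_riv."""
    length = riv // bwp_size + 1
    start = riv % bwp_size
    return start, length


# FIG. 13(b) example: 50-PRB BWP, PRB2..PRB11 -> S = 2, L = 10, RIV = 452.
assert encode_riv(2, 10, 50) == 452
assert decode_riv(452, 50) == (2, 10)
```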
The terminal may terminate repeated transmission when a UL grant having the same HARQ process ID is received or when the number of repeated transmissions configured via a higher layer is reached or a periodicity is exceeded. The UL grant may refer to DCI for PUSCH scheduling. As described above, in order to improve PUSCH transmission/reception reliability between a base station and a terminal in a wireless communication system, the base station may configure for the terminal to repeatedly transmit a PUSCH. FIG.14illustrates repeated transmission of a physical uplink shared channel according to an embodiment of the present disclosure. InFIG.14toFIG.27, actual #n refers to an actual PUSCH or PUCCH of index n, and combined #n refers to a combined PUSCH or PUCCH of index n. Repeated PUSCH transmission performed by a terminal may be of two types. i) First, repeated PUSCH transmission type A will be described. When a terminal receives DCI of DCI format 0_1 or 0_2 included in a PDCCH for PUSCH scheduling from a base station, the terminal may repeatedly transmit a PUSCH on K consecutive slots. A K value may be configured from a higher layer or may be a value included in a TDRA field of the DCI so as to be configured for the terminal. For example, referring toFIG.14A, the terminal may receive the PDCCH for PUSCH scheduling in slot n, and a K2 value may be configured from DCI included in the received PDCCH. In this case, if the K2 value is 2 and the K value is 4, the terminal may start repeated PUSCH transmission in slot n+K2, and may repeatedly transmit a PUSCH until slot n+K2+K−1. That is, the terminal starts repeated PUSCH transmission in slot n+2 and repeatedly transmits a PUSCH until slot n+5. In this case, time and frequency domain resources in which the PUSCH is transmitted in each slot may be the same as those indicated in the DCI. That is, the PUSCH may be transmitted in the same symbol and PRB(s) within a slot. ii) Next, repeated PUSCH transmission type B will be described. Repeated PUSCH transmission type B may be a type used for the terminal to perform low-latency repeated PUSCH transmission in order to satisfy URLLC requirements, etc. The terminal may be configured with a symbol (S) in which repeated PUSCH transmission starts and a length (L) of the repeated PUSCH transmission, via the TDRA field of the DCI transmitted by the base station. In this case, the starting symbol (S) and the length (L) may be for a temporarily obtained nominal PUSCH rather than an actual PUSCH actually transmitted by the terminal. A separate symbol may not exist between nominal PUSCHs configured to be repeatedly transmitted. That is, nominal PUSCHs may be consecutive in the time domain. The terminal may determine an actual PUSCH from the nominal PUSCHs. One nominal PUSCH may be determined to be one or multiple actual PUSCHs. The base station may configure, for the terminal, symbols unavailable for repeated PUSCH transmission type B. Symbols unavailable for repeated PUSCH transmission type B may be described as invalid symbols. The terminal may exclude invalid symbols from among resources configured to transmit nominal PUSCHs. As described above, nominal PUSCHs are configured to be repeatedly transmitted on consecutive symbols, but if invalid symbols are excluded, resources for nominal PUSCH transmission become inconsecutive. An actual PUSCH may be configured to be transmitted on consecutive symbols configured for one nominal PUSCH transmission except for invalid symbols. 
In this case, if consecutive symbols cross a slot boundary, an actual PUSCH actually transmitted based on the slot boundary may be divided. Invalid symbols may include downlink symbols configured for the terminal by the base station.

Referring toFIG.14B, the terminal may be scheduled with PUSCH transmission having a length of 5 symbols starting from a 12th symbol of a first slot (slot n), and may be configured with 4 times of type B repeated transmission. In this case, resources scheduled for a first nominal PUSCH (nominal #1) may include symbol (n,11), symbol (n,12), symbol (n,13), symbol (n+1,0), and symbol (n+1,1). Resources scheduled for a second nominal PUSCH (nominal #2) may include symbol (n+1,2), symbol (n+1,3), symbol (n+1,4), symbol (n+1,5), and symbol (n+1,6). Resources scheduled for a third nominal PUSCH (nominal #3) may include symbol (n+1,7), symbol (n+1,8), symbol (n+1,9), symbol (n+1,10), and symbol (n+1,11). Resources scheduled for a fourth nominal PUSCH (nominal #4) may include symbol (n+1,12), symbol (n+1,13), symbol (n+2,0), symbol (n+2,1), and symbol (n+2,2). In this case, symbol (n,k) represents symbol k of slot n. That is, k may be a value from 0 to 13 for a normal CP, and may be a value from 0 to 11 for an extended CP. Invalid symbols may be configured to be symbols6and7of slot n+1. In this case, in order to determine an actual PUSCH, a last symbol of the second nominal PUSCH (nominal #2) may be excluded, and a first symbol of the third nominal PUSCH (nominal #3) may be excluded. The first nominal PUSCH (nominal #1) may be divided into two actually transmitted actual PUSCHs (actual #1and actual #2) by a slot boundary. Each of the second nominal PUSCH (nominal #2) and the third nominal PUSCH (nominal #3) may constitute one actual PUSCH (actual #3and actual #4, respectively) by combining consecutive symbols except for the invalid symbol. Finally, the fourth nominal PUSCH (nominal #4) is divided into two actually transmitted (actual) PUSCHs (actual #5and actual #6) by a slot boundary. The terminal finally transmits the actually transmitted (actual) PUSCHs. One actual PUSCH should include at least one DMRS symbol. Accordingly, when repeated PUSCH transmission type B is configured, if the total length of an actual PUSCH is one symbol, the actual PUSCH may be omitted without being transmitted. This is because an actual PUSCH with one symbol may not include information other than a DMRS.

In order to obtain diversity gain in the frequency domain, frequency hopping may be configured for uplink channel transmission. For repeated PUSCH transmission type A, one of intra-slot frequency hopping, in which frequency hopping is performed within a slot, and inter-slot frequency hopping, in which frequency hopping is performed in each slot, may be configured for the terminal. If intra-slot frequency hopping is configured for the terminal, the terminal may divide the PUSCH in half in the time domain in a slot for transmitting the PUSCH and transmit one half of the PUSCH in a scheduled PRB, and may transmit the other half in a PRB obtained by adding an offset value to the scheduled PRB. In this case, two or four offset values may be configured according to an active BWP size via a higher layer, and one of the values may be configured for (indicated to) the terminal via DCI.
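The division of nominal PUSCHs into actual PUSCHs described above (splitting at slot boundaries and invalid symbols, and omitting single-symbol segments) can be sketched in Python as follows. It is a minimal sketch that represents a PUSCH as a list of (slot, symbol) pairs; all names and the data representation are illustrative.

```python
def split_into_actual_pusch(nominal_symbols, invalid_symbols):
    """Split one nominal PUSCH into actual PUSCH segments.

    nominal_symbols: ordered list of (slot, symbol) pairs of the nominal PUSCH.
    invalid_symbols: set of (slot, symbol) pairs that cannot be used.
    Segments are broken at invalid symbols and at slot boundaries, and
    one-symbol segments are dropped (they could carry only the DMRS).
    """
    segments, current = [], []
    for slot, sym in nominal_symbols:
        # Start a new segment at an invalid symbol or when the slot changes.
        if (slot, sym) in invalid_symbols or (current and slot != current[-1][0]):
            if current:
                segments.append(current)
            current = []
        if (slot, sym) not in invalid_symbols:
            current.append((slot, sym))
    if current:
        segments.append(current)
    return [seg for seg in segments if len(seg) > 1]


# FIG. 14B, nominal #4: symbols 12-13 of slot n+1 and symbols 0-2 of slot n+2.
n = 0
nominal4 = [(n + 1, 12), (n + 1, 13), (n + 2, 0), (n + 2, 1), (n + 2, 2)]
print(split_into_actual_pusch(nominal4, invalid_symbols=set()))
# -> [[(1, 12), (1, 13)], [(2, 0), (2, 1), (2, 2)]]  (actual #5 and actual #6)
```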
If inter-slot frequency hopping is configured for the terminal, the terminal may transmit the PUSCH in a scheduled PRB in a slot having an even-numbered slot index, and may transmit the PUSCH in a PRB obtained by adding an offset value to the scheduled PRB in an odd-numbered slot. For repeated PUSCH transmission type B, one of inter-repetition frequency hopping, in which frequency hopping is performed at a nominal PUSCH boundary, and inter-slot frequency hopping, in which frequency hopping is performed in every slot, may be configured for the terminal. If inter-repetition frequency hopping is configured for the terminal, the terminal may transmit actual PUSCH(s) corresponding to an odd-numbered nominal PUSCH on a scheduled PRB, and the terminal may transmit actual PUSCH(s) corresponding to an even-numbered nominal PUSCH on a PRB obtained by adding an offset value to the scheduled PRB. In this case, two or four offset values may be configured according to an active BWP size via a higher layer, and one of the values may be configured for (indicated to) the terminal via DCI. If inter-slot frequency hopping is configured for the terminal, the terminal may transmit the PUSCH in a scheduled PRB in a slot having an even-numbered slot index, and may transmit the PUSCH in a PRB obtained by adding an offset value to the scheduled PRB in an odd-numbered slot. When the terminal performs repeated PUSCH transmission, if a symbol scheduled for PUSCH transmission in a specific slot overlaps with a semi-statically configured DL symbol or a symbol configured for reception of an SS/PBCH block, the terminal may not transmit an overlapping PUSCH on a slot including the overlapping symbol. In addition, the overlapping PUSCH may be delayed and may not be transmitted even on a subsequent slot. If the terminal receives DCI of DCI format 1_0, 1_1, or 1_2 for PUCCH scheduling, the terminal needs to transmit a PUCCH to the base station. In this case, the PUCCH may include uplink control information (UCI), and UCI may include at least one of HARQ-ACK, a scheduling request (SR), and channel state information (CSI). HARQ-ACK may be HARQ-ACK indicating whether the terminal has successfully received two types of channels. A first type may be HARQ-ACK for a PDSCH when the terminal is scheduled with the PDSCH via DCI of DCI format 1_0, 1_1, or 1_2. A second type may be HARQ-ACK for DCI when the DCI of DCI format 1_0, 1_1, or 1_2 is DCI indicating release of a semi-persistently scheduled (SPS) PDSCH. For PUCCH transmission including HARQ-ACK, a “PDSCH-to-HARQ_feedback timing indicator” field of DCI may indicate K1 which is information (value) for a slot in which the scheduled PUCCH is transmitted. Here, K1 may be a non-negative integer value. DCI of DCI format 1_0 may indicate one of {0, 1, 2, 3, 4, 5, 6, 7} as a K1 value. The K1 value that can be indicated in DCI of DCI format 1_1 or 1_2 may be set (configured) from a higher layer. A method of determining a slot in which a PUCCH including a first type HARQ-ACK is transmitted will be described. An uplink slot overlapping with a last symbol in which a PDSCH corresponding to HARQ-ACK is transmitted may exist. In this case, if an index of the overlapping uplink slot is m, the terminal may transmit a PUCCH including HARQ-ACK on slot m+K1. The index of the uplink slot may be a value determined based on a subcarrier spacing of a BWP in which the PUCCH is transmitted. 
If the terminal is configured with downlink slot aggregation, a last symbol in which a PDSCH is transmitted may refer to a last scheduled symbol within a last slot among slots in which the PDSCH is transmitted.

FIG.15illustrates a method of scheduling a physical uplink control channel according to an embodiment of the present disclosure. Referring toFIG.15, a subcarrier spacing of a DL BWP in which a PDCCH is received, a subcarrier spacing of a DL BWP scheduled for a PDSCH, and a subcarrier spacing of a UL BWP in which a PUCCH is transmitted may be the same. A terminal may receive a PDCCH for scheduling of a PUCCH and a PDSCH from a base station in slot n. In this case, a K0 value and a K1 value may be configured (indicated) to be 2 and 3, respectively, by DCI included in the PDCCH received in slot n. For example, if the last symbol in which the PDSCH is transmitted is in slot n+K0 (i.e., slot n+2), the terminal may transmit HARQ-ACK for the PDSCH on slot n+2+K1 (i.e., slot n+5). In this case, HARQ-ACK for the PDSCH may be included in the PUCCH.

FIG.16illustrates repeated transmission of a physical uplink control channel according to an embodiment of the present disclosure. In order to secure wide coverage in the NR system, a terminal may repeatedly transmit a long PUCCH on 2, 4, or 8 slots. In this case, a format of the long PUCCH may be PUCCH format 1, 3, or 4. If the terminal repeatedly transmits the PUCCH, the same UCI may be repeatedly transmitted in every slot. Referring toFIG.16, when PDSCH reception is terminated in slot n, and a K1 value is 2, the terminal may transmit the PUCCH on slot n+K1 (i.e., slot n+2). When a base station configures the number of repeated PUCCH transmissions to be 4 (N_PUCCH^repeat=4), the terminal may repeatedly transmit the PUCCH from slot n+2 to slot n+5. In this case, symbol configurations of repeatedly transmitted PUCCHs may be the same. That is, repeatedly transmitted PUCCHs may start from the same symbol in each slot and may include the same number of symbols.

Even for PUCCH transmission, frequency hopping may be applied to obtain diversity gain in the frequency domain. If intra-slot frequency hopping is applied, the terminal may divide the time domain of a slot for transmitting the PUCCH in half and transmit a half of the PUCCH on a first PRB and may transmit the other half of the PUCCH on a second PRB. The first PRB and the second PRB may be configured via a higher layer for configuration of PUCCH resources. If inter-slot frequency hopping is applied, the terminal may transmit the PUCCH on the first PRB of a slot having an even-numbered slot index and may transmit the PUCCH on the second PRB of a slot having an odd-numbered slot index. In addition, when the terminal performs repeated PUCCH transmission, if a symbol of a specific slot scheduled for PUCCH transmission overlaps with a semi-statically configured DL symbol or a symbol configured for reception of an SS/PBCH block, the terminal may not transmit the PUCCH on a slot including the overlapping symbol. The terminal may delay transmission of an untransmitted PUCCH so as to transmit the same on a subsequent slot. In this case, if a symbol of a slot for delayed PUCCH transmission does not overlap with a semi-statically configured DL symbol or a symbol configured for reception of an SS/PBCH block, the terminal may transmit the PUCCH.
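A brief Python sketch of the PUCCH timing examples above (FIG.15 and FIG.16): the HARQ-ACK PUCCH slot is derived from K1, and repeated PUCCH transmissions occupy consecutive slots. This is a minimal sketch assuming equal subcarrier spacing on all involved BWPs; the names are illustrative.

```python
def harq_ack_pucch_slot(pdsch_last_slot: int, k1: int) -> int:
    """Slot carrying the HARQ-ACK PUCCH: K1 slots after the slot containing the
    last PDSCH symbol (equal SCS assumed on the DL and UL BWPs)."""
    return pdsch_last_slot + k1


def pucch_repetition_slots(first_slot: int, n_repeat: int) -> list[int]:
    """Consecutive slots used for repeated PUCCH transmission."""
    return list(range(first_slot, first_slot + n_repeat))


# FIG. 15: PDCCH in slot n=0, K0=2, K1=3 -> PDSCH ends in slot 2, PUCCH in slot 5.
assert harq_ack_pucch_slot(pdsch_last_slot=0 + 2, k1=3) == 5
# FIG. 16: PDSCH ends in slot n=0, K1=2, 4 repetitions -> slots 2..5.
assert pucch_repetition_slots(harq_ack_pucch_slot(0, 2), 4) == [2, 3, 4, 5]
```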
In the present specification, a problem related to repeated PUSCH or PUCCH transmission of a terminal for improving coverage performance may be described as a PUSCH or PUCCH coverage problem.

FIG.17illustrates a problem that occurs when a terminal repeatedly transmits a PUSCH in a TDD situation according to an embodiment of the present disclosure. Referring toFIG.17, in a TDD situation, slot "D" may be a slot including all symbols that are downlink symbols, slot "U" may be a slot including all symbols that are uplink symbols, and slot "S" may be a slot other than slot "D" and slot "U". In this case, slot "S" may include at least one flexible symbol. Repeated PUSCH transmission type B may be configured for slot "S" and slot "U". Even if a base station configures for (indicates to) a terminal that a length of a nominal PUSCH is 6 symbols, a length of an actual PUSCH may be 2, 3, or 4 symbols due to a slot boundary and an invalid symbol. Each repeatedly transmitted actual PUSCH may include one DMRS symbol. If one DMRS symbol is mapped per actual PUSCH, a data symbol transmitted in the actual PUSCH may have a length of 1, 2, or 3 symbols. Compared to 6-symbol PUSCH transmission, the terminal needs to use a higher code rate when transmitting a transport block (TB) of the same number of bits. Therefore, even if repeated transmission is configured to improve coverage performance, because a high code rate is used, there is a problem in securing coding gain. That is, the terminal repeatedly transmitting a PUSCH according to repeated PUSCH transmission type B does not solve a coverage problem. In addition, since a PUSCH including a small number of symbols should include at least one DMRS symbol, a DMRS overhead becomes greater as the number of symbols constituting an actual PUSCH becomes fewer, and therefore coverage performance for an uplink channel and signal transmitted by a terminal located at a cell-edge may be degraded.

FIG.18illustrates a problem that occurs when a terminal repeatedly transmits a PUCCH in a TDD situation according to an embodiment of the present disclosure. Referring to case a ofFIG.18, repeated PUCCH transmission in a TDD situation may be configured on slot "S" and slot "U". A PUCCH having a total symbol length of 4 from symbol10to symbol13within a slot may be configured, and repeated PUCCH transmissions having the same position and length may be performed over two slots. That is, a first repeated PUCCH transmission may be performed on symbol10to symbol13in a first slot, and a second repeated PUCCH transmission may be performed on symbol10to symbol13in a second slot. In this case, a zeroth symbol to a ninth symbol in the second slot may not be used for repeated PUCCH transmission. Therefore, when a UL symbol available for repeated PUCCH transmission is restricted, a coverage problem may occur. For repeated PUCCH transmission with high reliability, a restricted UL symbol (a symbol unavailable for repeated PUCCH transmission) needs to be used.

Hereinafter, a solution for improving coverage performance according to repeated PUSCH transmission type B and repeated PUCCH transmission described with reference toFIG.17andFIG.18will be described. In order to solve a coverage problem that occurs during repeated PUSCH transmission, multiple actual PUSCHs may be combined and transmitted. Hereinafter, for convenience of description, an actual PUSCH may not itself be actually transmitted; instead, a PUSCH determined according to a method described below may be actually transmitted.
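The code-rate penalty described with reference to FIG.17 can be made concrete with a short Python sketch. It is a rough illustration only: it assumes one PRB (12 subcarriers), QPSK (2 bits per RE), one DMRS symbol per actual PUSCH, and it ignores all other overheads; the transport block size and all names are illustrative.

```python
def approx_code_rate(tb_bits: int, pusch_symbols: int, dmrs_symbols: int = 1,
                     prbs: int = 1, bits_per_re: int = 2) -> float:
    """Rough code rate: TB bits divided by the bits carried on non-DMRS REs."""
    data_res = (pusch_symbols - dmrs_symbols) * prbs * 12
    return tb_bits / (data_res * bits_per_re)


tb = 48  # transport block size in bits (illustrative)
# A 6-symbol nominal PUSCH vs. the 2-symbol actual PUSCH of FIG. 17:
print(round(approx_code_rate(tb, 6), 2))  # -> 0.4 (5 data symbols: feasible)
print(round(approx_code_rate(tb, 2), 2))  # -> 2.0 (1 data symbol: not decodable)
```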
One or multiple actual PUSCHs may be combined to constitute combined actual PUSCH(s), and the combined actual PUSCH(s) may be transmitted. Actual PUSCHs consecutive in the time domain may be combined to constitute one combined actual PUSCH. Being consecutive in the time domain may refer to a case in which there is no symbol between two consecutive actual PUSCHs. When the terminal combines and transmits repeatedly transmitted PUSCHs, the total number of symbols of PUSCHs including repetition transmission should not exceed a preconfigured number of symbols. That is, the total number of symbols of a combined actual PUSCH transmitted for coverage improvement may not exceed a preconfigured number of symbols. The preconfigured number of symbols may be a value configured for the terminal by the base station. In addition, the preconfigured number of symbols may be a maximum number of symbols constituting a slot. The maximum number of symbols constituting a slot may be 14 for a normal CP and may be 12 for an extended CP. FIG.19illustrates a method of combining repeatedly transmitted PUSCHs according to an embodiment of the present disclosure. Referring toFIG.19(a), a preconfigured number of symbols may be 14. Actual PUSCH #1to actual PUSCH #3may be combined to constitute combined PUSCH #1, and actual PUSCH #4and actual PUSCH #5may be combined to constitute combined PUSCH #2. Actual PUSCH #1to actual PUSCH #6include a total of 15 symbols. Accordingly, a second symbol (symbol13in a second slot) is a symbol exceeding 14 symbols, i.e., the preconfigured number of symbols, and may be thus dropped. Therefore, a first symbol (symbol12in the second slot) of actual PUSCH #6includes one symbol, and may be thus dropped according to PUSCH mapping type B. Referring toFIG.19(b), the number of symbols constituting a PUSCH may not be restricted. Therefore, two symbols (symbols12and13in the second slot) of actual PUSCH #6are consecutive symbols and may be combined to constitute combined PUSCH #3, and a terminal may also transmit combined PUSCH #3to a base station. FIG.20illustrates a method of combining repeatedly transmitted PUSCHs according to an embodiment of the present disclosure. When configuring the described combined PUSCH, actual PUSCHs may be combined in consideration of a slot boundary. Referring toFIG.20A, a preconfigured number of symbols may be 14. Consecutive symbols from symbol10of a first slot, in which actual PUSCHs are transmitted, may be combined, wherein the symbols may be combined based on slot boundaries. That is, actual PUSCH #1may constitute combined PUSCH #1, subsequent actual PUSCH #2and actual PUSCH #3may constitute combined PUSCH #2, and actual PUSCH #4and actual PUSCH #5may constitute combined PUSCH #3. UnlikeFIG.19, since a slot boundary exists between actual PUSCH #1and actual PUSCH #2, combined PUSCH #1may include only actual #1. A second symbol (symbol13in a second slot) of actual PUSCH #6is a symbol exceeding 14 symbols, i.e., the preconfigured number of symbols, and may be thus dropped. Therefore, a first symbol (symbol12in the second slot) of actual PUSCH #6includes one symbol, and may be thus dropped according to PUSCH mapping type B. Referring toFIG.20B, the number of symbols constituting a PUSCH may not be restricted. Therefore, two symbols (symbols12and13in the second slot) of actual PUSCH #6are consecutive symbols and may be combined to constitute combined PUSCH #4, and a terminal may also transmit combined PUSCH #4to a base station. 
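The slot-boundary-aware combining of FIG.20 can be illustrated with the following Python sketch. It is a minimal sketch that merges time-consecutive actual PUSCHs lying in the same slot into one combined PUSCH; the representation of an actual PUSCH as a list of (slot, symbol) pairs and all names are illustrative, and the preconfigured total-symbol limit is not modeled here.

```python
def combine_actual_pusch(actual_pusch_list):
    """Combine time-consecutive actual PUSCHs that lie in the same slot.

    actual_pusch_list: list of actual PUSCHs, each an ordered list of
    (slot, symbol) pairs. Returns the list of combined PUSCHs.
    """
    def follows(prev_last, next_first):
        # Consecutive in time and within the same slot (no slot boundary crossed).
        return next_first[0] == prev_last[0] and next_first[1] == prev_last[1] + 1

    combined = []
    for actual in actual_pusch_list:
        if combined and follows(combined[-1][-1], actual[0]):
            combined[-1] = combined[-1] + actual
        else:
            combined.append(list(actual))
    return combined


# Three actual PUSCHs: symbols 10-13 of slot 0, then symbols 0-2 and 3-6 of slot 1.
actuals = [[(0, 10), (0, 11), (0, 12), (0, 13)],
           [(1, 0), (1, 1), (1, 2)],
           [(1, 3), (1, 4), (1, 5), (1, 6)]]
print([len(c) for c in combine_actual_pusch(actuals)])  # -> [4, 7]
```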
In this case, the number of symbols constituting the combined PUSCH may be restricted. For example, the restricted number of symbols may be 2 to 14. After generating one combined PUSCH by combining actual PUSCHs of a specific unit, the terminal may transmit the combined PUSCH. The specific unit may be at least one of a set of symbols, a slot, or a set of slots, for example, when the specific unit is a slot, actual PUSCHs in the slot may be combined to constitute one combined PUSCH. If the specific unit is a set of N symbols, the terminal may determine the set of symbols and combine actual PUSCHs in the set of symbols so as to configure one combined PUSCH. The set of symbols may be sequentially grouped by N symbols from a first symbol of a 10 ms radio frame or a slot. N may be a divisor of the number of symbols constituting a slot. For example, N may be 7 for a normal CP and may be 6 for an extended CP. The base station may configure (indicate), for the terminal, the number of actual PUSCHs constituting a combined PUSCH. A combined PUSCH may be configured by combining actual PUSCHs according to the configured number. For example, if the configured number is K, a combined PUSCH may be configured by combining K actual PUSCHs starting from a first actual PUSCH. If the total number of actual PUSCHs is not a multiple of K, one of combined PUSCHs may include actual PUSCHs of the number corresponding to a remainder obtained by dividing the total number of actual PUSCHs by K. Actual PUSCHs may be indexed according to a time sequence. A combined PUSCH may be configured by combining actual PUSCHs corresponding to (or included in) one nominal PUSCH. One nominal PUSCH may be divided into one or multiple actual PUSCHs due to a slot boundary or an invalid symbol. Multiple actual PUSCHs obtained by division of one nominal PUSCH may be combined to constitute one combined PUSCH. i) When multiple actual PUSCHs obtained by division of one nominal PUSCH are combined to constitute one combined PUSCH, a slot boundary may be considered. That is, a combined PUSCH may be configured by combining only actual PUSCHs in the same slot. In other words, actual PUSCHs in different slots constitute different combined PUSCHs. ii) When multiple actual PUSCHs obtained by division of one nominal PUSCH are combined to constitute one combined PUSCH, time continuity may be considered. That is, a combined PUSCH may be configured by only consecutive actual PUSCHs. In this case, actual PUSCHs consecutive in the time domain included in different slots may be combined to constitute one combined PUSCH. That is, actual PUSCHs inconsecutive in the time domain constitute different combined PUSCHs. If actual PUSCHs consecutive in the time domain constitute one combined PUSCH regardless of a slot boundary, the number of symbols constituting the combined PUSCH may be restricted. For example, the number of symbols constituting a combined PUSCH may be restricted to a maximum number of symbols constituting one slot or the number of symbols constituting a slot required for coverage extension. The base station may configure (indicate), for the terminal, a minimum number of symbols constituting a combined PUSCH. The base station may determine the minimum number of symbols constituting a combined PUSCH by considering at least one of a DMRS overhead, a TB size, and a code rate. That is, a combined PUSCH may be configured by combining actual PUSCHs so as to have a length greater than or equal to the minimum number. 
For example, when the minimum number is M and lengths of actual PUSCHs are A1, A2, and A3, respectively, if A1 is smaller than M, since the minimum number of symbols constituting a combined PUSCH is not satisfied, the actual PUSCH of length A1 may be combined with the actual PUSCH of length A2 to constitute the combined PUSCH. If A1+A2 is still smaller than M, a combined PUSCH may be configured by combining the actual PUSCH of length A3. In other words, if a length of an actual PUSCH or a length of a combined PUSCH is greater than or equal to M, additional actual PUSCH may not be combined. The base station may configure (indicate), for the terminal, a maximum number of symbols constituting a combined PUSCH. The base station may determine the maximum number of symbols constituting a combined PUSCH by considering at least one of a DMRS overhead, a TB size, and a code rate. In this case, the maximum number may be 14 symbols. That is, a combined PUSCH may be configured by combining actual PUSCHs so as to have a length smaller than or equal to the maximum number. For example, when the maximum number is M and lengths of actual PUSCHs are A1, A2, and A3, respectively, if A1 is smaller than M, but A1+A2 is greater than M, since A1+A2 exceeds the maximum number of symbols, the actual PUSCH of length A1 may not be combined with the actual PUSCH of length A2. If A1+A2 is smaller than M, since A1+A2 does not exceed the maximum number of symbols, the actual PUSCH of length A1 may be combined with the actual PUSCH of length A2 to constitute a combined PUSCH. Whether to combine the actual PUSCH of length A3 may also be determined in the same manner. Accordingly, the length of the combined PUSCH may be maintained below a certain symbol length. In other words, the terminal may not transmit a combined PUSCH exceeding a certain length. The base station may configure (indicate), for the terminal, a minimum length of an actual PUSCH to be coupled. For example, for repeated PUSCH transmission type B, an actual PUSCH having a length of one symbol may be dropped or omitted without being transmitted. Therefore, the actual PUSCH that is dropped or omitted may be transmitted in combination with another actual PUSCH. For example, if a minimum length of an actual PUSCH is M and lengths of actual PUSCHs are A1, A2, and A3, an actual PUSCH having a length smaller than M from among A1, A2, and A3 may be combined with another adjacent actual PUSCH to constitute a combined PUSCH. In this case, the number of combined actual PUSCHs may be two. i) An actual PUSCH having a length smaller than the minimum length may be combined with an actual PUSCH having a shorter length from among two adjacent actual PUSCHs. For example, actual PUSCH #2ofFIG.17may be combined with actual PUSCH #3having a shorter length from among actual PUSCH #1and actual PUSCH #3. The terminal may efficiently use a dropped or omitted resource by combining a dropped or omitted actual PUSCH with another actual PUSCH and transmitting the same. In addition, by combining actual PUSCHs, a DMRS overhead may be reduced, resulting in an increase in a data transmission rate. ii) An actual PUSCH having a length smaller than the minimum length may be combined with an actual PUSCH having a longer length among two adjacent actual PUSCHs. For example, actual PUSCH #2ofFIG.17may be combined with actual PUSCH #1having a longer length among actual PUSCH #1and actual PUSCH #3. 
A PUSCH may be transmitted in a resource of a longer time domain, and this is effective in extending coverage. iii) An actual PUSCH having a length smaller than the minimum length may be combined with an actual PUSCH located earlier in time among two adjacent actual PUSCHs. Since a PUSCH is transmitted for a long time from an earlier time domain resource, coverage is extended and delay is reduced. iv) An actual PUSCH having a length smaller than the minimum length may be combined with an actual PUSCH located later in time among two adjacent actual PUSCHs. For PUSCH transmission that is not sensitive to a delay, a PUSCH may be transmitted on a long time resource, which is advantageous for coverage extension. A combined PUSCH may be configured by combining symbols included in nominal PUSCH(s). In this case, the described procedure of dividing nominal PUSCH(s) into actual PUSCHs may be omitted. That is, a combined PUSCH may be generated directly from nominal PUSCH(s). i) The base station may configure (indicate), for the terminal, a minimum number of symbols constituting a combined PUSCH. The terminal may determine the number of symbols included in nominal PUSCH(s). In this case, an invalid symbol may be excluded. A combined PUSCH may include the minimum number of symbols among symbols included in the nominal PUSCH(s). Since this value is only a minimum, the combined PUSCH may include more symbols than the minimum number. A combined PUSCH may be configured in consideration of consecutive symbols and/or a slot boundary. Specifically, a combined PUSCH is configured by the minimum number of symbols among symbols included in nominal PUSCH(s), and if there are consecutive symbols subsequent to a last symbol among the minimum number of symbols, the combined PUSCH may be configured by additionally combining the consecutive symbols. In this case, if the consecutive symbols cross a slot boundary, symbols crossing the slot boundary may not be combined. That is, additionally combined symbols may be symbols within the same slot. ii) The base station may configure (indicate), for the terminal, a maximum number of symbols constituting a combined PUSCH. That is, if the number of symbols constituting a combined PUSCH exceeds the maximum number, an additional combined PUSCH may be newly configured. For example, the maximum number may be 14 or may be a maximum number of symbols constituting X slots. iii) The base station may configure (indicate), for the terminal, the number of configurable combined PUSCHs. The terminal may determine the number of symbols constituting nominal PUSCH(s). In this case, an invalid symbol may be excluded. For example, if the number of symbols constituting nominal PUSCH(s) is S and the number of configurable combined PUSCHs is Y, a combined PUSCH may include floor(S/Y) or ceil(S/Y) symbols. floor(x) is a function that returns the largest integer among integers equal to or smaller than x. ceil(x) is a function that returns the smallest integer among integers equal to or larger than x. Hereinafter, a frequency hopping method for obtaining diversity gain when the terminal combines and transmits multiple actual PUSCHs will be described. i) The terminal may transmit an odd-numbered combined PUSCH in a first PRB(s) and may transmit an even-numbered combined PUSCH in a second PRB(s). The base station may configure, for the terminal, an offset value for a PRB interval of the first PRB(s) and the second PRB(s), and the terminal may transmit a combined PUSCH, based on the offset value. 
ii) The terminal may divide a combined PUSCH into two or more parts in the time domain, and may transmit the divided combined PUSCH via frequency hopping. For example, the combined PUSCH may be divided into two parts in the time domain. If the divided two parts are referred to as a first hop and a second hop, a difference between symbols constituting the first hop and the second hop may be configured to be minimum. If the number of symbols of the combined PUSCH is NPUSCHsymb, the number of symbols constituting the first hop may be floor(NPUSCHsymb/2), and the number of symbols constituting the second hop may be NPUSCHsymb−floor(NPUSCHsymb/2). Alternatively, the number of symbols constituting the first hop may be ceil(NPUSCHsymb/2), and the number of symbols constituting the second hop may be NPUSCHsymb−ceil(NPUSCHsymb/2). In this case, the first hop may be transmitted on the first PRB(s), and the second hop may be transmitted on the second PRB(s). The base station may configure, for the terminal, an offset value for a PRB interval of the first PRB(s) and the second PRB(s), and the terminal may transmit a combined PUSCH, based on the offset value. iii) The base station may configure, for the terminal, a minimum number of symbols per hop for transmission of a combined PUSCH. The terminal may transmit a combined PUSCH via frequency hopping by comparing the number of symbols constituting the combined PUSCH with the minimum number of symbols per hop. For example, if the number of symbols of the combined PUSCH is fewer than or equal to the minimum number of symbols per hop, the terminal may transmit the combined PUSCH without frequency hopping. Conversely, if the number of symbols of the combined PUSCH is more than the minimum number of symbols per hop, the terminal may transmit the combined PUSCH via divided two or more hops. In this case, a method of transmitting the divided two or more hops may be the same as ii) described above. Division may be performed into two or more hops, based on the minimum number of symbols per hop. That is, hops may be configured by bundling symbols constituting a combined PUSCH as many symbols as the minimum number of symbols. If the number of symbols in a combined PUSCH is not a multiple of the minimum number of symbols per hop, the number of symbols constituting any one of the divided hops may be equal to a remainder obtained by dividing the number of symbols constituting the combined PUSCH by the minimum number of symbols per hop. Frequency hopping described below may be applied regardless of a combined PUSCH. FIG.21toFIG.26illustrate a frequency hopping method of a repeatedly transmitted PUSCH, according to an embodiment of the present disclosure. Frequency hopping may be performed by dividing a total length of a repeatedly transmitted PUSCH in half in the time domain. i) A hopping boundary for frequency hopping may be determined by dividing a total length of a repeatedly transmitted PUSCH in half, and the repeated PUSCH may be transmitted based on the determined hopping boundary. If the total length of the repeatedly transmitted PUSCH is NPUSCHsymb, the number of PUSCH symbols constituting a first hop is floor(NPUSCHsymb/2), and the number of PUSCH symbols constituting a second hop may be NPUSCHsymb−floor(NPUSCHsymb/2) (method a). Alternatively, the number of PUSCH symbols constituting a first hop may be ceil(NPUSCHsymb/2), and the number of PUSCH symbols constituting a second hop may be NPUSCHsymb−ceil(NPUSCHsymb/2) (method b). 
For example, the total length of repeatedly transmitted PUSCHs may be the sum of lengths of respective actual PUSCHs. Referring toFIG.21, if repeated PUSCH transmission type B is configured, a length of all actual PUSCHs, which is obtained by adding lengths of respective actual PUSCHs, may be 15 (i.e., the sum of a length of actual PUSCH #1to a length of actual PUSCH #6). If described method a is applied, the number of symbols constituting a first hop may be 7 (from symbol10in a first slot to symbol2in a second slot). The number of symbols constituting a second hop may be 8 (symbol3in the second slot, symbols6to10in the second slot, and symbols12,13in the second slot). In this case, if a scheme of repeated PUSCH transmission type B is applied to the second hop, as described above, for a PUSCH including one symbol, the symbol is a DMRS symbol, and therefore the terminal may not transmit the PUSCH (a first symbol of the second hop) including one symbol. If described method b is applied, the first hop may include 8 symbols and the second hop may include 7 symbols. Accordingly, the terminal may transmit a PUSCH without a dropped symbol. As another example, if the base station and the terminal know about all of symbol configuration information, a configuration of an invalid symbol, etc., the terminal may determine a hopping boundary so that a PUSCH including one symbol is not generated. That is, referring toFIG.21, if the terminal and the base station know about a symbol configuration, the terminal may configure the first hop with 8 symbols and configure the second hop with 7 symbols by applying method b, so as to transmit the PUSCH without a dropped symbol. In addition, a total length of a repeatedly transmitted PUSCH may be the same as a total length of a nominal PUSCH. Referring toFIG.22, a total length of nominal PUSCHs may be 18 symbols (Nominal #1to Nominal #3). A first hop may include 9 symbols (from symbol10in a first slot to symbol4in a second slot), and a second hop may include 9 symbols (from symbol5in the second slot to symbol13in the second slot). The terminal may transmit the first hop and the second hop via frequency hopping. ii) The total length of the repeatedly transmitted PUSCH in i) may be a length of one nominal PUSCH or a length of an actual PUSCH having a longest length from among actual PUSCHs. The first hop obtained by division via described i) and ii) may be transmitted on a first PRB(s), and the second hop may be transmitted on a second PRB(s). In the present specification, a PUSCH symbol or a PUCCH symbol may refer to a symbol in which a PUSCH or a PUCCH is transmitted. Consecutive PUSCH symbols may constitute an identical hop. If the base station configures repeated PUSCH transmission for the terminal, symbols to which consecutive actual PUSCHs are allocated may constitute one hop. In this case, the number of symbols constituting one hop may be a variable value rather than a fixed value. Referring toFIG.23, eight consecutive symbols (symbol10in a first slot to symbol3in a second slot) from a starting symbol (symbol10in the first slot) to an invalid symbol (symbol4in the second slot) of a repeatedly transmitted PUSCH may constitute one hop (first hop). Five consecutive symbols (symbol6in the second slot to symbol10in the second slot) from a symbol (symbol6in the second slot) of a subsequent repeatedly transmitted PUSCH to a subsequent invalid symbol (symbol11in the second slot) may constitute another hop (second hop). 
Two consecutive symbols starting from a symbol (symbol12in the second slot) of a subsequent repeatedly transmitted PUSCH may constitute another hop (a third hop). In this case, the first hop and the third hop may be transmitted on the same frequency domain resource or may be transmitted on different frequency domain resources. Even if consecutive symbols are included in different slots, the consecutive symbols are included in one hop so that a DMRS overhead is reduced compared to a case in which one hop includes only symbols in the same slot. However, the number of hops may be increased due to inclusion of an invalid symbol in one slot, and a DMRS overhead may be increased if a DMRS needs to be assigned for each hop. However, in a situation where a channel delay spread and a channel change on the time axis are not large within one slot, frequency domain resources in which odd-numbered hops (e.g., a first hop, a third hop, etc.) are transmitted may be configured to be always the same, and frequency domain resources in which even-numbered hops (e.g., a second hop, a fourth hop, etc.) are transmitted may be configured to be always the same. By configuring frequency domain resources, in which odd-numbered/even-numbered hops are transmitted, to be always the same, a problem that a DMRS overhead is increased due to an increase in hops can be solved. Based on a slot boundary, consecutive PUSCH symbols may constitute one hop. Referring toFIG.24, four consecutive symbols (symbol10to symbol13in a first slot) from a starting symbol (symbol10in the first slot) of a repeatedly transmitted PUSCH to a slot boundary may constitute a first hop, four consecutive symbols (symbol0to symbol3in a second slot) from a subsequent PUSCH symbol (symbol0in the second slot) to an invalid symbol (symbol4in the second slot) may constitute a second hop, five consecutive symbols (symbol6to symbol10in the second slot) from a subsequent PUSCH symbol (symbol6in the second slot) to a subsequent invalid symbol (symbol11in the second slot) may constitute a third hop, and two consecutive symbols (symbol12and symbol13in the second slot) from a subsequent PUSCH symbol (symbol12in the second slot) may constitute a fourth hop. As described above, the odd-numbered hops may be transmitted on the same frequency domain resource, and the even-numbered hops may be transmitted on the same frequency domain resource. This is effective in terms of compatibility because characteristics of NR, in which a transmission unit is configured and scheduling is performed in units of slots, can be maintained. One frequency hop may include a predetermined specific number of symbols. In this case, the specific number of symbols may be a maximum number that may constitute one hop. In other words, if the number of consecutive symbols is fewer than the specific number of symbols, one hop may include the number of consecutive symbols fewer than the specific number of symbols. In this case, the preconfigured specific number may be a value configured for the terminal by the base station. The predetermined specific number may be equal to a length of a nominal PUSCH. Since the length of the nominal PUSCH is fixed, one hop may include the same number of symbols as that of the nominal PUSCH in chronological order. In this case, a downlink symbol or an invalid symbol may be excluded from symbols constituting one hop. Referring toFIG.25, the number of symbols of one nominal PUSCH is 6 symbols. 
If PUSCH symbols consecutive in chronological order in the time domain constitute one hop, a first hop may include 6 symbols (symbol10in a first slot to symbol1in a second slot), a second hop may include subsequent 6 symbols (symbols2,3,6,7,8, and9in the second slot), and a third hop may include the remaining symbols (symbols12and13in the second slot). In this case, since consecutive symbols may be transmitted via one hop, symbol10in the second slot has no neighboring symbol to be grouped with in one hop. Accordingly, if repeated PUSCH transmission type B is applied, since symbol10in the second slot corresponds to a PUSCH having a length of one symbol, the PUSCH may not be transmitted. In this case, the first hop and the third hop may be transmitted in the same frequency domain resource. As another example, the preconfigured specific number may be any one of divisors of the total number of symbols of a repeatedly transmitted PUSCH. The total number of symbols of actual PUSCHs is N, and N may be a natural number that is not a prime number. The number of symbols constituting one hop may be a divisor of N other than 1 and N. That is, one hop may include the specific number of consecutive or inconsecutive symbols. In addition, after configuring a hop with the specific number of consecutive symbols, if a PUSCH having one symbol exists, the PUSCH having one symbol may be dropped. Specifically, the specific number of symbols may be i) a largest number among the divisors of N, except for 1 and N. By determining the largest number as the number of symbols constituting one hop, a PUSCH may be transmitted for a longer period of the time domain via the same PRB, so that coverage can be extended. Referring toFIG.26(a), when the total number (N) of symbols of actual PUSCHs is 15, 5, which is the largest of the divisors except for 1 and 15, may be determined as the number of symbols constituting one hop. That is, the terminal may configure one hop with five consecutive or inconsecutive PUSCH symbols in chronological order from a symbol (symbol10in a first slot) in which a repeatedly transmitted PUSCH starts. ii) The specific number of symbols may be a smallest number of the divisors of N, except for 1 and N. By determining the smallest number as the number of symbols constituting one hop, a hopping period may be shortened, and therefore transmission of hops on different PRBs may be performed frequently for a short period of the time domain. Referring toFIG.26(b), when the total number (N) of symbols of actual PUSCHs is 15, 3, which is the smallest of the divisors of 15 except for 1 and 15, may be determined as the number of symbols constituting one hop. That is, the terminal may configure one hop with three consecutive or inconsecutive PUSCH symbols in chronological order from a symbol (symbol10in a first slot) in which a repeatedly transmitted PUSCH starts. In this case, since symbol6and symbol10in the second slot correspond to PUSCHs having a length of one symbol, the PUSCHs may not be transmitted. In other words, after configuring one hop with the specific number of symbols regardless of whether the symbols are consecutive or not, a PUSCH having a symbol length of one and having no consecutive symbol may not be transmitted. The base station may configure (indicate), for the terminal, a specific unit based on which frequency hopping may be performed. 
That is, PUSCH symbols included in the specific unit may constitute one hop, and frequency hopping may be performed based on a boundary of the specific unit. The specific unit may be at least one of a symbol set, a slot set, a symbol set determined according to a nominal PUSCH, and a slot set determined according to a nominal PUSCH. If the specific unit is a symbol set, the base station may configure (indicate), for the terminal, the number (N) of symbols constituting the symbol set. The terminal may generate a symbol set by grouping N symbols starting from a first symbol of a radio frame. Scheduled PUSCHs that are repeatedly transmitted may constitute one hop according to the symbol set. That is, a length of one symbol set may be a length of one hop. PUSCHs included in an odd-numbered symbol set may be transmitted on a first PRB(s), and PUSCHs included in an even-numbered symbol set may be transmitted on a second PRB(s). If the specific unit is a symbol set determined according to a nominal PUSCH, the number (N) of symbols constituting the symbol set may be equal to the length of the nominal PUSCH. The terminal may generate a symbol set by grouping N symbols starting from a first symbol scheduled for the nominal PUSCH. In this case, the base station may configure (indicate), for the terminal, a natural number value (K) for adjustment of the number of symbols constituting the symbol set. The terminal may generate a symbol set by grouping N*K symbols starting from the first symbol scheduled for the nominal PUSCH. That is, the natural number K may extend the number of symbols included in the symbol set to a multiple of the length of the nominal PUSCH. Scheduled PUSCHs may constitute one hop according to the symbol set. That is, a length of one symbol set may be a length of one hop. PUSCHs included in an odd-numbered symbol set may be transmitted on a first PRB(s), and PUSCHs included in an even-numbered symbol set may be transmitted on a second PRB(s). If the specific unit is a slot set, the base station may configure (indicate), for the terminal, the number (N) of slots constituting the slot set. The terminal may generate a slot set by grouping N slots starting from a first slot of a radio frame. Scheduled PUSCHs may constitute one hop according to the slot set. That is, a length of one slot set may be a length of one hop. PUSCHs included in an odd-numbered slot set may be transmitted on a first PRB(s), and PUSCHs included in an even-numbered slot set may be transmitted on a second PRB(s). If the specific unit is a slot set determined according to a nominal PUSCH, the base station may configure (indicate), for the terminal, the number (N) of slots constituting the slot set. The terminal may generate a slot set by grouping N slots starting from a first slot scheduled for the nominal PUSCH. Scheduled PUSCHs may constitute one hop according to the slot set. That is, a length of one slot set may be a length of one hop. PUSCHs included in an odd-numbered slot set may be transmitted on a first PRB(s), and PUSCHs included in an even-numbered slot set may be transmitted on a second PRB(s). Likewise, the first hop may be transmitted on the first PRB(s), and the second hop may be transmitted on the second PRB(s). i) Frequency hopping may be determined based on the number of slots scheduled for a nominal PUSCH. 
If the number of slots scheduled for the nominal PUSCH is NPUSCHslot, the number of slots constituting the first hop may be floor(NPUSCHslot/2), and the number of slots constituting the second hop may be NPUSCHslot−floor(NPUSCHslot/2). Alternatively, the number of slots constituting the first hop may be ceil(NPUSCHslot/2), and the number of slots constituting the second hop may be NPUSCHslot−ceil(NPUSCHslot/2). In this case, the first hop may be configured starting from the slot scheduled for the nominal PUSCH. ii) Frequency hopping may be determined based on the number of slots scheduled for actual PUSCHs. If the number of slots scheduled for actual PUSCHs is NPUSCHslot, the number of slots constituting the first hop and the number of slots constituting the second hop may be determined in the same manner as i) described above. In this case, although a nominal PUSCH is scheduled, a slot from which all nominal PUSCH symbols have been excluded due to an invalid symbol may not be included in the NPUSCHslot. In this case, the first hop may be configured starting from the slot scheduled for the nominal PUSCH. iii) Frequency hopping may be determined based on the number of longest consecutive symbols among symbols consecutive in the time domain of actual PUSCHs. The actual PUSCH may be one or multiple repeatedly transmitted actual PUSCHs. That is, if the terminal is configured with repeated PUSCH transmission from the base station, frequency hopping may be determined based on actual PUSCHs. In this case, an actual PUSCH having the number of symbols fewer than the number of symbols configured for one hop by the terminal may not be hopped. For example, the terminal may configure one hop with as many symbols as the number of longest consecutive symbols of a PUSCH in the time domain. If the number of the longest PUSCH symbols is NPUSCHsymb,max, the numbers of symbols constituting the first hop and the second hop may be NPUSCHsymb,max. That is, the terminal may transmit, on the first PRB(s), PUSCHs transmitted in NPUSCHsymb,max symbols starting from the symbol scheduled for the PUSCH, and may transmit, on the second PRB(s), PUSCHs transmitted in subsequent NPUSCHsymb,max symbols. As another example, one hop may be configured with as many symbols as a certain number of symbols, the certain number being obtained by equally dividing the number of the longest PUSCH symbols in the time domain. If the number of the longest symbols is NPUSCHsymb,max, the number of symbols constituting the first hop is floor(NPUSCHsymb,max/2), and the number of symbols constituting the second hop is NPUSCHsymb,max−floor(NPUSCHsymb,max/2). Alternatively, the number of symbols constituting the first hop may be ceil(NPUSCHsymb,max/2), and the number of symbols constituting the second hop may be NPUSCHsymb,max−ceil(NPUSCHsymb,max/2). In this case, the first hop may be configured starting from a symbol scheduled for an actual PUSCH. iv) Frequency hopping may be determined based on the number of shortest consecutive symbols among symbols consecutive in the time domain of actual PUSCHs. There may be one actual PUSCH. That is, if the terminal is configured with PUSCH transmission from the base station, the terminal may determine frequency hopping based on actual PUSCHs. If the number of the shortest consecutive symbols is NPUSCHsymb,min, the numbers of symbols constituting the first hop and the second hop may be NPUSCHsymb,min. In this case, the first hop may be configured starting from a symbol scheduled for a PUSCH. 
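The splits in i) to iv) above amount to simple integer arithmetic on a slot count or on a run of consecutive PUSCH symbols. The following Python sketch illustrates them under that reading; the function names (split_into_two_hops, longest_consecutive_run) and the example values are illustrative and do not come from the specification.

```python
# Minimal sketch of the hop-size computations described above.
# Assumptions: inputs are plain integers / symbol indices; names are illustrative.
from math import floor, ceil

def split_into_two_hops(n_units, round_up_first=False):
    """Split n_units (slots or symbols) into a first and a second hop.

    round_up_first=False: first hop = floor(n/2), second hop = n - floor(n/2).
    round_up_first=True:  first hop = ceil(n/2),  second hop = n - ceil(n/2).
    """
    first = ceil(n_units / 2) if round_up_first else floor(n_units / 2)
    return first, n_units - first

def longest_consecutive_run(symbol_indices):
    """Length of the longest run of consecutive symbol indices (e.g., actual PUSCH symbols)."""
    longest = current = 0
    previous = None
    for s in sorted(symbol_indices):
        current = current + 1 if previous is not None and s == previous + 1 else 1
        longest = max(longest, current)
        previous = s
    return longest

# Example: 15 PUSCH symbols split into two hops -> (7, 8) or (8, 7).
assert split_into_two_hops(15) == (7, 8)
assert split_into_two_hops(15, round_up_first=True) == (8, 7)
```

The same helper can be applied to a slot count (NPUSCHslot) or to the longest or shortest consecutive symbol run found by longest_consecutive_run, matching alternatives i) to iv) above.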
Hereinafter, a method of determining the number and positions of symbols to which a DMRS of a combined PUSCH is mapped will be described. A DMRS symbol described in the present specification may refer to a symbol to which a DMRS is mapped. FIG.27illustrates a method of determining a position of a symbol to which a DMRS included in a repeatedly transmitted PUSCH is mapped, according to an embodiment of the present disclosure. A terminal may determine a position of a DMRS symbol by considering, as one transmission group, all or some of consecutive PUSCH symbols constituting a combined PUSCH. In this case, by applying only PUSCH mapping type B, the terminal may always map a DMRS to a first symbol among consecutive PUSCH symbols constituting one transmission group. If a base station configures (indicates) additional DMRS symbols for the terminal, the base station may configure, for the terminal, the number of the additional DMRS symbols. A position of an additional DMRS symbol may be determined according to a PUSCH mapping type. One transmission group may be consecutive PUSCH symbols or hops. Referring toFIG.27(a), the numbers of symbols of combined PUSCH #1, combined PUSCH #2, and combined PUSCH #3, each of which is one transmission group, may be 8, 5, and 2, respectively. The terminal may map the additional DMRS to the position of the symbol according to the PUSCH mapping type, based on the number of additional DMRSs configured by the base station. In this case, the number of additional DMRSs may be configured via a higher layer. For example, if the number of additional DMRS symbols is 0, a DMRS is mapped to only a first symbol of each transmission group. If the number of additional DMRS symbols is 1, a first symbol and a seventh symbol of combined PUSCH #1, a first symbol and a fifth symbol of combined PUSCH #2, and a first symbol of combined PUSCH #3may be DMRS symbols. If the number of additional DMRS symbols is 2, a first symbol, a fourth symbol, and a seventh symbol of combined PUSCH #1, a first symbol and a fifth symbol of combined PUSCH #2, and a first symbol of combined PUSCH #3may be DMRS symbols. If the number of additional DMRS symbols is 3, a first symbol, a fourth symbol, and a seventh symbol of combined PUSCH #1, a first symbol and a fifth symbol of combined PUSCH #2, and a first symbol of combined PUSCH #3may be DMRS symbols. A PUSCH having a length of 1 in the time domain may not be transmitted. Referring toFIG.27(b), if repeated PUSCH transmission via frequency hopping is configured, the number of symbols constituting one hop (transmission group) may be up to seven. Accordingly, a position of a DMRS symbol may be determined regardless of whether frequency hopping is configured. That is, a DMRS symbol may be located in the same manner as in the case where frequency hopping is not configured (seeFIG.27(a)). Hereinafter, descriptions will be provided for a method of performing new repeated PUCCH transmission in order to solve a coverage problem (a problem that the number of UL symbols available for repeated transmission is restricted) occurring when repeated PUCCH transmission is performed. A PUCCH format used for repeated PUCCH transmission described below may be PUCCH format 1, 3, or 4 including 4 or more symbols. FIG.28toFIG.30illustrate a repeated PUCCH transmission method according to an embodiment of the present disclosure. InFIG.28, actual #n refers to an actual PUCCH of index n, and virtual #n refers to a virtual PUCCH of index n. 
A PUCCH may be repeatedly transmitted regardless of a slot boundary. That is, a PUCCH may be repeatedly transmitted on multiple slots as well as on one slot. In other words, a PUCCH may be repeatedly transmitted in symbols including a slot boundary. Based on the number of repeated PUCCH transmissions and the number of symbols for a PUCCH, which are configured from the base station, the terminal may determine a time domain (window) in which a nominal PUCCH is transmitted. A determined nominal PUCCH may be divided into actual PUCCHs, based on a slot boundary, a DL symbol, and an invalid symbol. Unlike repeated PUSCH transmission type B, in order to guarantee repeated PUCCH transmission as much as possible, invalid symbols in a nominal PUCCH may include a virtual symbol, and the included virtual symbol may be transmitted in a UL symbol immediately subsequent to a symbol enabling PUCCH transmission. Referring toFIG.28, nominal PUCCHs may be divided into actual PUCCH #1to actual PUCCH #6, based on slot boundaries, DL symbols, and invalid symbols. In this case, invalid symbols (symbols4,5, and11in a second slot) in the nominal PUCCHs include virtual PUCCH #1, and virtual PUCCH #1may be transmitted on an earliest symbol of subsequent transmittable UL symbols. An actual PUCCH may include fewer than 4 symbols. Therefore, the terminal needs to generate a combined PUCCH having a length of at least 4 symbols by combining each of actual PUCCHs. This is because a PUCCH format used for repeated PUCCH transmission should include 4 to 14 symbols. For example, if a first actual PUCCH has a length fewer than 4 symbols and there is a second actual PUCCH adjacent to the first actual PUCCH in the time domain, the first actual PUCCH and the second actual PUCCH may be combined. In this case, being adjacent refers to being consecutive, and refers to a case where no symbol exists between the first actual PUCCH and the second actual PUCCH. Referring toFIG.28, actual PUCCH #2and actual PUCCH #3are adjacent. Since two invalid symbols (symbols4and5in the second slot) exist between actual PUCCH #3and actual PUCCH #4, actual PUCCH #3and actual PUCCH #4are not adjacent. There may be two adjacent actual PUCCHs. Referring toFIG.28, actual PUCCH #2is adjacent to actual PUCCH #1and actual PUCCH #3. Therefore, the terminal may select one PUCCH to be combined from among two adjacent actual PUCCHs. i) An actual PUCCH having a shorter length among two adjacent actual PUCCHs may be selected. Referring toFIG.28, actual PUCCH #2may be combined with actual PUCCH #3having a shorter length among actual PUCCH #1and actual PUCCH #3. An actual PUCCH including 3 or fewer symbols may be dropped, but may be transmitted via being combined, without being dropped. In addition, due to a short actual PUCCH being combined, a PUCCH DMRS overhead can be reduced and a data transmission rate can be thus increased. ii) A longer actual PUCCH may be selected from among two adjacent actual PUCCHs. Referring toFIG.28, actual PUCCH #2may be combined with actual PUCCH #1having a longer length among actual PUCCH #1and actual PUCCH #3. Since a longer actual PUCCH is selected and combined, a PUCCH can be transmitted in a longer time resource, resulting in extending coverage. iii) An actual PUCCH earlier in time among two adjacent actual PUCCHs may be selected. Referring toFIG.28, actual PUCCH #2may be combined with actual PUCCH #1earlier in time among actual PUCCH #1and actual PUCCH #3. 
Since PUCCH transmission is possible for a longer time from a preceding time resource, coverage can be extended and a delay for UCI transmission including HARQ-ACK can be reduced. iv) An actual PUCCH subsequent in time among two adjacent actual PUCCHs may be selected. Referring toFIG.28, actual PUCCH #2may be combined with actual PUCCH #3subsequent in time among actual PUCCH #1and actual PUCCH #3. In a case of PUCCH transmission including UCI that is not sensitive to delay, combining with an actual PUCCH that is subsequent in time enables PUCCH transmission in a longer time resource, so that coverage can be extended. A length of a combined PUCCH configured by combining the first actual PUCCH and the second actual PUCCH may be 14 or fewer symbols. The first actual PUCCH and the second actual PUCCH are not combined in a way resulting in the number of symbols exceeding 14 symbols. In other words, if the actual PUCCH selected via i) to iv) is the second actual PUCCH, and a combined PUCCH configured by combining the first actual PUCCH and the second actual PUCCH exceeds 14 symbols, the third actual PUCCH, which is the other adjacent actual PUCCH to be combined with the first actual PUCCH, may be selected. In this case, if the length of the first actual PUCCH is 3 symbols or fewer, and there is no adjacent third actual PUCCH, the terminal may drop the first actual PUCCH without transmitting the same. When the terminal repeatedly transmits a PUCCH including a slot boundary, the length of the repeatedly transmitted PUCCH may not exceed a preconfigured number of symbols. The preconfigured number of symbols may be a value configured for the terminal by the base station. The configured number of symbols may be a value that the base station may configure for the terminal or a maximum number of symbols constituting a slot. As another embodiment, when a PUCCH is transmitted on a resource including a slot boundary, the length of the PUCCH may not be restricted. That is, the terminal may transmit the PUCCH to the base station on a resource including a slot boundary with no restriction on the number of symbols. However, if the number of symbols is from 4 to 14 both inclusive, the PUCCH may be transmitted using the described long PUCCH format. In addition, when the PUCCH is configured with a resource including a slot boundary, the number of symbols available for PUCCH transmission may exceed 14. In this case, since the existing PUCCH format includes only 14 or fewer symbols, a new PUCCH format using more than 14 consecutive symbols is required (hereinafter, described as an extended PUCCH format). That is, the terminal may transmit, to the base station, a PUCCH configured in a form of an extended PUCCH format. Since a DMRS symbol and a subsequent symbol in which UCI is transmitted are consecutive in existing PUCCH format 1, an extended PUCCH format may be configured by partially modifying existing PUCCH format 1. For example, a PUCCH including 15 symbols may have a structure in which, in addition to 1 symbol to which a DMRS is mapped in the existing PUCCH format 1, a DMRS is additionally mapped to a symbol consecutive to the 1 symbol. A PUCCH including 16 symbols may have a structure in which 1 symbol of a DMRS and 1 symbol for UCI transmission are added to the existing PUCCH format 1. In the extended PUCCH format partially modified from existing PUCCH format 3 or PUCCH format 4, a position of a symbol to which a DMRS is mapped may be determined according to an increased number of symbols. 
For example, if 1 to 3 symbols are increased, the increased symbols may be configured by being mapped in the order of a UCI symbol, a DMRS symbol, and a UCI symbol. That is, if one symbol is increased, the increased symbol may be a UCI symbol, if two symbols are increased, the increased symbols may be a UCI symbol and a DMRS symbol, and if three symbols are increased, the increased symbol may be a UCI symbol, a DMRS symbol, and a UCI symbol. If four or more symbols are increased, the same configuration as that for existing PUCCH format 3 or PUCCH format 4 including 4 to 14 symbols may be applied to the increased symbols. The base station may configure a resource area for transmission of a repeatedly transmitted PUCCH, wherein multiple starting symbols and multiple lengths may be configured in the resource area. For example, two starting symbols (S1 and S2) and two lengths (L1 and L2) may be configured in one resource area in which a PUCCH is transmitted. The terminal may determine, from S1 and L1, symbols in which a first repetition PUCCH is transmitted. The terminal may determine, from S2 and L2, symbols in which a second repetition PUCCH is transmitted. In this case, UCI may be included in the first repetition PUCCH and the second repetition PUCCH. In addition, the base station may also additionally configure information on a slot index. In this case, a slot indicated by the slot index may be a slot in which multiple starting symbols and multiple lengths are configured. In this case, the first repetition PUCCH may be transmitted on a first slot, and the second repetition PUCCH may be transmitted on a second slot. If information on the slot index is not configured, the first repetition PUCCH may be transmitted on the first slot determined based on a K1 value, and the second repetition PUCCH may be transmitted on the second slot subsequent to the first slot. In this case, the second slot may be a slot immediately after the first slot. In addition, the second slot may be an earliest slot, in which PUCCH transmission is possible, after the first slot. That is, if the slot immediately after the first slot does not include a UL resource available for PUCCH transmission, the second PUCCH may be transmitted in a slot including a UL resource. As described above, the K1 value may be a value indicated by DCI. The base station may configure multiple PUCCH resources for the terminal, and one starting symbol and one length may be configured in each PUCCH resource. The terminal may determine symbols corresponding to the one starting symbol and one length from among the symbols of each slot in which a PUCCH is repeatedly transmitted, and may determine whether the determined symbols are available for PUCCH transmission. Repeated PUCCH transmission may be performed in a period having a longest consecutive symbol period from among the symbols available for PUCCH transmission. Referring toFIG.29, a base station may configure, for a terminal, a starting symbol (S) of 4 and a length (L) of 10, and may configure the terminal to repeatedly transmit a PUCCH during two slots. In other words, the base station configures PUCCH transmission to be performed using symbols4to13. However, there may be a case in which a PUCCH cannot be transmitted during a symbol period based on the starting symbol and length in the slot, which are configured by the base station. Symbol0to symbol9of a first slot are unavailable for PUCCH transmission. 
In this case, a first repetition PUCCH may be transmitted on symbols10to13which are longest consecutive symbols among the consecutive symbols available for PUCCH transmission within the configured symbol period. If a flexible symbol is also available for PUCCH transmission, the first repetition PUCCH may be transmitted on symbols8to13. In the same way, a second repetition PUCCH may be transmitted in symbols6to10of a second slot. If there is no symbol available for PUCCH transmission in a specific slot or if an available symbol period is less than 4 symbols, the specific slot is not used for repeated PUCCH transmission. That is, the number of repeated PUCCH transmissions is not deducted. Repeated PUCCH transmission may be performed simultaneously on an inter-slot and an intra-slot. If the base station configures, for the terminal, repeated PUCCH transmission on an inter-slot and repeated PUCCH transmission on an intra-slot, a resource of a PUCCH repeatedly transmitted in an intra-slot and a resource of a PUCCH repeatedly transmitted in an inter-slot may be configured. Alternatively, an additional PUCCH resource may be configured in addition to a PUCCH resource configured for an intra-slot. That is, a PUCCH transmitted in an intra-slot is a first repeatedly transmitted PUCCH, and an intra-slot resource for a second repeatedly transmitted PUCCH may be additionally configured. In this case, a start position of the second repeatedly transmitted intra-slot resource may be determined by "a starting symbol position of the inter-slot PUCCH minus the number of symbols of the inter-slot PUCCH", and the number of symbols may be configured to be equal to that of the inter-slot PUCCH. Referring toFIG.30, a PUCCH with a starting symbol of symbol10and a length of 4 symbols may be configured for inter-slot repeated transmission. In this case, since intra-slot repeated transmission of an inter-slot repeated transmission PUCCH is possible from symbol6in a second slot, inter-slot repeated PUCCH transmission and intra-slot repeated PUCCH transmission may be performed simultaneously on the second slot. Hereinafter, descriptions will be provided for a frequency hopping method for acquiring diversity gain when repeated PUCCH transmission is performed to solve a coverage problem. The terminal may determine, based on a specific boundary, a frequency hopping boundary for performing of repeated PUCCH transmission. Information for determination of a specific boundary is as follows. i) A specific boundary may be determined based on a boundary of repeated PUCCH transmission. The terminal may transmit each repeatedly transmitted PUCCH via frequency hopping. Referring toFIG.28, a hopping boundary may be a boundary of a nominal PUCCH, a boundary of an actual PUCCH, or a boundary of a combined PUCCH. A PUCCH may be repeatedly transmitted by hopping for each of one nominal PUCCH, one actual PUCCH, or one combined PUCCH. Referring toFIG.29, the terminal may transmit PUCCH repetition #1of the first slot and PUCCH repetition #2of the second slot via frequency hopping in different frequency domains. Referring toFIG.30, repeated PUCCH transmission boundaries between inter-slots and between intra-slots may be frequency hopping boundaries. The terminal may transmit a PUCCH of a first slot and a PUCCH of a second slot in different frequency domains. 
In this case, an intra-slot repeated transmission PUCCH added in the second slot may be configured with the same hop as that for an inter-slot repeated transmission PUCCH in the second slot, so as to be transmitted in the same frequency domain. Alternatively, the intra-slot repeated transmission PUCCH in the second slot may be configured with the same hop as that for the inter-slot repeated transmission PUCCH in the first slot, so as to be transmitted in the same frequency domain. That is, each of multiple repeatedly transmitted PUCCHs transmitted in one slot may be transmitted in different frequency domains. In other words, the intra-slot PUCCH and the inter-slot PUCCH of the second slot may be transmitted in different frequency domains. ii) A specific boundary may be determined based on a slot boundary, a semi-statically configured DL symbol, and an invalid symbol. Symbols available for consecutive/inconsecutive repeated PUCCH transmissions up to a slot boundary, a semi-static DL symbol, or an invalid symbol may be configured with the same hop. In other words, symbols available for consecutive/inconsecutive repeated PUCCH transmissions before a slot boundary, a semi-static DL symbol, or an invalid symbol and symbols available for consecutive/inconsecutive repeated PUCCH transmissions after the slot boundary, the semi-static DL symbol, or the invalid symbol may be configured with different hops. Referring toFIG.28, actual PUCCH #1, actual PUCCH #2, and actual PUCCH #3configured with resources before symbol4of the second slot, which is an invalid symbol, may be configured with a first hop. Actual PUCCH #4and actual PUCCH #5configured with consecutive symbols available for repeated PUCCH transmission after symbol4of the second slot may be configured with a second hop. In the same way, actual PUCCH #6may be configured with the first hop. Referring toFIG.29, since a slot boundary and an invalid symbol exist between PUCCH repetition #1and PUCCH repetition #2, PUCCH repetition #1and PUCCH repetition #2are configured with different hops. Referring toFIG.30, the inter-slot repeated transmission PUCCH of the first slot may be configured with a first hop, and the intra-slot repeated transmission PUCCH and inter-slot repeated transmission PUCCH of the second slot may be configured with a second hop. Different hops may be transmitted in different frequency domains. A hopping boundary may be determined based on a preconfigured number of symbols. That is, each of multiple hops may include the same number of symbols. The preconfigured number of symbols may be acquired based on PUCCH configuration information configured by the base station. i) Hops may be configured based on a value obtained by equally dividing the total number of symbols of repeatedly transmitted actual PUCCHs. Specifically, the number of symbols constituting the first hop may be floor(NrepeatPUCCH/2) or ceil(NrepeatPUCCH/2), and the number of symbols constituting the second hop may be NrepeatPUCCH−floor(NrepeatPUCCH/2) or NrepeatPUCCH−ceil(NrepeatPUCCH/2). NrepeatPUCCH refers to the total number of symbols of actual PUCCHs. Referring toFIG.28, since the total number of symbols of actual PUCCHs is 15, the first hop may include 7 symbols (symbol10in the first slot to symbol2in the second slot), and the second hop may include 8 symbols (symbols3,6to10, and12and13in the second slot). 
Referring toFIG.29, since the total number of symbols constituting a PUCCH is 9, the first hop may include 4 symbols (symbol10to symbol13in the first slot) and the second hop may include 5 symbols (symbol6to symbol10in the second slot). Referring toFIG.30, since the total number of symbols constituting a PUCCH is 12, the first hop may include 6 symbols (symbol10to symbol13in the first slot and symbols6and7in the second slot), and the second hop may include 6 symbols (symbol8to symbol13in the second slot). Alternatively, if a length of consecutive symbols included in one hop is two or fewer, the consecutive symbols of two or fewer may be included in another hop. In this case, another hop including the two or fewer symbols may include symbols adjacent to the two or fewer consecutive symbols, and may be a hop transmittable in the same frequency domain. Referring toFIG.30, symbols6and7in the second slot of the first hop may be included in the second hop and transmitted. ii) One hop may be configured based on the number of fewest consecutive symbols among all symbols of repeatedly transmitted PUCCHs. Referring toFIG.28, the number of fewest consecutive symbols is 2 (actual PUCCH #2, #3, #6). Therefore, one hop may include two symbols. Referring toFIG.29, the number of fewest consecutive symbols is 4 (PUCCH repetition #1). Therefore, the first hop may include 4 symbols (symbols10to13in the first slot), and the second hop may include 4 symbols (symbol6to symbol9in the second slot). If the first hop and the second hop are configured in this way, symbol10of the second slot remains, and the terminal may not transmit a PUCCH including one symbol. That is, the terminal may drop symbol10of the second slot. Referring toFIG.30, the number of fewest consecutive symbols is 4. Therefore, the first hop may include 4 symbols (symbols10to13in the first slot), the second hop may include 4 symbols (symbols6to9in the second slot), and the third hop may include 4 symbols (symbols10to13in the second slot). iii) One hop may be configured with a preconfigured number of symbols. In this case, the preconfigured number of symbols may be a value configured for the terminal by the base station. Alternatively, the preconfigured number of symbols may be the number of symbols constituting one PUCCH, that is, the number of symbols of a repeatedly transmitted PUCCH. Referring toFIG.28, the preconfigured number of symbols may be 6. Therefore, the first hop may include 6 symbols (symbol10in the first slot to symbol1in the second slot), the second hop may include 6 symbols (symbols2,3, and6to9in the second slot), and the third hop may include 3 symbols (symbols10,12, and13in the second slot). In this case, the first hop and the third hop may be transmitted on the same frequency domain resource or may be transmitted on different frequency domain resources. Referring toFIG.29, the preconfigured number of symbols may be the number of symbols of a first configured PUCCH (10inFIG.29). Accordingly, all symbols of PUCCH repetition #1and PUCCH repetition #2may be configured in one hop. Referring toFIG.30, the preconfigured number of symbols may be the number of symbols of one PUCCH (4inFIG.30). Therefore, the first hop may include 4 symbols (symbols10to13in the first slot), the second hop may include 4 symbols (symbols6to9in the second slot), and the third hop may include 4 symbols (symbols10to13in the second slot). 
In this case, the first hop and the third hop may be transmitted on the same frequency domain resource or may be transmitted on different frequency domain resources. iv) One hop may be configured based on the number of longest consecutive symbols among all symbols of repeatedly transmitted PUCCHs. For example, a value calculated by equally dividing the number of longest consecutive symbols may be the number of symbols constituting one hop. Specifically, the number of symbols constituting the first hop may be floor(NrepeatPUCCH/2) or ceil(NrepeatPUCCH/2), and the number of symbols constituting the second hop may be NrepeatPUCCH−floor(NrepeatPUCCH/2) or NrepeatPUCCH−ceil(NrepeatPUCCH/2). NrepeatPUCCH may be the number of longest consecutive symbols. A value corresponding to min(floor(NrepeatPUCCH/2), NrepeatPUCCH−floor(NrepeatPUCCH/2)) or max(floor(NrepeatPUCCH/2), NrepeatPUCCH−floor(NrepeatPUCCH/2)) may be the number of symbols constituting one hop. A value corresponding to min(ceil(NrepeatPUCCH/2), NrepeatPUCCH−ceil(NrepeatPUCCH/2)) or max(ceil(NrepeatPUCCH/2), NrepeatPUCCH−ceil(NrepeatPUCCH/2)) may be the number of symbols constituting one hop. max(a, b) is a function that returns a larger of a and b, and min(a, b) is a function that returns a smaller of a and b. Referring toFIG.28, the number of longest consecutive symbols is 8, which is the sum of the number of symbols of actual PUCCH #1and the number of symbols of actual PUCCH #2. Therefore, 4, which is a value obtained by equally dividing 8, may be the number of symbols constituting one hop. Referring toFIG.29, the number of longest consecutive symbols is 5, which is the number of symbols of PUCCH repetition #2. Therefore, 2 or 3 may be the number of symbols constituting one hop. If the number of consecutive symbols is fewer than the number of symbols constituting one hop, corresponding symbols are not hopped. Hereinafter, methods for solving a coverage problem without combining multiple PUSCHs will be described. FIG.31andFIG.32illustrate a method of repeated PUSCH transmission according to an embodiment of the present disclosure. A PUSCH may be transmitted on resources including a slot boundary. Resources including a slot boundary may be configured not to have lengths exceeding a predetermined length. That is, a PUSCH transmitted on resources including a slot boundary may be transmitted on resources with the number of symbols equal to or fewer than the preconfigured number of symbols. The preconfigured length may be a value configured for a terminal by a base station. Alternatively, the preconfigured length may be a maximum number of symbols constituting a slot. The length of resources including a slot boundary may not be restricted. That is, the terminal may transmit a PUSCH with no restriction on the number of symbols. In this case, the base station may configure a position of a DMRS included in the PUSCH. For example, if the length of resources including a slot boundary is 14 symbols or fewer, DMRS mapping may be performed in the same way as in the existing PUSCH structure. If the length of resources including a slot boundary exceeds 14 symbols, the existing PUSCH structure including 1 to 14 symbols may be equally applied to a symbol exceeding 14 symbols. That is, if the length of resources including a slot boundary is 15 or 28 symbols, and PUSCH mapping type B is applied, a front-loaded DMRS may be mapped to a first symbol (i.e., a 15th symbol) among symbols exceeding 14 symbols. 
In addition, if an additional DMRS is further configured, the additional DMRS may be mapped by equally applying a DMRS position, which is applied to the existing PUSCH structure including 2 to 14 symbols, to symbols exceeding 14 symbols. The base station may configure the terminal to repeatedly transmit a PUSCH on resources including a slot boundary. In this case, the terminal may repeatedly transmit a PUSCH, based on a specific boundary. i) A specific boundary may be a slot boundary. That is, the terminal may repeatedly transmit a PUSCH by determining a slot boundary as a basis for repeated transmission. Referring toFIG.31, a PUSCH may be repeatedly transmitted on 6 symbols including a slot boundary. If 6 symbols starting from symbol12of slot n include a slot boundary, a PUSCH may be repeatedly transmitted in symbol12of slot n to symbol3of slot n+1. ii) A specific boundary may be a virtual slot boundary. A virtual slot boundary is a slot boundary newly defined regardless of an existing slot boundary, and may be defined when a PUSCH is transmitted on resources including the existing slot boundary. Referring toFIG.32, the base station may configure the terminal to repeatedly transmit a PUSCH having a length of 6 symbols from symbol12of slot n−1 during 2 slots. In this case, a first symbol (symbol12in slot n−1) of the repeatedly transmitted PUSCH may be a start point of a virtual slot boundary. In addition, the PUSCH may be transmitted as many times as a configured number of repeated transmissions. That is, a symbol in which PUSCH transmission starts may be the first symbol of the virtual slot. The maximum number of symbols constituting a virtual slot may be greater than or equal to 14 for a normal CP and 12 for an extended CP. In order to improve coverage of a PUCCH and a PUSCH, DMRSs included in different PUCCHs that are repeatedly transmitted and different PUSCHs that are repeatedly transmitted may be jointed and used for channel estimation. Conventionally, a DMRS included in a repeatedly transmitted 1st PUCCH is used for channel estimation for decoding the 1st PUCCH, and a DMRS included in a repeatedly transmitted second PUCCH is used for channel estimation for decoding the second PUCCH. That is, DMRSs included in different PUCCHs are used only to decode the PUCCHs including the respective DMRSs. Hereinafter, descriptions will be provided for a method in which the base station performs channel estimation (hereinafter, it may be described as joint channel estimation) by jointing DMRSs included in different PUCCHs/PUSCHs. The method described below is described based on a PUCCH for convenience of description, but it is obvious that the method is also applicable to a PUSCH. 
Joint channel estimation conditions
- Same starting PRB index: Start positions of PRBs to which DMRSs included in different repeatedly transmitted PUCCHs are mapped should be the same in the frequency domain.
- Same number of PRBs: The number of PRBs to which DMRSs included in different repeatedly transmitted PUCCHs are mapped should be the same in the frequency domain.
- Phase continuity: DMRSs included in different repeatedly transmitted PUCCHs need to maintain the same phase.
- Same beamforming: DMRSs included in different repeatedly transmitted PUCCHs should be configured with the same beamforming.
- Same transmit power: DMRSs included in different repeatedly transmitted PUCCHs should be transmitted with the same transmit power.
- Same quasi-co-location (QCL): DMRSs included in different repeatedly transmitted PUCCHs need to have the same quasi-co-location (QCL).
A first DMRS included in a repeatedly transmitted first PUCCH and a second DMRS included in a second PUCCH may be mapped to and transmitted on different symbols. That is, the first DMRS may be mapped to one of symbols scheduled for transmission of the first PUCCH, and the second DMRS may be mapped to one of symbols scheduled for transmission of the second PUCCH. In order for the base station to perform channel estimation by combining the first DMRS and the second DMRS, the above conditions should be satisfied. The base station may perform channel estimation by jointing the first DMRS and the second DMRS, and may receive the first PUCCH and the second PUCCH repeatedly transmitted based on a channel estimation result.
Joint Channel Estimation Methods
Hereinafter, a detailed method for joint channel estimation will be described. FIG.33illustrates a method of configuring a resource in which a PUCCH is repeatedly transmitted, according to an embodiment of the present disclosure. Referring toFIG.33, a base station may transmit the following information in order to configure a resource in which a PUCCH is transmitted.
- Starting symbol index: An index of a symbol in which PUCCH transmission starts in the time domain.
- Number of symbols: The number of symbols used for PUCCH transmission in the time domain. PUCCH format 0 or 2 is a format for PUCCH transmission in 1 symbol or 2 symbols. PUCCH format 1, 3, or 4 is a format for PUCCH transmission in 4 to 14 symbols. PUCCH format 0 or 2 may be described as a short PUCCH, and PUCCH format 1, 3, or 4 may be described as a long PUCCH.
- Starting PRB index: An index of a PRB in which PUCCH transmission starts in the frequency domain.
- Number of PRBs: The number of PRBs used for PUCCH transmission in the frequency domain. PUCCH format 0, 1, or 4 is a format for PUCCH transmission in 1 PRB. PUCCH format 2 is a format for PUCCH transmission in 1 PRB to 16 PRBs. PUCCH format 3 is a format for PUCCH transmission in 1 PRB and 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, and 16 PRBs.
- Max code rate: A maximum code rate available for PUCCH. The terminal is unable to transmit a PUCCH including UCI exceeding the maximum code rate.
The terminal needs to determine the number of PRBs to be used in a PUCCH format for PUCCH transmission. First, the terminal may determine the number of bits (O bits) of UCI included in the PUCCH. UCI may include a cyclic redundancy check (CRC). In addition, the terminal may determine the number (N) of REs to which UCI is mapped per PRB. The terminal may determine the number of REs except for a RE to which a DMRS is mapped. When a PUCCH is transmitted on M PRBs, a code rate may be calculated using O/(M*N*Q). 
Here, Q may refer to a modulation order used for PUCCH transmission. In this case, the calculated code rate should be equal to or lower than the maximum code rate. That is, O/(M*N*Q) ≤ the maximum code rate should be satisfied. In PUCCH format 2 or 3 enabling use of multiple PRBs, the number of PRBs may be adjusted so that the code rate is equal to or lower than the maximum code rate. That is, among the possible numbers (M) of PRBs, a smallest number of PRBs satisfying O/(M*N*Q) ≤ the maximum code rate may be selected. In this case, a minimum value of selectable PRBs may be preconfigured, and a number of PRBs that is not fewer than the minimum value may be selected. The number (N) of REs may be determined based on the number of symbols used for PUCCH transmission. As the number of symbols used for PUCCH transmission increases, the number of REs may increase. Specifically, N may be given as a product of Nsc,ctrl and Nsymb-UCI. Nsc,ctrl is the number of REs for transmitting UCI in one symbol corresponding to 1 PRB. Nsymb-UCI is the number of symbols for transmitting UCI. For PUCCH format 2, Nsc,ctrl may be 8, and for PUCCH format 3, Nsc,ctrl may be 12. For PUCCH format 2, Nsymb-UCI may be the number of symbols used for PUCCH transmission, and for PUCCH format 3, Nsymb-UCI may be the number of symbols used for PUCCH transmission, except for a symbol to which a DMRS is mapped.
FIG.34 illustrates that respective repeatedly transmitted PUCCHs are transmitted in the same symbol length (number of symbols), according to an embodiment of the present disclosure. FIG.35 to FIG.37 illustrate that respective repeatedly transmitted PUCCHs are transmitted in different symbol lengths according to an embodiment of the present disclosure.
Referring to FIG.34, each of PUCCH0 and PUCCH1 may include the same UCI. In this case, a length (number of symbols) of a resource in which PUCCH0 is transmitted may be the same as a length of a resource in which PUCCH1 is transmitted. PUCCH0 and PUCCH1 may occupy the same PRB. The number of PRBs may be determined by the method described above. Each of PUCCH0 and PUCCH1 may include a symbol for transmitting a DMRS. A base station may perform channel estimation by jointing a DMRS of PUCCH0 (mapped to a 12th symbol of slot n) and a DMRS of PUCCH1 (mapped to a second symbol of slot n+1). In addition, the base station may receive UCI transmitted on PUCCH0 and PUCCH1 via joint channel estimation.
Referring to FIG.35, each of PUCCH0 and PUCCH1 may include the same UCI. In this case, a length of a resource in which PUCCH0 is transmitted may be different from a length of a resource in which PUCCH1 is transmitted. PUCCH0 may be transmitted on 4 symbols, and PUCCH1 may be transmitted on 11 symbols. Since the lengths of resources in which PUCCH0 and PUCCH1 are transmitted are different, the numbers of PRBs occupied by PUCCH0 and PUCCH1 may be different from each other. For example, PUCCH0 transmitted on 4 symbols may occupy more PRBs compared to PUCCH1 transmitted on 11 symbols. The number of PRBs may be determined by the method described above. In overlapping PRBs among PRBs occupied by PUCCH0 and PUCCH1, channel estimation may be possible by jointing DMRSs. However, since the DMRS for PUCCH1 is not transmitted in non-overlapping PRBs, joint channel estimation may be impossible. Therefore, the base station may estimate different channels according to PRBs, and an error may occur in a channel estimation value. A method for overcoming this error will be described below.
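The mismatch just described can be made concrete with a short sketch of the PRB-count selection. This is a minimal illustration, not part of the disclosure: the UCI size, modulation order, and maximum code rate are hypothetical values, while the allowed PRB set and Nsc,ctrl = 12 follow the PUCCH format 3 description above.

# Minimal sketch: pick the smallest allowed M with O/(M*N*Q) <= max code rate,
# where N = Nsc,ctrl * Nsymb-UCI (values below are illustrative assumptions).
FORMAT3_PRB_SET = [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16]  # allowed PRB counts, PUCCH format 3
N_SC_CTRL = 12                                              # UCI REs per symbol per PRB, format 3

def min_prbs(o_bits, uci_symbols, q_mod, max_code_rate, prb_set=FORMAT3_PRB_SET):
    n = N_SC_CTRL * uci_symbols
    for m in sorted(prb_set):
        if o_bits / (m * n * q_mod) <= max_code_rate:
            return m
    return max(prb_set)  # no allowed M satisfies the condition; fall back to the largest

# Hypothetical example: 120 UCI bits, QPSK (Q = 2), maximum code rate 0.35.
# A 4-symbol PUCCH (3 UCI symbols) needs 5 PRBs, while an 11-symbol PUCCH (9 UCI symbols)
# needs only 2 PRBs, which is the PUCCH0/PUCCH1 mismatch discussed above.
print(min_prbs(120, 3, 2, 0.35), min_prbs(120, 9, 2, 0.35))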
The method to be described later may not be applied when repeated PUCCH transmission is performed via frequency hopping. The number of PRBs of respective repeatedly transmitted PUCCHs may be calculated independently of each other. That is, the number of PRBs may be determined based on the number of symbols allocated to each repeatedly transmitted PUCCH.
Method of Determining the Number of PRBs
Method 1
i) A starting PRB index of each repeatedly transmitted PUCCH may be the same as a starting PRB index of a first repeatedly transmitted PUCCH. Referring to FIG.35, PUCCH0 and PUCCH1 include different numbers of PRBs, but a starting PRB index of PUCCH1 is the same as a starting PRB index of PUCCH0. If a starting PRB index of a repeatedly transmitted PUCCH is determined to be a starting PRB index of a first transmitted PUCCH, there is a problem that joint channel estimation is possible for PRBs corresponding to a low frequency domain, but joint channel estimation is not possible for PRBs corresponding to a high frequency domain.
ii) A last PRB index of each repeatedly transmitted PUCCH may be the same as a last PRB index of a first repeatedly transmitted PUCCH. A last PRB index is an index of a PRB corresponding to a highest frequency domain occupied by a PUCCH in the frequency domain, and may be calculated as the sum of a starting PRB index and the number of PRBs. Referring to FIG.36, PUCCH0 and PUCCH1 may include different numbers of PRBs. In this case, a last PRB index of PUCCH1 is the same as a last PRB index of PUCCH0. If a last PRB index of a repeatedly transmitted PUCCH is determined to be a last PRB index of a first transmitted PUCCH, there is a problem that joint channel estimation is possible for PRBs corresponding to a high frequency domain, but joint channel estimation is not possible for PRBs corresponding to a low frequency domain.
iii) In the frequency domain, the center resources of the resources of respective repeatedly transmitted PUCCHs may match. Referring to FIG.37, PUCCH0 and PUCCH1 may have different starting PRB indices. In this case, the center of resources constituting PUCCH0 in the frequency domain and the center of resources constituting PUCCH1 in the frequency domain may be configured to match as much as possible. For example, the number of PRBs configured for PUCCH0 may be M0, and a starting PRB index may be S0. In addition, the number of PRBs configured for PUCCH1 may be M1, and a starting PRB index may be S1. In this case, S1 may be obtained as the sum of S0 and a value returned after applying, to a preconfigured function, a value obtained by dividing, by 2, a difference between the numbers of PRBs respectively configured for PUCCH0 and PUCCH1. That is, S1 may be calculated as shown in Equation 1.
S1=S0+f((M0−M1)/2)  [Equation 1]
In this case, f(x) may be one of ceil(x), floor(x), and round(x). round(x) may return the integer value nearest to x. In this case, if M1 is greater than M0, S1 may be a negative number, so that S1 may be restricted to be an integer greater than or equal to 0. That is, S1 may be calculated with max{0, S0+f((M0−M1)/2)}. Since a resource starting from S1, in which PUCCH1 is transmitted, may cross an active UL BWP boundary, S1 may be restricted to be a value at which a last PRB index of PUCCH1 is located within the active UL BWP. That is, S1 may be calculated with min{NRB−M1, S0+f((M0−M1)/2)}. NRB may be the number of PRBs included in the active UL BWP.
iv) The base station may configure an offset value for the terminal. S1 may be calculated by S0+offset.
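The starting-PRB options of Method 1 can be summarized in a short sketch. This is a minimal illustration under the definitions above; the function name, the choice of floor() for f, and the example values are assumptions for illustration only.

# Minimal sketch of Method 1: starting PRB of a later repetition given the first
# repetition's starting PRB S0, PRB counts M0/M1, and the active UL BWP size.
import math

def start_prb_method1(option, s0, m0, m1, n_rb_bwp, offset=0, f=math.floor):
    if option == "i":    # i) same starting PRB index as the first repetition
        return s0
    if option == "ii":   # ii) same last PRB index as the first repetition
        return s0 + m0 - m1
    if option == "iii":  # iii) Equation 1 with the max/min clamps described above
        s1 = s0 + f((m0 - m1) / 2)
        return min(n_rb_bwp - m1, max(0, s1))
    if option == "iv":   # iv) base-station-configured offset
        return s0 + offset
    raise ValueError(option)

# Hypothetical example: S0 = 10, M0 = 6, M1 = 2 in a 50-PRB BWP.
print(start_prb_method1("iii", 10, 6, 2, 50))  # -> 12, centering the 2-PRB allocation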
That is, with option iv), a starting PRB index may be determined using an offset within one frequency hop.
When Method 1 is used, joint channel estimation is not possible for the non-overlapping PRB area of a repeatedly transmitted PUCCH, and only separate estimation is possible there.
Method 2
The number of PRBs corresponding to respective repeatedly transmitted PUCCHs may be the same.
FIG.38 illustrates a case in which the same number of PRBs is configured for respective repeatedly transmitted PUCCHs according to an embodiment of the present disclosure.
i) The same number of PRBs as the number of PRBs configured for a first repeatedly transmitted PUCCH may be configured for the remaining repeatedly transmitted PUCCHs. That is, the number of PRBs allocated to the repeatedly transmitted PUCCHs may be determined based on the number of symbols configured for the first repeatedly transmitted PUCCH. In this case, the determined number of PRBs may be independent of the number of symbols allocated to each of the repeatedly transmitted PUCCHs. Referring to FIG.38, the number of PRBs allocated to PUCCH0 may be determined based on the 4 symbols used for PUCCH0 transmission. The same number of PRBs as the number of PRBs allocated to PUCCH0 may be allocated to PUCCH1. In this case, since the number of PRBs is determined in consideration of a maximum code rate for PUCCH0, it may not be suitable for a maximum code rate for PUCCH1. For example, if the number of symbols allocated to the first repeatedly transmitted PUCCH, which is earliest in time, is large, the maximum code rate may be satisfied even if the number of PRBs is small. Accordingly, if the number of symbols of a PUCCH repeatedly transmitted after the first repeated transmission is small, the maximum code rate may not be satisfied.
ii) As described above, the same number of PRBs as the number of PRBs configured for the first repeatedly transmitted PUCCH may be configured for the remaining repeatedly transmitted PUCCHs. In this case, a code rate may be calculated for each repeatedly transmitted PUCCH. If a calculated code rate is greater than the maximum code rate, the terminal may not transmit the corresponding PUCCH. Resources configured for PUCCHs that are not transmitted may be used for repeated transmission of other adjacent PUCCHs.
iii) A PRB configured for a repeatedly transmitted PUCCH may be determined using the number of PRBs configured for a PUCCH, to which the smallest number of symbols is allocated, from among the repeatedly transmitted PUCCHs. That is, the terminal may identify the number of symbols allocated to each repeatedly transmitted PUCCH, and may determine the number of PRBs based on the PUCCH to which the smallest number of symbols is allocated. The determined number of PRBs may be applied regardless of the number of symbols allocated to each repeated PUCCH transmission. Referring to FIG.38, 4 symbols (3 symbols are used for UCI transmission) may be allocated to PUCCH0, and 11 symbols (9 symbols are used for UCI transmission) may be allocated to PUCCH1. Accordingly, the number of PRBs of PUCCH0, to which the smallest number of symbols is allocated, may be used as the number of PRBs of PUCCH1. In this case, when the smallest number of symbols is determined, symbols to which a DMRS is mapped are excluded, and only symbols used for UCI transmission may be counted.
iv) A largest number of PRBs from among the PRBs configured for each PUCCH may be used for all repeated PUCCH transmissions.
Referring to FIG.38, if the number of PRBs configured for PUCCH0 is M0 and the number of PRBs configured for PUCCH1 is M1, a larger value of M0 and M1 may be selected. PRBs corresponding to the selected value may be configured for PUCCH0 and PUCCH1.
v) The same number of PRBs may be configured for each repeatedly transmitted PUCCH. That is, when scheduling repeated PUCCH transmission, the base station may perform scheduling so that the numbers of PRBs configured for respective repeatedly transmitted PUCCHs are the same.
Method 3
FIG.39 and FIG.40 illustrate PRBs for DMRS transmission, configured for each repeatedly transmitted PUCCH according to an embodiment of the present disclosure. In this case, the number of PRBs for DMRS transmission, which is configured for each repeatedly transmitted PUCCH, may be the same.
i) Referring to FIG.39, the number of PRBs for which the maximum code rate is not exceeded may be calculated for each repeatedly transmitted PUCCH. When the number of PRBs required for transmission of PUCCH0 is M0 and the number of PRBs required for transmission of PUCCH1 is M1, PRBs corresponding to a larger value of M0 and M1 may be used for DMRS transmission. That is, a DMRS included in PUCCH1 may be transmitted via M0 PRBs. In other words, all DMRSs included in respective repeatedly transmitted PUCCHs may be transmitted via the same number of PRBs. In this case, UCI may be transmitted on the PRBs required for each PUCCH transmission. UCI included in PUCCH1 may be transmitted via M1 PRBs.
ii) The number of PRBs for DMRS transmission, included in some PUCCHs among repeatedly transmitted PUCCHs, may be the same. In this case, some PUCCHs may be PUCCHs adjacent in time. For example, the number of PRBs, which is configured to be the same, may be the larger of the numbers of PRBs configured for two adjacent PUCCHs. As another example, the number of PRBs, which is configured to be the same, may be determined based on a time interval between symbols to which DMRSs are mapped. Referring to FIG.40, an interval between a DMRS symbol (a 12th symbol in slot n) included in PUCCH0 and a first DMRS symbol (a 3rd symbol in slot n+1) included in PUCCH1 may be equal to or greater than a certain value (window for DMRS extension). In this case, the number of PRBs to which the DMRSs included in PUCCH0 and PUCCH1 are to be mapped may be the larger value of the number of PRBs configured for PUCCH0 and the number of PRBs configured for PUCCH1.
In order for DMRSs included in a repeatedly transmitted PUCCH or PUSCH to be jointed and used for channel estimation, transmission power should be the same. Hereinafter, a method of equally configuring transmission power (transmit power control) will be described. According to 3GPP standards, a transmission power of a PUSCH may be determined as shown in Table 4.
TABLE 4
If a UE transmits a PUSCH on active UL BWP b of carrier f of serving cell c using parameter set configuration with index j and PUSCH power control adjustment state with index l, the UE determines the PUSCH transmission power P_PUSCH,b,f,c(i, j, qd, l) in PUSCH transmission occasion i as
P_PUSCH,b,f,c(i, j, qd, l) = min{P_CMAX,f,c(i), P_O_PUSCH,b,f,c(j) + 10·log10(2^μ·M_RB,b,f,c^PUSCH(i)) + α_b,f,c(j)·PL_b,f,c(qd) + Δ_TF,b,f,c(i) + f_b,f,c(i, l)}
That is, if the terminal transmits a PUSCH in an active UL BWP (b) of a carrier (f) of a serving cell (c), a transmission power may be determined as shown in Equation 2.
P_PUSCH,b,f,c(i, j, qd, l) = min{P_CMAX,f,c(i), P_O_PUSCH,b,f,c(j) + 10·log10(2^μ·M_RB,b,f,c^PUSCH(i)) + α_b,f,c(j)·PL_b,f,c(qd) + Δ_TF,b,f,c(i) + f_b,f,c(i, l)}  [Equation 2]
In this case, Δ_TF,b,f,c(i) may be determined as shown in Equation 3.
Δ_TF,b,f,c(i) = 10·log10((2^(BPRE·Ks) − 1)·β_offset^PUSCH)  [Equation 3]
Ks may be 1.25 or 0, and if Ks is 0, Δ_TF,b,f,c(i) may be 0. If a PUSCH includes UL-SCH data, β_offset^PUSCH may be 1. BPRE may be determined as shown in Equation 4.
BPRE = Σ_{r=0}^{C−1} K_r / N_RE  [Equation 4]
C is the number of code blocks transmitted by a PUSCH, and K_r is the size (number of bits) of an r-th code block. N_RE is the number of REs allocated to a PUSCH and may be calculated as shown in Equation 5.
N_RE = M_RB,b,f,c^PUSCH(i) · Σ_{j=0}^{N_symb,b,f,c^PUSCH(i)−1} N_sc,data^RB(i, j)  [Equation 5]
N_symb,b,f,c^PUSCH(i) is the number of symbols allocated to an i-th PUSCH of an active UL BWP (b) of a carrier (f) of a cell (c). i is an index configured to a PUSCH. N_sc,data^RB(i, j) is the number of subcarriers constituting an RB, excluding subcarriers to which a DMRS or a phase tracking reference signal (PTRS) is mapped in a j-th symbol of the i-th PUSCH. M_RB,b,f,c^PUSCH(i) is the number of PRBs allocated to the i-th PUSCH of the active UL BWP (b) of the carrier (f) of the cell (c). N_RE may be changed according to N_symb,b,f,c^PUSCH(i). According to N_RE, Δ_TF,b,f,c(i) may be changed, and a PUSCH transmission power may be changed. Hereinafter, descriptions will be provided for a method of constantly maintaining a PUSCH transmission power for joint channel estimation using a DMRS.
Method of Determining PUSCH Transmission Power
i) The terminal may calculate a transmission power of a first repeatedly transmitted PUSCH. N_RE of Equation 5 may be calculated using the number of symbols for transmission of the first repeatedly transmitted PUSCH. That is, N_symb,b,f,c^PUSCH(i) may be the number of symbols for transmission of the first repeatedly transmitted PUSCH. The transmission power of the first repeatedly transmitted PUSCH may be equally applied to all or some of the remaining repeatedly transmitted PUSCHs. That is, the transmission power of the first repeatedly transmitted PUSCH is applied regardless of the number of symbols for transmission of the remaining repeatedly transmitted PUSCHs. Some PUSCHs may be PUSCHs which are adjacent in time to the first repeatedly transmitted PUSCH and transmitted on the same PRB (i.e., the same hop). Alternatively, some PUSCHs may be PUSCHs including a DMRS, for which joint channel estimation using the DMRS is possible.
ii) The terminal may calculate a transmission power of a PUSCH transmitted on the smallest number of symbols among repeatedly transmitted PUSCHs. In this case, the calculated transmission power of the PUSCH may be used as the transmission power of all or some of the remaining repeatedly transmitted PUSCHs. Specifically, N_RE of Equation 5 may be calculated using the number of symbols of the PUSCH transmitted on the smallest number of symbols. That is, N_symb,b,f,c^PUSCH(i) may be the number of symbols of the PUSCH transmitted on the smallest number of symbols.
iii) The terminal may calculate a transmission power, based on an average of N_RE values. In this case, N_RE may be calculated based on the number of symbols for transmission of each repeatedly transmitted PUSCH.
iv) The terminal may separately calculate transmission powers of respective repeatedly transmitted PUSCHs. In this case, the largest value among the calculated respective transmission powers may be used as the transmission power of all repeatedly transmitted PUSCHs.
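The symbol-count dependence of the transmission power can be sketched as follows. This is a minimal illustration of options i) and ii) above, not the 3GPP procedure itself: it assumes a uniform number of data subcarriers per symbol (Equation 5 allows a per-symbol value), and the code-block sizes, beta offset, and resource values are hypothetical.

# Minimal sketch: compute Delta_TF (Equations 3-5) once, from a reference repetition,
# and reuse the resulting power term for the remaining repetitions.
import math

def delta_tf_db(code_block_bits, n_prb, sc_data_per_symbol, n_symbols, ks=1.25, beta_offset=1.0):
    if ks == 0:
        return 0.0                                       # Delta_TF is 0 when Ks = 0
    n_re = n_prb * sc_data_per_symbol * n_symbols        # Equation 5, uniform symbol profile
    bpre = sum(code_block_bits) / n_re                   # Equation 4
    return 10 * math.log10((2 ** (bpre * ks) - 1) * beta_offset)  # Equation 3

# Reference repetition (hypothetical): one 1024-bit code block, 4 PRBs,
# 12 data subcarriers per symbol, 10 data symbols.
common_delta_tf = delta_tf_db([1024], 4, 12, 10)
# Options i)/ii): this single value is reused for every repetition, even if a later
# repetition is allocated a different number of symbols.
print(round(common_delta_tf, 2))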
According to 3GPP standards, a transmission power of a PUCCH may be determined as shown in Equation 6.
P_PUCCH,b,f,c(i, qu, qd, l) = min{P_CMAX,f,c(i), P_O_PUCCH,b,f,c(qu) + 10·log10(2^μ·M_RB,b,f,c^PUCCH(i)) + PL_b,f,c(qd) + Δ_F_PUCCH(F) + Δ_TF,b,f,c(i) + g_b,f,c(i, l)}  [Equation 6]
M_RB,b,f,c^PUCCH(i) is the number of PRBs determined for PUCCH transmission, and may be a value that varies according to the number of symbols in which a PUCCH is transmitted. Δ_TF,b,f,c(i) may be determined according to the number of symbols in which a repeatedly transmitted PUCCH is transmitted. Specifically, Δ_TF,b,f,c(i) may be determined as shown in Equation 7 if a PUCCH format is PUCCH format 0 or 1, and may be determined as shown in Equation 8 or 9 in a case of PUCCH format 2, 3, or 4.
Δ_TF,b,f,c(i) = 10·log10(N_ref^PUCCH/N_symb^PUCCH(i)) + Δ_UCI(i)  [Equation 7]
Δ_TF,b,f,c(i) = 10·log10(K1·(n_HARQ-ACK(i) + O_SR(i) + O_CSI(i))/N_RE(i))  [Equation 8]
Δ_TF,b,f,c(i) = 10·log10(2^(K2·BPRE(i)) − 1)  [Equation 9]
N_symb^PUCCH(i) of Equation 7 is the number of symbols in which an i-th PUCCH is transmitted, and N_ref^PUCCH is 2 in a case of PUCCH format 0, and may be the number of symbols constituting one slot in a case of PUCCH format 1. Δ_UCI(i) is 0 for PUCCH format 0, and may be calculated by 10·log10(O_UCI(i)) for PUCCH format 1, where O_UCI(i) may be the number of bits of UCI. Equation 8, applied to PUCCH formats 2, 3, and 4, may be applied if the number of bits of UCI is fewer than or equal to 11 bits, where K1 in Equation 8 may be 6. n_HARQ-ACK(i) + O_SR(i) + O_CSI(i) in Equation 8 may be the number of bits of UCI transmitted by a PUCCH, where N_RE(i) indicating the number of REs may be calculated as shown in Equation 10. Equation 9, applied to PUCCH formats 2, 3, and 4, may be applied if the number of bits of UCI is greater than 11 bits, where K2 in Equation 9 may be 2.4. BPRE(i) = (O_ACK(i) + O_SR(i) + O_CSI(i) + O_CRC(i))/N_RE(i) in Equation 9 may be satisfied, and O_ACK(i) + O_SR(i) + O_CSI(i) + O_CRC(i) may be the number of bits of UCI transmitted by a PUCCH, where N_RE(i) indicating the number of REs may be calculated as shown in Equation 10.
N_RE(i) = M_RB,b,f,c^PUCCH(i) · N_sc,ctrl^RB(i) · N_symb-UCI,b,f,c^PUCCH(i)  [Equation 10]
N_sc,ctrl and N_symb-UCI have been described above, and descriptions thereof are thus omitted. According to Equation 10, N_RE may be a value proportional to N_symb-UCI. That is, if the numbers of symbols in which respective repeatedly transmitted PUCCHs are transmitted are different, transmission powers may be determined differently. A transmission power of a PUCCH may be determined according to the number of symbols in which the PUCCH is transmitted. Therefore, when the numbers of symbols in which respective repeatedly transmitted PUCCHs are transmitted are different, a method of determining the transmission powers to be the same is required for joint channel estimation of the DMRSs included in the respective PUCCHs.
Method of Determining PUCCH Transmission Power
i) The terminal may calculate a transmission power of a first repeatedly transmitted PUCCH. When calculating the transmission power, the terminal may use the number of symbols and the number of PRBs for the first repeatedly transmitted PUCCH. That is, if a PUCCH format is PUCCH format 0 or 1, N_symb^PUCCH(i) may be the number of symbols of the first repeatedly transmitted PUCCH. If the PUCCH format is PUCCH format 2, 3, or 4, N_symb-UCI may be the number of symbols of the first repeatedly transmitted PUCCH, and M_RB,b,f,c^PUCCH(i) may be the number of PRBs determined for transmission of the first repeatedly transmitted PUCCH.
The transmission power of the first repeatedly transmitted PUCCH may be equally applied to all or some of the remaining repeatedly transmitted PUCCHs. That is, the transmission power of the first repeatedly transmitted PUCCH is applied regardless of the number of symbols for transmission of the remaining repeatedly transmitted PUCCHs. Some PUCCHs may be PUCCHs which are adjacent in time to the first repeatedly transmitted PUCCH and transmitted on the same PRB (i.e., the same hop). Alternatively, some PUCCHs may be PUCCHs including a DMRS, in which joint channel estimation using the DMRS is possible. ii) The terminal may separately calculate transmission powers of respective repeatedly transmitted PUCCHs. In this case, a largest value among the calculated respective transmission powers may be the transmission power of all repeatedly transmitted PUCCHs. Hereinafter, a method of interpreting a frequency hopping flag bit will be described. The base station may configure, for the terminal, a repeated PUSCH transmission mode of PUSCH repetition type-A or PUSCH repetition type-B. PUSCH repetition type-A may include i) inter-slot hopping and ii) intra-slot hopping. In inter-slot hopping, a PUSCH is transmitted on a different frequency hop in every slot, and intra-slot hopping indicates that the terminal divides a PUSCH configured in each slot in half and transmits the divided PUSCHs on a first frequency hop and a second frequency hop, respectively. The terminal may be configured with either inter-slot hopping or intra-slot hopping from the base station. PUSCH repetition type-B may include i) inter-slot hopping and ii) inter-repetition hopping. In inter-slot hopping, a PUSCH is transmitted on a different frequency hop in every slot, and inter-repetition hopping indicates that the terminal transmits repeated nominal PUSCHs on different frequency hops, respectively. The terminal may be configured with either inter-slot hopping or inter-repetition hopping from the base station. A frequency hopping flag with a size of 1 bit may exist in DCI for PUSCH scheduling. The terminal may identify whether to perform frequency hopping, based on the frequency hopping flag. If the base station configures inter-slot hopping of PUSCH repetition type-A for the terminal, the frequency hopping flag may indicate to the terminal whether to perform inter-slot hopping. However, if the number of repeated PUSCH transmissions is 1, the terminal may transmit a PUSCH only on one slot. That is, inter-slot hopping is not performed regardless of the frequency hopping flag. In other words, when inter-slot hopping is configured, if the number of repeated PUSCH transmissions is 1, whether to perform inter-repetition hopping may be determined according to a bit value of the frequency hopping flag. If the base station configures inter-slot hopping of PUSCH repetition type-B for the terminal, the frequency hopping flag may indicate to the terminal whether to perform inter-slot hopping. However, if repeatedly transmitted PUSCHs are transmitted only on the same slot, inter-slot hopping is not performed regardless of the frequency hopping flag. In other words, when inter-slot hopping is configured, if repeatedly transmitted PUSCHs are transmitted only on the same slot, whether to perform inter-repetition hopping may be determined according to a value of the frequency hopping flag. 
If the base station configures inter-repetition hopping of PUSCH repetition type-B for the terminal, the frequency hopping flag may indicate whether to perform inter-repetition hopping. However, if the number of repeated PUSCH transmissions is 1, the terminal may transmit only one repeated nominal PUSCH. In inter-repetition hopping, hopping is performed based on a repeated nominal PUSCH, so that, if the number of repeated PUSCH transmissions is 1, inter-repetition hopping is not performed regardless of a value of the frequency hopping flag. That is, if the number of repeated PUSCH transmissions is 1, whether to perform inter-slot hopping may be determined according to a value of the frequency hopping flag.
When performing uplink transmission (e.g., PUSCH and PUCCH), the terminal may use frequency hopping in order to obtain diversity gain in the frequency domain. In an NR system, uplink transmission may be performed via up to 2 hops. Hops may refer to different frequency bands. Hereinafter, a method of determining a hop to obtain diversity gain in the frequency domain will be described.
Hop Determination Method
If intra-slot hopping is configured, the base station may configure (indicate), for the terminal, an index of a symbol in which uplink transmission starts and the number of consecutive symbols for the uplink transmission. Based on the index of the starting symbol and the number of consecutive symbols, the terminal may determine the number of symbols of a first hop and the number of symbols of a second hop.
i) Specifically, if the number of consecutive symbols is N, the number of symbols of the first hop may be floor(N/2), and the number of symbols of the second hop may be N − floor(N/2). That is, the first hop may include floor(N/2) consecutive symbols from a symbol indicated by the index of the starting symbol, and the second hop may include N − floor(N/2) consecutive symbols subsequent to a last symbol of the first hop.
The terminal may perform uplink transmission by configuring more than two hops in order to obtain greater diversity in the frequency domain. Specifically, in the following, descriptions will be provided for a method in which the terminal determines four hops when intra-slot hopping is configured. If the number of symbols configured for uplink transmission is N, the numbers of symbols included in the first hop, second hop, third hop, and fourth hop may be determined based on N. First, N may be divided into the number (N12) of symbols included in the first and second hops and the number (N34) of symbols included in the third and fourth hops. N12 may be calculated with floor(N/2), and N34 may be calculated with N − floor(N/2). Based on N12, the number (N1) of symbols included in the first hop and the number (N2) of symbols included in the second hop may be determined. Similarly, based on N34, the number (N3) of symbols included in the third hop and the number (N4) of symbols included in the fourth hop may be determined. Specifically, N1 to N4 may be calculated as shown in Equation 11.
N1 = floor(N12/2)
N2 = N12 − floor(N12/2)
N3 = floor(N34/2)
N4 = N34 − floor(N34/2)  [Equation 11]
Equation 11 may be expressed as Equation 12.
N1 = floor(floor(N/2)/2)
N2 = floor(N/2) − floor(floor(N/2)/2)
N3 = floor((N − floor(N/2))/2)
N4 = N − floor(N/2) − floor((N − floor(N/2))/2)  [Equation 12]
Table 5 shows the number of symbols included in the first to fourth hops according to the number N of symbols.
TABLE 5
# of symbols (N)    1st hop (N1)    2nd hop (N2)    3rd hop (N3)    4th hop (N4)
8                   2               2               2               2
9                   2               2               2               3
10                  2               3               2               3
11                  2               3               3               3
12                  3               3               3               3
13                  3               3               3               4
14                  3               4               3               4
According to Table 5, the numbers of symbols included in the first hop to the fourth hop may differ by at most 1 symbol according to the number N of symbols.
For example, the terminal may transmit two uplink channels with a length of 14 symbols starting from a first symbol of a slot, wherein a first uplink channel is transmitted via two hops, and a second uplink channel is transmitted via four hops. A first hop of the first uplink channel may include 7 symbols from a first symbol, and a second hop may include the remaining 7 symbols. That is, a boundary between the first hop and the second hop of the first uplink channel may be between a seventh symbol and an eighth symbol of the slot. In other words, the boundary between the first hop and the second hop of the first uplink channel may be a time point at which the seventh symbol ends and a time point at which the eighth symbol starts. A first hop of the second uplink channel may include 3 symbols from a first symbol, a second hop may include subsequent 4 symbols, a third hop may include 3 symbols subsequent to the second hop, and a fourth hop may include 4 symbols subsequent to the third hop. The second uplink channel may include the same boundary as the boundary of the first uplink channel. That is, a boundary between the second hop and the third hop of the second uplink channel is the same as the boundary between the first hop and the second hop of the first uplink channel. Therefore, frequency hopping may be performed on the same boundary, which is effective in terms of multiplexing between two uplink channels having the same length starting from the same symbol via frequency hopping.
As another example, the first uplink channel may have a length of 7 symbols starting from a first symbol of a slot, and the second uplink channel may have a length of 14 symbols starting from a first symbol of a slot. In this case, the first uplink channel may be transmitted via two hops, and the second uplink channel may include four hops. A first hop of the first uplink channel may include 3 symbols from the first symbol, and a second hop may include the remaining 4 symbols. A boundary between two hops of the first uplink channel may be between a third symbol and a fourth symbol of the slot. In other words, the boundary between the two hops of the first uplink channel may be a time point at which the third symbol ends and a time point at which the fourth symbol starts. A first hop of the second uplink channel may include 3 symbols from a first symbol, a second hop may include subsequent 4 symbols, a third hop may include 3 symbols subsequent to the second hop, and a fourth hop may include 4 symbols subsequent to the third hop. Accordingly, the second uplink channel may include the same boundary as the first uplink channel. That is, a boundary between the first hop and the second hop of the second uplink channel may be the same as the boundary between the first hop and the second hop of the first uplink channel. Therefore, frequency hopping may be performed on the same boundary, which is effective in terms of multiplexing between two uplink channels having different lengths starting from the same symbol via frequency hopping.
If an uplink channel is a PUSCH and the PUSCH is transmitted via up to 4 hops, each hop may include at least one DM-RS symbol.
For example, when the PUSCH includes 14 symbols and is transmitted via 4 hops, a first hop may include 3 symbols, a second hop may include 4 symbols, a third hop may include 3 symbols, and a fourth hop may include 4 symbols, wherein each hop includes at least one symbol to which a DM-RS is mapped. In this case, if a PUSCH mapping type is PUSCH mapping type B, a DMRS may be mapped to a first symbol of each hop. However, in a case of PUSCH mapping type A, a position of a symbol to which a DMRS is mapped needs to be determined. If PUSCH mapping type A is configured, a DMRS may be mapped to the third symbol or fourth symbol of the slot. In this case, whether a DMRS is mapped to the third symbol or the fourth symbol may be indicated via a PBCH.
For example, if PUSCH mapping type A is configured, the terminal may determine a hop overlapping with a symbol to which a DMRS needs to be mapped. In this case, if there is a hop overlapping with a symbol to which a DMRS needs to be mapped, a DMRS may be mapped in the corresponding hop and the PUSCH may be transmitted in the hop. That is, the DMRS may be mapped to the same position as the symbol, to which the DMRS needs to be mapped, within the overlapping hop. A position of a symbol, to which a DMRS is mapped, in a hop that does not overlap with a symbol in which a DMRS needs to be transmitted may be determined as in PUSCH mapping type B. That is, a DMRS may be mapped to a first symbol in a hop that does not overlap with a symbol to which a DMRS is mapped.
Specifically, there may be a case where a PUSCH is configured with 14 symbols, a mapping type is PUSCH mapping type A, and a DMRS is indicated, via a PBCH, to be mapped to a fourth symbol. As described above, when the PUSCH includes 4 hops, the number of symbols of the first hop may be 3. Therefore, since a fourth symbol does not exist in the first hop, a DMRS is not mapped. In this case, the terminal may consider that the length of the first hop is 4 and that another hop having a length of 4 has a length of 3 instead. For example, according to Table 5, the first hop to the fourth hop include 3, 4, 3, and 4 symbols, and the terminal may consider that the length of the first hop is 4, and the length of the second or fourth hop is 3. For example, the terminal may consider that the lengths of the first to fourth hops are 4, 3, 3, and 4. Alternatively, the terminal may estimate a length of a hop for DMRS mapping via a permutation combination of the respective hop lengths determined according to Table 5. For example, the terminal may consider that the lengths of the first to fourth hops are 4, 3, 4, and 3.
ii) If the number of symbols configured for uplink transmission is N, the numbers of symbols included in the first hop, second hop, third hop, and fourth hop may be determined based on N. Specifically, the numbers (N1 to N4) of symbols included in the first to fourth hops may be calculated as shown in Equation 13.
N1 = floor(N/4)
N2 = floor(N/2) − floor(N/4)
N3 = ceil(N/4)
N4 = N − floor(N/2) − ceil(N/4)  [Equation 13]
Table 6 shows the number of symbols included in the first to fourth hops according to the number N of symbols.
TABLE 6
# of symbols (N)    1st hop (N1)    2nd hop (N2)    3rd hop (N3)    4th hop (N4)
8                   2               2               2               2
9                   2               2               3               2
10                  2               3               3               2
11                  2               3               3               3
12                  3               3               3               3
13                  3               3               4               3
14                  3               4               4               3
According to Table 6, the numbers of symbols included in the first hop to the fourth hop may differ by at most 1 symbol according to the number N of symbols.
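The two hop-length rules above can be reproduced with a short sketch; the function names are hypothetical, and the outputs match Table 5 (Equation 12) and Table 6 (Equation 13).

# Minimal sketch: the 4-hop symbol splits of Equation 12 (method i) and Equation 13 (method ii).
import math

def hops_method_i(n):
    n12, n34 = n // 2, n - n // 2
    return (n12 // 2, n12 - n12 // 2, n34 // 2, n34 - n34 // 2)

def hops_method_ii(n):
    return (n // 4,
            n // 2 - n // 4,
            math.ceil(n / 4),
            n - n // 2 - math.ceil(n / 4))

for n in range(8, 15):
    print(n, hops_method_i(n), hops_method_ii(n))
# For example, n = 14 gives (3, 4, 3, 4) under Equation 12 and (3, 4, 4, 3) under Equation 13,
# matching the last rows of Table 5 and Table 6.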
As in i) described above, the method of ii) is also effective in terms of multiplexing between two uplink channels having the same length starting from the same symbol. The method of ii) is also effective in terms of multiplexing between two uplink channels having different lengths starting from different symbols. For example, there may be a first uplink channel having a length of 5 starting from a third symbol of a slot, and a second uplink channel having a length of 9 starting from a first symbol of a slot. In this case, the first uplink channel may be transmitted in two hops, and the second uplink channel may be transmitted in four hops. A first hop of the first uplink channel may include a third symbol and a fourth symbol of the slot, and a second hop may include a fifth symbol to a seventh symbol of the slot. A boundary between the first hop and the second hop of the first uplink channel may be between the fourth symbol and the fifth symbol of the slot. A first hop of the second uplink channel may include 2 symbols from a first symbol, a second hop may include subsequent 2 symbols, a third hop may include 3 symbols subsequent to the second hop, and a fourth hop may include 2 symbols subsequent to the third hop. Accordingly, the second uplink channel may include the same boundary as the first uplink channel. That is, a boundary between the second hop and the third hop of the second uplink transmission is between a fourth symbol and a fifth symbol, and it may thus include the same boundary as the first uplink channel. Therefore, frequency hopping may be performed at the same boundary.
When the terminal transmits a PUSCH via up to two hops, if the PUSCH overlaps with a PUCCH on a certain symbol, UCI of the PUCCH may be multiplexed with the PUSCH so as to be transmitted. In this case, the UCI may be divided in half according to a UCI type, wherein one half is multiplexed in a first hop and the other half is multiplexed in a second hop. The UCI type may be HARQ-ACK, CSI part1, or CSI part2. For example, HARQ-ACK may be divided into two, GACK(1) and GACK(2), as follows: GACK(1) = NL*QM*floor(GACK/(2*NL*QM)), GACK(2) = NL*QM*ceil(GACK/(2*NL*QM)). NL is the number of layers of the PUSCH, and QM is a modulation order of the PUSCH. HARQ-ACK may be multiplexed in the first hop based on GACK(1), and may be multiplexed in the second hop based on GACK(2). CSI part1 and CSI part2 may also be multiplexed in respective hops in the same manner.
When the terminal transmits a PUSCH via up to four hops, if the PUSCH overlaps with a PUCCH on a certain symbol, UCI of the PUCCH may be multiplexed with the PUSCH so as to be transmitted.
i) The terminal may divide the UCI into four pieces and multiplex the same in the four hops of the PUSCH, respectively. In this case, according to a UCI type, the UCI may be divided into four pieces, wherein a first 1/4 is multiplexed in a first hop, a second 1/4 is multiplexed in a second hop, a third 1/4 is multiplexed in a third hop, and the last 1/4 is multiplexed in a fourth hop. Sizes of the UCI multiplexed in the respective hops may be calculated as shown in Equation 14 or Equation 15.
GACK(1) = NL*QM*floor(GACK/(4*NL*QM))
GACK(2) = NL*QM*ceil(GACK/(4*NL*QM))
GACK(3) = NL*QM*floor(GACK/(4*NL*QM))
GACK(4) = NL*QM*ceil(GACK/(4*NL*QM))  [Equation 14]
GACK(1) = NL*QM*floor(floor(GACK/(2*NL*QM))/2)
GACK(2) = NL*QM*ceil(floor(GACK/(2*NL*QM))/2)
GACK(3) = NL*QM*floor(ceil(GACK/(2*NL*QM))/2)
GACK(4) = NL*QM*ceil(ceil(GACK/(2*NL*QM))/2)  [Equation 15]
HARQ-ACK may be multiplexed in the first hop, the second hop, the third hop, and the fourth hop, based on GACK(1), GACK(2), GACK(3), and GACK(4), respectively, according to Equation 14 or Equation 15. CSI part1 and CSI part2 may also be multiplexed in respective hops in the same manner.
ii) The terminal may divide the UCI and multiplex the same in the four hops of the PUSCH. In this case, the UCI may be divided in half according to a UCI type, wherein a first half is multiplexed in the first hop and the second hop, and the other half is multiplexed in the third hop and the fourth hop. Alternatively, the first half may be multiplexed in the first hop and the third hop, and the other half may be multiplexed in the second hop and the fourth hop. That is, the UCI is divided in half, and the divided pieces of the UCI may be repeatedly transmitted in two hops, respectively. In this case, the sizes of the UCI pieces (A, B) divided in half are as follows: A = NL*QM*floor(GACK/(2*NL*QM)), B = NL*QM*ceil(GACK/(2*NL*QM)). Dividing the UCI in half, in comparison with dividing the UCI into four, enables reuse of a method of determining a UCI size according to two hops defined in the existing NR system, and enables repeated transmission of the UCI in two different hops, so that dividing the UCI in half is effective in terms of reliability.
iii) Even if the PUSCH is configured to be transmitted via four hops, the terminal may divide the UCI and transmit the same via two hops. That is, the UCI may be multiplexed and transmitted in two hops, and may not be multiplexed in the remaining two hops. The terminal may reuse the method of determining a UCI size according to two hops defined in the existing NR system, and may not perform repeated transmission. Specifically, a method of selecting two hops from among four hops is as follows.
iii-a) The terminal may always select the two hops earliest in time. That is, when the PUSCH is divided into 4 hops, the terminal may multiplex and transmit UCI in a first hop and a second hop which are the earliest in time, and may not multiplex the UCI in a third hop and a fourth hop which are later in time. The base station may receive the UCI more quickly.
iii-b) The terminal may always select the last two hops. That is, when the PUSCH is divided into four hops, the terminal may multiplex and transmit UCI in a third hop and a fourth hop which are the last in time, and may not multiplex the UCI in a first hop and a second hop which are earlier in time. The terminal may secure time for multiplexing the UCI with the PUSCH. An additional processing time may be required for the terminal to multiplex the UCI with the PUSCH. In iii-b), in comparison with iii-a), since the UCI is multiplexed in the later hops, a processing time is secured, so that iii-b) can be easily implemented.
iii-c) The terminal may determine two hops, based on PUSCH hops overlapping with a PUCCH. For example, among the PUSCH hops overlapping with a PUCCH, an earliest hop and a subsequent hop may be selected. As another example, among the PUSCH hops overlapping with a PUCCH, a latest hop and a preceding hop may be selected.
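As a numerical sketch of the UCI segmentation above (Equations 14 and 15, and the two-way split reused in ii) and iii)); the bit counts and modulation order are illustrative assumptions, not values from the disclosure.

# Minimal sketch of the HARQ-ACK bit splits described above.
import math

def split_two(g_ack, n_l, q_m):
    unit = n_l * q_m
    return (unit * math.floor(g_ack / (2 * unit)),
            unit * math.ceil(g_ack / (2 * unit)))

def split_four_eq14(g_ack, n_l, q_m):
    unit = n_l * q_m
    lo = unit * math.floor(g_ack / (4 * unit))
    hi = unit * math.ceil(g_ack / (4 * unit))
    return (lo, hi, lo, hi)

def split_four_eq15(g_ack, n_l, q_m):
    unit = n_l * q_m
    a = math.floor(g_ack / (2 * unit))   # first half, counted in groups of NL*QM bits
    b = math.ceil(g_ack / (2 * unit))    # second half, counted in groups of NL*QM bits
    return (unit * math.floor(a / 2), unit * math.ceil(a / 2),
            unit * math.floor(b / 2), unit * math.ceil(b / 2))

# Hypothetical example: G_ACK = 100 coded bits, 1 layer, QPSK (QM = 2).
print(split_two(100, 1, 2))        # -> (50, 50)
print(split_four_eq14(100, 1, 2))  # -> (24, 26, 24, 26)
print(split_four_eq15(100, 1, 2))  # -> (24, 26, 24, 26)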
If two hops are selected based on the PUSCH hops overlapping with a PUCCH, a time line similar to a time line (i.e., delay) during transmission via the PUCCH may be provided. iii-d) The terminal may select two odd-numbered hops. That is, the terminal may multiplex and transmit the UCI in a first hop and a third hop, and may not multiplex the UCI in a second hop and a fourth hop. Alternatively, the terminal may select two even-numbered hops. That is, the terminal may multiplex and transmit the UCI in the second hop and the fourth hop, and may not multiplex the UCI in the first hop and the third hop. iii-e) The terminal may select two hops which are located farthest in the frequency domain. In the frequency domain, a distance may be calculated as a difference between lowest PRBs of respective hops. For example, when the first hop starts at PRB X1, the second hop starts at PRB X2, the third hop starts at PRB X3, and the fourth hop starts at PRB X4, a distance between an i-th hop and a j-th hop in the frequency domain is calculated with |Xi-Xj|, and two hops with a greatest distance may be selected based on this value. The terminal may multiplex and transmit the UCI in the selected two hops, and may not multiplex the UCI in the remaining two hops. iii-e) is effective in terms of frequency diversity. iii-f) The terminal may select two hops including a large number of symbols. For example, when the PUSCH is of 14 symbols and the numbers of symbols constituting a first hop, a second hop, a third hop, and a fourth hop are 3, 4, 3, and 4, the terminal may multiplex and transmit the UCI in the second hop and the fourth hop, and may not multiplex the UCI in the first hop and the third hop. iii-g) When two hops are selected via the methods of iii-a) to iii-f), hops that satisfy a specific condition may be excluded. The specific condition may be that a symbol to which a DMRS is mapped is located at a last symbol in a hop. This is because UCI cannot be multiplexed in a symbol subsequent to a symbol to which a DMRS is mapped. Alternatively, the specific condition may be a case in which, due to lack of resources in the hop, UCI cannot be multiplexed after a symbol to which a DMRS is mapped. iii-h) The base station may configure, for the terminal, a hop in which the UCI is multiplexed. This configuration may be configured via an RRC signal, and may be configured via DCI. Hereinafter, descriptions will be provided for a method of UCI multiplexing according to frequency hopping when a PUSCH is repeatedly transmitted. The terminal may repeatedly transmit the same TB via repeated PUSCH transmission. For coverage improvement, DMRSs between repeatedly transmitted PUSCHs/PUCCHs which are different from each other may be combined and used for channel estimation. FIG.41illustrates a repeatedly transmitted PUSCH according to an embodiment of the present disclosure. FIG.42andFIG.43illustrate a method of multiplexing a repeatedly transmitted PUSCH and UCI included in a repeatedly transmitted PUSCH according to an embodiment of the present disclosure. A first DMRS included in a repeatedly transmitted first PUSCH and a second DMRS included in a repeatedly transmitted second PUSCH may be transmitted on different symbols. That is, the first DMRS may be transmitted on a first symbol among symbols scheduled for the first PUSCH, and the second DMRS may be transmitted on a second symbol among symbols scheduled for the second PUSCH. 
Phase continuity should be satisfied when the terminal transmits DMRSs on different repeatedly transmitted PUSCHs. That is, the first PUSCH and the second PUSCH may be transmitted in the same beamforming situation. In addition, the first PUSCH and the second PUSCH need to have the same quasi-co-location (QCL). In addition, a transmission power for transmission of the first PUSCH and a transmission power for transmission of the second PUSCH should be the same. The base station may perform channel estimation by jointing the first DMRS and the second DMRS, and may receive the first PUSCH and the second PUSCH repeatedly transmitted based on a channel estimation result. Some PUSCHs among repeatedly transmitted PUSCHs may be transmitted in a first frequency band and the remaining PUSCHs may be transmitted in a second frequency band. In this case, the first frequency band may be a first hop, and the second frequency band may be a second hop. Accordingly, multiple repeatedly transmitted PUSCHs may be included in the first hop, and other multiple repeatedly transmitted PUSCHs may be included in the second hop. Referring toFIG.41A, PUSCHs may be configured to be repeatedly transmitted in four slots. In this case, for inter-slot frequency hopping, a first PUSCH may be repeatedly transmitted in a first slot, a second PUSCH may be repeatedly transmitted in a second slot, a third PUSCH may be repeatedly transmitted in a third slot, and a fourth PUSCH may be repeatedly transmitted in a fourth slot. Here, the first frequency band and the third frequency band may be the same, and the second frequency band and the fourth frequency band may be the same. Referring toFIG.41B, joint channel estimation may be configured. In this case, a first PUSCH repetition in a first slot and a second PUSCH repetition in a second slot may be transmitted in a first frequency band, and a third PUSCH repetition in a third slot and a fourth PUSCH repetition in a fourth slot may be transmitted in a second frequency band. In addition, a DMRS included in the first PUSCH repetition and a DMRS included in the second PUSCH repetition may be jointed and used for channel estimation of the first frequency band, and a DMRS included in the third PUSCH and a DMRS included in the fourth PUSCH may be jointed and used for channel estimation of the second frequency band. UCI Multiplexing Method UCI included in repeatedly transmitted PUSCHs may be multiplexed and transmitted. In this case, if the repeatedly transmitted PUSCHs are transmitted in different frequency bands (different hops), frequency diversity cannot be obtained via UCI. Hereinafter, a method of obtaining frequency diversity via UCI will be described. PUSCH repetition described in the present specification may have the same meaning as repeatedly transmitted PUSCH. If multiple repeatedly transmitted PUSCHs are configured in each frequency band (each hop), one PUSCH may be selected for each frequency band. i) One PUSCH earliest in time may be selected in each frequency band (each hop). Referring toFIG.41B, the first PUSCH repetition and the second PUSCH repetition may be configured in the first frequency band (first hop), wherein the first PUSCH repetition that is earlier in time among the two PUSCH repetitions may be selected. Similarly, if the third PUSCH repetition and the fourth PUSCH repetition are configured in the second frequency band (second hop), the third PUSCH repetition that is the earliest in time may be selected. 
Accordingly, the UCI may be multiplexed and transmitted with the first PUSCH repetition and the third PUSCH repetition.
ii) In each frequency band (each hop), one PUSCH repetition that is the last in time may be selected. Referring to FIG.41B, if the first PUSCH repetition and the second PUSCH repetition are configured in the first frequency band (first hop), the second PUSCH repetition that is the last in time may be selected. Similarly, if the third PUSCH repetition and the fourth PUSCH repetition are configured in the second frequency band (second hop), the fourth PUSCH repetition that is the last in time may be selected. Accordingly, the UCI may be multiplexed and transmitted with the second PUSCH repetition and the fourth PUSCH repetition. Compared to UCI multiplexing in a preceding PUSCH repetition, the method of UCI multiplexing in a subsequent PUSCH repetition can secure a time required for UCI multiplexing.
The PUSCH repetitions including UCI according to the methods of i) and ii) described above may not be PUSCH repetitions that are consecutive in time. Accordingly, the base station may be required to store UCI included in one PUSCH repetition and wait for another PUSCH repetition. Therefore, additional hardware for UCI storage may be required. A method of transmitting UCI in consecutive PUSCHs will be described.
iii) One PUSCH repetition located last in time may be selected in a frequency band (hop) ahead in time, and one PUSCH repetition located earliest in time may be selected in a frequency band (hop) later in time. Referring to FIG.41B, among the first PUSCH repetition and the second PUSCH repetition configured in the first frequency band (first hop), the second PUSCH repetition that is later in time may be selected. Similarly, in the second frequency band (second hop), the third PUSCH repetition that is earlier in time may be selected. Accordingly, the UCI may be multiplexed and transmitted with the second PUSCH repetition and the third PUSCH repetition. That is, the UCI may be multiplexed and transmitted with the second PUSCH and the third PUSCH which are consecutive PUSCHs in time.
iv) The base station may configure an index of the PUSCH repetition in which the UCI is multiplexed. The terminal may multiplex and transmit the UCI with a PUSCH repetition determined according to the index configured by the base station.
DMRSs included in PUSCHs repeatedly transmitted in the same PRB in the frequency domain may be jointed and used for channel estimation (joint channel estimation). In order to reduce DMRS overhead, increase channel estimation accuracy, and transmit a large amount of data for joint channel estimation, it is necessary to reduce the density of symbols to which a DMRS is mapped or to perform DMRS-less repeated PUSCH transmission. The following shows information configured for the terminal by the base station in order to configure the number of symbols to which a DMRS included in a PUSCH is mapped. Hereinafter, repeatedly transmitted PUSCHs transmitted in the same PRB may be described as a PUSCH-bundle.
Time domain resource allocation (TDRA): Resource allocation information of the time domain.
A PUSCH mapping type in the time domain and a PUSCH starting symbol index and length may be included.
Frequency hopping flag: A flag indicating whether to perform frequency hopping of the PUSCH, which is indicated with a size of 1 bit in DCI of DCI format 0_1 or 0_2 included in a PDCCH.
dmrs-AdditionalPosition: Information on the number of symbols and symbol positions to which a DMRS is mapped, the DMRS being added according to the number of symbols constituting a PUSCH configured from a higher layer.
If PUCCHs and PUSCHs overlap in the time domain, the terminal may multiplex UCI with an earliest PUSCH in the time domain from among overlapping PUSCHs and may transmit no PUCCH. When the UCI is multiplexed with the PUSCH, in order to secure reliability, HARQ-ACK may be mapped from a symbol immediately subsequent to a symbol to which a DMRS of the PUSCH is mapped. CSI-part1 and CSI-part2 may be mapped after the symbol to which HARQ-ACK is mapped. In this case, if the HARQ-ACK is 2 bits or smaller, the HARQ-ACK may be punctured, and if the HARQ-ACK exceeds 2 bits, the HARQ-ACK may be rate-matched. However, if a PUCCH and a PUSCH-bundle overlap, there may not be a symbol to which a DMRS is mapped in the PUSCHs, and the UCI may not be multiplexed. Hereinafter, descriptions will be provided for a method of, via UCI multiplexing, guaranteeing reliability of UCI and obtaining PUSCH coverage gain.
In order to guarantee reliability of UCI, the terminal may multiplex the UCI only in a PUSCH having a symbol to which a DMRS is mapped. For joint channel estimation, a PUSCH in which UCI is multiplexed may be selected based on information to be described later. Based on first information, if a PUSCH overlapping with a PUCCH has a symbol to which a DMRS is mapped, the terminal may select the overlapping PUSCH for UCI multiplexing. In other words, adjacent PUSCHs of the same PRB as that for the overlapping PUSCH are not considered when UCI is multiplexed. Based on second information, a PUSCH having a symbol to which a DMRS is mapped is selected from among PUSCHs which are consecutive in the time domain and are in the same PRB in the frequency domain, and UCI may be multiplexed. The terminal may segment UCI and multiplex the same not only in a PUSCH overlapping with a PUCCH but also in all PUSCHs having a symbol to which a DMRS is mapped from among PUSCHs consecutively and repeatedly transmitted in the same PRB as that for the overlapping PUSCH. Based on third information, if a PUSCH overlapping with a PUCCH does not have a symbol to which a DMRS is mapped, the terminal may multiplex UCI in k PUSCHs most adjacent to the overlapping PUSCH and transmit the multiplexed UCI. Based on fourth information, if a PUSCH overlapping with a PUCCH has a symbol to which a DMRS is mapped, the terminal may multiplex UCI in k PUSCHs most adjacent to the overlapping PUSCH and transmit the multiplexed UCI. In the third and fourth information, the adjacent PUSCHs should be PUSCHs that satisfy the aforementioned UCI multiplexing conditions, and the k value may be a value configured by the base station.
The terminal may select a PUSCH in which UCI is multiplexed, regardless of whether a repeatedly transmitted PUSCH includes a DMRS.
i) UCI may be equally segmented and multiplexed in repeatedly transmitted PUSCHs. The terminal may segment the UCI into pieces of as equal a size as possible and multiplex the same in all PUSCHs within a PUSCH-bundle overlapping with a PUCCH.
For example, the UCI may be multiplexed only in the PUSCHs within the PUSCH-bundle overlapping with the PUCCH. As another example, the terminal may multiplex the UCI not only in a PUSCH-bundle overlapping with a PUCCH but also in a PUSCH-bundle configured in different hops in the frequency domain. Multiplexing of UCI may be effective in extending coverage via frequency diversity gain, in addition to joint channel estimation.
ii) UCI may be multiplexed in a specific PUSCH among repeatedly transmitted PUSCHs. The UCI may be multiplexed in a PUSCH corresponding to an odd-numbered or even-numbered index within a PUSCH-bundle overlapping with a PUCCH.
iii) UCI may be multiplexed in as many PUSCHs as the number configured (indicated) by the base station from a PUSCH-bundle overlapping with a PUCCH. The base station may configure (provide), for the terminal, information (value) on an offset and a periodicity for a PUSCH in which UCI is to be multiplexed. Referring to FIG.42, the base station may configure (indicate) an offset of 1 and a periodicity of 2 for the terminal. The terminal may multiplex and transmit UCI in a first PUSCH and a fourth PUSCH in a PUSCH-bundle overlapping with a PUCCH. In addition, the base station may configure (provide), for the terminal, information (value) on an index of the PUSCH in which the UCI is to be multiplexed. Referring to FIG.43, if the base station configures an index of 2 for the terminal, the terminal may multiplex and transmit UCI in a third PUSCH of the PUSCH-bundle.
iv) UCI may be multiplexed in a PUSCH earliest in the time domain in the PUSCH-bundle overlapping with a PUCCH. The terminal may multiplex the UCI in an earliest PUSCH for fast feedback, such as HARQ-ACK.
In the described i) to iv), if inter-slot frequency hopping is configured, the terminal may multiplex UCI only in a PUSCH-bundle including a PUSCH earliest in the time domain from among PUSCHs overlapping with a PUCCH. Alternatively, the terminal may multiplex UCI in the same symbol position as that of a PUSCH-bundle including a PUSCH earliest in the time domain from among overlapping PUSCHs in all frequency hops. In an embodiment in which the terminal multiplexes UCI in a PUSCH that does not include a DMRS, the terminal may multiplex UCI in a PUSCH that does not include a DMRS symbol, according to a new rule. The PUSCH overlapping with a PUCCH in i) to iv) described above may refer to all repeated PUSCHs including a PUSCH overlapping with a PUCCH in units of symbols or slots.
FIG.44 illustrates transmission cancellation of a repeatedly transmitted PUSCH, based on a repeatedly transmitted PUCCH according to an embodiment of the present disclosure. If a repeatedly transmitted PUCCH and a repeatedly transmitted PUSCH overlap in one or more slots, a terminal transmits only the PUCCH of the overlapping slot and does not transmit the PUSCH of the overlapping slot. Referring to FIG.44, a repeatedly transmitted PUCCH and a repeatedly transmitted PUSCH may overlap during a period from slot n+2 to slot n+5. In this case, the terminal may transmit only the PUCCH without transmitting the PUSCH of slots n+2 to n+5. If the PUSCH of the overlapping period is not transmitted, the untransmitted PUSCH may not be deferred to a subsequent slot, and thus it is difficult to obtain coverage gain from repeated PUSCH transmission. A method for solving this problem will be described below.
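Before turning to that method, the overlap in the FIG.44 example can be expressed with a short sketch; the slot extents used below are hypothetical, chosen only so that the overlap is slots n+2 to n+5 as in the figure.

# Minimal sketch: find the slots in which a repeated PUCCH and a repeated PUSCH overlap;
# conventionally, only the PUCCH is transmitted in those slots and the PUSCH is dropped there.
def overlapping_slots(pusch_slots, pucch_slots):
    return sorted(set(pusch_slots) & set(pucch_slots))

pusch_slots = range(0, 8)   # PUSCH repeated in slots n .. n+7 (n = 0 here, assumed)
pucch_slots = range(2, 6)   # PUCCH repeated in slots n+2 .. n+5
print(overlapping_slots(pusch_slots, pucch_slots))  # -> [2, 3, 4, 5]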
If a repeatedly transmitted PUCCH overlaps with a repeatedly transmitted PUSCH, the terminal may multiplex UCI, which is included in the PUCCH, in the PUSCH and transmit the same. In this case, the overlapping PUCCH may not be transmitted. That is, in order to secure coverage gain of the PUSCH, the terminal may transmit the PUSCH by multiplexing the UCI included in the PUCCH, without dropping the overlapping PUSCH. An HARQ-ACK delay may be increased compared to a conventional scheme of dropping a PUSCH, but all information (data and UCI) to be transmitted can be transmitted, so that it is efficient in terms of reliability of a PUSCH and a PUCCH. i) When a PUCCH and a PUSCH overlap, the terminal may multiplex UCI, which is included in the overlapping PUCCH, in the PUSCH and transmit the same. Referring toFIG.44, a PUCCH and a PUSCH overlap in a period from slot n+2 to slot n+5. Accordingly, the terminal may transmit UCI by multiplexing the same in the PUSCH, but may not transmit the PUCCH, the UCI being included in the PUCCH of the period from slot n+2 to slot n+5. The terminal may segment the UCI into the number of overlapping PUSCHs (number of slots) and multiplex the same. That is, the terminal may segment the UCI included in the PUCCH into 4 slots of the PUSCH (slots n+2 to n+5) and multiplex the same. The terminal may multiplex the UCI in one PUSCH without segmenting the UCI. That is, the PUSCH in which the UCI is multiplexed may be repeatedly transmitted 4 times. ii) When a PUSCH and a PUCCH overlap, the terminal may multiplex UCI of the PUCCH in a specific PUSCH. In this case, the specific PUSCH may be predefined between the base station and the terminal, or may be configured for the terminal via the base station. A) A specific PUSCH may be an earliest PUSCH in the time domain from among overlapping PUSCHs. For faster HARQ-ACK feedback, the terminal may multiplex the UCI only in the earliest PUSCH in the time domain. In this case, among PUSCHs overlapping with the PUCCH, a PUSCH without multiplexing may be transmitted as it is. B) A specific PUSCH may be a PUSCH which is the earliest in the time domain from among PUSCHs overlapping with the PUCCH and is transmitted in a different PRB in the frequency domain. For frequency diversity gain for the UCI as well as fast HARQ-ACK feedback, the terminal may multiplex the UCI in a PUSCH which is the earliest in the time domain and is transmitted in a different PRB. C) A specific PUSCH may be selected based on information configured or indicated by the base station. For example, if the base station configures/indicates information that an index is 1, the terminal may multiplex the UCI in a PUSCH having index1(i.e., a second PUSCH) from among PUSCHs overlapping with the PUCCH. As another example, the base station may configure (indicate), for the terminal, information on a start position and length of the PUSCH. If the base station configures/indicates, for the terminal, that a start position is 0 and a length of 2, then the terminal may multiplex the UCI in a first PUSCH (start position0) and a second PUSCH (length2) from among PUSCHs overlapping the PUCCH. FIG.45illustrates a repeatedly transmitted PUCCH according to an embodiment of the present disclosure,FIG.46illustrates a repeatedly transmitted PUCCH and intra-slot frequency hopping according to an embodiment of the present disclosure, andFIG.47illustrates a repeatedly transmitted PUCCH and inter-slot frequency hopping according to an embodiment of the present disclosure. 
Referring toFIG.45, since DMRSs included in PUCCH repetitions #1, #2, #3, and #4satisfy the aforementioned conditions for joint channel estimation, a base station may perform channel estimation by jointly using the corresponding DMRSs. In addition, if a PUCCH is repeatedly transmitted for frequency diversity gain, the PUCCH may be transmitted via frequency hopping. A frequency hopping type includes intra-slot frequency hopping and inter-slot frequency hopping.
Intra-slot frequency hopping
A terminal may divide a PUCCH in half in the time domain within a slot in which PUCCH transmission is configured, and map each of the two divided PUCCHs to two hops so as to transmit the same. In this case, the PUCCH may or may not be repeatedly transmitted. When the number of symbols constituting a PUCCH within one slot is referred to as "number of symbols", a first hop may include floor(number of symbols/2) symbols, and a second hop may include (number of symbols−floor(number of symbols/2)) symbols. Referring toFIG.46, a base station may configure a terminal to repeatedly transmit a PUCCH during 4 slots starting from slot n and to perform intra-slot frequency hopping. In this case, the number of symbols to which the PUCCH is allocated in one slot may be 14. The terminal may configure a first hop with the first 7 symbols (floor(number of symbols(14)/2)) of the PUCCH in each of slots n, n+1, n+2, and n+3, and a second hop may include the 7 symbols subsequent to the last symbol constituting the first hop (number of symbols(14)−floor(number of symbols(14)/2)). In this case, the first hop may be transmitted in a first frequency band and the second hop may be transmitted in a second frequency band.
Inter-slot frequency hopping
Based on a first slot of a first repeatedly transmitted PUCCH, a repetition transmission slot index (slot index for repetition) of a slot in which a PUCCH is repeatedly transmitted may be sequentially indexed. In this case, the first slot of the first repeatedly transmitted PUCCH may have slot index for repetition0. Referring toFIG.47, a base station may configure a terminal to repeatedly transmit a PUCCH during 4 slots starting from slot n and to perform inter-slot frequency hopping. In this case, a slot index for repetition of slot n may be 0, and slot indices for repetition of slots n+1, n+2, and n+3 may be 1, 2, and 3, respectively. The terminal may map, to a first hop, a PUCCH of a slot (i.e., a slot of repetition transmission slot index0or2) in which an even-numbered PUCCH is transmitted, among repeatedly transmitted PUCCHs. Similarly, the terminal may map, to a second hop, a PUCCH of a slot (i.e., slot index for repetition 1 or 3) in which an odd-numbered PUCCH is transmitted. In other words, the terminal may transmit a PUCCH in the first hop in slot n and slot n+2, and may transmit a PUCCH in the second hop in slot n+1 and slot n+3. PRBs of the first hop may be PRBs corresponding to the number of PRBs from a PRB of a starting PRB index. PRBs of the second hop may be PRBs corresponding to the number of PRBs from a PRB of a second hop PRB index. When PUCCHs are repeatedly transmitted via frequency hopping, a DMRS of the PUCCH transmitted in the first hop and a DMRS of the PUCCH transmitted in the second hop are transmitted in different PRBs, so that the DMRSs cannot be used for joint channel estimation. Hereinafter, descriptions will be provided for a frequency hopping method for improving coverage via frequency diversity gain and DMRS joint channel estimation.
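As an illustration of the intra-slot and inter-slot hopping rules described above, the following Python sketch splits the PUCCH symbols of one slot into two hops and maps a slot index for repetition to a hop. It is a simplified model of the described behavior, not an implementation of the specification; the function names are assumptions for illustration.

def intra_slot_hop_lengths(num_symbols):
    """First hop carries floor(num_symbols/2) symbols, second hop the remainder."""
    first = num_symbols // 2
    return first, num_symbols - first

def inter_slot_hop(slot_index_for_repetition):
    """Even slot indices for repetition map to the first hop, odd ones to the second hop."""
    return "first hop" if slot_index_for_repetition % 2 == 0 else "second hop"

print(intra_slot_hop_lengths(14))             # (7, 7)
print([inter_slot_hop(i) for i in range(4)])  # ['first hop', 'second hop', 'first hop', 'second hop']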
For convenience of description, a PUCCH is described, but the following descriptions may be equally applied to a PUSCH. Frequency Hopping Method for Joint Channel Estimation FIG.48toFIG.53illustrate a method of determining a slot index for repetition during PUCCH transmission via frequency hopping, according to an embodiment of the present disclosure. Hereinafter, a frequency hopping method for joint channel estimation will be described based on inter-slot frequency hopping. That is, a terminal may transmit an even-numbered repeatedly transmitted PUCCH by mapping the same to a first hop, and may transmit an odd-numbered repeatedly transmitted PUCCH by mapping the same to a second hop. In this case, a base station may configure the terminal to repeatedly transmit PUCCHs on N slots, and may configure that a specific number for configuration of a slot index for repetition is M. i) The terminal may maintain the same slot indices for repetition of PUCCHs repeatedly transmitted during a specific number of slots. For each of the specific number of slots, a slot index for repetition may be sequentially increased. The specific number may be the number of PUCCHs including a DMRS for joint channel estimation. Based on a slot of a first repeatedly transmitted PUCCH, slot indices for repetition of M slots may be determined to be 0. Thereafter, a slot index for repetition of a repeatedly transmitted PUCCH may be sequentially increased in every M slots. In this case, the slot index may be independent of whether the PUCCH is repeatedly transmitted. Referring toFIG.48, the base station may configure, for the terminal, that N is 4 and M is 2, and may configure repeated PUCCH transmission from slot n. The terminal may determine slot indices for repetition of two slots from slot n, i.e., slots n and n+1, to be 0, and may determine slot indices for repetition of two slots from slot n+2, i.e., slots n+2 and n+3, to be 1. PUCCHs of slot n and slot n+1 with the slot index for repetition of 0 may be transmitted in the first hop, and PUCCHs of slot n+2 and slot n+3 with the slot index for repetition of 1 may be transmitted in the second hop. Referring toFIG.49, the base station may configure, for the terminal, that N is 4 and M is 2, and may configure repeated PUCCH transmission from slot n. Based on the M value (2), the terminal may determine slot indices for repetition for slots n and n+1 to be 0, may determine slot indices for repetition for slots n+2 and n+3 to be 1, and may determine slot indices for repetition for slots n+4 and n+5 to be 2. Slots with a slot index for repetition of 0 may be transmitted in the first hop, slots with a slot index for repetition of 1 may be transmitted in the second hop, and slots with a slot index for repetition of 2 may be transmitted in the first hop. However, slot n+1 is a slot unavailable for PUCCH transmission, and slot n, slot n+2, slot n+3, and slot n+4 may be slots available for PUCCH transmission. Therefore, since the terminal needs to repeatedly transmit PUCCHs on 4 slots, the PUCCHs may be transmitted in four slots available for PUCCH transmission, which are slot n, slot n+2, slot n+3, and slot n+4. That is, PUCCHs of slots (slot n and slot n+4) with even-numbered slot indices for repetition may be transmitted in the first hop, and PUCCHs of slots (slot n+2 and slot n+3) with odd-numbered slot indices for repetition may be transmitted in the second hop. 
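A minimal Python sketch of rule i) follows; the slot index for repetition depends only on the distance from the first configured slot, regardless of whether a slot is available for PUCCH transmission. The function name is an assumption made only for illustration.

def slot_index_for_repetition_rule_i(slot, first_slot, m):
    """Rule i): the index increases by one for every m consecutive slots."""
    return (slot - first_slot) // m

# FIG.48-like case with N = 4, M = 2 and repetition starting at slot n (= 0 here):
# slots n and n+1 get index 0 (first hop), slots n+2 and n+3 get index 1 (second hop).
print([slot_index_for_repetition_rule_i(s, 0, 2) for s in range(4)])  # [0, 0, 1, 1]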
The terminal may configure a slot index for repetition by binding M consecutive slots regardless of whether a slot is available for PUCCH transmission. In addition, M consecutive slots are configured with the same slot index for repetition, so as to be transmitted in the same frequency band. Accordingly, if there is a slot unavailable for PUCCH transmission among the M consecutive slots, the number of slots in which a PUCCH is actually transmitted may be fewer than M. ii) The terminal may maintain the same slot index for repetition over a specific number of slots available for repeated PUCCH transmission. In addition, the terminal may sequentially increase the slot index for repetition for each group of the specific number of slots available for repeated PUCCH transmission. The specific number may be the number of PUCCHs including a DMRS used for joint channel estimation. Based on a slot of a first repeatedly transmitted PUCCH, slot indices for repetition of the first M slots available for PUCCH transmission may be determined to be 0. Thereafter, a slot index for repetition of a repeatedly transmitted PUCCH may be sequentially increased every M available slots. Referring toFIG.50, the base station may configure, for the terminal, that N is 4 and M is 2, and may configure repeated PUCCH transmission from slot n. In this case, slot n+1 is a slot unavailable for PUCCH transmission, and slot n, slot n+2, slot n+3, and slot n+4 may be slots available for PUCCH transmission. Based on the M value (2), the terminal may determine slot indices for repetition of slots n and n+2 to be 0 and may determine slot indices for repetition of slots n+3 and n+4 to be 1. Therefore, the terminal may transmit PUCCHs of slots n and n+2 with the slot index for repetition of 0 in the first hop, and may transmit PUCCHs of slots n+3 and n+4 with the slot index for repetition of 1 in the second hop. For joint channel estimation, PUCCHs should be transmitted in the same PRB of consecutive slots. For example, referring toFIG.48, PUCCHs configured in two consecutive slots of slot n and slot n+1 are transmitted in the first hop, and DMRSs included in the PUCCHs configured in slot n and slot n+1 may thus be used for joint channel estimation. Similarly, PUCCHs configured in two consecutive slots of slot n+2 and slot n+3 are transmitted in the second hop, and DMRSs included in the PUCCHs configured in slot n+2 and slot n+3 may thus be used for joint channel estimation. Referring toFIG.49, PUCCHs configured in two consecutive slots of slot n+2 and slot n+3 are transmitted in the second hop, and DMRSs included in the PUCCHs configured in slot n+2 and slot n+3 may thus be used for joint channel estimation. However, although PUCCHs configured in slot n and slot n+4 are transmitted in the first hop, since slot n and slot n+4 are not consecutive in the time domain, DMRSs included in the PUCCHs configured in slot n and slot n+4 cannot be used for joint channel estimation. Referring toFIG.50, PUCCHs configured in two consecutive slots of slot n+3 and slot n+4 are transmitted in the second hop, and DMRSs included in the PUCCHs configured in slot n+3 and slot n+4 may thus be used for joint channel estimation. However, although PUCCHs configured in slot n and slot n+2 are transmitted in the first hop, since slot n and slot n+2 are not consecutive in the time domain, DMRSs included in the PUCCHs configured in slot n and slot n+2 cannot be used for joint channel estimation.
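Rule ii) can be sketched in the same way, except that only slots available for PUCCH transmission are counted. The availability test passed in as a function, and the function name itself, are assumptions of this illustration.

def slot_index_for_repetition_rule_ii(slot, first_slot, m, is_available):
    """Rule ii): the index increases by one for every m slots available for PUCCH transmission."""
    available_before = sum(1 for s in range(first_slot, slot) if is_available(s))
    return available_before // m

# FIG.50-like case: slot n+1 (= 1 here) is unavailable and M = 2.
# Slots n and n+2 get index 0, slots n+3 and n+4 get index 1.
is_available = lambda s: s != 1
print([slot_index_for_repetition_rule_ii(s, 0, 2, is_available) for s in (0, 2, 3, 4)])  # [0, 0, 1, 1]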
In order for DMRSs to be used for joint channel estimation, DMRSs included in PUCCHs need to be transmitted in the same hop in consecutive slots. Referring toFIG.51, the base station may configure, for the terminal, that N is 4 and M is 2, and may configure repeated PUCCH transmission from slot n. In this case, slot n+1, slot n+2, and slot n+5 may be slots unavailable for PUCCH transmission, and slot n, slot n+3, slot n+4, and slot n+6 may be slots available for PUCCH transmission. Since the terminal needs to transmit PUCCHs on 4 slots, the PUCCHs may be transmitted in slot n, slot n+3, slot n+4, and slot n+6. Referring toFIG.51(a), slot indices for repetition may be configured according to i) described above. Slot indices for repetition of slot n and slot n+1 may be configured to be 0, slot indices for repetition of slot n+2 and slot n+3 may be configured to be 1, slot indices for repetition of slot n+4 and slot n+5 may be configured to be 2, and a slot index for repetition of slot n+6 may be configured to be 3. Therefore, the PUCCHs configured in slot n and slot n+4 with slot indices for repetition corresponding to even numbers may be transmitted in the first hop, and the PUCCHs configured in slot n+3 and slot n+6 with slot indices for repetition corresponding to odd numbers may be transmitted in the second hop. Referring toFIG.51(b), slot indices for repetition may be configured according to ii) described above. Slot indices for repetition of slot n and slot n+3 may be configured to be 0, and slot indices for repetition of slot n+4 and slot n+6 may be configured to be 1. Therefore, the PUCCHs configured in slot n and slot n+3 with slot indices for repetition corresponding to even numbers may be transmitted in the first hop, and the PUCCHs configured in slot n+4 and slot n+6 with slot indices for repetition corresponding to odd numbers may be transmitted in the second hop. According toFIG.51(a)andFIG.51(b), PUCCHs configured in slot n+3 and slot n+4 may be transmitted in different hops. Referring toFIG.52, the base station may configure, for the terminal, that N is 8 and M is 2, and may configure repeated PUCCH transmission from slot n. Slot n+3, slot n+4, and slot n+7 are slots unavailable for PUCCH transmission, and slot n, slot n+1, slot n+2, slot n+5, slot n+6, slot n+8, slot n+9, and slot n+10 are slots available for PUCCH transmission. Since the terminal needs to transmit PUCCHs on 8 slots, the PUCCHs may be transmitted in slot n, slot n+1, slot n+2, slot n+5, slot n+6, slot n+8, slot n+9, and slot n+10. Referring toFIG.52(a), slot indices for repetition may be configured according to i) described above. Slot indices for repetition of slot n and slot n+1 may be configured to be 0, slot indices for repetition of slot n+2 and slot n+3 may be configured to be 1, slot indices for repetition of slot n+4 and slot n+5 may be configured to be 2, slot indices for repetition of slot n+6 and slot n+7 may be configured to be 3, slot indices for repetition of slot n+8 and slot n+9 may be configured to be 4, and a slot index for repetition of slot n+10 may be configured to be 5. Therefore, the PUCCHs configured in slot n, slot n+1, slot n+5, slot n+8, and slot n+9 with slot indices for repetition corresponding to even numbers may be transmitted in the first hop, and the PUCCHs configured in slot n+2, slot n+6, and slot n+10 with slot indices for repetition corresponding to odd numbers may be transmitted in the second hop. 
Referring toFIG.52(b), slot indices for repetition may be configured according to ii) described above. Slot indices for repetition of slot n and slot n+1 may be configured to be 0, slot indices for repetition of slot n+2 and slot n+5 may be configured to be 1, slot indices for repetition of slot n+6 and slot n+8 may be configured to be 2, and slot indices for repetition of slot n+9 and slot n+10 may be configured to be 3. Therefore, the PUCCHs configured in slot n, slot n+1, slot n+6, and slot n+8 with slot indices for repetition corresponding to even numbers may be transmitted in the first hop, and the PUCCHs configured in slot n+2, slot n+5, slot n+9, and slot n+10 with slot indices for repetition corresponding to odd numbers may be transmitted in the second hop. Referring toFIG.52, PUCCHs configured in the consecutive slots of slot n+5 and slot n+6 may be transmitted in different hops. According toFIG.51andFIG.52, even if PUCCHs are configured in consecutive slots, different slot indices for repetition are configured and the PUCCHs are thus transmitted in different hops. Therefore, DMRSs included in the PUCCHs configured in consecutive slots cannot be used for joint channel estimation. Hereinafter, descriptions will be provided for a method of using DMRSs included in PUCCHs configured in consecutive slots for joint channel estimation. iii) The terminal may configure, with the same slot index for repetition, slots available for joint channel estimation from among a specific number of slots available for transmission of repeatedly transmitted PUCCHs. The slots available for joint channel estimation may be consecutive slots in the time domain from among slots available for transmission of repeatedly transmitted PUCCHs. The specific number may be the number of PUCCHs including a DMRS used for joint channel estimation. The terminal may configure the same slot index for repetition by grouping M consecutive slots among slots available for PUCCH transmission. In addition, slot indices for repetition of consecutive slots among slots available for PUCCH transmission may be sequentially increased every M slots. In this case, if the number of consecutive slots is fewer than M, the same slot index for repetition may be configured for those fewer than M consecutive slots. Non-consecutive slots may be configured with different slot indices for repetition, and slot indices for repetition may be sequentially indexed from the earliest slot among the non-consecutive slots over the subsequent slots. If a slot index for repetition of a slot configured for a first repeatedly transmitted PUCCH, which is configured (indicated) by the base station, is 0, and there are M slots consecutive to the slot configured for the first PUCCH, slot indices for repetition of the M slots may be 0. Thereafter, slot indices for repetition of the next M consecutive slots available for PUCCH transmission may be 1. If there are not M consecutive slots, that is, if there is a non-consecutive slot, the terminal may obtain consecutive slots after the non-consecutive slot. For example, if a slot index for repetition of a slot preceding the non-consecutive slot is X, a slot index for repetition of a first slot among the consecutive slots after the non-consecutive slot may be X+1. Similarly, slot indices for repetition of M consecutive slots including the first slot among the consecutive slots after the non-consecutive slot may be X+1.
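Rule iii) therefore groups consecutive available slots, up to M at a time, under one slot index for repetition; a gap (a slot unavailable for PUCCH transmission) or a full group starts a new index. A minimal Python sketch under these assumptions follows; the function name and the dictionary output are illustrative only.

def slot_indices_for_repetition_rule_iii(available_slots, m):
    """available_slots: ascending list of slots available for PUCCH transmission."""
    indices, current_index, group_size, prev = {}, 0, 0, None
    for s in available_slots:
        if prev is not None and (s != prev + 1 or group_size == m):
            current_index += 1          # a gap or a full group starts a new index
            group_size = 0
        indices[s] = current_index
        group_size += 1
        prev = s
    return indices

# With M = 2 and slots n, n+3, n+4, and n+6 available (as in the FIG.51 setting), slot n gets
# index 0, slots n+3 and n+4 share index 1, and slot n+6 gets index 2, so the DMRSs of the
# consecutive slots n+3 and n+4 end up in the same hop and can be jointly estimated.
print(slot_indices_for_repetition_rule_iii([0, 3, 4, 6], 2))  # {0: 0, 3: 1, 4: 1, 6: 2}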
Referring toFIG.53(a), the terminal may configure the same slot index for repetition by grouping two (M=2) consecutive slots available for PUCCH transmission. Since slot n+1 and slot n+2 are slots unavailable for PUCCH transmission, a slot used for PUCCH transmission consecutive to slot n does not exist. Therefore, only slot n may be configured with slot index for repetition0. A slot index for repetition of slot n+3, which is a first slot used for PUCCH transmission after slot n, may be configured to be 1. Since slot n+3 and a subsequent slot of slot n+4 are consecutive, slot indices for repetition of slot n+3 and slot n+4 may be configured to be the same. A slot index for repetition of slot n+6, which is a slot used for PUCCH transmission after slot n+4, may be configured to be 2 (due to slot n+5 being unavailable for PUCCH transmission). Therefore, the terminal may transmit, in the first hop, the PUCCHs configured in slot n and slot n+6 with slot indices for repetition corresponding to even numbers, and may transmit, in the second hop, the PUCCHs configured in slot n+3 and slot n+4 with slot indices for repetition corresponding to odd numbers. In comparison with the description inFIG.51, since the PUCCHs configured in slot n+3 and slot n+4 are transmitted in the same hop, DMRSs configured in the PUCCHs may be used for joint channel estimation. Referring toFIG.53(b), the terminal may configure 0 as a slot index for repetition of a first repeatedly transmitted PUCCH, and may configure 0 as a slot index for repetition of slot n+1 consecutive to slot n among slots available for PUCCH transmission. After slot n+1, a slot index for repetition of slot n+2, which is an earliest slot available for PUCCH transmission, may be configured to 1. There is no slot available for PUCCH transmission consecutive to slot n+2 (slot n+3 and slot n+4 are slots unavailable for PUCCH transmission). Therefore, after slot n+2, a slot index for repetition of slot n+5, which is an earliest slot available for PUCCH transmission, may be configured to 2. In addition, a slot index for repetition of slot n+6, which is a slot adjacent to slot n+5, may be indexed identically to slot n+5. FIG.54toFIG.59illustrate a method of mapping PUCCH repetitions to frequency hops according to an embodiment of the present disclosure. iv) A base station may configure (indicate), for a terminal, an offset and a period of a time window for frequency hopping. The terminal may apply the period and offset to slots configured for repeated PUCCH transmission, and may map PUCCHs within the period to the same hop so as to transmit the PUCCHs. In this case, the base station may configure (indicate) the period and offset regardless of repeated PUCCH transmission. Referring toFIG.54, the base station may configure N to be 4 or 8 in a cell with a subcarrier spacing of 15 kHz, and may configure a period to be 2 ms and an offset to be 0 ms regardless of the N value. Accordingly, when N is 4 or 8, the terminal may transmit two PUCCHs by mapping the same to one hop. The base station may configure (indicate) another period and offset for the terminal according to the number of repeated PUCCH transmissions. Referring toFIG.55, the base station may configure, for the terminal, that in a cell with a subcarrier spacing of 15 kHz, if N is 4, a period is 2 ms and an offset is 0 ms, and if N is 8, a period is 4 ms and an offset is 0 ms. 
Accordingly, if N is 4, the terminal may map two repeatedly transmitted PUCCHs to one hop so as to transmit the same, and if N is 8, the terminal may map four repeatedly transmitted PUCCHs to one hop so as to transmit the same. The number of slots (N) in which PUCCHs are repeatedly transmitted and the number (M) of slots (or a specific number to determine a slot index for repetition) included in one hop may be explicitly configured or implicitly configured by the base station. Hereinafter, a method of configuring N and M will be described in more detail. N and M Configuration Method i) The terminal may map, to the same frequency hop, PUCCHs repeatedly transmitted during a preconfigured number of slots so as to transmit the same. In this case, M may be configured regardless of the number of repeated PUCCH transmissions. Referring toFIG.56, if the terminal is configured with the number (N) of repeated PUCCH transmissions of 2, 4, or 8, M may be configured to be 2 regardless of the number of repeated transmissions. That is, the terminal may map, to one hop, two slots of repeatedly transmitted PUCCHs and transmit the same regardless of the number of repeated transmissions. ii) The terminal may map, to the same frequency hop, PUCCHs repeatedly transmitted during a preconfigured number of slots so as to transmit the same. In this case, M may be configured differently according to the number of repeated PUCCH transmissions. In this case, M may be configured by a function of N. Accordingly, flexible frequency hopping may be possible for repeatedly transmitted PUCCHs according to the number of repeated transmissions. Referring toFIG.57, M may be configured to be 1 if N is 2, M may be configured to be 2 if N is 4, and M may be configured to be 4 if N is 8. That is, 1 slot may be mapped to one hop if N is 2, 2 slots may be mapped to one hop if N is 4, and 4 slots may be mapped to one hop if N is 8. Hereinafter, descriptions will be provided for a method in which the terminal performs repeated PUCCH transmission via frequency hopping without a separate configuration of M from the base station. iii) The terminal may perform repeated PUCCH transmission via frequency hopping, based on the number of hops. The terminal may determine the number of hops to which repeatedly transmitted N PUCCHs are mapped for transmission, and may determine PUCCHs mapped to each hop. In this case, the number of hops may refer to the number of PUCCHs satisfying a condition for joint channel estimation. Referring toFIG.54, when N is 8, there may be a total of four hops that are a first hop (repetition #1, repetition #2), a second hop (repetition #3, repetition #4), a third hop (repetition #5, repetition #6), and a fourth hop #4(repetition #7, repetition #8). iii-a) The base station may configure the number of hops for the terminal, and the terminal may perform repeated PUCCH transmission via frequency hopping, based on the configured number of hops. Specifically, the terminal may map repeatedly transmitted N PUCCHs to K hops and transmit the same. For example, the terminal may map floor(N/K) PUCCHs in ascending order from the first hop to an (K−1)th hop and may map ceil(N/K) PUCCHs in ascending order to a K-th hop, so as to transmit the same. Referring toFIG.58, if the number (N) of repeated PUCCH transmissions is 8 and the number (K) of hops is configured to be 4, the terminal may map 2(floor(8/4)) PUCCHs to frequency hops #1, #2, and #3and may map 2(ceil(8/4)) PUCCHs to frequency hop #4, so as to transmit the same. 
That is, the terminal maps repetition #1and repetition #2to hop #1, maps repetition #3and repetition #4to hop #2, maps repetition #5and repetition #6to hop #3, and maps repetition #7and repetition #8to hop #4, so as to transmit the same. According to another embodiment, the terminal may map ceil(N/K) PUCCH repetitions to the first hop in ascending order, and may map floor(N/K) PUCCH repetitions in ascending order from the second hop to the K-th hop, so as to transmit the same. iii-b) The terminal may map PUCCHs which are repeatedly transmitted always in the same number of hops without configuration of the number of hops from the base station, so as to transmit the PUCCHs via frequency hopping. If iii-b) is used, when frequency hopping and joint channel estimation are applied together, a maximum possible number of repeatedly transmitted PUCCHs may be distributed and transmitted in equal frequency hops. The terminal may always divide N repeatedly transmitted PUCCHs into two hops and transmit the same. Floor(N/2) PUCCHs may be mapped to the first hop in ascending order, and N-floor(N/2) PUCCHs may be mapped to the second hop in ascending order. Referring toFIG.59, when the number (N) of repeated PUCCH transmissions is 8, the terminal may map 4 (floor(8/2)) PUCCHs to hop #1and may map 4 (ceil(8/2)) PUCCHs to hop #2, so as to transmit the same. That is, repetition #1, repetition #2, repetition #3, and repetition #4may be mapped to hop #1, and repetition #5, repetition #6, repetition #7, and repetition #8may be mapped to hop #2. As another embodiment, the terminal may map ceil(N/2) PUCCHs to the first hop in ascending order and may map floor(N/2) PUCCHs to the second hop in ascending order, so as to transmit the same. FIG.60illustrates scheduling of one physical uplink shared channel according to an embodiment of the present disclosure. A PUSCH including a DMRS available for joint channel estimation may be a PUSCH including one transport block. A transport block size (TB size (TBS)) may be determined based on one slot or multiple slots. Referring toFIG.60, a terminal may determine, as one TBS, two slots of slot n and slot n+1, for which PUSCH #1is configured. In this case, DMRSs are included in different slots, but if the aforementioned joint channel estimation condition is satisfied, the DMRSs may be used for joint channel estimation. FIG.61illustrates scheduling of multiple physical uplink shared channels according to an embodiment of the present disclosure. a) PUSCHs including DMRSs available for joint channel estimation may be repeatedly transmitted PUSCHs including one transport block. A transport block size may be determined based on one slot, and the PUSCHs may be repeatedly transmitted on multiple slots. For example, the terminal may transmit PUSCH repetition1in slot n and may transmit PUSCH repetition2in slot n+1. In this case, DMRSs are transmitted in different slots (slot n to slot n+1), but if the aforementioned joint channel estimation condition is satisfied, the DMRSs may be used for joint channel estimation. b) The PUSCHs may be PUSCHs including different transport blocks. In this case, the PUSCHs may be scheduled or activated via different DCI. Alternatively, the PUSCHs may be PUSCHs including different transport blocks scheduled or activated via one piece of DCI. For example, referring toFIG.61, the base station may configure the terminal to transmit PUSCH #1in slot n and transmit PUSCH #2in slot n+1. In this case, each of PUSCH #1and PUSCH #2may be scheduled via different DCI. 
DMRSs included in respective PUSCH #1and PUSCH #2are transmitted in different slots (slot n to slot n+1), but if the aforementioned joint channel estimation condition is satisfied, the DMRSs may be used for joint channel estimation. The base station may configure, for the terminal, a time domain window (or bundling window) for joint channel estimation. In this case, the base station may configure a DMRS to satisfy the aforementioned joint channel estimation condition, the DMRS being included in an uplink channel (PUCCH or PUSCH) transmitted in a specific time domain window. The described PUCCH or PUSCH may be repeatedly transmitted within a time domain window. In this case, the PUCCH or PUSCH may include one transport block or may include different transport blocks. In this case, the time domain window may be explicitly configured or implicitly configured by the base station. Hereinafter, a method of determining a time domain window will be described. Time Domain Window Determination Method FIG.62illustrates a method of determining a time domain window according to an embodiment of the present disclosure. i) The base station may explicitly transmit information on a time domain window to the terminal, and the terminal may determine the time domain window, based on the transmitted information on the time domain window. In this case, information on the time domain window may be information on a duration of the time domain window, and may specifically include at least one information of the number of slots, the number of symbols, and the number of repeated uplink channel transmissions. The terminal may transmit a PUCCH or PUSCH to satisfy a joint channel estimation condition in a time domain window configured by the base station. If the terminal receives information on the time domain window from the base station, the terminal needs to determine a time point at which the time domain window starts. i-a) A time point at which a time domain window starts may be a first symbol of a first slot of radio frame index0. For example, if a duration of the time domain window is 5 slots, the time domain window may be determined by grouping 5 slots from the first slot of radio frame index0. In this case, an index of the first slot of radio frame index0may be 0. i-b) A time point at which a time domain window starts may be a first uplink symbol of a first uplink slot of radio frame index0. An uplink slot refers to a slot including only an uplink symbol. For example, if a duration of the time domain window is 5 slots, the time domain window may be determined by grouping 5 slots from the first uplink slot of radio frame index0. i-c) A time point at which a time domain window starts may be a first non-downlink symbol of a first non-downlink slot of radio frame index0. A non-downlink slot may be a slot including at least one non-downlink symbol. A non-downlink symbol is a symbol other than a downlink symbol, and may be an uplink symbol or a flexible symbol. For example, if a duration of the time domain window is 5 slots, the time domain window may be determined by grouping 5 slots from the first non-downlink slot of radio frame index0. i-d) The base station may configure, for the terminal, an offset value for determination of a time point at which a time domain window starts. An offset value may be at least one of the number of slots, the number of symbols, and the number of repeated uplink channel transmissions. 
For example, if the offset value is X slots, X symbols, or X repetitions, the time domain window may be configured by grouping durations corresponding to X slots, X symbols, or X repetitions. In this case, the X value may be a value smaller than a duration of the time domain section. The base station may configure information (duration information) on multiple time domain windows for the terminal. Referring toFIG.62, when a base station configures TDD for a terminal, two patterns may be configured. In this case, different periods may be configured for the two patterns, respectively. If a period of a first pattern is P1 and a period of a second pattern is P2, P1+P2 may be a value of one of divisors of 20. Each pattern may include a DL symbol, a UL symbol, and a flexible symbol, and may be configured in the order of a DL symbol, a flexible symbol, and a UL symbol. Referring toFIG.62, the base station may configure P1 to be 2 ms and P2 to be 3 ms, and may configure a subcarrier spacing to be 30 KHz. In this case, the base station may configure, for the terminal, multiple patterns constituting the time domain. In this case, if only one time domain window is configured for multiple patterns, the configured one time domain window may not be suitable for multiple patterns. Accordingly, the base station may configure, for the terminal, multiple time domain windows corresponding to respective multiple patterns. Specifically, the base station may configure, for the terminal, a time domain window configured by the first pattern and a time domain window configured by the second pattern, i.e., two time domain windows. In this case, a duration of a first time domain window may be configured to be X1 slots, X1 symbols, and X1 repetitions, and a duration of a second time domain window may be configured to be X2 slots, X2 symbols, and X2 repetitions. The terminal may configure time domain window #0based on X1 slots, X1 symbols, or X1 repetitions and may configure time domain window #1based on X2 slots, X2 symbols, or X2 repetitions, from a time point at which the time domain window starts. That is, multiple time domain windows having different durations may be configured. In this case, the values of X1 and X2 may be values configured by the base station for the terminal. On the other hand, information on time domain windows indicated by X1 and X2 values may not be explicitly indicated by the base station and may be inferred by the terminal. That is, X1 may correspond to period P1, and X2 may correspond to period P2. Each of the first pattern and the second pattern may be a time domain window. Therefore, a DMRS included in a slot constituting the first pattern may be used for joint channel estimation, and a DMRS included in a slot constituting the second pattern may be used for joint channel estimation. ii) The terminal may determine a time domain window without receiving explicit information on the time domain window from the base station. That is, if the terminal does not receive explicit information on a time domain window from the base station, the terminal may implicitly determine a specific period as a time domain window. ii-a) The terminal may implicitly determine a time domain window, based on the number of repeated PUCCH or PUSCH transmissions. That is, the terminal may determine a time domain window from a time point at which repeated PUCCH or PUSCH transmission starts to a time point at which the repeated transmission ends. 
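As a simple illustration of grouping slots into explicitly configured time domain windows, the sketch below assumes that the window duration and the start offset are both expressed in slots relative to a common reference point (for example, the first slot of radio frame index 0, as in i-a) above); these assumptions and the function name are only for illustration.

def time_domain_window_index(slot, start_offset, duration):
    """Slots with the same returned index belong to the same time domain window."""
    if slot < start_offset:
        return None                      # before the first window starts
    return (slot - start_offset) // duration

# Duration of 5 slots and no offset: slots 0-4 form window 0, slots 5-9 form window 1.
print([time_domain_window_index(s, 0, 5) for s in range(10)])
# [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]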
Since the repeatedly transmitted PUCCHs or PUSCHs are in this case transmitted within the same time domain window, DMRSs included in the PUCCHs or PUSCHs may be used for joint channel estimation. ii-b) The terminal may implicitly determine a time domain window, based on a slot configuration. That is, the terminal may determine a time domain window according to a slot configuration in an unpaired spectrum. ii-c) The terminal may implicitly determine a time domain window, based on consecutive uplink slots. ii-d) The terminal may implicitly determine a time domain window, based on consecutive non-downlink slots. One or more slots or symbols may be included between resource areas (e.g., slots) in which repeated uplink channel transmission is configured. Specifically, one or more slots or symbols may be included between a resource area in which a repeatedly transmitted first PUSCH/PUCCH is configured and a resource area in which a repeatedly transmitted second PUSCH/PUCCH is configured. In this case, the one or more slots or symbols may be a maximum of X slots or symbols, where X may be a value configured by the base station. The one or more slots or symbols may be resources that are not used for uplink channel transmission. That is, a certain period (gap) may exist between resource areas in which repeatedly transmitted uplink channels are configured. In other words, a time domain window may be determined based on a certain gap existing between resource areas in which repeatedly transmitted uplink channels are configured. When the terminal determines a time domain window, based on consecutive uplink slots or non-downlink slots, if the number of slots constituting one time domain window is large, this may be disadvantageous in terms of terminal or base station complexity. Accordingly, one time domain window may be divided into multiple sub-time domain windows. In this case, DMRSs included in PUSCHs or PUCCHs transmitted in one sub-time domain window may be available for joint channel estimation. Sub-Time Domain Window Determination Method i) One time domain window may be divided based on a duration of a sub-time domain window. The base station may transmit duration information on a sub-time domain window to the terminal, and the terminal may divide a time domain window into multiple sub-time domain windows, based on the received duration information. In this case, the duration information may be at least one of the number of slots, the number of symbols, and the number of repeated uplink channel transmissions. Specifically, if a duration of a time domain window is N (N slots/symbols/repetitions) and a duration of a sub-time domain window is M (M slots/symbols/repetitions), the terminal may determine a first sub-time domain window by grouping a first slot/symbol/repetition to an M-th slot/symbol/repetition. In addition, the terminal may determine a second sub-time domain window by grouping an (M+1)th slot/symbol/repetition to a 2M-th slot/symbol/repetition. Similarly, the terminal may determine the last sub-time domain window by grouping a (k*M+1)th slot/symbol/repetition to the remaining (N-th) slot/symbol/repetition, where k may be calculated as floor(N/M). In this case, the number of slots/symbols/repetitions included in the last sub-time domain window may be fewer than M.
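Method i) above divides a time domain window of duration N into sub-windows of duration M. The following Python sketch (with hypothetical function and variable names) reproduces that grouping, with the last sub-window possibly containing fewer than M units.

def split_by_subwindow_duration(n, m):
    """Return sub-time domain windows as lists of 0-based unit indices (slots/symbols/repetitions)."""
    return [list(range(start, min(start + m, n))) for start in range(0, n, m)]

# A window of 7 units with a sub-window duration of 3: the last sub-window holds only 1 unit.
print(split_by_subwindow_duration(7, 3))  # [[0, 1, 2], [3, 4, 5], [6]]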
ii) A time domain window may be divided based on the number of sub-time domain windows. That is, the terminal may receive information on the number of sub-time domain windows from the base station, and the terminal may divide a time domain window into the indicated number of sub-time domain windows. For example, if the time domain window is N (N slots/symbols/repetitions) and the number of sub-time domain windows is M, the number of slots/symbols/repetitions included in one sub-time domain window may be ceil(N/M) or floor(N/M). Specifically, (N mod M) sub-time domain windows may include ceil(N/M) slots/symbols/repetitions, and (M−(N mod M)) sub-time domain windows may include floor(N/M) slots/symbols/repetitions. As another example, the number of slots/symbols/repetitions included in M−1 sub-time domain windows may be floor(N/M), and the number of slots/symbols/repetitions included in the one remaining sub-time domain window may be N−(M−1)*floor(N/M). Here, A mod B refers to the remainder obtained by dividing A by B. If the terminal determines a time domain window, based on consecutive uplink slots, a time domain window including uplink slots may be determined. In this case, it is necessary to determine a time domain window including a slot which is not an uplink slot but is available for uplink transmission. Specifically, it is necessary to determine a time domain window including a non-downlink slot. A non-downlink slot may be included in a time domain window of an adjacent uplink slot. For example, if slot n is a non-downlink slot and slot n+1 is an uplink slot, slot n may be included in a time domain window including slot n+1. In the NR system, various subcarrier spacings may be configured, and therefore the described symbols/slots/repetitions for determination of a (sub-)time domain window may vary according to subcarrier spacings. Therefore, it is necessary to determine a subcarrier spacing for determination of a (sub-)time domain window. In the present specification, a subcarrier spacing that may be referenced to determine a time domain window is referred to as a reference subcarrier spacing. Reference Subcarrier Spacing Determination Method i) When the base station configures TDD for the terminal, the base station may also configure information on a subcarrier spacing. That is, the terminal may use the subcarrier spacing, which is configured together when the base station configures TDD, as a reference subcarrier spacing which may be referenced to determine a time domain window. ii) When the base station configures one or multiple UL BWPs of a cell for the terminal, subcarrier spacings of the one or multiple UL BWPs may be configured. When determining a time domain window, the terminal may use, as a reference subcarrier spacing, one value among the one or multiple subcarrier spacings. For example, if multiple subcarrier spacings are configured, a smallest subcarrier spacing may be the reference subcarrier spacing. iii) When one UL BWP of each cell is activated, the terminal may use, as a reference subcarrier spacing, a subcarrier spacing of the activated UL BWP. iv) The terminal may use a predetermined subcarrier spacing as a reference subcarrier spacing. The predetermined subcarrier spacing may be determined differently for each frequency range (FR). The predetermined subcarrier spacing may be one value of the subcarrier spacings available in each FR, and may be a lowest subcarrier spacing. For example, for FR1, since 15 kHz, 30 kHz, and 60 kHz are available for a subcarrier spacing, a reference subcarrier spacing may be 15 kHz.
For FR2, since 60 kHz and 120 kHz are available for a subcarrier spacing, a reference subcarrier spacing may be 60 KHz. v) The base station may configure a reference subcarrier spacing of a cell for the terminal. In this case, the reference subcarrier spacing may not be greater than a subcarrier spacing configured in a UL BWP. Hereinafter, descriptions will be provided for a method in which the terminal autonomously determines a time domain window and transmits information on the determined time domain window to the base station. Method of Autonomous Time Domain Window Determination by Terminal i) The terminal may transmit information on a start time or an end time of a time domain window to the base station. For example, the terminal may inform, using a 1-bit value, the base station of information on the start time or end time of the time domain window. For example, the terminal may indicate a start time of a PUCCH or PUSCH by using “0” and may indicate a period other than the start time by using “1”. Specifically, if resource areas in which PUCCHs or PUSCHs are transmitted within the time domain window is slot n to slot n+3, the terminal may indicate “0” with a 1-bit value for PUCCHs or PUSCHs transmitted in slot n, and may indicate “1” with a 1-bit value for PUCCHs or PUSCHs transmitted in slot n+1, slot n+2, and slot n+3. In this case, indication targets of the indication value “0” or “1” may be interchanged. A 1-bit value may be multiplexed in a PUSCH, and may be multiplexed in a PUSCH in the same manner as HARQ-ACK. ii) When a time domain window is changed, the terminal may transmit, to the base station, information on the time domain window via toggling. For example, if the terminal has transmitted a 1-bit value of “0” for a PUSCH or PUCCH transmitted in a first time domain window, the terminal may transmit a 1-bit value of “1” for a PUSCH or PUCCH transmitted in a second time domain window. FIG.63toFIG.66illustrate a method of indicating a time domain window according to an embodiment of the present disclosure. If a base station fails to receive a PUSCH or PUCCH in a time domain window indicated by a terminal, ambiguity may occur between the terminal and the base station with respect to the time domain window. Referring toFIG.63A, the terminal may transmit information on a time domain window to the base station by using an autonomous interpretation method of terminal i). For example, the terminal may inform the base station of slots0to3as one time domain window and may inform of slot4or5as another time domain window. In this case, if the base station fails to receive a PUCCH or PUSCH in slots3and4, the base station may determine slots0to5as one time domain window so as to perform joint channel estimation. Referring toFIG.63B, the terminal may transmit information on a time domain window to the base station by using an autonomous interpretation method of terminal ii). For example, the terminal may inform the base station of slots0to2as one time domain window, may inform of slot3or4as another time domain window, and may inform of slot5as another time domain window. In this case, if the base station fails to receive a PUCCH or PUSCH in slots3and4, the base station may determine slots0to5as one time domain window so as to perform joint channel estimation. In this case, since the PUCCH or PUSCH transmitted by the terminal does not satisfy the joint channel estimation condition, the base station may fail to perform channel estimation, and coverage performance cannot be improved. 
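The ambiguity illustrated above can be reproduced with a small sketch: the base station groups the received slots into windows using the per-slot 1-bit indication, assuming here that "0" marks the first slot of a window and "1" a continuation (the mapping may be interchanged, as noted in i) above). If the slot carrying the "0" of a new window is not received, two windows are merged. The data layout below is a hypothetical simplification used only for this illustration.

def group_windows_from_start_bits(received):
    """received: list of (slot, bit) pairs for the uplink slots the base station actually
    decoded, where bit 0 marks the first slot of a time domain window and bit 1 a continuation."""
    windows = []
    for slot, bit in received:
        if bit == 0 or not windows:
            windows.append([slot])       # a new window starts here
        else:
            windows[-1].append(slot)     # continuation of the current window
    return windows

# Terminal view: windows {0,1,2,3} and {4,5}. If slots 3 and 4 (slot 4 carries the new "0")
# are not received, the base station merges everything into one window.
all_slots = [(0, 0), (1, 1), (2, 1), (3, 1), (4, 0), (5, 1)]
print(group_windows_from_start_bits(all_slots))                                    # [[0, 1, 2, 3], [4, 5]]
print(group_windows_from_start_bits([p for p in all_slots if p[0] not in (3, 4)]))  # [[0, 1, 2, 5]]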
A method of reducing such ambiguity between a terminal and a base station with respect to a time domain window is therefore required. Method of Solving Ambiguity for Time Domain Window i) The terminal may transmit a counter indicator as information on a time domain window to the base station. That is, the terminal may transmit, to the base station, information on a symbol set number within one time domain window. In this case, the symbol set may include repeated transmissions of uplink channels, symbols, and slots. Referring toFIG.64(a), the terminal may indicate to the base station that joint channel estimation is possible via uplink DMRSs transmitted in slots0to3and joint channel estimation is possible via an uplink DMRS transmitted in slot4or5. In this case, a starting slot available for joint channel estimation may be indicated with0via a counter indicator, and subsequent slots may be indicated with counter values of 1, 2, 3, . . . in ascending order. Referring toFIG.64(b), uplink DMRSs transmitted in slots0to2are available for joint channel estimation, and an uplink DMRS transmitted in slot3or4is available for joint channel estimation. In this case, the terminal may indicate, with0via a counter indicator, a starting slot available for joint channel estimation, and subsequent slots may be indicated by counter values in ascending order. Therefore, inFIG.64(a)andFIG.64(b), even when the base station fails to decode uplink transmissions in slots3and4, it may be seen, via the counter indicator, that joint channel estimation is not possible for uplink transmissions in slots2and5. This is because the counter indicator value of slot2and the counter indicator value of slot5do not satisfy an ascending order. i-a) The terminal may transmit, to the base station, information on a total indicator as information for joint channel estimation, in addition to a counter indicator. In this case, the total indicator may indicate the number of symbol sets included in one time domain window. A symbol set may include slots, symbols, and repeated transmissions. Referring toFIG.65(b), there may be cases in which the base station fails to receive uplink channels transmitted in slots2and3. In this case, if only a counter indicator exists as information for joint channel estimation, ambiguity may occur between the base station and the terminal with respect to a time domain window. Therefore, the terminal may inform the base station of a total indicator in addition to a counter indicator, thereby reducing ambiguity in a time domain window. In (a, b) of each slot inFIG.65(b), a is a value indicated by a counter indicator and b is a value indicated by a total indicator. That is, in slot0, the counter indicator indicates0and the value indicated by the total indicator is 2. Slot0and slot1are in one time domain window including two symbol sets, and therefore slot0and slot1have the same total indicator value. ii) The terminal may transmit information on an index of the time domain window to the base station. One time domain window is configured with the same index, and another time domain window is configured with a sequentially increased index, so that the terminal may inform the base station that the time domain windows are different time domain windows.
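The counter indicator of i), together with the total indicator of i-a), lets the base station detect such a gap: within one time domain window the counter values of the received transmissions must increase by exactly one, and with a total indicator the base station additionally knows how many symbol sets the window should contain. A minimal Python sketch under these assumptions (the function names are illustrative only):

def same_window(counter_prev, counter_curr):
    """Two consecutively received uplink transmissions belong to the same time domain
    window only if their counter indicator values are consecutive."""
    return counter_curr == counter_prev + 1

def window_complete(received_counters, total):
    """With a total indicator, the base station can also check whether every symbol set
    of the window was received."""
    return sorted(received_counters) == list(range(total))

# FIG.64-like case: slot2 carried counter 2 and slot5 carried counter 1, so they cannot be
# jointly estimated even though the intermediate slots were lost.
print(same_window(2, 1))                 # False
print(window_complete([0, 1], 2))        # True
print(window_complete([0], 2))           # False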
Referring toFIG.66, the terminal may inform the base station, via an identical index, that an uplink channel transmission is performed within the same time domain window, and may inform, via an increased index, that an uplink channel transmission is performed within another time domain window. This enables the base station to, when the base station fails to receive uplink channels transmitted in slot3and slot4as described with reference toFIG.66(b), recognize the failure and request retransmission of the uplink channels from the terminal. That is, since indices of slots0to2and an index of slot5are different, the base station may recognize that slots0to2and slot5are included in different time domain windows. Hereinafter, descriptions will be provided for a method of determining a time domain window when multiple uplink cells are configured for a terminal. FIG.67andFIG.68illustrate a method of determining a time domain window in a carrier aggregation situation according to an embodiment of the present disclosure. First, a terminal may be configured with multiple uplink cells from a base station. Configuration of multiple uplink cells may be described as uplink carrier aggregation. In this case, a cell configured for the terminal for the first time may be a primary cell (PCell), and a cell additionally configured, in addition to the PCell, may be a secondary cell (SCell). The terminal may transmit an uplink channel in the configured PCell or SCell. An uplink physical channel may be at least one of a PUSCH and a PUCCH. When transmitting uplink channels in multiple cells configured in the same frequency band, the terminal may share transmission power. When multiple uplink cells are configured for the terminal, configuration may be performed so that the described joint channel estimation conditions are satisfied. When uplink carrier aggregation is configured, if the terminal is configured with one time domain window, there is a problem of determining a time domain window to be applied in multiple cells. In this case, one configured time domain window may be a time domain window configured based on a PCell. If different TDD configurations are configured for respective cells, a time domain window configured based on a PCell may not be suitable for joint channel estimation for uplink channels transmitted on an SCell. Referring toFIG.67, the terminal may be configured with two uplink cells of cell #0and cell #1, and different TDD configurations may be configured for respective cells. A time domain window is configured based on cell #0, and time domain windows may be configured every 5 slots from a first slot in a certain frame. Although the number of consecutive uplink slots of cell #1is 6, since time domain windows are configured every 5 slots, the time domain window configured based on cell #0may not be suitable for cell #1. The base station may configure different subcarrier spacings for multiple uplink cells. In this case, the subcarrier spacing may be a subcarrier spacing for a TDD configuration or a subcarrier spacing for a BWP configuration. In a carrier aggregation situation, if the subcarrier spacing for the TDD configuration of the SCell is smaller than the subcarrier spacing for the TDD configuration of the PCell, a boundary of a time domain configuration determined based on the PCell may not be accurately configured. Referring toFIG.68, a subcarrier spacing for a TDD configuration may be configured to be 30 KHz in cell #0and 15 kHz in cell #1. 
A time domain window for joint channel estimation may be determined based on cell #0and may be configured every 5 slots or every 2.5 ms from a first slot within a radio frame. In this case, the same time domain window may be applied to cell #1. However, a boundary of the time domain window may be located within a third uplink slot of cell #1. Accordingly, some symbols of the third uplink slot of cell #1may be included in a first time domain window and the remaining symbols may be included in a second time domain window. That is, if a subcarrier spacing for a TDD configuration of an SCell is smaller than a subcarrier spacing for a TDD configuration of a PCell, the time domain window may not be suitable. Therefore, a time domain window that is suitably applicable to all uplink cells in a carrier aggregation situation is required. Method of Determining Time Domain Window in Carrier Aggregation Situation FIG.69toFIG.74illustrate a method of configuring a time domain window according to an embodiment of the present disclosure. i) In a carrier aggregation situation, a base station may configure a separate time domain window for each of multiple cells. That is, when N uplink cells including a PCell are configured for a terminal, the base station may configure time domain windows applied to the N cells, respectively. Referring toFIG.69, the terminal may be configured with cell #0with a subcarrier spacing of 30 kHz and cell #1with a subcarrier spacing of 15 kHz. Time domain window #0and time domain window #1may be configured for cell #0and cell #1, respectively. Time domain window #0may include two slots (1 ms), and time domain window #1may include two slots (2 ms). In this case, in order to reduce signaling overhead, a specific parameter commonly applied to each cell may be used when the base station configures a time domain window for each cell. i-a) A reference subcarrier spacing may be commonly used in each cell. That is, the base station may configure, for the terminal, only a reference subcarrier spacing for one time domain window. Alternatively, the terminal may implicitly infer a reference subcarrier spacing for one time domain window. In this case, the reference subcarrier spacing may be applied to all cells. The terminal may obtain subcarrier spacings for the time domain windows of respective cells. For example, the terminal may select one subcarrier spacing from among the obtained subcarrier spacings of the respective cells and may apply the selected subcarrier spacing to the time domain windows of all cells. In this case, the one subcarrier spacing may be a lowest subcarrier spacing among the subcarrier spacings of the respective cells. As another example, the terminal may apply, to the time domain windows of all cells, the subcarrier spacing for the time domain window of the PCell among the respective cells. As another example, the terminal may apply, to the time domain windows of all cells, a subcarrier spacing of a time domain window of a cell having a lowest index from among the respective cells. As another example, the terminal may be configured, by the base station, with a reference subcarrier spacing applied to the time domain windows of all cells. In this case, the reference subcarrier spacing applied to the time domain windows of all cells, which is configured for the terminal, should not be larger than a subcarrier spacing configured in the UL BWPs of all cells. ii) The base station may configure, for the terminal, a duration of a time domain window commonly applied to all cells.
In this case, a duration of a time domain window may be described as a duration of a cell-common time domain window. A duration of the cell-common time domain window may be adjusted according to a reference subcarrier spacing and subcarrier spacings of the cells. That is, when a duration of the cell-common time domain window is M slots/symbols/repetitions, a duration of the time domain window applied to a cell may be f(M*(SCS_cell/SCS_refer)) slots/symbols/repetitions. SCS_refer is a reference subcarrier spacing, and SCS_cell is a subcarrier spacing of an applied cell. f(x) may be at least one of ceil(x), floor(x), and round(x). Referring toFIG.70, cell #0may be configured with a subcarrier spacing of 30 kHz, and cell #1may be configured with a subcarrier spacing of 15 kHz. In this case, a reference subcarrier spacing may be configured with a subcarrier spacing of 15 kHz. A duration of the cell-common time domain window may be configured to be 5 slots. A duration of the time domain window applied to cell #0may be 10 (f(5*(30 kHz/15 kHz))) slots/symbols/repetitions, and a duration of the time domain window applied to cell #1may be determined to be 5 (f(5*(15 kHz/15 kHz))) slots/symbols/repetitions. Referring toFIG.71, for example, cell #0may be configured with a subcarrier spacing of 30 kHz, and cell #1may be configured with a subcarrier spacing of 15 kHz. A reference subcarrier spacing may be configured to be 30 kHz. A cell-common time domain window may be configured to be 5 slots. In this case, if f(x) is ceil(x), a duration of the time domain window applied to cell #0is 5 (ceil(5*(30 kHz/30 kHz))) slots/symbols/repetitions, and a duration of the time domain window applied to cell #1may be determined to be 3 (ceil(5*(15 kHz/30 kHz))) slots/symbols/repetitions. ii-a) The terminal may select one reference cell from among multiple uplink cells. In addition, a time domain window determined based on the selected reference cell may be applied to all cells. A method of determining a reference cell is as follows.PCell: A reference cell may be a PCell. That is, the terminal may extend and apply a time domain window determined based on a PCell to an SCell.The lowest cell index: A reference cell may be a cell having a lowest cell index. The lowest cell index may be 0. That is, a PCell may be a reference cell. The lowest cell index may be 1 or higher. That is, a cell having a lowest cell index from among SCells, except for a PCell, may be a reference cell.The lowest SCS: A reference cell may be a cell configured with a lowest subcarrier spacing. As described with reference toFIG.68, this is to prevent a case of a time domain window boundary being included in a slot of another cell. In this case, if there are multiple cells configured with a lowest subcarrier spacing, a reference cell may be selected in consideration of other criteria. Other criteria may be a cell index, a TDD configuration periodicity, and an uplink slot ratio. For example, if there are two cells configured with a lowest subcarrier spacing, a cell having a lower cell index among the two may be a reference cell.The longest TDD configuration periodicity: A reference cell may be a cell having a longest TDD configuration periodicity. A TDD configuration periodicity refers to a periodicity in which one TDD configuration according to 3GPP standards is repeated. 
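Before the reference-cell criteria continue with the FIG.72 example below, the duration scaling of item ii) can be captured in a short sketch. The code is only an illustration of the f(M*(SCS_cell/SCS_refer)) relation, checked against the FIG.70 and FIG.71 numbers; the function and variable names are assumptions made for the example, and the lowest-SCS fallback is just one of the i-a) options.

```python
import math

# Illustrative only: the per-cell window duration of item ii) is
# f(M * SCS_cell / SCS_ref), with f() being ceil(), floor() or round().

def reference_scs_khz(cell_scs_khz, configured_ref_khz=None):
    """One i-a) option: use a configured reference SCS if present, otherwise
    fall back to the lowest subcarrier spacing among the configured cells."""
    if configured_ref_khz is not None:
        return configured_ref_khz
    return min(cell_scs_khz.values())

def per_cell_window_duration(m_common, scs_cell_khz, scs_ref_khz, f=math.ceil):
    """Duration (in slots/symbols/repetitions) of the window applied to one cell."""
    return f(m_common * scs_cell_khz / scs_ref_khz)

cells = {"cell0": 30, "cell1": 15}   # subcarrier spacings in kHz

# FIG.70-style check: SCS_ref = 15 kHz, M = 5 -> 10 slots for cell #0, 5 slots for cell #1.
ref = reference_scs_khz(cells, configured_ref_khz=15)
assert per_cell_window_duration(5, cells["cell0"], ref) == 10
assert per_cell_window_duration(5, cells["cell1"], ref) == 5

# FIG.71-style check: SCS_ref = 30 kHz, M = 5, f = ceil -> 5 slots for cell #0, 3 for cell #1.
assert per_cell_window_duration(5, cells["cell0"], 30) == 5
assert per_cell_window_duration(5, cells["cell1"], 30) == 3
```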
Referring toFIG.72, subcarrier spacings of all cells may be 15 KHz, a TDD configuration periodicity of cell #0may be 5 ms, and a TDD configuration periodicity of cell #1may be 10 ms. In order to include as many uplink slots as possible for multiple uplink cells, the terminal may determine, as a reference cell, a cell having a longest TDD configuration periodicity, and may apply a time domain window of the reference cell to all cells. Accordingly, since the TDD configuration periodicity of cell #0is 5 slots and the TDD configuration periodicity of cell #1is 10 slots, cell #1is selected as a reference cell, and the time domain window of cell #1may be applied to all cells. If there are multiple cells having the longest TDD configuration periodicity, a reference cell may be selected in consideration of other criteria. Other criteria may be a cell index, a subcarrier spacing, and an uplink slot ratio. If there are two cells having the longest TDD configuration periodicity, a cell having a lower SCS may be selected as a reference cell.The most UL slot portion: A reference cell may be a cell including a largest number of UL slots. That is, the terminal may perform uplink transmission for joint channel estimation, by determining, as a reference cell, a cell having a largest number of uplink slots during the same time interval from among multiple uplink cells. The same time interval may be the longest TDD configuration periodicity of multiple cells. Referring toFIG.73, cell #1including more uplink slots compared to cell #0may be a reference cell. If there are multiple cells having the largest number of uplink slots, a reference cell may be selected in consideration of other criteria. Other criteria may be a cell index, a subcarrier spacing, and a TDD configuration periodicity. If there are two cells including the largest number of uplink slots, a cell having a longer TDD configuration periodicity among the two may be selected as a reference cell. iii) The terminal may determine a time domain window, based on consecutive slots in a union of uplink slots with respect to multiple uplink cells. In order to include, in a time domain window, as many configured TDD configurations as possible for multiple uplink cells, the terminal may determine the time domain window, based on consecutive slots in the union of multiple inter-cell uplink slots. A union of uplink slots refers to a slot including uplink symbols in at least one cell. Referring toFIG.74, different TDD configurations may be configured for two uplink cells, wherein the two uplink cells have the same subcarrier spacing of 15 KHz. For cell #0and cell #1, the terminal may determine the union of consecutive uplink slots, as one time domain window. That is, the terminal may determine one time domain window including a 4th slot, a 5th slot, a 9th slot, and a 10th slot of cell #0, and a 5th slot to a 10th slot of cell #1, and may apply the determined one time domain window to all cells. FIG.75is a flowchart illustrating a method of transmitting an uplink channel according to an embodiment of the disclosure. Hereinafter, the methods of transmitting an uplink channel by a terminal, described with reference toFIG.1toFIG.74, will be described viaFIG.75. A terminal may receive, from a base station, first information which is information related to a time division duplex (TDD) configuration, in S7510. 
The first information may include information on types of symbols constituting a slot, and the types of the symbols include one of a downlink symbol configured to be available for downlink transmission, an uplink symbol configured to be available for uplink transmission, and a flexible symbol configured to be neither the downlink symbol nor the uplink symbol. The terminal may repeatedly transmit an uplink channel to the base station on a resource determined based on the first information, in S7520. The uplink channel may be repeatedly transmitted in a first hop and a second hop. Each of the first hop and the second hop may be configured by bundling a preconfigured number of slots used for uplink channel transmission. The slots used for uplink channel transmission may include the uplink symbol. Each of the first hop and the second hop may include consecutive slots in the time domain, and each of the first hop and the second hop may be transmitted on a different physical resource block (PRBs) via frequency hopping. The preconfigured number may be received from the base station. Slots included in the first hop may be indexed with an identical index, and slots included in the second hop may be indexed with an identical index. If the number of the consecutive slots used for uplink channel transmission is fewer than the preconfigured number, the first hop or the second hop may include fewer consecutive slots than the preconfigured number. The slots used for uplink channel transmission include the uplink symbol and the flexible symbol. The first hop may include a first slot and a second slot, the first slot may include a first demodulation reference signal (DM-RS), the second slot may include a second DM-RS, and the first DM-RS and the second DM-RS may be transmitted on resources of the same number of PRBs starting at the same PRB position in the frequency domain, and may be transmitted using the same phase, the same transmission power, the same Quasi co-location (QCL), and the same beamforming. The second hop may include a third slot and a fourth slot, the third slot may include a third DM-RS, the fourth slot may include a fourth DM-RS, and the third DM-RS and the fourth DM-RS may be transmitted on resources of the same number of PRBs starting at the same PRB position in the frequency domain, and may be transmitted using the same phase, the same transmission power, the same Quasi co-location (QCL), and the same beamforming. That is, the DM-RSs included in the first and second slots may be combined and used for channel estimation, and similarly, the DM-RSs included in the third and fourth slots may be combined and used for channel estimation. At least one of the downlink symbol or the flexible symbol may exist between a last symbol to which the repeatedly transmitted uplink channel is mapped in the first slot, and a first symbol to which the repeatedly transmitted uplink channel is mapped in the second slot. At least one of the downlink symbol or the flexible symbol may exist between a last symbol to which the repeatedly transmitted uplink channel in the third slot is mapped and a first symbol to which the repeatedly transmitted uplink channel in the fourth slot is mapped. The uplink channel may be a physical uplink control channel (PUCCH) or a physical uplink shared channel (PUSCH). The uplink channel may be transmitted within a time domain window. The terminal may receive information on the time domain window from the base station. 
In this case, the time domain window may be configured based on the information on the time domain window. The information on the time domain window may include one of the number of slots, the number of symbols, and the number of repeated transmissions of the uplink channel. The time domain window may be from a time point at which the repeated transmission of the uplink channel starts to a time point at which the repeated transmission of the uplink channel ends. The time domain window may include consecutive slots in the time domain, which include at least one of the uplink symbol and the flexible symbol. The time domain window may include a first time domain window and a second time domain window, the first time domain window may be configured to correspond to a first pattern, the second time domain window may be configured to correspond to a second pattern, the first pattern and the second pattern may include multiple slots, and multiple slot configurations for configuring each of the first pattern and the second pattern may be different from each other. DM-RSs included in the respective multiple slots constituting the first pattern may be transmitted on resources of the same number of PRBs starting at the same PRB position in the frequency domain, and may be transmitted using the same phase, the same transmission power, the same Quasi co-location (QCL), and the same beamforming. DM-RSs included in the respective multiple slots constituting the second pattern may be transmitted on resources of the same number of PRBs starting at the same PRB position in the frequency domain, and may be transmitted using the same phase, the same transmission power, the same Quasi co-location (QCL), and the same beamforming. That is, DM-RSs included in the multiple slots constituting the first pattern may be combined and used for channel estimation, and DM-RSs included in the multiple slots constituting the second pattern may be combined and used for channel estimation. The terminal performing the method described with reference toFIG.75may be the terminal described with reference toFIG.11. Specifically, the terminal may include a communication module configured to transmit or receive a radio signal, and a processor configured to control the communication module. In this case, the processor of the terminal may perform the method of transmitting an uplink channel, described in the present specification. In addition, a base station receiving an uplink channel transmitted by a terminal, described in the present specification, may include a communication module configured to transmit or receive a radio signal, and a processor configured to control the communication module. In this case, the base station may be the base station described with respect toFIG.11. The processor of the base station may perform the method of receiving an uplink channel, described in the present specification. The method and system of the present disclosure are described in relation to specific embodiments, but configuration elements, a part of or the entirety of operations of the present disclosure may be implemented using a computer system having a general-purpose hardware architecture. The foregoing descriptions of the present disclosure are for illustration purposes, and those skilled in the art, to which the present disclosure belongs, will be able to understand that modification to other specific forms can be easily achieved without changing the technical spirit or essential features of the present disclosure. 
Therefore, it should be understood that the embodiments described above are illustrative and are not restrictive in all respects. For example, each element described as one type may be implemented in a distributed manner, and similarly, elements described as being distributed may also be implemented in a combined form. The scope of the present disclosure is indicated by claims to be described hereinafter rather than the detailed description, and all changes or modifications derived from the meaning and scope of the claims and their equivalent concepts should be interpreted as being included in the scope of the present disclosure.
284,249
11863473
DESCRIPTION The following description and the drawings sufficiently illustrate specific embodiments to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. Portions and features of some embodiments may be included in, or substituted for, those of other embodiments. Embodiments set forth in the claims encompass all available equivalents of those claims. FIG.1is a block diagram of a radio architecture100in accordance with some embodiments. Radio architecture100may include radio front-end module (FEM) circuitry104, radio IC circuitry106and baseband processing circuitry108. Radio architecture100as shown includes both Wireless Local Area Network (WLAN) functionality and Bluetooth (BT) functionality although embodiments are not so limited. In this disclosure, “WLAN” and “Wi-Fi” are used interchangeably. FEM circuitry104may include a WLAN or Wi-Fi FEM circuitry104A and a Bluetooth (BT) FEM circuitry104B. The WLAN FEM circuitry104A may include a receive signal path comprising circuitry configured to operate on WLAN RF signals received from one or more antennas101, to amplify the received signals and to provide the amplified versions of the received signals to the WLAN radio IC circuitry106A for further processing. The BT FEM circuitry104B may include a receive signal path which may include circuitry configured to operate on BT RF signals received from one or more antennas101, to amplify the received signals and to provide the amplified versions of the received signals to the BT radio IC circuitry106B for further processing. FEM circuitry104A may also include a transmit signal path which may include circuitry configured to amplify WLAN signals provided by the radio IC circuitry106A for wireless transmission by one or more of the antennas101. In addition, FEM circuitry104B may also include a transmit signal path which may include circuitry configured to amplify BT signals provided by the radio IC circuitry106B for wireless transmission by the one or more antennas. In the embodiment ofFIG.1, although FEM104A and FEM104B are shown as being distinct from one another, embodiments are not so limited, and include within their scope the use of an FEM (not shown) that includes a transmit path and/or a receive path for both WLAN and BT signals, or the use of one or more FEM circuitries where at least some of the FEM circuitries share transmit and/or receive signal paths for both WLAN and BT signals. Radio IC circuitry106as shown may include WLAN radio IC circuitry106A and BT radio IC circuitry106B. The WLAN radio IC circuitry106A may include a receive signal path which may include circuitry to down-convert WLAN RF signals received from the FEM circuitry104A and provide baseband signals to WLAN baseband processing circuitry108A. BT radio IC circuitry106B may in turn include a receive signal path which may include circuitry to down-convert BT RF signals received from the FEM circuitry104B and provide baseband signals to BT baseband processing circuitry108B. WLAN radio IC circuitry106A may also include a transmit signal path which may include circuitry to up-convert WLAN baseband signals provided by the WLAN baseband processing circuitry108A and provide WLAN RF output signals to the FEM circuitry104A for subsequent wireless transmission by the one or more antennas101. 
BT radio IC circuitry106B may also include a transmit signal path which may include circuitry to up-convert BT baseband signals provided by the BT baseband processing circuitry108B and provide BT RF output signals to the FEM circuitry104B for subsequent wireless transmission by the one or more antennas101. In the embodiment ofFIG.1, although radio IC circuitries106A and106B are shown as being distinct from one another, embodiments are not so limited, and include within their scope the use of a radio IC circuitry (not shown) that includes a transmit signal path and/or a receive signal path for both WLAN and BT signals, or the use of one or more radio IC circuitries where at least some of the radio IC circuitries share transmit and/or receive signal paths for both WLAN and BT signals. Baseband processing circuitry108may include a WLAN baseband processing circuitry108A and a BT baseband processing circuitry108B. The WLAN baseband processing circuitry108A may include a memory, such as, for example, a set of RAM arrays in a Fast Fourier Transform or Inverse Fast Fourier Transform block (not shown) of the WLAN baseband processing circuitry108A. Each of the WLAN baseband circuitry108A and the BT baseband circuitry108B may further include one or more processors and control logic to process the signals received from the corresponding WLAN or BT receive signal path of the radio IC circuitry106, and to also generate corresponding WLAN or BT baseband signals for the transmit signal path of the radio IC circuitry106. Each of the baseband processing circuitries108A and108B may further include physical layer (PHY) and medium access control layer (MAC) circuitry, and may further interface with application processor111for generation and processing of the baseband signals and for controlling operations of the radio IC circuitry106. Referring still toFIG.1, according to the shown embodiment, WLAN-BT coexistence circuitry113may include logic providing an interface between the WLAN baseband circuitry108A and the BT baseband circuitry108B to enable use cases requiring WLAN and BT coexistence. In addition, a switch103may be provided between the WLAN FEM circuitry104A and the BT FEM circuitry104B to allow switching between the WLAN and BT radios according to application needs. In addition, although the antennas101are depicted as being respectively connected to the WLAN FEM circuitry104A and the BT FEM circuitry104B, embodiments include within their scope the sharing of one or more antennas as between the WLAN and BT FEMs, or the provision of more than one antenna connected to each of FEM104A or104B. In some embodiments, the front-end module circuitry104, the radio IC circuitry106, and baseband processing circuitry108may be provided on a single radio card, such as wireless radio card102. In some other embodiments, the one or more antennas101, the FEM circuitry104and the radio IC circuitry106may be provided on a single radio card. In some other embodiments, the radio IC circuitry106and the baseband processing circuitry108may be provided on a single chip or integrated circuit (IC), such as IC112. In some embodiments, the wireless radio card102may include a WLAN radio card and may be configured for Wi-Fi communications, although the scope of the embodiments is not limited in this respect. 
In some of these embodiments, the radio architecture100may be configured to receive and transmit orthogonal frequency division multiplexed (OFDM) or orthogonal frequency division multiple access (OFDMA) communication signals over a multicarrier communication channel. The OFDM or OFDMA signals may comprise a plurality of orthogonal subcarriers. In some of these multicarrier embodiments, radio architecture100may be part of a Wi-Fi communication station (STA) such as a wireless access point (AP), a base station or a mobile device including a Wi-Fi device. In some of these embodiments, radio architecture100may be configured to transmit and receive signals in accordance with specific communication standards and/or protocols, such as any of the Institute of Electrical and Electronics Engineers (IEEE) standards including IEEE 802.11n-2009, IEEE 802.11-2012, IEEE 802.11-2016, IEEE 802.11ac, and/or IEEE 802.11ax standards, Extremely High Throughput (EHT) standards, and/or proposed specifications for WLANs, although the scope of embodiments is not limited in this respect. Radio architecture100may also be suitable to transmit and/or receive communications in accordance with other techniques and standards. In some embodiments, the radio architecture100may be configured to communicate in accordance with EHT techniques/protocols and/or other 802.11 techniques/protocols. In these embodiments, the radio architecture100may be configured to communicate in accordance with an OFDMA technique, although the scope of the embodiments is not limited in this respect. In some other embodiments, the radio architecture100may be configured to transmit and receive signals transmitted using one or more other modulation techniques such as spread spectrum modulation (e.g., direct sequence code division multiple access (DS-CDMA) and/or frequency hopping code division multiple access (FH-CDMA)), time-division multiplexing (TDM) modulation, and/or frequency-division multiplexing (FDM) modulation, although the scope of the embodiments is not limited in this respect. In some embodiments, as further shown inFIG.1, the BT baseband circuitry108B may be compliant with a Bluetooth (BT) connectivity standard such as Bluetooth, Bluetooth 4.0 or Bluetooth 5.0, or any other iteration of the Bluetooth Standard. In embodiments that include BT functionality as shown for example inFIG.1, the radio architecture100may be configured to establish a BT synchronous connection oriented (SCO) link and/or a BT low energy (BT LE) link. In some of the embodiments that include BT functionality, the radio architecture100may be configured to establish an extended SCO (eSCO) link for BT communications, although the scope of the embodiments is not limited in this respect. In some of these embodiments that include BT functionality, the radio architecture may be configured to engage in BT Asynchronous Connection-Less (ACL) communications, although the scope of the embodiments is not limited in this respect. In some embodiments, as shown inFIG.1, the functions of a BT radio card and WLAN radio card may be combined on a single wireless radio card, such as single wireless radio card102, although embodiments are not so limited, and include within their scope discrete WLAN and BT radio cards. In some embodiments, the radio architecture100may include other radio cards, such as a cellular radio card configured for cellular communications (e.g., 3GPP communications such as LTE, LTE-Advanced or 5G). 
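To summarize the FIG.1 partitioning described above, the sketch below restates the wireless radio card102composition as a small data model. This is purely illustrative and assumes Python dataclasses; the class and field names and the default values are not part of the disclosure, and only the reference numerals in the comments come from the description.

```python
from dataclasses import dataclass

# Illustrative data model of the FIG.1 partitioning; an assumption made for this
# sketch, not an implementation of the disclosure.

@dataclass
class FemCircuitry:            # 104A (WLAN) / 104B (BT): amplifies RX and TX RF signals
    radio_access: str

@dataclass
class RadioIcCircuitry:        # 106A / 106B: up-/down-conversion between RF and baseband
    radio_access: str

@dataclass
class BasebandCircuitry:       # 108A / 108B: PHY/MAC processing of baseband signals
    radio_access: str

@dataclass
class WirelessRadioCard:       # 102: FEM + radio IC + baseband circuitry on one card
    fem: dict                  # keyed by radio access, e.g. {"WLAN": ..., "BT": ...}
    radio_ic: dict
    baseband: dict
    shared_antennas: int = 2          # antennas 101 may be shared between WLAN and BT
    coexistence_logic: bool = True    # WLAN-BT coexistence circuitry 113

card = WirelessRadioCard(
    fem={ra: FemCircuitry(ra) for ra in ("WLAN", "BT")},
    radio_ic={ra: RadioIcCircuitry(ra) for ra in ("WLAN", "BT")},
    baseband={ra: BasebandCircuitry(ra) for ra in ("WLAN", "BT")},
)
print(card.fem["WLAN"], card.coexistence_logic)
```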
In some IEEE 802.11 embodiments, the radio architecture100may be configured for communication over various channel bandwidths including bandwidths having center frequencies of about 900 MHz, 2.4 GHz, 5 GHz, and bandwidths of about 1 MHz, 2 MHz, 2.5 MHz, 4 MHz, 5 MHz, 8 MHz, 10 MHz, 16 MHz, 20 MHz, 40 MHz, 80 MHz (with contiguous bandwidths) or 80+80 MHz (160 MHz) (with non-contiguous bandwidths). In some embodiments, a 320 MHz channel bandwidth may be used. The scope of the embodiments is not limited with respect to the above center frequencies however. FIG.2illustrates FEM circuitry200in accordance with some embodiments. The FEM circuitry200is one example of circuitry that may be suitable for use as the WLAN and/or BT FEM circuitry104A/104B (FIG.1), although other circuitry configurations may also be suitable. In some embodiments, the FEM circuitry200may include a TX/RX switch202to switch between transmit mode and receive mode operation. The FEM circuitry200may include a receive signal path and a transmit signal path. The receive signal path of the FEM circuitry200may include a low-noise amplifier (LNA)206to amplify received RF signals203and provide the amplified received RF signals207as an output (e.g., to the radio IC circuitry106(FIG.1)). The transmit signal path of the circuitry200may include a power amplifier (PA) to amplify input RF signals209(e.g., provided by the radio IC circuitry106), and one or more filters212, such as band-pass filters (BPFs), low-pass filters (LPFs) or other types of filters, to generate RF signals215for subsequent transmission (e.g., by one or more of the antennas101(FIG.1)). In some dual-mode embodiments for Wi-Fi communication, the FEM circuitry200may be configured to operate in either the 2.4 GHz frequency spectrum or the 5 GHz frequency spectrum. In these embodiments, the receive signal path of the FEM circuitry200may include a receive signal path duplexer204to separate the signals from each spectrum as well as provide a separate LNA206for each spectrum as shown. In these embodiments, the transmit signal path of the FEM circuitry200may also include a power amplifier210and a filter212, such as a BPF, a LPF or another type of filter for each frequency spectrum and a transmit signal path duplexer214to provide the signals of one of the different spectrums onto a single transmit path for subsequent transmission by the one or more of the antennas101(FIG.1). In some embodiments, BT communications may utilize the 2.4 GHZ signal paths and may utilize the same FEM circuitry200as the one used for WLAN communications. FIG.3illustrates radio IC circuitry300in accordance with some embodiments. The radio IC circuitry300is one example of circuitry that may be suitable for use as the WLAN or BT radio IC circuitry106A/106B (FIG.1), although other circuitry configurations may also be suitable. In some embodiments, the radio IC circuitry300may include a receive signal path and a transmit signal path. The receive signal path of the radio IC circuitry300may include at least mixer circuitry302, such as, for example, down-conversion mixer circuitry, amplifier circuitry306and filter circuitry308. The transmit signal path of the radio IC circuitry300may include at least filter circuitry312and mixer circuitry314, such as, for example, up-conversion mixer circuitry. Radio IC circuitry300may also include synthesizer circuitry304for synthesizing a frequency305for use by the mixer circuitry302and the mixer circuitry314. 
The mixer circuitry302and/or314may each, according to some embodiments, be configured to provide direct conversion functionality. The latter type of circuitry presents a much simpler architecture as compared with standard super-heterodyne mixer circuitries, and any flicker noise brought about by the same may be alleviated, for example, through the use of OFDM modulation.FIG.3illustrates only a simplified version of a radio IC circuitry, and may include, although not shown, embodiments where each of the depicted circuitries may include more than one component. For instance, mixer circuitry302and/or314may each include one or more mixers, and filter circuitries308and/or312may each include one or more filters, such as one or more BPFs and/or LPFs according to application needs. For example, when mixer circuitries are of the direct-conversion type, they may each include two or more mixers. In some embodiments, mixer circuitry302may be configured to down-convert RF signals207received from the FEM circuitry104(FIG.1) based on the synthesized frequency305provided by synthesizer circuitry304. The amplifier circuitry306may be configured to amplify the down-converted signals, and the filter circuitry308may include an LPF configured to remove unwanted signals from the down-converted signals to generate output baseband signals307. Output baseband signals307may be provided to the baseband processing circuitry108(FIG.1) for further processing. In some embodiments, the output baseband signals307may be zero-frequency baseband signals, although this is not a requirement. In some embodiments, mixer circuitry302may comprise passive mixers, although the scope of the embodiments is not limited in this respect. In some embodiments, the mixer circuitry314may be configured to up-convert input baseband signals311based on the synthesized frequency305provided by the synthesizer circuitry304to generate RF output signals209for the FEM circuitry104. The baseband signals311may be provided by the baseband processing circuitry108and may be filtered by filter circuitry312. The filter circuitry312may include an LPF or a BPF, although the scope of the embodiments is not limited in this respect. In some embodiments, the mixer circuitry302and the mixer circuitry314may each include two or more mixers and may be arranged for quadrature down-conversion and/or up-conversion, respectively, with the help of synthesizer304. In some embodiments, the mixer circuitry302and the mixer circuitry314may each include two or more mixers each configured for image rejection (e.g., Hartley image rejection). In some embodiments, the mixer circuitry302and the mixer circuitry314may be arranged for direct down-conversion and/or direct up-conversion, respectively. In some embodiments, the mixer circuitry302and the mixer circuitry314may be configured for super-heterodyne operation, although this is not a requirement. Mixer circuitry302may comprise, according to one embodiment, quadrature passive mixers (e.g., for the in-phase (I) and quadrature phase (Q) paths). In such an embodiment, RF input signal207fromFIG.3may be down-converted to provide I and Q baseband output signals to be sent to the baseband processor. Quadrature passive mixers may be driven by zero-degree and ninety-degree time-varying LO switching signals provided by a quadrature circuitry which may be configured to receive an LO frequency (fLO) from a local oscillator or a synthesizer, such as LO frequency305of synthesizer304(FIG.3). 
In some embodiments, the LO frequency may be the carrier frequency, while in other embodiments, the LO frequency may be a fraction of the carrier frequency (e.g., one-half the carrier frequency, one-third the carrier frequency). In some embodiments, the zero-degree and ninety-degree time-varying switching signals may be generated by the synthesizer, although the scope of the embodiments is not limited in this respect. In some embodiments, the LO signals may differ in duty cycle (the percentage of one period in which the LO signal is high) and/or offset (the difference between start points of the period). In some embodiments, the LO signals may have a 25% duty cycle and a 50% offset. In some embodiments, each branch of the mixer circuitry (e.g., the in-phase (I) and quadrature phase (Q) path) may operate at a 25% duty cycle, which may result in a significant reduction in power consumption. The RF input signal207(FIG.2) may comprise a balanced signal, although the scope of the embodiments is not limited in this respect. The I and Q baseband output signals may be provided to a low-noise amplifier, such as amplifier circuitry306(FIG.3), or to filter circuitry308(FIG.3). In some embodiments, the output baseband signals307and the input baseband signals311may be analog baseband signals, although the scope of the embodiments is not limited in this respect. In some alternate embodiments, the output baseband signals307and the input baseband signals311may be digital baseband signals. In these alternate embodiments, the radio IC circuitry may include analog-to-digital converter (ADC) and digital-to-analog converter (DAC) circuitry. In some dual-mode embodiments, a separate radio IC circuitry may be provided for processing signals for each spectrum, or for other spectrums not mentioned here, although the scope of the embodiments is not limited in this respect. In some embodiments, the synthesizer circuitry304may be a fractional-N synthesizer or a fractional N/N+1 synthesizer, although the scope of the embodiments is not limited in this respect as other types of frequency synthesizers may be suitable. For example, synthesizer circuitry304may be a delta-sigma synthesizer, a frequency multiplier, or a synthesizer comprising a phase-locked loop with a frequency divider. According to some embodiments, the synthesizer circuitry304may include digital synthesizer circuitry. An advantage of using digital synthesizer circuitry is that, although it may still include some analog components, its footprint may be scaled down much more than the footprint of an analog synthesizer circuitry. In some embodiments, the frequency input into synthesizer circuitry304may be provided by a voltage controlled oscillator (VCO), although that is not a requirement. A divider control input may further be provided by either the baseband processing circuitry108(FIG.1) or the application processor111(FIG.1) depending on the desired output frequency305. In some embodiments, a divider control input (e.g., N) may be determined from a look-up table (e.g., within a Wi-Fi card) based on a channel number and a channel center frequency as determined or indicated by the application processor111. In some embodiments, synthesizer circuitry304may be configured to generate a carrier frequency as the output frequency305, while in other embodiments, the output frequency305may be a fraction of the carrier frequency (e.g., one-half the carrier frequency, one-third the carrier frequency). In some embodiments, the output frequency305may be an LO frequency (fLO). 
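As a rough illustration of the synthesizer and LO behavior described above, the sketch below splits a desired LO frequency into integer and fractional divider parts and builds 25%-duty-cycle I/Q switching waveforms. The N + F/2^K form, the 40 MHz reference frequency, and the quarter-period delay of the Q branch are assumptions made for this example and are not taken from the disclosure.

```python
from fractions import Fraction

# Illustrative sketch only: a generic fractional-N relation and 25%-duty-cycle
# I/Q LO switching waveforms. The N + F/2**K form, the 40 MHz reference and the
# quarter-period delay of the Q branch are assumptions made for this example.

def fractional_n_divider(f_lo_hz, f_ref_hz, frac_bits=20):
    """Split the required division ratio f_lo/f_ref into an integer part N and a
    fractional word F such that f_lo ~= (N + F / 2**frac_bits) * f_ref."""
    ratio = Fraction(int(f_lo_hz), int(f_ref_hz))
    n_int = int(ratio)
    frac_word = round((ratio - n_int) * 2**frac_bits)
    return n_int, frac_word

def quarter_duty_lo(samples_per_period=8):
    """0-degree and 90-degree LO switching waveforms with a 25% duty cycle; the Q
    branch is the I branch delayed by a quarter period."""
    high = samples_per_period // 4
    shift = samples_per_period // 4
    i_phase = [1] * high + [0] * (samples_per_period - high)
    q_phase = i_phase[-shift:] + i_phase[:-shift]
    return i_phase, q_phase

# Example: Wi-Fi channel 6 center frequency (2437 MHz) with an assumed 40 MHz reference.
n, f = fractional_n_divider(2437e6, 40e6)
print(n, f)                # -> 60 969933 (i.e., 60 + 0.925 of the reference)
print(*quarter_duty_lo())  # -> [1, 1, 0, 0, 0, 0, 0, 0] [0, 0, 1, 1, 0, 0, 0, 0]
```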
FIG.4illustrates a functional block diagram of baseband processing circuitry400in accordance with some embodiments. The baseband processing circuitry400is one example of circuitry that may be suitable for use as the baseband processing circuitry108(FIG.1), although other circuitry configurations may also be suitable. The baseband processing circuitry400may include a receive baseband processor (RX BBP)402for processing receive baseband signals309provided by the radio IC circuitry106(FIG.1) and a transmit baseband processor (TX BBP)404for generating transmit baseband signals311for the radio IC circuitry106. The baseband processing circuitry400may also include control logic406for coordinating the operations of the baseband processing circuitry400. In some embodiments (e.g., when analog baseband signals are exchanged between the baseband processing circuitry400and the radio IC circuitry106), the baseband processing circuitry400may include ADC410to convert analog baseband signals received from the radio IC circuitry106to digital baseband signals for processing by the RX BBP402. In these embodiments, the baseband processing circuitry400may also include DAC412to convert digital baseband signals from the TX BBP404to analog baseband signals. In some embodiments that communicate OFDM signals or OFDMA signals, such as through baseband processor108A, the transmit baseband processor404may be configured to generate OFDM or OFDMA signals as appropriate for transmission by performing an inverse fast Fourier transform (IFFT). The receive baseband processor402may be configured to process received OFDM signals or OFDMA signals by performing an FFT. In some embodiments, the receive baseband processor402may be configured to detect the presence of an OFDM signal or OFDMA signal by performing an autocorrelation, to detect a preamble, such as a short preamble, and by performing a cross-correlation, to detect a long preamble. The preambles may be part of a predetermined frame structure for Wi-Fi communication. Referring back toFIG.1, in some embodiments, the antennas101(FIG.1) may each comprise one or more directional or omnidirectional antennas, including, for example, dipole antennas, monopole antennas, patch antennas, loop antennas, microstrip antennas or other types of antennas suitable for transmission of RF signals. In some multiple-input multiple-output (MIMO) embodiments, the antennas may be effectively separated to take advantage of spatial diversity and the different channel characteristics that may result. Antennas101may each include a set of phased-array antennas, although embodiments are not so limited. Although the radio-architecture100is illustrated as having several separate functional elements, one or more of the functional elements may be combined and may be implemented by combinations of software-configured elements, such as processing elements including digital signal processors (DSPs), and/or other hardware elements. For example, some elements may comprise one or more microprocessors, DSPs, field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), radio-frequency integrated circuits (RFICs) and combinations of various hardware and logic circuitry for performing at least the functions described herein. In some embodiments, the functional elements may refer to one or more processes operating on one or more processing elements. FIG.5illustrates a WLAN500in accordance with some embodiments. 
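Before turning to the WLAN of FIG.5, the baseband processing described for FIG.4 can be illustrated generically: an IFFT on transmit, an FFT on receive, and an autocorrelation to detect a repeating short preamble. The 64-point FFT size, the 16-sample repetition period, and the detection threshold are assumptions for this sketch and are not taken from the disclosure.

```python
import numpy as np

# Generic illustration of the FIG.4 processing described above: IFFT on transmit,
# FFT on receive, autocorrelation to detect a repeating short preamble. The FFT
# size, repetition period and threshold are assumptions for the sketch.

rng = np.random.default_rng(0)

def ofdm_modulate(freq_domain_symbols):
    """TX baseband processing: map frequency-domain symbols to time domain (IFFT)."""
    return np.fft.ifft(freq_domain_symbols)

def ofdm_demodulate(time_domain_samples):
    """RX baseband processing: recover frequency-domain symbols (FFT)."""
    return np.fft.fft(time_domain_samples)

def detect_short_preamble(rx, period=16, threshold=0.8):
    """Autocorrelation detector: a preamble built from repeated 'period'-sample
    blocks correlates strongly with a copy of itself delayed by 'period'."""
    corr = np.abs(np.vdot(rx[:-period], rx[period:]))
    energy = np.vdot(rx[:-period], rx[:-period]).real
    return bool(energy > 0 and corr / energy > threshold)

# Round trip: random QPSK-like symbols survive IFFT -> FFT unchanged.
symbols = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=64)
assert np.allclose(ofdm_demodulate(ofdm_modulate(symbols)), symbols)

# A periodic preamble (the same 16 samples repeated) triggers the detector.
preamble = np.tile(rng.standard_normal(16) + 1j * rng.standard_normal(16), 10)
print(detect_short_preamble(preamble))                        # True
print(detect_short_preamble(rng.standard_normal(160) + 0j))   # typically False for noise
```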
In some embodiments, the WLAN500may comprise an AP502, and one or more stations (STAs)504. Embodiments are not limited to the number of elements (such as APs502, STAs504and/or other) shown inFIG.5. In some embodiments, the AP502may communicate with one or more of the STAs504. Embodiments are not limited to a single AP502, as the WLAN500may comprise one or more APs502, in some embodiments. In some embodiments, the AP502may be a base station. The AP502and/or STAs504may use other communications protocols as well as the IEEE 802.11 protocol. The IEEE 802.11 protocol may be IEEE 802.11ax. The IEEE 802.11 protocol may include using orthogonal frequency division multiple-access (OFDMA), time division multiple access (TDMA), and/or code division multiple access (CDMA). The IEEE 802.11 protocol may include a multiple access technique. For example, the IEEE 802.11 protocol may include space-division multiple access (SDMA) and/or multiple-user multiple-input multiple-output (MU-MIMO). The AP502and/or STAs504may operate in accordance with one or more of IEEE 802.11 a/b/g/n/ac/ad/af/ah/aj/ay, or another legacy wireless communication standard. In some embodiments, the STAs504may be wireless transmit and receive devices such as cellular telephone, portable electronic wireless communication devices, smart telephone, handheld wireless device, wireless glasses, wireless watch, wireless personal device, tablet, or another device that may be transmitting and receiving using the IEEE 802.11 protocol such as IEEE 802.11ax or another wireless protocol. The bandwidth of a channel may be 20 MHz, 40 MHz, or 80 MHz, 160 MHz, 320 MHz contiguous bandwidths or an 80+80 MHz (160 MHz) non-contiguous bandwidth. In some embodiments, the bandwidth of a channel may be 1 MHz, 1.25 MHz, 2.03 MHz, 2.5 MHz, 4.06 MHz, 5 MHz and 10 MHz, or a combination thereof or another bandwidth that is less or equal to the available bandwidth may also be used. In some embodiments the bandwidth of the channels may be based on a number of active data subcarriers. In some embodiments the bandwidth of the channels is based on 26, 52, 106, 242, 484, 996, or 2×996 active data subcarriers or tones that are spaced by 20 MHz. In some embodiments the bandwidth of the channels is 256 tones spaced by 20 MHz. In some embodiments the channels are multiple of 26 tones or a multiple of 20 MHz. In some embodiments a 20 MHz channel may comprise 242 active data subcarriers or tones, which may determine the size of a Fast Fourier Transform (FFT). An allocation of a bandwidth or a number of tones or sub-carriers may be termed a resource unit (RU) allocation in accordance with some embodiments. In some embodiments, the 26-subcarrier RU and 52-subcarrier RU are used in the 20 MHz, 40 MHz, 80 MHz, 160 MHz and 80+80 MHz OFDMA HE PPDU formats. In some embodiments, the 106-subcarrier RU is used in the 20 MHz, 40 MHz, 80 MHz, 160 MHz and 80+80 MHz OFDMA and MU-MIMO HE PPDU formats. In some embodiments, the 242-subcarrier RU is used in the 40 MHz, 80 MHz, 160 MHz and 80+80 MHz OFDMA and MU-MIMO HE PPDU formats. In some embodiments, the 484-subcarrier RU is used in the 80 MHz, 160 MHz and 80+80 MHz OFDMA and MU-MIMO HE PPDU formats. In some embodiments, the 996-subcarrier RU is used in the 160 MHz and 80+80 MHz OFDMA and MU-MIMO HE PPDU formats. A frame and/or MAC protocol data unit (MPDU) may be configured for transmitting a number of spatial streams, which may be in accordance with MU-MIMO and may be in accordance with OFDMA. 
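The RU-size-to-PPDU-bandwidth associations listed above can be restated as a small lookup table. The table and helper below are illustrative only; the values are taken directly from the preceding paragraph, with 160 MHz standing in for the 80+80 MHz case.

```python
# RU-size-to-bandwidth associations from the paragraph above; 160 MHz also
# covers the 80+80 MHz formats. The table form and helper are illustrative only.

RU_BANDWIDTHS_MHZ = {
    26:  (20, 40, 80, 160),
    52:  (20, 40, 80, 160),
    106: (20, 40, 80, 160),
    242: (40, 80, 160),
    484: (80, 160),
    996: (160,),
}

def ru_sizes_for_bandwidth(bw_mhz):
    """Return the RU sizes (in subcarriers) usable in a PPDU of the given bandwidth."""
    return [ru for ru, bws in RU_BANDWIDTHS_MHZ.items() if bw_mhz in bws]

print(ru_sizes_for_bandwidth(40))   # -> [26, 52, 106, 242]
print(ru_sizes_for_bandwidth(160))  # -> [26, 52, 106, 242, 484, 996]
```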
In other embodiments, the AP502, STA504, and/or other device may also implement different technologies such as code division multiple access (CDMA) 2000, CDMA 2000 1×, CDMA 2000 Evolution-Data Optimized (EV-DO), Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Long Term Evolution (LTE), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), BlueTooth®, or other technologies. In example embodiments, the radio architecture ofFIG.1, the front-end module circuitry ofFIG.2, the radio IC circuitry ofFIG.3, and/or the baseband processing circuitry ofFIG.4may be configured to perform the methods and operations/functions herein described in conjunction with one or more of the figures described herein. In example embodiments, the STA504and/or the AP502are configured to perform the methods and operations/functions described herein in conjunction with one or more of the figures described herein. In example embodiments, an apparatus of the STA504and/or an apparatus of the AP502are configured to perform the methods and functions described herein in conjunction with one or more of the figures described herein. The term Wi-Fi may refer to one or more of the IEEE 802.11 communication standards. FIG.6illustrates a block diagram of an example machine600upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform. In alternative embodiments, the machine600may operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine600may operate in the capacity of a server machine, a client machine, or both in server-client network environments. In an example, the machine600may act as a peer machine in peer-to-peer (P2P) (or other distributed) network environment. The machine600may be an AP502, STA504, personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a portable communications device, a mobile telephone, a smart phone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), other computer cluster configurations. Machine (e.g., computer system)600may include a hardware processor602(e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory604and a static memory606, some or all of which may communicate with each other via an interlink (e.g., bus)608. Specific examples of main memory604include Random Access Memory (RAM), and semiconductor memory devices, which may include, in some embodiments, storage locations in semiconductors such as registers. 
Specific examples of static memory606include non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; RAM; and CD-ROM and DVD-ROM disks. The machine600may further include a display device610, an input device612(e.g., a keyboard), and a user interface (UI) navigation device614(e.g., a mouse). In an example, the display device610, input device612and UI navigation device614may be a touch screen display. The machine600may additionally include a mass storage (e.g., drive unit)616, a signal generation device618(e.g., a speaker), a network interface device620, and one or more sensors621, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor. The machine600may include an output controller628, such as a serial (e.g., universal serial bus (USB), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate or control one or more peripheral devices (e.g., a printer, card reader, etc.). In some embodiments the processor602and/or instructions624may comprise processing circuitry and/or transceiver circuitry. The storage device616may include a machine readable medium622on which is stored one or more sets of data structures or instructions624(e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions624may also reside, completely or at least partially, within the main memory604, within static memory606, or within the hardware processor602during execution thereof by the machine600. In an example, one or any combination of the hardware processor602, the main memory604, the static memory606, or the storage device616may constitute machine readable media. Specific examples of machine readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., EPROM or EEPROM) and flash memory devices; magnetic disks, such as internal hard disks and removable disks, magneto-optical disks; RAM; and CD-ROM and DVD-ROM disks. While the machine readable medium622is illustrated as a single medium, the term “machine readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions624. An apparatus of the machine600may be one or more of a hardware processor602(e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory604and a static memory606, sensors621, network interface device620, antennas660, a display device610, an input device612, a UI navigation device614, a mass storage616, instructions624, a signal generation device618, and an output controller628. The apparatus may be configured to perform one or more of the methods and/or operations disclosed herein. The apparatus may be intended as a component of the machine600to perform one or more of the methods and/or operations disclosed herein, and/or to perform a portion of one or more of the methods and/or operations disclosed herein. In some embodiments, the apparatus may include a pin or other means to receive power. In some embodiments, the apparatus may include power conditioning hardware. 
The term “machine readable medium” may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine600and that cause the machine600to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. Non-limiting machine readable medium examples may include solid-state memories, and optical and magnetic media. Specific examples of machine readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; Random Access Memory (RAM); and CD-ROM and DVD-ROM disks. In some examples, machine readable media may include non-transitory machine readable media. In some examples, machine readable media may include machine readable media that is not a transitory propagating signal. In some examples, machine readable media may include non-transitory computer readable storage media. In some examples, machine readable media may include computer readable storage media. The instructions624may further be transmitted or received over a communications network626using a transmission medium via the network interface device620utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, IEEE 802.16 family of standards known as WiMax®), IEEE 802.15.4 family of standards, a Long Term Evolution (LTE) family of standards, a Universal Mobile Telecommunications System (UMTS) family of standards, peer-to-peer (P2P) networks, among others. In an example, the network interface device620may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network626. In an example, the network interface device620may include one or more antennas660to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. In some examples, the network interface device620may wirelessly communicate using Multiple User MIMO techniques. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine600, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software. Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms. Modules are tangible entities (e.g., hardware) capable of performing specified operations and may be configured or arranged in a certain manner. In an example, circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module. 
In an example, the whole or part of one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application) as a module that operates to perform specified operations. In an example, the software may reside on a machine readable medium. In an example, the software, when executed by the underlying hardware of the module, causes the hardware to perform the specified operations. Accordingly, the term “module” is understood to encompass a tangible entity, be that an entity that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operation described herein. Considering examples in which modules are temporarily configured, each of the modules need not be instantiated at any one moment in time. For example, where the modules comprise a general-purpose hardware processor configured using software, the general-purpose hardware processor may be configured as respective different modules at different times. Software may accordingly configure a hardware processor, for example, to constitute a particular module at one instance of time and to constitute a different module at a different instance of time. Some embodiments may be implemented fully or partially in software and/or firmware. This software and/or firmware may take the form of instructions contained in or on a non-transitory computer-readable storage medium. Those instructions may then be read and executed by one or more processors to enable performance of the operations described herein. The instructions may be in any suitable form, such as but not limited to source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. Such a computer-readable medium may include any tangible non-transitory medium for storing information in a form readable by one or more computers, such as but not limited to read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory, etc. FIG.7illustrates a block diagram of an example wireless device700upon which any one or more of the techniques (e.g., methodologies or operations) discussed herein may perform. The wireless device700may be a HE device. The wireless device700may be an AP502and/or STA504(e.g.,FIG.5). An STA504and/or AP502may include some or all of the components shown inFIGS.1-7. The wireless device700may be an example machine600as disclosed in conjunction withFIG.6. The wireless device700may include processing circuitry708. The processing circuitry708may include a transceiver702, physical layer circuitry (PHY circuitry)704, and MAC layer circuitry (MAC circuitry)706, one or more of which may enable transmission and reception of signals to and from other wireless devices700(e.g., AP502, STA504and/or other devices) using one or more antennas712. As an example, the PHY circuitry704may perform various encoding and decoding functions that may include formation of baseband signals for transmission and decoding of received signals. As another example, the transceiver702may perform various transmission and reception functions such as conversion of signals between a baseband range and a Radio Frequency (RF) range. 
Accordingly, the PHY circuitry704and the transceiver702may be separate components or may be part of a combined component, e.g., processing circuitry708. In addition, some of the described functionality related to transmission and reception of signals may be performed by a combination that may include one, any or all of the PHY circuitry704the transceiver702, MAC circuitry706, memory710, and other components or layers. The MAC circuitry706may control access to the wireless medium. The wireless device700may also include memory710arranged to perform the operations described herein, e.g., some of the operations described herein may be performed by instructions stored in the memory710. The antennas712(some embodiments may include only one antenna) may comprise one or more directional or omnidirectional antennas, including, for example, dipole antennas, monopole antennas, patch antennas, loop antennas, microstrip antennas or other types of antennas suitable for transmission of RF signals. In some multiple-input multiple-output (MIMO) embodiments, the antennas712may be effectively separated to take advantage of spatial diversity and the different channel characteristics that may result. One or more of the memory710, the transceiver702, the PHY circuitry704, the MAC circuitry706, the antennas712, and/or the processing circuitry708may be coupled with one another. Moreover, although memory710, the transceiver702, the PHY circuitry704, the MAC circuitry706, the antennas712are illustrated as separate components, one or more of memory710, the transceiver702, the PHY circuitry704, the MAC circuitry706, the antennas712may be integrated in an electronic package or chip. In some embodiments, the wireless device700may be a mobile device as described in conjunction withFIG.6. In some embodiments the wireless device700may be configured to operate in accordance with one or more wireless communication standards as described herein (e.g., as described in conjunction withFIGS.1-6, IEEE 802.11). In some embodiments, the wireless device700may include one or more of the components as described in conjunction withFIG.6(e.g., display device610, input device612, etc.) Although the wireless device700is illustrated as having several separate functional elements, one or more of the functional elements may be combined and may be implemented by combinations of software-configured elements, such as processing elements including digital signal processors (DSPs), and/or other hardware elements. For example, some elements may comprise one or more microprocessors, DSPs, field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), radio-frequency integrated circuits (RFICs) and combinations of various hardware and logic circuitry for performing at least the functions described herein. In some embodiments, the functional elements may refer to one or more processes operating on one or more processing elements. In some embodiments, an apparatus of or used by the wireless device700may include various components of the wireless device700as shown inFIG.7and/or components fromFIGS.1-6. Accordingly, techniques and operations described herein that refer to the wireless device700may be applicable to an apparatus for a wireless device700(e.g., AP502and/or STA504), in some embodiments. In some embodiments, the wireless device700is configured to decode and/or encode signals, packets, and/or frames as described herein, e.g., PPDUs. 
The PHY circuitry704may be arranged to transmit signals in accordance with one or more communication standards described herein. For example, the PHY circuitry704may be configured to transmit a HE PPDU. The PHY circuitry704may include circuitry for modulation/demodulation, upconversion/downconversion, filtering, amplification, etc. In some embodiments, the processing circuitry708may include one or more processors. The processing circuitry708may be configured to perform functions based on instructions being stored in a RAM or ROM, or based on special purpose circuitry. The processing circuitry708may include a processor such as a general purpose processor or special purpose processor. The processing circuitry708may implement one or more functions associated with antennas712, the transceiver702, the PHY circuitry704, the MAC circuitry706, and/or the memory710. In some embodiments, the processing circuitry708may be configured to perform one or more of the functions/operations and/or methods described herein. In accordance with some embodiments, the STA504may be configurable for wireless local area network (WLAN) communication in a channel. The channel may be configurable to support communication by incumbent devices. The communication by the incumbent devices may be prioritized over the WLAN communication. The channel may comprise a plurality of resource units (RUs). Each RU may comprise a contiguous plurality of resource elements (REs). The STA504may determine a portion of the channel occupied by an incumbent device. The STA504may refrain from communication in a first subset of RUs that overlap the portion of the channel occupied by the incumbent device. The STA504may determine a combined RU that comprises two or more RUs of a second subset of RUs that do not overlap the portion of the channel occupied by the incumbent device. The STA504may encode a physical layer convergence procedure protocol data unit (PPDU) for transmission in the combined RU. The PPDU may be encoded in accordance with joint coding across the RUs of the combined RU. These embodiments are described in more detail below. FIG.8illustrates the operation of a method of communication in accordance with some embodiments. It is important to note that embodiments of the method800may include additional or even fewer operations or processes in comparison to what is illustrated inFIG.8. In addition, embodiments of the method800are not necessarily limited to the chronological order that is shown inFIG.8. In descriptions of the method800, reference may be made to one or more figures, although it is understood that the method800may be practiced with any other suitable systems, interfaces and components. In some embodiments, a STA504may perform one or more operations of the method800, but embodiments are not limited to performance of the method800and/or operations of it by the STA504. In some embodiments, another device and/or component may perform one or more operations that may be the same as, similar to and/or reciprocal to one or more operations of the method800. In a non-limiting example, the AP502may perform an operation that may be the same as, similar to, reciprocal to and/or related to an operation of the method800, in some embodiments. The method800and other methods described herein may refer to APs502, STAs504and/or other devices configured to operate in accordance with WLAN standards, 802.11 standards and/or other standards. 
However, embodiments are not limited to performance of those methods by those components, and may also be performed by other devices, such as an Evolved Node-B (eNB), User Equipment (UE) and/or other. In addition, the method800and other methods described herein may be practiced by wireless devices configured to operate in other suitable types of wireless communication systems, including systems configured to operate according to Third Generation Partnership Project (3GPP) standards, 3GPP Long Term Evolution (LTE) standards, 5G standards, New Radio (NR) standards and/or other standards. In some embodiments, the method800and/or other method described herein may also be applicable to an apparatus of an AP502, an apparatus of a STA504and/or an apparatus of another device. In some embodiments, an apparatus of a STA504may perform one or more operations of the method800and/or other operations. In some embodiments, an apparatus of an AP502may perform one or more operations that may be the same as, similar to, reciprocal to and/or related to one or more operations described herein. It should also be noted that embodiments are not limited by references herein (such as in descriptions of the method800and/or other descriptions herein) to transmission, reception and/or exchanging of elements such as frames, messages, requests, indicators, signals or other elements. In some embodiments, such an element may be generated, encoded or otherwise processed by processing circuitry (such as by a baseband processor included in the processing circuitry) for transmission. The transmission may be performed by a transceiver or other component, in some cases. In some embodiments, such an element may be decoded, detected or otherwise processed by the processing circuitry (such as by the baseband processor). The element may be received by a transceiver or other component, in some cases. In some embodiments, the processing circuitry and the transceiver may be included in a same apparatus. The scope of embodiments is not limited in this respect, however, as the transceiver may be separate from the apparatus that comprises the processing circuitry, in some embodiments. One or more of the elements (such as messages, operations and/or other) described herein may be included in a standard and/or protocol, including but not limited to WLAN, IEEE 802.11, EHT and/or other. The scope of embodiments is not limited to usage of those elements, however. In some embodiments, different elements, similar elements, alternate elements and/or other elements may be used. The scope of embodiments is also not limited to usage of elements that are included in standards. At operation805, the STA504may detect an incumbent device in a channel. At operation810, the STA504may refrain from communication in a portion of the channel. At operation815, the STA504may determine a combined resource unit (RU) in the channel. At operation820, the STA504may transmit one or more PPDUs (and/or other element(s)) in the combined RU. At operation825, the STA504may transmit one or more codewords (CWs) in the channel. In some embodiments, the STA504may be configurable for wireless local area network (WLAN) communication in a channel. The channel may be configurable to support communication by incumbent devices. The communication by the incumbent devices may be prioritized over the WLAN communication. The channel may comprise a plurality of resource units (RUs). Each RU may comprise a contiguous plurality of resource elements (REs). 
The STA504may determine a portion of the channel occupied by an incumbent device. The STA504may refrain from communication in a first subset of RUs that overlap the portion of the channel occupied by the incumbent device. The STA504may determine a combined RU that comprises two or more RUs of a second subset of RUs that do not overlap the portion of the channel occupied by the incumbent device. The STA504may encode a physical layer convergence procedure protocol data unit (PPDU) for transmission in the combined RU. The PPDU may be encoded in accordance with joint coding across the RUs of the combined RU. In some embodiments, as part of the joint coding, the STA504may perform one or more of: determine coded bits based on information bits; determine modulated symbols based on the coded bits; map the modulated symbols to the REs of the combined RU; and/or other. In some embodiments, the STA504may determine the coded bits based on an encode operation that is based on a size of the combined RU. The size of the combined RU may be equal to a sum that includes sizes of the RUs that comprise the combined RU. In some embodiments, as part of the joint coding, the STA504may interleave the coded bits. In some embodiments, as part of the joint coding, the STA504may interleave the modulated symbols. In some embodiments, the STA504may map the modulated symbols to the REs of the combined RU for orthogonal frequency division multiplexing (OFDM) transmission. In some embodiments, the STA504may restrict the combined RU to include RUs of size that is greater than or equal to a predetermined minimum size. In a non-limiting example, the predetermined minimum size may be 242 REs. Other sizes, including but not limited to other numbers of REs, may be used in some embodiments. In some embodiments, the STA504may be configurable for wireless local area network (WLAN) communication in a channel that is configurable to support communication by incumbent devices. The communication by the incumbent devices may be prioritized over the WLAN communication. The channel may comprise a plurality of resource units (RUs). Each RU may comprise a contiguous plurality of resource elements (REs). The STA504may perform one or more of: determine a portion of the channel occupied by an incumbent device; refrain from communication in a first subset of RUs that overlap the portion of the channel occupied by the incumbent device; determine a second subset of RUs that do not overlap the portion of the channel occupied by the incumbent device; encode one or more codewords (CWs) for transmission in the RUs of the second subset; and/or other. In some embodiments, the CWs may be encoded in accordance with independent coding for each of the RUs of the second subset. In some embodiments, the STA504may, as part of the independent coding, for each of the RUs of the second subset, perform one or more of: determine coded bits based on information bits; determine modulated symbols based on the coded bits; map the modulated symbols to the REs of the RU; and/or other. In some embodiments, the STA504may, as part of the independent coding, perform one or more of: for each of the RUs of the second subset, determine a codeword size independent of other RUs of the second subset; determine coded bits based on information bits; determine modulated symbols based on the coded bits; map the modulated symbols sequentially across multiple RUs of the second subset; and/or other. 
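For illustration only, the following is a minimal sketch of the joint-coding flow summarized above: the combined RU size is the sum of the sizes of the constituent RUs, RUs below a predetermined minimum size (242 REs in the non-limiting example) are excluded, and the coded bits are modulated and mapped across the REs of the combined RU as one block. The rate-1/2 "coder" and QPSK mapping are placeholders, not the LDPC coding or constellations of the standard, and the helper names are hypothetical.

```python
from typing import List, Tuple
import itertools

MIN_RU_SIZE = 242  # non-limiting example minimum size from the text

def combined_ru_size(ru_sizes: List[int]) -> int:
    """Combined RU size = sum of the sizes of the RUs that comprise it,
    after dropping RUs below the predetermined minimum size."""
    kept = [s for s in ru_sizes if s >= MIN_RU_SIZE]
    return sum(kept)

def joint_encode_and_map(info_bits: List[int], ru_sizes: List[int]) -> List[Tuple[float, float]]:
    """Toy joint coding across the combined RU: rate-1/2 repetition 'code',
    QPSK modulation (2 bits per symbol), and symbols mapped sequentially to
    the REs of the combined RU (one symbol per RE, single OFDM symbol assumed)."""
    n_re = combined_ru_size(ru_sizes)
    coded = list(itertools.chain.from_iterable((b, b) for b in info_bits))  # rate 1/2
    qpsk = {(0, 0): (1.0, 1.0), (0, 1): (1.0, -1.0),
            (1, 0): (-1.0, 1.0), (1, 1): (-1.0, -1.0)}
    symbols = [qpsk[(coded[i], coded[i + 1])] for i in range(0, len(coded) - 1, 2)]
    return symbols[:n_re]  # truncate to the REs available in the combined RU

# Example: two 242-RE RUs that do not overlap the incumbent form a 484-RE combined RU.
print(combined_ru_size([242, 242, 106]))                      # -> 484 (the 106-RE RU is excluded)
print(len(joint_encode_and_map([1, 0, 1, 1], [242, 242])))    # -> 4 QPSK symbols
```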
In some embodiments, the STA504may map the modulated symbols sequentially across multiple RUs of the second subset for orthogonal frequency division multiplexing (OFDM) transmission. In some embodiments, the STA504may encode a physical layer convergence procedure protocol data unit (PPDU) for transmission in a non-contiguous resource unit (RU) comprising two or more segments of resource elements (REs). In some embodiments, the REs of each segment may be spaced uniformly in frequency by a predetermined spacing. In some embodiments, the segments of REs may be disjoint in frequency. In some embodiments, to encode the PPDU, the STA504may perform one or more of: for a plurality of spatial streams, distribute bits of the spatial streams to different segments; for each of the segments, determine modulated symbols based on the bits of the segment, and interleave the modulated symbols; and/or other. In some embodiments, the STA504may, for each of the segments, map the modulated symbols to the REs of the segment for orthogonal frequency division multiplexing (OFDM) transmission. In some embodiments, an apparatus of a STA504may comprise memory. The memory may be configurable to store one or more elements and the apparatus may use them for performance of one or more operations. The apparatus may include processing circuitry, which may perform one or more operations (including but not limited to operation(s) of the method800and/or other methods described herein). The processing circuitry may include a baseband processor. The baseband processor and/or the processing circuitry may perform one or more operations described herein, including but not limited to one or more operations of the method800. The apparatus may include a transceiver to transmit and/or receive one or more blocks, messages and/or other elements. FIG.9illustrates example operations in accordance with some embodiments.FIG.10illustrates example operations in accordance with some embodiments.FIG.11illustrates example operations in accordance with some embodiments.FIG.12illustrates example arrangements of frequency resources in accordance with some embodiments.FIG.13illustrates example arrangements of frequency resources in accordance with some embodiments.FIG.14illustrates example mappings of codewords to frequency resources in accordance with some embodiments.FIG.15illustrates example mappings of codewords to frequency resources in accordance with some embodiments. It should be noted that the examples shown inFIGS.9-15may illustrate some or all of the concepts and techniques described herein in some cases, but embodiments are not limited by the examples. For instance, embodiments are not limited by the name, number, type, size, ordering, or arrangement of elements (such as devices, operations, messages and/or other elements) shown inFIGS.9-15. Although some of the elements shown in the examples ofFIGS.9-15may be included in a WLAN standard, Wi-Fi standard, 802.11 standard, and/or other standard, embodiments are not limited to usage of such elements that are included in standards. Some embodiments may be related to frequency resource mapping for non-continuous RU allocation in EHT. EHT will introduce non-continuous RU allocation, which means that more than one frequency segment can be allocated to one client in a PPDU. In the legacy WiFi system, including 11ax, a segment parser/deparser is used to handle the frequency mapping when more than one frequency segment is present in one PPDU. 
If the same implementation were followed in EHT, one issue is that the parser/deparser needs to deal with a large number of combinations due to the different RU sizes in the different frequency segments of one PPDU. Some embodiments herein may be related to options to bypass the segment parser/deparser module and simplify the frequency mapping process for non-contiguous RU allocation. Some embodiments may be related to non-continuous frequency mapping in one PPDU, including but not limited to one or more of the following: 1) remove the segment parser/deparser in EHT; 2) add parameters for the new RU size of the non-contiguous frequency segments to enable cross-segment interleaving; 3) independent processing (coding, modulation, interleaving) within each frequency segment; 4) other. In 11ax, the processing procedure900and the segment parser/deparser at the transmitter are shown inFIG.9(assuming the PPDU bandwidth is 160 MHz or 80+80 MHz). After the stream parser, each spatial stream has a group of coded bits (bit1˜bit8in the example ofFIG.9). In order to reuse the 80 MHz interleaver (instead of defining a new interleaver for 160 MHz), the group of bits is split into two groups in a predefined way. The method of splitting one group into two groups is defined by the segment parser. Two main factors affect the definition of the parser: the number of tones in each frequency segment and the modulation level. There are six modulation levels in 11ax (BPSK˜1024QAM), but the frequency segment combination is as simple as two segments with an equal number of data tones in each segment. The definition of the segment parser in 11ax is therefore not overly complicated (parser mode1inFIG.9is the 11ax 160 MHz segment parser assuming 16QAM is used). However, EHT has a larger bandwidth and more flexible RU puncturing modes, such that defining the parser could be much more complicated in EHT due to the potentially large number of RU size combinations in different frequency segments. For instance, after puncturing a 320 MHz/160+160 MHz channel, at least these RU combinations can be obtained: 160+140 MHz, 160+120 MHz, 160+80 MHz, 80+80 MHz, 80+40 MHz, etc. After puncturing a 160 MHz/80+80 MHz channel, at least these RU combinations can be obtained: 80+60 MHz, 80+40 MHz, 60+40 MHz, etc. Parser mode2is an example of a 160+40 MHz parser given that 16QAM is used. Defining a parser mode for each RU combination would clearly impose a heavy burden on both the standard and implementations. Some embodiments may be related to techniques/methods to simplify the frequency mapping process across different non-contiguous frequency segments. In some embodiments (referred to for clarity as "Option 1"), removal of the segment parser/deparser as shown in1000inFIG.10may be performed. The segment parser/deparser, which was introduced in 11ac, is mainly used to achieve cross-frequency-segment diversity gain for the convolutional code. The BCC, however, has largely been made obsolete in 11ax. LDPC naturally provides frequency diversity due to the structure of the channel coding. In addition, EHT is introducing larger bandwidths, such that most of the diversity gain can be achieved even within one frequency segment for the LDPC code. Removing the segment parser/deparser will only marginally impact performance. Some evaluation results show that only a 0.1˜0.3 dB loss is observed if the parser/deparser is removed. In some embodiments (referred to for clarity as "Option 2"), the segment parser distributes the bits in each spatial stream to the different segments in a round-robin manner, as sketched below. 
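For illustration only, the following is a minimal sketch of such a round-robin segment distribution; the block size, segment count, and function name are hypothetical parameters, not values taken from the 11ax/EHT specifications, and equal-sized blocks are assumed for simplicity.

```python
from typing import Dict, List

def round_robin_segment_parser(stream_bits: List[int],
                               num_segments: int,
                               block_size: int) -> Dict[int, List[int]]:
    """Distribute one spatial stream's coded bits to frequency segments.

    Blocks of `block_size` bits are handed to segment 0, 1, ..., num_segments-1
    in turn (round robin), mirroring the role of the segment parser described
    above. The block size and segment count are illustrative parameters only.
    """
    segments: Dict[int, List[int]] = {s: [] for s in range(num_segments)}
    for block_index in range(0, len(stream_bits), block_size):
        block = stream_bits[block_index:block_index + block_size]
        target_segment = (block_index // block_size) % num_segments
        segments[target_segment].extend(block)
    return segments

# Example: 16 coded bits, two segments, blocks of 2 bits (cf. bit1~bit8 in FIG. 9).
bits = list(range(16))
print(round_robin_segment_parser(bits, num_segments=2, block_size=2))
# -> segment 0 gets bits 0-1, 4-5, 8-9, 12-13; segment 1 gets the rest.
```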
After this round-robin distribution, the bits in one segment are modulated and interleaved by a block interleaver within that segment, so the segment parser is equivalent to a cross-segment interleaver. To achieve the function of the segment parser, another option is to replace the intra-segment interleaver with an inter-segment/cross-segment interleaver as shown in1050inFIG.10. One or more tables (including but not limited to one or more tables disclosed herein) may be related to an intra-segment interleaver parameter for the LDPC code in 11ax. To enable the block interleaver for different RU sizes, there is a dedicated D_tm for each RU. The principle used to define D_tm is that the number of data tones in an RU, which is the number of tones in the RU minus the number of pilot tones in the same RU, is divisible by D_tm. For instance, to define a D_tm for 160+80 MHz, D_tm can be 140 or 60, either of which divides (996−16)*3. To define a D_tm for 80+40 MHz, D_tm can be 8, which divides (996+484−32). One consideration is that for each new RU size in EHT a new D_tm needs to be defined, but this may be simpler than defining a parser/deparser for every RU combination. In some embodiments (referred to for clarity as "Option 3"), independent processing for each frequency segment can be considered as another option to bypass the complexity of defining the segment parser/deparser. As shown inFIG.11, all of the procedures in the box of each processing chain are the same as in 11ax (as inFIG.9). The only extra module is the info bits parser1100. The function of the info bits parser is to distribute the info bits to the different processing chains such that each chain produces a similar number of OFDM symbols. For instance, if the two frequency segments are 80 MHz+40 MHz, the info bits parser gives processing chain 1 twice the number of info bits given to processing chain 2, so that both processing chains end up with almost the same number of OFDM symbols (a small sketch of this proportional split, together with the D_tm divisibility check above, is given below). One potential consideration of this option is that it could affect the processing flow in the receiver: if the receiver processes the received frequency blocks sequentially, it needs to buffer the frequency block that is pending processing; otherwise, the receiver must be capable of processing both frequency blocks in parallel. Some embodiments may be related to coding over multiple RUs in EHT. EHT will introduce non-continuous RU allocation, meaning that in one PPDU more than one RU can be allocated to a STA. Channel coding options across these RUs are proposed in this disclosure. Some embodiments may be related to a puncture granularity for incumbents. Some embodiments may be related to channel coding across multiple RUs, including but not limited to one or more of the following: 1) joint encoding across multiple RUs; 2) independent encoding across multiple RUs; 3) other. EHT STAs will operate in the 6 GHz band, which is already used by other wireless services, such as fixed satellite service, microwave backhaul, industrial control and security. These services are called incumbents in 6 GHz. A WiFi STA shall not interfere with the existing receivers of these incumbents. The interference avoidance mechanism is that an EHT STA punctures the frequency resources in a WiFi channel that overlap with one or more incumbents. An example of frequency resources1200is shown inFIG.12. Reference number1202illustrates a 30 MHz incumbent that overlaps with an 80 MHz WiFi channel. 
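For illustration only, the following sketch checks the D_tm divisibility rule and performs the proportional info-bit split of Option 3 described above. The tone counts, pilot counts, and function names are assumptions used for the example, not values defined by the specification.

```python
from typing import List

def dtm_is_valid(total_tones: int, pilot_tones: int, d_tm: int) -> bool:
    """Stated rule: the data tones (total tones minus pilots) must be divisible by D_tm."""
    return (total_tones - pilot_tones) % d_tm == 0

# The examples from the text: (996-16)*3 data tones for 160+80 MHz,
# and 996+484-32 data tones for 80+40 MHz.
assert dtm_is_valid(996 * 3, 16 * 3, 140)      # 2940 % 140 == 0
assert dtm_is_valid(996 * 3, 16 * 3, 60)       # 2940 % 60  == 0
assert dtm_is_valid(996 + 484, 16 + 16, 8)     # 1448 % 8   == 0

def split_info_bits(info_bits: List[int], segment_data_tones: List[int]) -> List[List[int]]:
    """Distribute info bits to per-segment processing chains in proportion to
    each segment's data-tone count, so the chains yield similar symbol counts."""
    total = sum(segment_data_tones)
    splits: List[List[int]] = []
    start = 0
    for i, tones in enumerate(segment_data_tones):
        if i == len(segment_data_tones) - 1:
            end = len(info_bits)               # last chain takes the remainder
        else:
            end = start + round(len(info_bits) * tones / total)
        splits.append(info_bits[start:end])
        start = end
    return splits

# 80 MHz + 40 MHz example: chain 1 receives roughly twice as many bits as chain 2.
chains = split_info_bits(list(range(300)), [996 - 16, 484 - 16])
print([len(c) for c in chains])   # -> [203, 97] (approximately 2:1)
```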
For the example ofFIG.12, the frequency resources covered by the incumbent, plus some guard tones, shall be punctured. 802.11ax defines a puncture granularity of a 242 tone RU, which means the punctured frequency resources can be indicated as N*242 tones. We propose to add another granularity that is finer than 242. We propose to use the 106 tone RU together with the adjacent 26 tone RU as the puncture granularity, i.e., if a 106 tone RU is punctured, the neighboring one or two 26 tone RUs shall be punctured together. The finer puncturing granularity allows the residual frequency resources to be collected more efficiently. Some embodiments may be related to an SU PPDU with puncturing. After the frequency resources that overlap with the incumbent are punctured, there are at least two approaches to using the residual frequency resources. In some embodiments (referred to for clarity as "Alternative 1"), MU OFDMA may be used. For example, inFIG.12, the residual 106 tone RU on the left (reference number1204) can be assigned to one STA. The third 106 tone RU from the right side (reference number1206) can be assigned to a second STA. The 242 tone RU on the right can be assigned to a third STA. This approach requires minimal specification changes but does not support aggregating the residual frequency resources into an SU PPDU, i.e., there is no combo RU (106+106+242) for the SU PPDU. In some embodiments (referred to for clarity as "Alternative 2"), a new RU may be defined to support the SU PPDU. For example, inFIG.12, the three RUs 106+106+242 can be aggregated as a jumbo RU assigned to one STA. To support Alternative 2, there are at least two options. In some embodiments (referred to for clarity as "option 1"), joint coding on multiple RUs may be used. Joint coding is described below. In some embodiments (referred to for clarity as "option 2"), independent encoding on multiple RUs may be used. Independent encoding on multiple RUs is described later herein. In some embodiments, joint coding means that, for different incumbents, a new RU is defined as a combination of the residual RUs. For the example inFIG.12, a new RU of 106+106+242=454 tones needs to be defined. The channel coding and interleaving shall then be done based on the new RU size. One issue is determining the minimum RU size used to aggregate the SU PPDU. For the example ofFIG.12, the new RU of 106+106+242=454 tones is obtained under the assumption that the 106 tone RU is the minimum RU for SU PPDU aggregation. If the 52 tone RU is used as the minimum RU, a new RU of 106+106+242+52=506 tones may be obtained, which has more resources. However, a smaller minimum RU for SU PPDU aggregation means that more new RU sizes need to be supported. A good trade-off that we propose is to use the 106 tone RU or the 242 tone RU as the minimum RU for SU PPDU aggregation. Using exhaustive search and/or other technique(s), one or more tables (including but not limited to one or more tables disclosed herein) may be determined, and may include the new RUs that need to be defined for joint coding. The pilot tone indices in the jumbo RU can reuse the pilot tone index definition of each existing 11ax RU in the jumbo RU. One or more tables (including but not limited to one or more tables disclosed herein) may include a proposed new RU size assuming 1) one incumbent overlaps with an 80 MHz channel but does not straddle two 80 MHz channels; 2) the 106 tone RU is the minimum RU for SU PPDU aggregation. 
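For illustration only, the following sketch mimics the FIG.12-style flow described above: it marks the RUs of an 80 MHz channel that overlap an incumbent, then aggregates the residual RUs into a jumbo RU and reports its tone count. The RU layout, the overlap test, and the function names are simplified assumptions for the example, not the 11ax/EHT tone plans.

```python
from typing import List, Tuple

# Simplified 80 MHz layout: (ru_size_in_tones, start_mhz, end_mhz); illustrative only.
RU_LAYOUT: List[Tuple[int, float, float]] = [
    (106,  0.0, 10.0),   # survives  (cf. reference number 1204)
    (106, 10.0, 20.0),   # punctured
    (242, 20.0, 40.0),   # punctured
    (106, 40.0, 50.0),   # punctured
    (106, 50.0, 60.0),   # survives  (cf. reference number 1206)
    (242, 60.0, 80.0),   # survives  (the 242 tone RU on the right)
]

def punctured(ru: Tuple[int, float, float], incumbent: Tuple[float, float]) -> bool:
    """An RU is punctured if its frequency span overlaps the incumbent's span."""
    _, lo, hi = ru
    inc_lo, inc_hi = incumbent
    return not (hi <= inc_lo or lo >= inc_hi)

def aggregate_residual(incumbent: Tuple[float, float], min_ru: int = 106) -> int:
    """Sum the tones of non-punctured RUs of at least `min_ru` tones (jumbo RU size)."""
    residual = [ru for ru in RU_LAYOUT
                if not punctured(ru, incumbent) and ru[0] >= min_ru]
    return sum(size for size, _, _ in residual)

# A 30 MHz incumbent occupying roughly 15-45 MHz of the channel punctures the
# second 106-tone RU, the first 242-tone RU, and the third 106-tone RU:
print(aggregate_residual((15.0, 45.0)))   # -> 454 (= 106 + 106 + 242)
```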
One or more tables (including but not limited to one or more tables disclosed herein) may include a proposed new RU size assuming 1) one incumbent overlaps with an 80 MHz channel but does not straddle two 80 MHz channels; 2) the 242 tone RU is the minimum RU for SU PPDU aggregation. One or more tables (including but not limited to one or more tables disclosed herein) may include a proposed new RU size assuming 1) one incumbent straddles two 80 MHz channels; 2) the 106 tone RU is the minimum RU for SU PPDU aggregation. Note: the frequency resources1300illustrated inFIG.13give an example of one incumbent straddling two 80 MHz channels in a 160 MHz PPDU. One or more tables (including but not limited to one or more tables disclosed herein) may include a proposed new RU size assuming 1) one incumbent straddles two 80 MHz channels; 2) the 242 tone RU is the minimum RU for SU PPDU aggregation. Some embodiments are related to independent encoding on multiple RUs. In some embodiments, independent coding means that channel coding and interleaving are done based on existing 11ax RU sizes. For the example ofFIG.12, if the 106 tone RU is the minimum RU used to aggregate the SU PPDU, the three residual RUs (106, 106, and 242 tones) are still coded independently instead of being coded as a 106+106+242 tone jumbo RU. In this way, no new RU size needs to be defined, and the 11ax RU sizes can be fully reused. We propose to limit the number of independently coded RUs in the SU PPDU to be less than or equal to three to simplify the implementation. These RUs shall also use the same MCS. One issue with independent coding is shown in the left portion (1400) inFIG.14. The PHY processes the PPDU symbol by symbol, and the PHY wants to pass each CW to the MAC as soon as the PHY finishes decoding it, instead of buffering decoded CWs in the PHY. However, as shown in the left portion ofFIG.14, CW4in symbol1has to be buffered until CW3in symbol2is decoded and passed to the MAC. The same issue arises for CW9of symbol2. To solve this issue, we propose to define a parser that determines the coded-bits-to-codeword mapping. At least two parser options are possible. Parser option #1 is shown in the right portion (1450) inFIG.14. Example rules for the transmitter and receiver may include one or more of the following. In some embodiments, the CWs shall be indexed according to CW priority (the transmitter maps the encoded bits to CWs sequentially according to the priority of each CW). In some embodiments, one or more of the following may be applicable for the CW priority: 1) a CW that does not straddle two symbols has the highest priority (if more than one CW does not straddle two symbols, their relative priority is the same as in 11ax, i.e., indexed from low to high frequency and time); 2) CWs that straddle more than one symbol have lower priority (if more than one CW straddles the same number of symbols, their relative priority is the same as in 11ax, i.e., indexed from low to high frequency and time); 3) the number of symbols that a CW straddles is calculated as: N_sym_total−(i_sym_current−i_sym_start); 4) other. In #3 above: N_sym_total is the total number of symbols a CW straddles; i_sym_current is the index of the current symbol that the transmitter/receiver is processing (mapping the constellation to, or demapping the constellation from); i_sym_start is the index of the symbol in which the first bit of the CW starts. Parser option #2 (1500) is shown inFIG.15. Example rules for the transmitter and receiver may include one or more of the following. In some embodiments, a CW is a container for the encoded bits. In some embodiments, the container size (CW boundary) is calculated assuming each RU is independently coded. 
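For illustration only, and before turning to the boundary calculation for option #2 below, the following is a minimal sketch of the option #1 priority rule above. The codeword records and index names are hypothetical; only the remaining-symbols formula and the low-to-high frequency/time tie-break are taken from the text, and one reasonable reading of that tie-break is assumed.

```python
from typing import List, NamedTuple

class Codeword(NamedTuple):
    n_sym_total: int    # total number of symbols this CW straddles
    i_sym_start: int    # index of the symbol in which the CW's first bit starts
    freq_index: int     # position in frequency, low to high (used as a tie-break)

def remaining_symbols(cw: Codeword, i_sym_current: int) -> int:
    """Rule #3 from the text: N_sym_total - (i_sym_current - i_sym_start)."""
    return cw.n_sym_total - (i_sym_current - cw.i_sym_start)

def priority_order(codewords: List[Codeword], i_sym_current: int) -> List[Codeword]:
    """Index CWs by priority: CWs confined to a single symbol come first, CWs that
    straddle more remaining symbols come later; ties are broken low to high in
    frequency and then in time (a simplified reading of the 11ax-style ordering)."""
    return sorted(
        codewords,
        key=lambda cw: (remaining_symbols(cw, i_sym_current),
                        cw.freq_index, cw.i_sym_start),
    )

# Two single-symbol CWs and one CW that straddles two symbols, processed at symbol 0:
cws = [Codeword(2, 0, 2), Codeword(1, 0, 0), Codeword(1, 0, 1)]
print(priority_order(cws, i_sym_current=0))
# -> the two single-symbol CWs (freq 0, then freq 1) come first, the straddling CW last.
```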
For parser option #2, this means that the CW boundary calculation in RU1 depends on the RU size of RU1, and the CW boundary calculation in RU2 depends on the RU size of RU2. In this way, the calculation fully reuses the 11ax-defined parameters, as inFIG.15. In some embodiments, after all of the CW boundaries are fixed, the coded bits are loaded into each CW sequentially across multiple RUs instead of being loaded into one RU after another. That is, the coded bits are first loaded into the CWs calculated in the previous step and then modulated onto the QAM constellation. However, instead of mapping the constellation points to the frequency resources used to determine the CW boundaries, we propose to map the constellation points sequentially across multiple RUs, as inFIG.15. The Abstract is provided to comply with 37 C.F.R. Section 1.72(b), requiring an abstract that will allow the reader to ascertain the nature and gist of the technical disclosure. It is submitted with the understanding that it will not be used to limit or interpret the scope or meaning of the claims. The following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment.
11863474
DETAILED DESCRIPTION In the following description, numerous specific details are set forth with respect to one or more embodiments of the present patent disclosure. However, it should be understood that one or more embodiments may be practiced without such specific details. In other instances, well-known circuits, subsystems, components, structures and techniques have not been shown in detail in order not to obscure the understanding of the example embodiments. Accordingly, it will be appreciated by one skilled in the art that the embodiments of the present disclosure may be practiced without such specific components. It should be further recognized that those of ordinary skill in the art, with the aid of the Detailed Description set forth herein and taking reference to the accompanying drawings, will be able to make and use one or more embodiments without undue experimentation. Additionally, terms such as “coupled” and “connected,” along with their derivatives, may be used in the following description, claims, or both. It should be understood that these terms are not necessarily intended as synonyms for each other. “Coupled” may be used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other. “Connected” may be used to indicate the establishment of communication, i.e., a communicative relationship, between two or more elements that are coupled with each other. Further, in one or more example embodiments set forth herein, generally speaking, an element, component or module may be configured to perform a function if the element is capable of performing or otherwise structurally arranged or programmed under suitable executable code to perform that function. As used herein, a network element, platform or node may be comprised of one or more pieces of service network equipment, including hardware and software that communicatively interconnects other equipment on a network (e.g., other network elements, end stations, etc.), and is adapted to host one or more applications or services with respect to a plurality of subscribers and associated client devices as well as other endpoints and IoT-based entities, each executing suitable client applications configured to consume various data/voice/media services as well as sense/collect various types of data, information, measurements, etc. As such, some network elements may be disposed in a cellular wireless or satellite telecommunications network, or a broadband wireline network, whereas other network elements may be disposed in a public packet-switched network infrastructure (e.g., the Internet or worldwide web, also sometimes referred to as the “cloud”), private packet-switched network infrastructures such as Intranets and enterprise networks, as well as service provider network infrastructures, any of which may span or involve a variety of access networks and core networks in a hierarchical arrangement. In still further arrangements, one or more network elements may be disposed in cloud-based platforms or data centers having suitable equipment running virtualized functions or applications relative to one or more processes set forth hereinbelow. 
Example end stations and client devices (broadly referred to as User Equipment or UE devices) may comprise any device configured to consume and/or create any service via one or more suitable access networks or edge network arrangements based on a variety of access technologies, standards and protocols, including a heterogeneous network environment comprising split architectures as will be described in detail below. Accordingly, example UE devices may comprise smartphones, multimedia/video phones, mobile/wireless user equipment, portable media players, smart wearables such as smart watches, goggles, digital gloves, portable laptops, netbooks, palm tops, tablets, phablets, mobile phones, IoT devices and sensors, connected vehicles (manual and/or autonomous), and the like, as well as networked or local gaming devices/consoles including augmented reality (AR), virtual reality (VR) or mixed reality (MR) devices. In a further variation, some client devices or subscriber end stations may also access or consume content/services provided on virtual private networks (VPNs) overlaid on (e.g., tunneled through) the Internet. One or more embodiments of the present patent disclosure may be implemented using different combinations of software, firmware, and/or hardware in one or more modules suitably programmed and/or configured. Thus, one or more of the techniques shown in the Figures (e.g., flowcharts) may be implemented using code and data stored and executed on one or more electronic devices or nodes (e.g., a subscriber client device or end station, a network element, etc.). Such electronic devices may store and communicate (internally and/or with other electronic devices over a network) code and data using computer-readable media, such as non-transitory computer-readable storage media (e.g., magnetic disks, optical disks, random access memory, read-only memory, flash memory devices, phase-change memory, etc.), transitory computer-readable transmission media (e.g., electrical, optical, acoustical or other form of propagated signals—such as carrier waves, infrared signals, digital signals), etc. In addition, such network elements may typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (e.g., non-transitory machine-readable storage media) as well as storage database(s), user input/output devices (e.g., a keyboard, a touch screen, a pointing device, and/or a display), and network connections for effectuating signaling and/or bearer media transmission. The coupling of the set of processors and other components may be typically through one or more buses and bridges (also termed as bus controllers), arranged in any known (e.g., symmetric/shared multiprocessing) or heretofore unknown architectures. Thus, the storage device or component of a given electronic device or network element may be configured to store code and/or data for execution on one or more processors of that element, node or electronic device for purposes of implementing one or more techniques of the present disclosure. Referring now to the drawings and more particularly toFIG.1, depicted therein is an example network environment100including a fronthaul portion102based on a split network architecture, e.g., Coordinated-RAN or Centralized-RAN (C-RAN) architecture (also referred to as Collaborative RAN), wherein one or more embodiments of the present patent disclosure may be practiced in accordance with the teachings herein. 
Broadly, the fronthaul portion102may be comprised of three main components: one or more BBUs that may be organized into groups of cooperating nodes (referred to as BBU pools, hubs or hotels), a plurality of macrocells, microcells, small cells, femtocells, picocells, etc. (collectively referred to as “cells” unless otherwise specified) comprising a dense, heterogeneous radio environment116, and a suitable transport network. By way of illustration, a plurality of BBUs110-1to110-N and112-1to112-M may be organized into one or more BBU pools or hubs, e.g., BBU hubs108A and108B, which may be operative to serve various cells (i.e., via respective RRUs, mRRUs and the like) connected by means of respective in-phase, quadrature (I/Q) data communication links that may span several kilometers, each operating based on a suitable protocol such as, e.g., Common Public Radio Interface (CPRI), Open Base Station Architecture Initiative (OBSAI), or Open Radio equipment Interface (ORI) over optical fiber or microwave media. As exemplified, cells118-1to118-5are operative with BBU hub108B via fronthaul links122, while cells120-1to120-5are operative with BBU hub108A via fronthaul links124. In general, a BBU may be configured to serve one or more cells depending on cell allocation. In some embodiments, a low latency link114may be disposed between the BBU hubs108A and108B for facilitating inter-hub communication and coordination. By way of illustration, such a link114may be fiber-based, RF microwave link, Ethernet, etc., as long as it is configured to support/provide the required QoS. A backhaul network portion104is operative as an aggregation network for connecting the BBU pools108A/108B to a core network portion106, which may be based on a converged communications architecture, e.g., an Evolved Packet Core (EPC) architecture, that typically includes various components and elements such as one or more mobility management entities (MMEs), serving gateways (SGWs), packet data node (PDN) gateways (PGWs), policy/charging function nodes, etc., as is known in the art. One or more computing platforms150, which may be implemented as one or more standalone servers, data center nodes, and/or management nodes associated with a communications network or a portion thereof, generally shown as nodes152-1to152-K, may be provided in accordance with the teachings herein for purposes of one or more classes of embodiments of the present disclosure with respect to effectuating several aspects relevant to a C-RAN-based fronthaul network architecture, e.g., optimal cell allocation to different BBUs, determining coordination sets of different BBUs according to optimal partnerships, etc., as will be set forth in detail further below. To provide a contextual framework in which example embodiments of the present disclosure may be better appreciated, a general discussion pertaining to C-RAN is provided immediately as follows. One of the key aspects with respect to C-RAN from the point of view of centralization is the possibility of aggregating BBUs to form a BBU pool, as exemplified inFIG.1, which may be configured to take the advantage of data center processing capabilities, potentially including Big Data analytics capabilities. In addition, BBU utilization between heavily loaded and lightly loaded base stations disposed in disparate service areas may be optimized. 
In other words, BBUs from many, potentially geographically dispersed, sites may be placed at a centralized location, and connected within a BB hub (or, simply, a hub), while the RRUs may be placed at distances up to several kilometers away and connected to the BBUs/hubs via suitable I/Q links, e.g., as illustrated inFIG.1. Further, a BBU pool (e.g., BBU pools108A/108B inFIG.1) can utilize open platforms and real-time virtualization technology rooted in cloud computing to achieve dynamic shared resource allocation. Accordingly, it should be appreciated that BBU functionality and services in example network environment100may be moved to a virtual computing cloud to have a programmable architecture in additional embodiments. Further, in terms of coordination, C-RAN may be implemented as an advanced feature in certain Long Term Evolution (LTE) networks wherein cells may be grouped in clusters and connected to the same BBU to allow inter-cell coordination. Accordingly, improved coverage, spectral efficiency, system capacity and user experience, etc., may be advantageously achieved in example embodiments of the present disclosure by applying suitable design constraints with respect to cell allocation and/or optimal BBU coordination. Moreover, it will be appreciated that centralizing baseband processing and control of radio resources in a single entity facilitates implementing new technologies like integration of heterogeneous cells (e.g., low power nodes, small cells, macrocells, microcells, picocells, femtocells, etc.) and advanced features such as Carrier Aggregation (CA) much simpler and hence more manageable. As more frequencies are transmitted in the same geographical space, coordinated management of overlapping small cells and macrocells becomes essential, especially in a split fronthaul architecture depicted inFIG.1. In a CPRI-based fronthaul implementation, requirements for RRU-to-BBU latency typically impose a one-way 75 μs delay on the CPRI link, which translates into a maximum of 15 km between the RRUs (i.e., cell locations) and the BBU hub, assuming that the hub is populated with co-located or collocated BBUs. In a further arrangement, coordination among BBUs of the same BBU hub (i.e., intra-hub coordination) may also be achieved in certain embodiments. It is also possible to achieve coordination between BBU hubs located several kilometers away from each other, depending on latency requirements (i.e., inter-hub coordination). According to certain example embodiments of the present disclosure, such coordination among BBUs/BBU hubs may be implemented in an advanced feature set architecture referred to as Elastic-RAN (E-RAN), which may be configured to provide optimal partnerships among different BBUs across the entire network. Typically, maximum allowed BBU-to-BBU delay for E-RAN is 30 μs over E5 interface, which translates into a maximum separation of 5 km between BBU hubs in one implementation. As will be seen further below, accordingly, certain example embodiments are directed to optimizing capabilities such as Carrier Aggregation (CA) and Coordinated Multipoint (CoMP) over a unified coordination area irrespective of the BBU deployment scenario in the network by determining appropriate BBU partnerships and configuring BBU coordination sets in response thereto. It will be appreciated that a key benefit of C-RAN is the ability to leverage coordination to improve overall RF performance in a network. 
Example performance features that take advantage of improved radio coordination between cells and bands in an example embodiment may be set forth as follows. Handover or Mobility Management: Delay in performing inter-site handovers is reduced as it can be done inside the centralized unit instead of between base stations. Moreover, the general amount of signaling information sent to the core mobile network is reduced, after being aggregated in a single entity. Load Balancing or Traffic Management: On the BB hub side, it can be seen that BBUs already form one logical entity; therefore load balancing is a matter of assigning proper BBU resources within a pool. On the RRU/cells side, users can be switched between cells without constraints if the BBU pool has capacity to support them, as capacity can be assigned dynamically from the pool. Interference Management: Interference control techniques such as eICIC (enhanced Inter-Cell Interference Coordination) can benefit from the parallelized computational resources and increased processing power at the centralized BBU/hub. Carrier Aggregation (CA): This feature provides the ability to aggregate multiple LTE carriers together in providing service to an end station or user equipment (UE). Prior to C-RAN adoption, the main restriction for the CA operation is that the UE can only aggregate Component Carriers (CCs) served by the same base station because CCs must be perfectly synchronized. Additionally, small cells are typically configured to operate rather independently as they have their own on-board reference clock, which can give rise to a significant probability of two neighboring cells drifting apart. With the adoption of the C-RAN architecture, all baseband resources and CCs are hosted by a single entity, so it is relatively easy to have a single reference clock for all aggregated CCs, which simplifies joint scheduling. Thus, CA is supported between all cells connected to the same BBU hub, i.e. with both C-RAN and E-RAN architectures, which is advantageously enhanced in example embodiments directed to optimal cell allocation (which optimizes assignment of cells to the same BBU under certain design constraints) as well as in example embodiments directed to determining optimal BBU partners for coordination. Uplink Coordinated Multipoint Reception (UL CoMP in LTE-Advanced): The UL CoMP solution improves the quality, throughput and the capacity of the UL signal because the UE transmission is received via multiple cells that are geographically separated. Thus, the signal level is increased and the inter-cell interference (ICI) reduced by jointly processing these signals in a centralized platform. Downlink Coordinated Multipoint Transmission (DL CoMP in LTE-Advanced): In an example arrangement where the fronthaul connectivity is based on a cell allocation scheme of the present disclosure, C-RAN may be configured to analyze the interference situation on the DL from all the cells in the coordinated cluster (which may be defined responsive to the cell allocation scheme) and, based on that information, the scheduler decisions may be optimized accordingly. In other words, a DL CoMP solution relies on turning inter-cell interference into a useful signal. This increases the Signal Interference plus Noise Ratio (SINR) at the UE, which in turn leads to higher achievable bit rates. This technique is particularly useful to improve the performance at the edges of the cells since the SINR values are generally lower. 
Whereas one of the main challenges of C/E-RAN design is to find an optimal cell clustering and BB hub assignability with minimal overhead and maximum gain, example embodiments set forth below advantageously address such challenges by focusing on several performance goals and constraints within a computationally efficient process. By way of illustration, example embodiments may be based on, without limitation, one or more of the following: (i) cells should be optimally clustered to be assigned to one BB hub in order to achieve/maximize statistical multiplexing gain, facilitate CoMP and CA, but also prevent the BB hub and the fronthaul from overloading; (ii) a BB hub should support cells from different areas such as office, residential or commercial as well as cells of various types and kinds, such as small cells, macrocells or microcells, etc.; and (iii) intra-hub and inter-hub coordination among BBUs should be possible within the constraints such as latency, etc. Example embodiments may therefore consider constraints such as, e.g., the distance restrictions between RRUs and BBU/BB hub locations, BBU hardware/software resources, the number of available ports to connect various types of heterogeneous cells/nodes (e.g., macrocells, small cells, etc.), and having the possibility of cascading multiple small cells on the same port if they are collocated. Accordingly, in still further aspects, certain example embodiments are directed to optimizing a partner-BBU selection scheme in order to maximize the RF benefit of advanced E-RAN features such as, e.g., CA and CoMP. It will therefore be appreciated that at least a class of example embodiments set forth herein facilitate a fronthaul network (re)configuration or (re)arrangement wherein cells on the same BBU benefit from C-RAN advanced features, whereas cells on the same hub benefit from E-RAN advanced features, as far as they belong to the same coordination set or, in other words, they belong to partner BBUs. In certain example embodiments, a BBU can have up to a configurable number of partner BBUs, (e.g., six), with or without the restriction of BBU partnerships being reciprocal. Additional details with respect to an example BBU200and an example RRU300are illustrated inFIGS.2and3, respectively, each being representative of one or more BBUs and one or more RRUs that may be disposed in a fronthaul network portion (e.g., the fronthaul network portion102shown inFIG.1) for purposes an embodiment of the present patent disclosure. BBU200may include one or more processor modules202and one or more memory modules204for effectuating various baseband processing functionalities206that may be organized under L1-L3 layer functional blocks222,224,226, generally relating to coding, modulation, Fast Fourier Transform (FFT), etc. As illustrated, Radio Resource Control (RRC) functionality228and Media Access Control (MAC) functionality230are exemplified as L3 and L2 layer blocks, respectively. L1 layer functional block222is exemplified with several modules including CoMP module232, eICIC module234, channel coding/decoding module236, quantization/de-quantization module238, multi-input-multi-output (MIMO) antenna mapping module240, resource block mapping module242, sampling/de-sampling module244, modulation/demodulation module246, FFT/Inverse FFT (IFFT) module248as well as IQ protocol module250. 
An input/output (I/O) module208is operative to support a plurality of fronthaul ports210-1to210-M for effectuating suitable IQ links (e.g., based on CPRI, OBSAI or ORI) with a plurality of RRUs, wherein each port may be configured to support uplink (UL) and/or downlink (DL) communications in an example arrangement. Although not specifically shown inFIG.2, a CS database populated with suitable BBU partnership data may be included in a BBU (e.g., per hub). Additionally, one or more backhaul interfaces may also be provided as part of BBU200for effectuating connectivity with an aggregation network. Example RRU300is broadly concerned with providing an interface between the IQ fiber link and radio connectivity to one or more UEs (not shown in this FIG.), and is generally responsible for digital processing, frequency filtering and power amplification, etc., generally illustrated as an assembly of modules302. An antenna318is operatively coupled to a frequency filter and power amplification module316, which is coupled to an analog-to-digital converter (ADC)314and a digital-to-analog converter (DAC)315for uplink and downlink communications, respectively. ADC314and DAC315are coupled to a crest factor reduction (CFR) and digital predistortion (DPD) module312as well as respective sampling rate conversion (SRC) and digital up/down conversion (DUC)/DDC) modules, as exemplified by SRC308A/B, DDC310B and DUC310A. An IQ protocol module306is operatively coupled to one or more I/O modules304to effectuate IQ communications via suitable IQ links to one or more BBUs depending on cell allocation, e.g., during dynamic or static (re)configuration of a fronthaul network portion. FIG.4Adepicts a block diagram of an apparatus, node, network element, or server that may be configured as a platform400A (e.g., platform150or any node therein as shown inFIG.1) to determine a cell allocation map with respect to an example fronthaul network portion for purposes of a class of embodiments of the present patent disclosure. Preferably, platform400A may be configured with one or more processor modules402operating in conjunction with suitable software/hardware/firmware modules, including a persistent memory having program instructions for executing a cell allocation module404to provide a novel apparatus and method for automatically assigning radio cells to hubs and to the BBUs connected to the hubs, with the target of maximizing the benefit of advanced C-RAN and E-RAN features while minimizing the number of consumed BBUs and hubs and respecting a list of user restrictions. By way of illustration, a number of inputs are provided, generated or otherwise obtained, to effectuate an example cell allocation process, inter alia: (i) a structure identifying overlapping traffic between every pair of cells in the network to configure; (ii) physical information per cell and per hub; (iii) hardware constraints (e.g., number of hubs/BBUs, ports and latency requirements, etc.); and (iv) a list of preassigned cells to each hub, wherein an empty list means no preassigned cell. 
InFIG.4A, the foregoing inputs and constraints are shown as a list or database structure of cells406(which may include cell identities, respective physical locations in terms of latitude/longitude, port identities, etc.); a list or database structure of BBU hubs408(which may include hub identities, respective physical locations, etc.); a list or database structure of preassigned cells410; a list or database structure of overlapping traffic412(which may be obtained or generated from network planning tools, drive tests, call traces, etc.); a set of hardware constraints414(e.g., number of hubs, BBUs per hub, ports per BBU, maximum cell-to-hub distance, latency, etc.); an affinity matrix or database structure416(which identifies user restrictions, cell restrictions, etc.). A cell allocation map418may be obtained by executing the cell allocation process, which identifies the mapping of various cells to one or more hubs and individual BBUs therein. At a high level, an example cell allocation process may comprise three steps or sub-processes that may be executed iteratively:(i) Step 1: Hub selection. Higher priority is given to hubs with preassigned cells, and then to hubs with most availability (e.g., hubs with higher available gaps, where a gap may be determined as the minimum value between the available number of cells and the available number of ports, i.e., port-to-cell gap);(ii) Step 2: Cell allocation to selected hub. Once a hub is selected, it is fully filled before continuing with the next hub. Preassigned cells are assigned first, and then those that can only be allocated to the current hub due to latency requirements. Finally, cells with highest overlapping with the existing cells in the hub are added to the hub, until it is full; and(iii) Step 3: Cell allocation to BBUs in selected hub. The cells assigned to a hub are allocated to the BBUs connected to that hub before continuing assigning cells to the next hub, with the following conditions: Preassigned cells are allocated to their BBUs first. Then, those BBUs are filled with the unallocated cells in the hub with highest overlapping. Finally, BBUs with no preassigned cells are filled with unallocated cells in a sequential way, i.e., a BBU must be full before starting assigning cells to another BBU: BBUs are firstly assigned the pair of unallocated cells in the hub with highest overlapping between them. Then, unallocated cells are added one-by-one, selecting those with maximum overlapping with respect to the cells already assigned to the BBU first. Advantageously, an example embodiment may use smallest distance to hub as a criterion to select the best cell in case of equal overlapping value during Step 2 (cell allocation to selected hub) and Step 3 (cell allocation to BBUs). Advantageously, an example embodiment may include the current or forecasted traffic measurements per cell as additional inputs, and consider them as an extra capacity restriction to avoid exceeding the limits per BBU and per hub. Advantageously, an example embodiment may use a square symmetric affinity matrix as input to consider user restrictions to the allocation of every pair of cells to the same BBU, whereinAffinity 0: Pair of cells forced not to belong to the same BBU;Affinity 1: Pair of cells with freedom to belong to the same BBU or not;Affinity 2: Pair of cells forced to belong to the same BBU. These cells may be consolidated into single cell groups and assigned jointly to the hub. 
Advantageously, an example embodiment may use an extra affinity matrix to add user constraints on the allocation of cells to the same hub. FIG.4Bis a flowchart of various blocks, steps and/or acts that may be (re)combined in one or more arrangements, with or without additional flowcharts of the present disclosure, for effectuating network connectivity configuration in a fronthaul network portion based on cell allocation according to one or more embodiments of the present patent disclosure. In one embodiment, process400B may commence by obtaining, providing, or otherwise configuring a plurality of input variables and design constraints with respect to a fronthaul network (block452). At block454, a BBU hub is selected from the group of BBU hubs responsive to a determination with respect to at least one or more input variables and design constraints (e.g., heuristic-based determination process). At block456, at least a portion of the cells are allocated to the selected BBU hub until the selected BBU hub is determined to be full with respect to a capacity parameter (e.g., port utilization). Thereafter, the cells allocated to the selected BBU hub are allocated or assigned to individual BBUs (e.g., sequentially) within the selected BBU hub (block458). Preferably, the acts of selecting BBU hubs, allocating cells to each of the selected BBU hubs sequentially and assigning the allocated cells to individual BBUs sequentially within each selected BBU hub, respectively, may be repeated in a nested-iterative process (i.e., a sequential nested iterative process) until there remain no unallocated cells (block460). Responsive thereto, a cell allocation map is thereby obtained, determined and/or otherwise provided, which identifies allocations of the plurality of cells/RRUs with respect to the selected BBU hubs and the individual BBUs therein (block462). As set forth at block464, appropriate fronthaul connections between the plurality of cells and selected BBU hubs may be configured (e.g., automatically, on-demand, on-schedule, or operator-triggered, etc.) responsive to the cell allocation map so as to optimize one or more key performance indicators (KPIs) in the fronthaul network portion with respect to at least one of radio frequency (RF) capacity, network throughput, per-BBU port utilization, and inter-cell interference, etc. Additional details and/or further variations with respect to an implementation of the foregoing embodiments are set forth in the following sections. As noted above, example embodiments are preferably configured to assign radio cells to hubs and BBUs with the target of maximizing the benefit of advanced C-RAN and E-RAN features while minimizing the hardware resources and respecting a configurable list of design restrictions. For purposes of an example implementation, hardware resources may represent the required number of BBU and hub units, wherein examples of advanced E-RAN features comprise Carrier Aggregation and UL CoMP and an example of advanced C-RAN feature comprises DL CoMP. As illustrated inFIGS.4A/4B, an example embodiment of the process may be implemented as an executable software module or program on a general-purpose computer machine (e.g., personal computer, computing server, data center node, management node, etc.), using one or more following inputs, without limitation: A list of cells in the network, including physical information: latitude and longitude per cell and per hub, and port utilization per cell. 
Port utilization is normally 1 for macrocells, but it might be lower for collocated small cells that can be cascaded on the same port. In these cases, the port utilization is equal to 1 divided by the maximum number of collocated small cells that can be cascaded on the same port. An overlapping matrix A: In accordance with the teachings of the present patent disclosure, a unidirectional overlapping matrix A^u may be defined as a square matrix with size equal to the number of cells in the network, of which the elements A^u_{j,k} ∈ [0,1], wherein the values in the range [0,1] represent the ratio of traffic in which cells #j and #k have a predetermined or predefined good service level, compared to the traffic in which cell #j has a good service level. In an example implementation, service level can represent, for example, coverage and/or signal quality parameters, which may be determined as at least one of a Reference Signal Received Power (RSRP) value over a threshold (e.g., a first threshold), a Reference Signal Received Quality (RSRQ) value over a threshold (e.g., a second threshold), and a Reference Signal to Interference-plus-Noise Ratio (RS-SINR) value over a threshold (e.g., a third threshold). The input overlapping matrix A may be obtained as a symmetric square matrix derived from A^u, of which the elements A_{j,k} are a measure of the mutual service level overlapping between cells #j and #k, defined as follows:

A_{j,k} = A_{k,j} = (A^u_{j,k})^2 + (A^u_{k,j})^2          Eqn. (1)

where A^u is non-symmetric, i.e., A^u_{j,k} ≠ A^u_{k,j}. In an example embodiment, overlapping values can be obtained from RSRP and RS-SINR maps generated by planning/predicting tools or obtained through drive tests or call traces. Yet another multi-dimensional input variable may comprise a list of hubs that identifies, e.g., where to allocate cells, including physical information: latitude and longitude per hub, as noted above. Yet another multi-dimensional input variable may comprise a list of preassigned cells to each hub. Various other inputs comprise hardware constraints and user restrictions as also noted previously. In one arrangement, user restrictions may comprise cell allocation restrictions based on a list of cells that must belong to the same BBU, a list of cells that must belong to different BBUs, and a list of cells that may be allowed to belong to the same BBU or not. An affinity symmetric matrix with size equal to the number of cells in the network may be defined wherein values between a pair of cells #j and #k may be defined as follows: Affinity value of "x" if the cells #j and #k are forced to belong to different BBUs; Affinity value of "y" if the cells #j and #k have freedom to belong to the same BBU or not; and Affinity value of "z" if the cells #j and #k are forced to belong to the same BBU. As previously noted, affinity values {x}, {y} and {z} may take on values "0"; "1"; and "2" in an example implementation. One skilled in the art will recognize upon reference hereto that various other types of variables, e.g., discrete variables, continuous variables, etc., may be used for defining inter-cell affinities between a pair of cells for purposes of an embodiment of the present patent disclosure. Further, the cell allocation restrictions may be particularized based on user settings in an example implementation, for instance, including a determination as to whether collocated cells must be forced or not to belong to the same BBU.
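As a non-limiting sketch, the derivation of the symmetric overlapping matrix A of Eqn. (1) from the unidirectional matrix A^u may be expressed in Python as follows (NumPy is assumed to be available; the function name is illustrative only):

```python
import numpy as np

def mutual_overlap(a_uni: np.ndarray) -> np.ndarray:
    """Return the symmetric overlapping matrix A of Eqn. (1), given the
    unidirectional matrix A^u whose entries lie in [0, 1]."""
    if a_uni.shape[0] != a_uni.shape[1]:
        raise ValueError("A^u must be square (one row/column per cell)")
    # A[j, k] = A[k, j] = (A^u[j, k])**2 + (A^u[k, j])**2
    # The diagonal (a cell with itself) is not used by the allocation steps.
    return a_uni ** 2 + a_uni.T ** 2
```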
Still further, an allocation strategy may be implemented in an example embodiment for purposes of later allocation of cells to BBUs when estimating the number of cells per hub. By way of illustration, such strategy may be configured as a rules-based strategy wherein the following rules are exemplary:
Rule 1: Macrocell "as is" and small cells on a different BBU;
Rule 2: Macrocell "as is" and small cells on a same BBU; and
Rule 3: Free mixing of macrocells and small cells.
Preferably, an example cell allocation process is configured to assign cells to the hubs sequentially, i.e., a hub must be completely full (e.g., with respect to certain predetermined capacity such as port utilization) before adding cells to any other hubs. Once a hub is full, the allocated cells are further assigned to the individual BBUs of the hub, also sequentially. Skilled artisans will appreciate that the two-stage sequential approach set forth herein provides and facilitates various aspects of hardware resource optimization, e.g., with respect to the number of BBUs/hubs needed for configuring a given number of cells in the network. Set forth below are further details with respect to the three high level sub-processes (or, Steps) as previously noted.
Step 1: Hub Selection
The process may be configured to fill those hubs with at least one preassigned cell first. If there is more than one hub with at least one preassigned cell, priority is given to the hub with the highest number of gaps, determined as the minimum value between the available number of cells and the available number of ports. Then, it will fill those hubs with non-preassigned cells, again with higher priority to hubs with a higher number of available cells per hub. After selecting a hub, the process will continue with STEPS 2 and 3 before selecting the next hub.
Step 2: Allocation of Cells to the Selected Hub
Once a hub is selected, it is fully filled before continuing with the next hub. The selection of the first cell (or group of cells if they have affinity 2) for the current hub is done with special conditions, following the next sub-step:
STEP 2.1: Preassigned cells have the highest priority. All preassigned cells will be automatically assigned to their hub. Unallocated cells that can only be allocated to the current hub due to the maximum distance restriction will also be automatically assigned to it. If there are no cells allocated to a hub using the previous criterion, the first allocated cell will be the closest cell to that hub.
The selection of the next cell or group of cells to be allocated to the current hub is done following the next sub-steps:
STEP 2.2: The process is configured to create the list of candidate cells that can be connected to the hub according to distance and/or some relevant cost function based thereon.
STEP 2.3: The process is configured to select the candidate cell (or group of cells with affinity 2 among them) with the lowest number of candidate hubs to which they can be connected according to the distance/cost function. In case of having more than one cell (or group of cells with affinity 2) with the same lowest number of candidate hubs, select the cell or group of cells with the highest overlapping with the rest of the cells already allocated to the hub. The overlapping between a cell and a group of cells, or between two groups of cells, may be determined as the sum of all the overlapping values as defined in Equation (1), associated with all possible combinations of pairs of cells formed by the cells in the first and the second group.
In case of equal overlapping, select the closest cell/group to that hub.
STEP 2.3 is repeated until the hub is full.
Step 3: Allocation of Cells to BBUs in Selected Hub
The cells assigned to a hub are allocated to the BBUs connected to that hub before continuing assigning cells to the next hub. It should be noted that in an example embodiment, if the available C-RAN features among cells connected to the same BBU only allow intra-frequency coordination (e.g., DL CoMP), overlapping equal to 0 must be assumed between pairs of cells operating on different frequency carriers. The allocation of cells to the BBUs in a hub may be performed in three sub-steps:
STEP 3.1: Allocation of preassigned cells to their BBUs. When using an allocation strategy based on rule 1 or 2 set forth above, groups of unallocated collocated macrocells are also assigned to their own empty BBU within the hub according to an example embodiment.
STEP 3.2: Allocation of cells to BBUs that contain preassigned cells. In this sub-step, the process allocates cells to BBUs with preassigned cells (or with macrocells, in case of rules 1 and 2). In case of an allocation strategy based on rule 1, BBUs with macrocells cannot host any other cell, and they would be skipped in this step. This is done by the process upon finding the unallocated cell (or groups of cells if they have affinity 2) in the hub with the highest overlapping with any of the BBUs. The overlapping between a candidate cell/group and a BBU is the sum of the overlapping values between all pair combinations of the candidate cell/group and the cells already assigned to that BBU. The candidate group can be divided into subgroups of candidate cells that match the available gaps in the BBUs. The process is repeated until all the BBUs with preassigned cells (or macrocells, in case of rules 1 or 2) are full.
STEP 3.3: Allocation of cells to BBUs with no preassigned cells. In this sub-step, the process allocates cells to the rest of the BBUs.
STEP 3.3.1: Selection of the initial cell/group. This is executed by the process upon selecting the pair of cells/groups with the highest overlapping and assigning them to the first empty BBU. This is not necessary if there are remaining cells from the previous BBU during STEP 3.2.
STEP 3.3.2: The process is configured to continue assigning cells to the BBU one-by-one, until it is full. Selection is based on maximum overlapping with respect to the cells already assigned to the BBU. In case of equal overlapping, the process selects the cell/group with the smallest average distance (or a minimal cost function) to the cells already assigned to the BBU. Once the BBU is full, the process iterates to STEP 3.1 again, and repeats the loop. If the last group needs more gaps than available in the BBU, the group may be divided into two subgroups, and the remaining cells are assigned to the next empty BBU as its initial cell/group in STEP 3.1. The subgroup assigned to the current BBU is the combination of cells in the group with maximum overlapping that matches the remaining gaps.
Once all BBUs are full, the process iterates to STEP 1 again to select the next hub. If there are pending cells in the hub with no assigned BBUs, the process deallocates them from the hub, so that they can be assigned to other hubs. One skilled in the art will recognize that such a situation can arise due to other hardware limitations different from the number of cells per BBU, such as the number of ports per BBU or the maximum traffic per BBU or hub.
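For purposes of illustration only, the scoring primitives used in Steps 1 through 3 above — the port-to-cell gap of a hub and the overlapping between a candidate cell/group and a set of already-allocated cells — may be sketched in Python as follows; the helper names, the tie-breaking signature, and the distance bookkeeping are illustrative assumptions rather than a definitive implementation:

```python
import numpy as np

def hub_gap(available_cells: int, available_ports: int) -> int:
    """Port-to-cell gap used to prioritize hubs in Step 1: the minimum of the
    remaining cell capacity and the remaining port capacity of the hub."""
    return min(available_cells, available_ports)

def group_overlap(a: np.ndarray, group_a, group_b) -> float:
    """Overlapping between two cell groups: the sum of A[j, k] of Eqn. (1)
    over all pairs (j in group_a, k in group_b)."""
    return float(sum(a[j, k] for j in group_a for k in group_b))

def pick_next_group(a: np.ndarray, candidates, allocated, distance_to_hub):
    """STEP 2.3 / STEP 3.3.2 style selection: prefer the candidate cell/group
    with the highest overlapping towards the cells already allocated, breaking
    ties by the smallest average distance to the hub."""
    def rank(group):
        overlap = group_overlap(a, group, allocated)
        avg_distance = sum(distance_to_hub[c] for c in group) / len(group)
        return (-overlap, avg_distance)   # maximize overlap, then minimize distance
    return min(candidates, key=rank)
```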
Turning attention toFIGS.5A and5B, depicted therein are flowcharts of various blocks, steps and/or acts that may be (re)combined in one or more arrangements, with or without additional flowcharts of the present disclosure, for determining cell allocation according to one or more embodiments of the present patent disclosure wherein no user restrictions (e.g., such as those described above) are assumed. Skilled artisans will however recognize that various process inputs and other constraints set forth in the foregoing sections are equally applicable with respect to the overall process flow shown inFIGS.5A and5B. Example process500A commences with hub selection (blocks502and504), consistent with the embodiments described above. Once a hub is selected, a determination is made as to whether there are any preassigned cells to the selected hub, as set forth at block506. If so, the preassigned cells are allocated to the selected hub (block508). Otherwise, the closest cells or cells with a minimum cost function relative to the selected hub are allocated (block510). At block512, a plurality of candidate cells satisfying a predetermined requirement (e.g., distance, latency, or some other cost function) are determined. Candidate cells with least number of hubs are then determined (block514). If there is only one candidate cell with a least number of hubs, it is allocated to the selected hub (blocks516,518). Otherwise, a candidate cell with a least number of hubs and having a maximal overlapping with the already assigned cells in the selected hub is allocated (block520). At block522, a determination is made as to whether there are any remaining unassigned cells. If so, a further determination is made to determine whether the selected hub is full (block524). If the selected hub is full, control of example process500A flows to process500B, which may be executed to sequentially allocate the assigned cells to the individual BBUs in the selected hub (block526). As illustrated inFIG.5A, flow control loops back to the hub selection process (block504) upon completion of the sequential intra-hub allocation process of block526, thereby (re)commencing the process of sequentially selecting (any) additional hubs. On the other hand, if there are remaining unassigned cells and the selected hub is not full (blocks522,524), flow control loops back to block514to find candidate cells with least number of hubs subject to maximum distance/cost function constraints as before. If there are no remaining cells (block522), the cells assigned to the selected hub are allocated to the individual BBUs therein (block525), essentially similar to block526, except that the process flow exits upon executing the sequential intra-hub allocation process, as exemplified by block527. Process500B ofFIG.5B, which illustrates an implementation of sequential allocation of cells to individual BBUs within a hub, may commence from either one of two entry flow points from process500A, as shown at blocks525,526ofFIG.5A. At block554, cells preassigned to specific BBUs within the hub are identified and preferentially allocated. Unallocated cells in the hub are determined and assigned to a best BBU therein based on, e.g., overlapping (blocks556,558). If there are no unassigned cells in the hub (block560), the process flow exits (block580), and may return to either block527(for completion of the overall cell allocation process) or block504(for selection of additional hubs) of process500A, as discussed above. 
If there are remaining unassigned cells in the hub (block560), a further determination is made as to whether all BBUs hosting preassigned cells are full (block562). If not, the process flow returns to block556to find or determine unallocated cells and allocate them to a best BBU(s) as before (block558). On the other hand, if all the BBUs with preassigned cells are full (block562), a still further determination is made as to whether there are any empty BBUs in the hub (block564). If there are no empty BBUs, the pending cells are deallocated from the current hub (block566) and the process flow exits (block580) so as to return to hub selection inFIG.5A. If it is determined that there are empty BBUs in the current hub, cell allocation may be made based on determining a new BBU and assigning a pair of unallocated cells having the highest overlapping in the hub (block568). If there are still unassigned cells remaining (block570), they may be assigned to the current BBU based on overlapping (block572). This process may be repeated in a loop until the current BBU is full, as set forth at blocks574,576. If the current BBU is full (block576), the process flow returns to determining whether there are any remaining empty BBUs in the hub (block564). In one embodiment, for any remaining unallocated cells, such remaining unallocated cells may be allocated to BBUs having no preassigned cells one-by-one, taking into account at least one of (i) any overlapping between a candidate cell to be allocated and cells already assigned to the BBU; and (ii) a port-to-cell gap associated with the BBU, for example, as part of blocks568,572. If there are no unassigned cells in the hub, as determined either at block570or block574, the process flow exits (block580) so as to return to an appropriate action inFIG.5A. Based on the foregoing, it can be appreciated that a hierarchical (or, nested or multi-staged) sequential allocation process as set forth above to fill the hubs and the BBUs therein allows a determination of optimal allocation of cells while ensuring that only a minimum amount of resources will be necessary to service the cells in a network. Accordingly, the total cost of ownership (TCO) comprising capital expenditures (CAPEX) and operating expenditures (OPEX) relating to the hardware is advantageously minimized. On the other hand, the target of maximizing the advantages of advanced C-RAN/E-RAN features in the design and deployment of a fronthaul network continues to be maintained in configuring the cellular-BBU connectivity based on the cell allocation data that is determined according to an embodiment of the present patent disclosure. Skilled artisans will further recognize that embodiments set forth herein address and overcome the following deficiencies, without limitation, regarding current C-RAN/E-RAN implementations. For example, existing solutions pay most attention to boosting the radio benefit of the E-RAN advanced features. However, they give less priority to the cost associated with hardware unit deployment. Whereas a current solution allows fast cell allocation, it uses the number of hardware units (BBUs or hubs) as a hard, fixed constraint input. This means that all the available BBUs are allocated, regardless of whether they are really needed or not. Additionally, cells are assigned individually, not considering the possibility of extra user restrictions such as forcing certain cells (e.g.
collocated) to belong to the same clustering group, or preventing non-collocated cells from belonging to the same clustering group. This solution has also the potential problem of ending up with unassigned cells that cannot be allocated to hubs within the maximum distance because they are already full, whereas an earlier allocation of those cells with fewer suitable hubs within the maximum distance range would prevent the problem from happening. Another solution addresses some user constraints, but those related to minimizing hardware cost are considered as soft requirements. In other words, they are not treated as targeted constraints required to be fulfilled. It will be further appreciated that embodiments set forth herein may be specifically configured to provide and/or facilitate one or more following features, inter alia. An example embodiment may be configured to provide radio cell to hub mapping and radio cell to BBU mapping that maximize RF capacity, as well as BBU and hub utilization. The sequential allocation approach targets at maximizing the benefit of the advanced C-RAN and E-RAN solutions but gives operators the possibility to put higher priority on minimizing the number of required BBUs and hubs, and therefore the total costs associated with hardware unit deployment as noted above. An example embodiment may be configured to provide a reduction in the probability of ending up with unassigned cells that cannot be allocated to any hubs due to latency requirements because they are already full. This is possible by prioritizing the allocation of cells with fewer hubs within their maximum distance. An example embodiment may be configured to support high flexibility for addition of user restrictions. It can be seen that the rule-based use restrictions described above advantageously support various strategies to allocate different types of cells to hubs, based on real requests/needs from customers, in additional/alternative embodiments. For example, the rule relating to “Macrocell “as is” and small cells on different BBU” comports well with the case in which the macrocells are still deployed in legacy architecture (e.g., Distributed RAN or D-RAN), and the operator would like to keep them on the same DU/BBU and add new small cells into new BBUs. Likewise, the rule relating to “Macrocell “as is” and small cells on same BBU” is amenable to an implementation where the macrocells are still deployed in legacy architecture (e.g., D-RAN), and they will be migrated to BBUs in C/E-RAN architecture, together with new small cells. Still further, the rule relating to free mixing of macros and small cells allows maximum flexibility in a fronthaul design, deployment and (re)configuration in accordance with an embodiment of the present invention. The example cell allocation process is also configurable or extensible in that additional user restrictions can be added as needed depending on future implementations and requirements, e.g., forcing collocated cells or cells belonging to user-defined groups to belong to the same/different hub/BBU. By way of illustration, an example of group of cells that must belong to the same hub are cells on the same side of a city divided by a river or some other natural/artificial boundary. Likewise, certain cells can be forced to be preassigned to particular BBUs and/or hubs depending on a particular operator's implementation. 
In further aspects, another class of embodiments of the present patent disclosure is directed to systems, methods and non-transitory computer-readable storage media with respect to configuring BBUs for facilitating inter-BBU coordination (e.g., inter-site baseband connectivity) in order to better leverage one or more advanced E-RAN features in a network implementation. As noted elsewhere in the present patent disclosure, baseband coordination is a key element in achieving high network performance, and an E-RAN implementation in the fronthaul may be configured to extend KPIs such as, e.g., the user experience, network throughput, and efficiency benefits of coordination, etc., across the entire network. It should be appreciated that in such an implementation based on E-RAN, every BBU may be configured to coordinate/cooperate with any adjacent one, whether in a centralized, distributed or hybrid network architecture, based on configurable BBU partnerships or coordination sets (CSs). Further, such highly flexible implementations not only support hyper-scalable architectures but also help advance the operator's migration to Cloud RAN. Whereas the benefits of most centralized baseband deployments are contained to a specific area in the existing implementations, it will be seen that example embodiments relating to E-RAN optimization advantageously provide a scalable architecture based on generating intelligent BBU partnerships, wherein optimal basebands are interconnected through high-performance transport networks (e.g., Ethernet), enabling the complete network to operate as one unified coordination area. An E-RAN implementation according to an example BBU partnership configuration scheme of the present disclosure can therefore ensure that capabilities such as Carrier Aggregation (CA) and CoMP may be extended to improve the user's application coverage network-wide irrespective of the baseband deployment scenario. FIG.7Ais a block diagram of an apparatus, node, network element, or server that may be configured as a platform700A (e.g., platform150or any node therein as exemplified inFIG.1) to determine BBU coordination sets with respect to a fronthaul network portion for purposes of an example embodiment of the present patent disclosure. Similar to the platform400A shown inFIG.4A, platform700A may be configured with one or more processor modules702operating in conjunction with suitable software/hardware/firmware modules, including a persistent memory having program instructions for executing a CS generation module704to provide a novel apparatus and method to select and/or assign partner BBUs (which may or may not be in the same hubs), with the target of maximizing the benefit of advanced E-RAN features. By way of illustration, a number of inputs are provided to effectuate an example CS generation process (also referred to as BBU partnership generation): (i) a list of cells assigned to each BBU (block706); (ii) a maximum number of partner BBUs allowed per BBU (block708); and (iii) a matrix or other suitable data structure for identifying the traffic overlapping between every pair of cells per BBU in the network to configure (block710). In one implementation, an example CS generation process may be executed separately for every hub in the network. Further, an example CS generation process may be executed separately for every advanced E-RAN feature being configured for the network (e.g., CA, UL CoMP, etc.).
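For purposes of illustration only, the three inputs identified at blocks 706, 708 and 710 may be carried in a simple container such as the following Python sketch; the class and field names are hypothetical and do not form part of any disclosed implementation:

```python
from dataclasses import dataclass
from typing import Dict, List
import numpy as np

@dataclass
class CsGenerationInputs:
    """Inputs to an example CS (BBU partnership) generation process.

    cells_per_bbu : list of cell indices assigned to each BBU (cf. block 706)
    max_partners  : maximum number of partner BBUs allowed per BBU (cf. block 708)
    cell_overlap  : traffic-overlapping matrix between every pair of cells,
                    per E-RAN feature being configured (cf. block 710)
    """
    cells_per_bbu: Dict[int, List[int]]
    max_partners: int
    cell_overlap: np.ndarray
```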
Additionally, it should be appreciated that per-BBU cell assignment may be obtained in a number of ways and an example CS generation process of the present disclosure may be practiced regardless of how the cell assignment in a network has been configured. Accordingly, in one embodiment, an example CS generation process may be configured to operate based on the cell allocation scheme described previously, although it is not a requirement. Similar to the teachings set forth above, various inputs and constraints to the example CS generation module704may comprise, inter alia, one or more per-BBU cell lists, with physical information per cell and per hub; hardware constraints (e.g., number of hubs/BBUs, ports, latency requirements, etc.); a list or database structure of traffic overlapping, which may be obtained or generated from network planning tools, drive tests, call traces, etc.; as well as inter-BBU and/or inter-hub distances and related cost functions, and so on. Preferably, a CS or partnership map712may be obtained by executing the CS generation process, which identifies a set of partnering BBUs for each BBU of the hub/network, and/or for one or more E-RAN features, either alone or taken in any reasonable combination thereof. At a high level, an example CS generation process may comprise three main steps or sub-processes that may be executed iteratively:
(i) Step 1: Reduction of the traffic overlapping matrix per pair of cells to a traffic overlapping matrix per pair of BBUs, by aggregating the values associated with the cells of the same BBU;
(ii) Step 2: Creation of a list of candidate pairs of BBUs, also referred to as candidate partnerships, sorted in a particular order, e.g., in descending order of traffic overlapping; and
(iii) Step 3: Selection of the partnerships. Partnerships are selected sequentially, starting from the beginning of the sorted/ordered list of candidate partnerships. The selection of a partnership indicates mutually adding each of the two BBUs to the CS of the other BBU. In one example embodiment, a partnership may be discarded if its addition indicates exceeding the maximum number of partners allowed per BBU.
Advantageously, an example embodiment may be configured to execute a fourth step or sub-process (e.g., optionally or additionally) for final fine-tuning of a generated CS/partnership set in case at least one candidate partnership was discarded and not all CSs are full:
(iv) Step 4: Final fine-tuning. This step comprises finding combinations of a certain number of discarded partnerships (e.g., two discarded partnerships) to replace partnerships selected in Step 3, for which the following conditions are satisfied: (a) removing a partnership selected in Step 3 makes adding two discarded partnerships possible in terms of the maximum number of partners per BBU; and (b) the sum of the traffic overlapping values of the added partnerships is higher than or equal to the traffic overlapping value of the removed partnership.
Advantageously, an example embodiment may be configured to execute Step 4 by also searching for combinations of more than two discarded partnerships to replace combinations of more than one partnership. Advantageously, an example embodiment may be configured to execute the process for more than one hub, if hubs are close enough (e.g., within the latency and/or cost function requirements) to make E-RAN coordination between their BBUs possible.
On the other hand, the case of BBUs of different hubs that cannot be coordinated due to exceeding the maximum distance to guarantee the RRU-to-BBU latency requirements can be considered by setting BBU overlapping equal to “0” according to one example implementation. FIG.7Bis a flowchart of various blocks, steps and/or acts that may be (re)combined in one or more arrangements, with or without additional flowcharts of the present disclosure, for configuring BBU coordination in a fronthaul network portion according to one or more embodiments of the present patent disclosure. In one embodiment, process700B may commence by obtaining, providing, or otherwise configuring a plurality of input variables and design constraints with respect to one or more E-RAN features to be configured for a plurality of BBUs, which may be organized in one or more BBU hubs of a fronthaul network (block752). At block754, a BBU overlapping matrix may be generated based on a cell overlapping matrix associated with a selected network (E-RAN) feature, e.g., for each BBU hub or for at least portion of the fronthaul network. At block756, a sorted/ordered list of candidate BBU partnerships may be generated. Thereafter, BBU partnerships from the sorted/ordered list of candidate BBU partnerships may be selected sequentially/iteratively, e.g., responsive to at least part of the input variables and/or design constraints in order to generate CS groupings of the BBUs (i.e., one set of partners for each BBU), as set forth at block758. Optionally, the CS groupings may be refined or fine-tuned (block760), where at least one candidate partnership was discarded in the sequential selection process and not all CSs are complete (i.e., a CS has fewer BBU partners than a maximum of BBU partners allowed per each BBU). In one embodiment, a reciprocity relationship between the BBU partners may be defined and imposed/applied at block760in order to achieve the fine-tuning. Thereafter, the process flow may be completed and the CS groupings may be obtained, generated and/or otherwise provided, whereupon the BBUs may be configured to effectuate suitable control message communications according to the partnerships for facilitating optimization of one or more KPIs and E-RAN features among the BBU partnerships, e.g., carrier aggregation, UL CoMP, etc. Also, hardware/software resources of BBUs may be configured based on the CS groupings to enable tight cooperation and inter-site radio resource coordination (block762). Additional details and/or further variations with respect to an implementation of the foregoing embodiments are set forth in the following sections. As previously noted, example embodiments may be configured to assign reciprocal and/or nonreciprocal partner BBUs to each BBU of a fronthaul network with the target of maximizing the coordination benefit of advanced E-RAN features. Allocation of cells to BBUs and cells to hubs may be performed in a number of ways, including but not limited to the embodiments set forth previously. As illustrated inFIGS.7A/7B and related Figures described below, an embodiment of the process may be executed through a software program on a general-purpose computer machine (personal computer or computing server, data center node, management node, or a network element or node, etc.). In one example embodiment, the process may be executed separately for every hub, and for every advanced E-RAN feature. In this description, two particular E-RAN features will be considered by way of illustration: CA and UL CoMP. 
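As noted above, BBU pairs belonging to different hubs that cannot satisfy the distance/latency requirement may simply be assigned a zero overlapping value so that they are never selected as partners. A minimal Python sketch of such masking is given below, under assumed names and data structures (a per-BBU hub index and a hub-to-hub distance matrix):

```python
import numpy as np

def mask_uncoordinatable_pairs(bbu_overlap: np.ndarray, hub_of_bbu,
                               hub_distance, max_distance: float) -> np.ndarray:
    """Set the BBU-to-BBU overlapping to 0 for pairs whose hubs are farther
    apart than the maximum distance allowed by the RRU-to-BBU latency budget,
    so that such pairs are never selected as partners."""
    masked = bbu_overlap.copy()
    n = bbu_overlap.shape[0]
    for l in range(n):
        for m in range(n):
            h_l, h_m = hub_of_bbu[l], hub_of_bbu[m]
            if h_l != h_m and hub_distance[h_l][h_m] > max_distance:
                masked[l, m] = 0.0
    return masked
```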
The process may be configured to use the following inputs, including but not limited to, for every particular execution:
(i) List of cells assigned to each BBU connected to the hub.
(ii) Maximum number of partner BBUs per BBU.
(iii) An overlapping matrix A associated with the particular E-RAN feature: In accordance with the teachings of the present patent disclosure, a unidirectional overlapping matrix A^u may be defined as a square matrix with size equal to the number of cells in the hub or network, of which the elements A^u_{j,k} ∈ [0,1], wherein the values in the range [0,1] represent the ratio of traffic in which cells #j and #k have a predetermined or predefined good service level, compared to the traffic in which cell #j has a good service level.
In one variation, the maximum number of partners allowed per BBU may be the same, i.e., every BBU has the same maximum number of partners. In another variation, the maximum number of partners allowed per BBU may be variable, i.e., different BBUs may have different maximum numbers of partners. In one example implementation, service level may be defined differently based on the particular E-RAN feature being configured for the network. For CA, service level could be considered as coverage (RSRP over threshold) and signal quality (RS-SINR over threshold). Overlapping between cells on the same carrier is obviously equal to zero. For UL CoMP, service level could be considered as coverage (RSRP over threshold) and dominant coverage (RSRP higher than the best server's RSRP minus an offset). Similar to teachings set forth previously, the input overlapping matrix A may be obtained as a symmetric square matrix derived from A^u, of which the elements A_{j,k} are a measure of the mutual service level overlapping between cells #j and #k, defined as follows, except that the dimensionality may be different in this scenario depending on whether intra-hub level or inter-hub level partnerships are being configured:

A_{j,k} = A_{k,j} = (A^u_{j,k})^2 + (A^u_{k,j})^2          Eqn. (2)

In an example embodiment, overlapping values can be obtained from RSRP and RS-SINR maps generated by planning/predicting tools or obtained through drive tests or call traces, as noted previously. Example steps, blocks, modules, etc., to compute the CS (i.e., the list of partner BBUs) for every BBU are illustrated in the flow diagrams ofFIGS.8A and8B, and further described below.
Step 1: Generation of BBU Overlapping Matrix
In an example implementation, a BBU overlapping matrix A^BBU may be generated as a symmetric square matrix with size equal to the number of BBUs in the hub (or the entire network or a portion thereof, referred to as a coordination area), of which the elements are obtained by summing all overlapping values A_{j,k} associated with all pair combinations between a cell #j connected to BBU #l and a cell #k connected to BBU #m. Accordingly, for a pair of BBUs comprising BBU #l and BBU #m, the cumulative overlapping value is defined as:

A^BBU_{l,m} = Σ_j Σ_k A_{j,k}          Eqn. (3)

Step 2: Generation of Sorted List of Candidate Partnerships
All possible pair combinations of BBUs (l,m), also known as candidate partnerships, may be sorted in descending order of the cumulative overlapping values A^BBU_{l,m}. A sorted list L is thereby created, where the i-th element L_i contains the candidate partnership composed of a pair of BBUs with the i-th strongest overlapping value. In one arrangement, candidate partnerships with zero overlapping are not included in the list.
Where there are no zero overlapping partners, the cardinality of the candidate list (i.e., the total number of candidates) may be given as follows:

Card(CandList) = [N(N−1)]/2          Eqn. (4)

where N is the number of BBUs in the network or coordination area.
Step 3: Selection of BBU Partnerships
Starting from the beginning of the sorted list of candidate partnerships, the example CS generation process proceeds to select the partnerships if there is room in the CSs (i.e., the cardinality of the partnership set for any particular BBU is less than the allowed maximum number of partners). In an example implementation, this is effectuated by the example CS generation process upon executing the following sub-steps:
STEP 3.1: Creating one empty CS (list of partner BBUs) per BBU.
STEP 3.2: Selecting the first pair of BBUs (l,m) from the sorted list of candidate partnerships. In case it is not possible to select the first pair because the list is empty, go to STEP 4.
STEP 3.3: Determining if there is room to add one more BBU in the CS of BBU #l and another one in the CS of BBU #m. In case there is room in both CSs, add BBU #l to the CS of BBU #m and vice versa. Remove the partnership from the sorted list of candidate partnerships, and continue with the next pair of BBUs in the sorted list (STEP 3.2). Otherwise, move the partnership to a sorted list of discarded candidate partnerships and continue with the next pair of BBUs in the sorted list (STEP 3.2).
Step 4: Final Fine-Tuning
As noted previously, this final step can be optionally followed in an example embodiment for the case in which one or more CSs have not been totally filled with any of the remaining candidate partner BBUs because their CSs are already full. This is accomplished by the example CS generation process upon executing the following sub-steps:
STEP 4.1: Saving the current CSs as the best CSs. Save the sum of the BBU overlapping values associated with all partnerships in the best CSs as the best total overlapping.
STEP 4.2: Selecting the first candidate partnership from the sorted list of previously discarded candidate partnerships computed at STEP 3. If both CSs are full, skip STEPS 4.3 to 4.7, and go directly to STEP 4.8.
STEP 4.3: Removing the partnership with the lowest BBU overlapping from the full CS, as well as its reciprocal.
STEP 4.4: Mutually adding the BBUs to the CS of the other BBU.
STEP 4.5: Inspecting for the first partnership in the discarded list where the CSs of both BBUs are not full. If it is found, mutually add the BBUs to the CS of the other BBU and continue with STEP 4.6. Otherwise, continue with STEP 4.8.
STEP 4.6: If the current total overlapping is higher than the best total overlapping, save the current CSs as the best CSs and the current total overlapping as the best total overlapping, delete both partnerships from the sorted list of discarded candidate partnerships, and continue with STEP 4.7. Otherwise, continue with STEP 4.8.
STEP 4.7: If the sorted list of discarded candidate partnerships is empty, the process is finished. Otherwise, repeat the procedure from STEP 4.2.
STEP 4.8: Deleting the first partnership from the sorted list of previously discarded candidate partnerships, and saving the best CSs as the current CSs, as well as the best total overlapping as the current total overlapping. If the sorted list of discarded candidate partnerships is empty, the process is finished. Otherwise, the process flow is repeated from STEP 4.2.
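A non-limiting Python sketch of Steps 1 through 3 above (Eqn. (3) aggregation, generation of the sorted candidate list, and sequential partnership selection) is given below; the Step 4 fine-tuning is intentionally omitted, and all function and variable names are illustrative only:

```python
import numpy as np

def bbu_overlap_matrix(cell_overlap: np.ndarray, cells_per_bbu) -> np.ndarray:
    """Step 1 / Eqn. (3): aggregate the cell-level overlapping matrix into a
    BBU-level matrix by summing A[j, k] over every cell j of BBU #l and every
    cell k of BBU #m."""
    n = len(cells_per_bbu)
    a_bbu = np.zeros((n, n))
    for l in range(n):
        for m in range(n):
            if l != m:
                a_bbu[l, m] = sum(cell_overlap[j, k]
                                  for j in cells_per_bbu[l]
                                  for k in cells_per_bbu[m])
    return a_bbu

def candidate_partnerships(a_bbu: np.ndarray):
    """Step 2: all BBU pairs (l, m) with non-zero overlapping, sorted by
    decreasing cumulative overlapping value."""
    n = a_bbu.shape[0]
    pairs = [(l, m) for l in range(n) for m in range(l + 1, n) if a_bbu[l, m] > 0]
    return sorted(pairs, key=lambda p: -a_bbu[p[0], p[1]])

def select_partnerships(a_bbu: np.ndarray, max_partners: int):
    """Step 3: greedy, sequential selection of reciprocal partnerships from the
    sorted candidate list; partnerships that would exceed the per-BBU limit are
    moved to a discarded list (used by the optional Step 4 fine-tuning, which
    this sketch does not implement)."""
    n = a_bbu.shape[0]
    cs = {b: [] for b in range(n)}          # coordination set per BBU
    discarded = []
    for l, m in candidate_partnerships(a_bbu):
        if len(cs[l]) < max_partners and len(cs[m]) < max_partners:
            cs[l].append(m)
            cs[m].append(l)
        else:
            discarded.append((l, m))
    return cs, discarded
```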
FIGS.8A and8Bare flowcharts of various blocks, steps and/or acts that may be (re)combined in one or more arrangements, with or without additional flowcharts of the present disclosure, for generating BBU coordination sets in a fronthaul network portion for purposes of an example embodiment of the present patent disclosure. Skilled artisans will recognize that various process inputs and other constraints set forth in the foregoing sections are generally applicable with respect to the overall process flow shown inFIG.8A, which in some exemplary embodiments may include an optional fine-tuning process800B illustrated in detail in the flowchart ofFIG.8B. Example process800A commences with generating a BBU overlapping matrix (referred to herein as OvlBBU), as set forth at blocks802,804, which is followed by generating a list of candidate partnerships therefrom (referred to herein as CandList), as set forth at block806. A coordination set (CS) per BBU is initialized to an empty set (block808) prior to executing an iterative process for sequentially populating it in accordance with the teachings herein. A decision block810is operative to determine whether the candidate list is empty. If not, a BBU partnership pair (l,m) with the highest cumulative overlapping value (OvlBBU) is obtained from the candidate list (block814). A further determination is made as to whether the CSs corresponding to either BBU #l or BBU #m is full (block816). If not, BBU #l is added to the CS of BBU #m and vice versa (block822). Otherwise, the partnership pair (l,m) is added to a list of discarded partnerships, referred to herein as DiscardList (block820). Thereafter, the partnership pair (l,m) is removed from the candidate list (block826). As illustrated inFIG.8A, the process of partnership assignment may continue in an iterative process until the candidate list is empty (block810). Upon determining that the candidate list is empty at block810, a determination is made as to whether the list of discarded partnerships is empty or whether the CSs are full (block812). In one example implementation, if either or both conditions are met, the process flow is terminated and the BBU partnerships as identified in the CSs may be provided as output (block830). Otherwise, a further determination may be made as to whether a fine-tuning process is needed (block818). If so, process800B ofFIG.8Bis executed, as set forth at block824. Otherwise, the process flow is terminated with the resultant CSs being returned as before (block830). Example fine-tuning process800B commences with defining best CSs as best found collection (bestCSs) of one CS per BBU, which may be initialized to a temporary collection of one CS per BBU (currentCSs), whereby a best total overlapping value may be determined by summing all overlapping values with respect to a BBU pair (l,m) for all (l,m) pairs belonging to the set of best CSs (blocks852,854). At block856, a partnership (l,m) having the highest overlapping value is obtained from the DiscardList. At block858, a determination is made as to whether both CSs are full. If so, the partnership (n,p) is deleted from the DiscardList (block880). If at least one CS is not full, a full CS is obtained for a pair (n,p), and its partnership with lowest overlapping as well as its reciprocal is removed (block860). Thereafter, BBU #n is added to the CS of BBU #p and vice versa (block862). 
The DiscardList is examined for a next partnership (q,r) having the highest overlapping value where the CSs of both BBUs are not full (block864). If such a partnership is obtained from the DiscardList, BBU #q is added to the CS of BBU #r and vice versa (block868). The cumulative overlapping value of the updated current CSs is then obtained (block870). A determination is made whether the current total overlapping value is greater than or equal to the best total overlapping value previously determined (block872). If so, the bestCSs list is updated to the currentCSs list. Likewise, the best total overlapping value is updated to the current total overlapping value (block874). Thereafter, partnership (q,r) is deleted from the DiscardList (block876). If the partnership (q,r) is not found (block866), the currentCSs list is updated to the bestCSs list and the current total overlapping value is updated to the best total overlapping value (block878). The same updating is also obtained if the determination at block872is that the current total overlapping value is not greater than or equal to the best total overlapping value. After the updating at block878, the partnership (n, p) is deleted from the DiscardList as before (block880). At block882, a determination is made as to whether the DiscardList is empty. If so, the fine-tuning process is completed and an updated CS list is provided (block884). Otherwise, the process flow returns to block856to continue fine-tuning with the next partnership (l,m) having the highest overlapping value, preferably in an iterative loop fashion, until the DiscardList is empty (block882). An example CS generation scenario in accordance with the foregoing teachings is set forth below, assuming a network of five BBUs and a maximum number of two partner BBUs per BBU. By way of illustration, the following normalized BBU overlapping matrix for the set of five BBUs is obtained as a result of the execution of STEP 1:

A^BBU =
[ 1    0.2  0.3  0.4  0.5 ]
[ 0.2  1    0    0.3  0.4 ]
[ 0.3  0    1    0.2  0.3 ]
[ 0.4  0.3  0.2  1    0.2 ]
[ 0.5  0.4  0.3  0.2  1   ]          Eqn. (5)

where A^BBU_{l,m} represents the overlapping between BBU #l and BBU #m as previously described. As part of STEP 2, a sorted list of candidate partnerships may be created as follows: CandList=[(1,5); (1,4); (2,5); (1,3); (2,4); (3,5); (1,2); (3,4); (4,5)], where the zero-overlapping partnership (2,3) is omitted.
Sequential/iterative selection of BBU partnerships may be executed as part of STEP 3, illustrated as follows:

First iteration:
STEP 3.1: Creation of empty CSs (CSi is the CS of BBU #i):
CS1=[ ]; CS2=[ ]; CS3=[ ]; CS4=[ ]; CS5=[ ]
STEP 3.2: Selection of the first partnership in the candidate list: (1,5).
STEP 3.3: CS1 and CS5 are empty, so there is room in both for one extra partner. The updated CSs are:
CS1=[5]; CS2=[ ]; CS3=[ ]; CS4=[ ]; CS5=[1]
The updated candidate list (i.e., after the removal of the allocated partnership (1,5)) is:
Candidate List=[(1,4); (2,5); (1,3); (2,4); (3,5); (1,2); (3,4); (4,5)]

Second iteration:
Next candidate is (1,4). CS1 and CS4 are not full yet. Updated CSs and candidate list are:
CS1=[5,4]; CS2=[ ]; CS3=[ ]; CS4=[1]; CS5=[1]
Candidate List=[(2,5); (1,3); (2,4); (3,5); (1,2); (3,4); (4,5)]

Third iteration:
Next candidate is (2,5). CS2 and CS5 are not full yet. Updated CSs and candidate list are:
CS1=[5,4]; CS2=[5]; CS3=[ ]; CS4=[1]; CS5=[1,2]
Candidate List=[(1,3); (2,4); (3,5); (1,2); (3,4); (4,5)]

Fourth iteration:
Next candidate is (1,3). CS1 is full, so (1,3) is added to the sorted discarded list:
Discarded List=[(1,3)]
The updated candidate list is:
Candidate List=[(2,4); (3,5); (1,2); (3,4); (4,5)]

Fifth iteration:
Next candidate is (2,4). CS2 and CS4 are not full yet. Updated CSs and candidate list are:
CS1=[5,4]; CS2=[5,4]; CS3=[ ]; CS4=[1,2]; CS5=[1,2]
Candidate List=[(3,5); (1,2); (3,4); (4,5)]

Sixth iteration:
Next candidate is (3,5). CS5 is full, so (3,5) is added to the sorted discarded list:
Discarded List=[(1,3); (3,5)]

Successive iterations: All CSs are already full, apart from CS3. This means that successive iterations will systematically move the pending elements of the candidate list to the discarded list, ending up with this situation:
Discarded List=[(1,3); (3,5); (1,2); (3,4); (4,5)]
Candidate List=[ ]

Final fine-tuning (STEP 4):
STEP 4.1: Save the current CSs and total overlapping as the best ones.
Best CSs: CS1=[5,4]; CS2=[5,4]; CS3=[ ]; CS4=[1,2]; CS5=[1,2]
Best total overlapping=1.6
STEP 4.2: Select the first discarded candidate partnership, which is (1,3).
STEP 4.3: Since CS1 is full, but CS3 is not, the process removes the partnership with the lowest BBU overlapping from CS1, which is (1,4), as well as its reciprocal (4,1):
Current CSs: CS1=[5]; CS2=[5,4]; CS3=[ ]; CS4=[2]; CS5=[1,2]
Current total overlapping=1.2
STEP 4.4: The selected candidate partnership (1,3) is added to the current CSs, as well as the reciprocal (3,1).
Current CSs: CS1=[5,3]; CS2=[5,4]; CS3=[1]; CS4=[2]; CS5=[1,2]
Current total overlapping=1.5
Since the current total overlapping is not higher than the best total overlapping, the process continues with STEP 4.5.
STEP 4.5: The first partnership in the discarded list where the CSs of both BBUs are not full is (3,4), and it is added to the current CSs, as well as its reciprocal (4,3).
Current CSs: CS1=[5,3]; CS2=[5,4]; CS3=[1,4]; CS4=[2,3]; CS5=[1,2]
Current total overlapping=1.7
STEP 4.6: Since the current total overlapping (1.7) is higher than the best total overlapping (1.6), the new best CSs are:
Best CSs: CS1=[5,3]; CS2=[5,4]; CS3=[1,4]; CS4=[2,3]; CS5=[1,2]

Since there is no more room for extra partnerships in any of the CSs, the process flow finishes here, thereby determining the foregoing CSs for the five BBUs, wherein BBU #1 is partnered with BBUs #5 and #3; BBU #2 is partnered with BBUs #5 and #4; BBU #3 is partnered with BBUs #1 and #4; BBU #4 is partnered with BBUs #2 and #3; and finally, BBU #5 is partnered with BBUs #1 and #2.
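For illustration only, running the select_partnerships sketch given earlier on the matrix of Eqn. (5), with 0-based BBU indices, reproduces the pre-fine-tuning result of STEP 3 above; this is a usage sketch, not part of any disclosed implementation:

```python
import numpy as np

# Eqn. (5) with 0-based BBU indices 0..4; the diagonal is never inspected
# because candidate partnerships only consider pairs (l, m) with l < m.
a_bbu = np.array([[1.0, 0.2, 0.3, 0.4, 0.5],
                  [0.2, 1.0, 0.0, 0.3, 0.4],
                  [0.3, 0.0, 1.0, 0.2, 0.3],
                  [0.4, 0.3, 0.2, 1.0, 0.2],
                  [0.5, 0.4, 0.3, 0.2, 1.0]])

cs, discarded = select_partnerships(a_bbu, max_partners=2)
# cs        -> {0: [4, 3], 1: [4, 3], 2: [], 3: [0, 1], 4: [0, 1]}
# discarded -> [(0, 2), (2, 4), (0, 1), (2, 3), (3, 4)]
# In the 1-based numbering of the example: CS1=[5,4], CS2=[5,4], CS3=[],
# CS4=[1,2], CS5=[1,2] with a total overlapping of 1.6, matching the result
# of STEP 3 above; the STEP 4 fine-tuning (not implemented in the sketch)
# would then recover the 1.7 solution shown in the worked example.
```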
Based on the foregoing, it will be appreciated that an example CS generation process of the present patent disclosure may be configured to find a set of optimal partner BBUs for every BBU to maximize the benefit of advanced E-RAN features, which may be conditioned on forcing the mutual selection of reciprocal partnerships and respecting the maximum number of partners per BBU. As previously noted, BBU partnerships do not have to be reciprocal, however. Skilled artisans will recognize that such a requirement may speed up convergence in the BBU pooling, which may or may not guarantee optimality in certain conditions. Furthermore, not all E-RAN features may require reciprocal BBU requirement. For instance, there may be a higher likelihood of reciprocity in CoMP than in CA. In a still further variation, it should be noted that since cells-to-BBU assignment is fixed (and known beforehand), capacity verifications (e.g., with respect to per-BBU port utilization, etc.) do not always have to be executed in an example CS/partnership generation process. Moreover, one skilled in the art will appreciate that example embodiments specifically provide the following benefits. First, an example embodiment may be configured to pursue maximization of RF benefit associated with one or more E-RAN features by facilitating better E-RAN coordination among the optimal BBUs. Also, a fine-tuning process in an example embodiment may be configured to ensure better use of the available number of partners per BBU. An example embodiment is also particularly advantageous in that the process may be automated on a suitable computing platform, unlike manual solutions that not only consume the time and effort of network engineers but often also result in determinations that are suboptimal. Example embodiments, having a linear complexity proportional to the number of BBU pairs with non-zero overlapping, may be executed in a computationally efficient O{n} process (i.e., the execution is very fast). Furthermore, example embodiments may be customized in different network implementations, e.g., to maximize the coordination benefit of any E-RAN feature, by means of using the most appropriate service level definitions to generate suitable overlapping traffic matrices accordingly. Example embodiments are also particularly advantageous when practiced in combination with different architectural implementations, e.g., as follows:Flexible Cloud RAN deployment ensures good baseband coordination for strong network performance.Distributed, centralized, and virtual baseband (vRAN) architectures may be supported.Optimized 4G/5G interworking in vRAN. Skilled artisans will further recognize that example CS generation embodiments set forth herein address and overcome the following deficiencies, without limitation, regarding current E-RAN implementations. For example, an existing solution based on fixed clusters of BBUs requires that BBUs connected to the same hub be grouped in clusters of 7. Inside every cluster, every BBU has a reciprocal relationship with the other 6 BBUs. However, this approach does not guarantee the selection of the best partners, and may not provide optimal performance. Actually, cells in the border of the clusters may experience lower probability to get coordinated with cells in other partner BBUs, while having high coverage overlapping with cells belonging to non-partner BBUs. 
In another existing solution, chained clusters of BBUs may be implemented where the maximum number of partner BBUs is 2 and partner-BBU relationships have a chain topology. A chained cluster may impose limitations on performance as geographical location and different overlap relations between the basebands may not give optimal performance. In another existing solution, daisy chain clusters of BBUs may be implemented where the E-RAN cluster is allowed to follow the end stations or UE devices. All such approaches fail to address the optimization of partner-BBU selection, however, in order to maximize the RF benefit of advanced to E-RAN features, such as CA and UL CoMP, which is addressed by the example embodiments herein. Turning toFIG.6, depicted therein is an example network600where a cell allocation and/or a BBU coordination scheme may be provided in association with one or more management nodes operatively coupled to the example network600according to further embodiments of the present patent disclosure. By way of illustration, example network600may comprise a heterogeneous network environment wherein a plurality of cells, e.g., comprising one or more macro evolved Node B (macro-eNB) nodes, one or more micro-eNB nodes, one or more pico-eNB nodes, one or more femto-eNB nodes, or one or more home eNB (HeNB) nodes, may be geographically dispersed in different regions or clusters. Consistent with the teachings herein, the cells may be organized in one or more C-RAN portions, each being served by one or more BBU hubs or sites. As shown in this FIG., BBU hub604A is operatively coupled to a node607A serving an area609A, BBU hub604B is operatively coupled to a node607B serving an area609B, and BBU hubs604C/D are operatively coupled to a myriad of nodes, e.g., macro nodes607C(N) as well as small/micro nodes607D(M), collectively serving an area609C. Each BBU, which is coupled to a respective portion of cells via suitable IQ connections, may be coupled via backhaul networks/connections605A-C to one or more core networks602. One skilled in the art will recognize that the plurality of cells/nodes may be operative with a vast range of tethered and/or untethered UE stations or endpoints, as previously noted, which are exemplified by endpoints652A(N),652B(K), and652C(M). As such, C-RANs, backhaul networks, as well as core networks602can be subsets of a single network provided by a single service provider or can be separate networks provided by different service providers at different hierarchical levels. One or more management nodes606attached to core networks602can be configured to manage the operations of core networks606and/or the operations of the BBU sites and associated C-RANs. For purposes of an example embodiment of the present patent disclosure, management node(s)606can include but not limited to the following examples. Management node(s)606may be provided as an integral portion of core network(s)602or be provided outside of core network(s)602, e.g., as a hosted service node by a third party. As technologies such as Software Defined Networking (SDN) and Network Function Virtualization (NFV) transform traditional networks into software programmable domains running on simplified, lower cost hardware, management node(s)606can be provided as data center nodes, and can further be present at different hierarchical layers within the network. 
For example, management node606can be located at a new entity, such as a Node C in a heterogeneous cloud radio access network (H-CRAN), at network edge nodes rather than in the centralized core, a mobility management entity (MME), a packet/service-gateway (P/S-GW), a node in a multi-service management plane (MSMP), etc. Also, management node(s)606can be cloud based and/or part of a Self-Organizing Network or Self-Optimizing Network (SON) in some example embodiments. One of the tools of management node(s)606may be configured as a CS generation module, a cell allocation and BBU optimization module, or a combination thereof, shown as a functional module608, which may in turn be configured to operate with a network (re)configuration facility for effectuating static or dynamic resource allocation, assignment and provisioning with respect to any C-RAN portion of the network600. Depending on the configured functionality, module608may execute one or more processes described in detail hereinabove to oversee the generation of BBU coordination sets and/or allocation of cells to BBUs and/hubs in an example embodiment as well as determining/assigning reciprocal partner BBUs with the objective of maximizing the coordination benefits that can be realized in an E-RAN. CS partnerships may be comprised of intra-hub as well as inter-hub partnerships, and may or may not necessarily be based on reciprocity. Depending on latency requirements, BBUs of heavily loaded BBU sites such as604C/D may be partnered with lightly loaded BBUs at sites such as BBU hub604A and/or604B, as exemplified by one or more partnership communications653. FIG.9depicts is a block diagram of a computer-implemented apparatus900that may be (re)configured and/or (re)arranged as a platform, server, node or element to effectuate an example management network node or element for cell allocation and BBU optimization according to an embodiment of the present patent disclosure. It should be appreciated that apparatus900may be implemented as a distributed data center platform or as a standalone node in some arrangements (e.g., node400A ofFIG.4Ain one embodiment). One or more processors902may be operatively coupled to various modules that may be implemented in persistent memory for executing suitable program instructions or code portions (e.g., code portion933) with respect to effectuating various aspects of cell allocation and optimization by way of one or more modules as exemplified by cell allocation/optimization and cell assignment map generation module908and cell site database910. A design constraints database935may also be provided, which may be dynamically/automatically updated, e.g., periodically or triggered pursuant to network/operator conditions and policies. Depending on the implementation, appropriate “upstream” interfaces (I/F)918and/or “downstream” I/Fs920may be provided for interfacing with external nodes, e.g., BSS nodes and/or other OSS components, BB hubs, management nodes, RRUs, etc. Accordingly, depending on the context, interfaces selected from interfaces918,920may sometimes be referred to as a first interface, a second interface, and so on. In similar fashion, a block diagram of a computer-implemented apparatus1000is illustrated inFIG.10, which may be (re)configured and/or (re)arranged as a platform, server, node or element to effectuate an example management network node or element for CS generation and BBU partnership configuration according to an embodiment of the present patent disclosure. 
As with the platform900, it should be appreciated that apparatus1000may be implemented as a distributed data center platform or as a standalone node in some arrangements (e.g., node700A ofFIG.7Ain one embodiment). One or more processors1002may be operatively coupled to various modules that may be implemented in persistent memory for executing suitable program instructions or code portions (e.g., code portion1033) with respect to effectuating CS generation in association with one or more modules, e.g., CS generation module1055, responsive to a BBU/cell database, partners/BBU database, overlapping matrix database, collectively, CS database1057, as well as a cell site database1010, for generating coordination sets and BBU partnership assignments according to an embodiment described herein. Optionally, a cell allocation module1008may also be integrated within the apparatus1000to provide optimization of both C-RAN and E-RAN features in a further embodiment. Accordingly, depending on the implementation, appropriate “upstream” interfaces (I/F)1018and/or “downstream” I/Fs1020may be provided for interfacing with external nodes, e.g., BSS nodes and/or other OSS components, BB hubs, management nodes, RRUs, etc., which may be referred to as a first interface, a second interface, and so on. Turning toFIG.11, depicted therein is a Network Function Virtualization (NFV) architecture1100that may be applied in conjunction with an OSS of the present invention configured to allocate cells or cell sites to BBUs and/or hubs as well as facilitate CS generation in a heterogeneous network environment such as the environment600set forth inFIG.6. Various physical resources and services executing thereon within the network environment600may be provided as virtual appliances wherein the resources and service functions are virtualized into suitable virtual network functions (VNFs) via a virtualization layer1110. Resources1102comprising compute resources1104, memory resources1106, and network infrastructure resources1108are virtualized into corresponding virtual resources1112wherein virtual compute resources1114, virtual memory resources1116and virtual network resources1118are collectively operative to support a VNF layer1120including a plurality of VNFs1122-1to1122-N, which may be managed by respective element management systems (EMS)1123-1to1123-N. Virtualization layer1110(also sometimes referred to as virtual machine monitor (VMM) or “hypervisor”) together with the physical resources1102and virtual resources1112may be referred to as NFV infrastructure (NFVI) of a network environment. Overall NFV management and orchestration functionality1126may be supported by one or more virtualized infrastructure managers (VIMs)1132, one or more VNF managers1130and an orchestrator1128, wherein VIM1132and VNF managers1130are interfaced with NFVI layer and VNF layer, respectively. An OSS platform1124(which may be integrated or co-located with a Business Support System (BSS) in some arrangements) is responsible for network-level functionalities such as network management, fault management, configuration management, service management, and subscriber management, etc. In one arrangement, various OSS components of the OSS platform1124may interface with VNF layer1120and NFV orchestration1128via suitable interfaces. In addition, OSS/BSS1124may be interfaced with a CS generation and cell allocation/optimization module1134for facilitating the CS generation, cell allocation and optimization within a network. 
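As a purely illustrative sketch (the class and method names below are hypothetical and do not correspond to an ETSI NFV interface), the roles described above, a VIM exposing virtualized resources, VNF managers instantiating VNFs on those resources, and an orchestrator composing end-to-end services on behalf of the OSS, can be outlined as follows:

```python
class Vim:
    """Toy VIM (cf. 1132): hands out virtual compute/memory/network resources."""
    def allocate(self, compute: int, memory_gb: int, vlinks: int) -> dict:
        return {"vcpu": compute, "vmem_gb": memory_gb, "vnet_links": vlinks}

class VnfManager:
    """Toy VNF manager (cf. 1130): instantiates a VNF on resources from the VIM."""
    def __init__(self, vim: Vim):
        self.vim = vim

    def instantiate_vnf(self, name: str) -> dict:
        resources = self.vim.allocate(compute=2, memory_gb=4, vlinks=1)
        return {"vnf": name, "ems": f"ems-for-{name}", "resources": resources}

class Orchestrator:
    """Toy orchestrator (cf. 1128): composes an end-to-end service over VNFs."""
    def __init__(self, vnfm: VnfManager):
        self.vnfm = vnfm

    def deploy_service(self, vnf_names):
        return [self.vnfm.instantiate_vnf(n) for n in vnf_names]

# Example: an OSS-driven deployment of a two-VNF service chain
service = Orchestrator(VnfManager(Vim())).deploy_service(["1122-1", "1122-2"])
```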
Broadly, NFV orchestration1128involves generating, maintaining and tearing down of network services or service functions supported by corresponding VNFs, including creating end-to-end services over multiple VNFs in a network environment, (e.g., allocation of radio resources, BBU ports, etc.). Further, NFV orchestrator1128is also responsible for global resource management of NFVI resources, e.g., managing compute, storage and networking resources among multiple VIMs in the network. Based on the foregoing, it should be appreciated that in the context of the present application, the CS generation and/or cell allocation/optimization functionality associated with an OSS platform such as OSS1124may also be configured in an example embodiment to access suitable OSS components that may be mapped to different hierarchical information layers based on how the virtualized resources are organized in accordance with NFVI. It should be appreciated that because the physical resources allocated to a VNF are considered to be elastic and the VNFs can run on multiple physical infrastructure network nodes, there is a loose coupling between the VNFs and the physical infrastructure hardware nodes they exist on, which allows greater scalability and dynamic configurability of a virtualized network environment. Consequently, the databases provided with different OSS components (based on the different hierarchical layers to which they are mapped) may need to be dynamically reconfigured as the underlying topologies change. FIGS.12A/12B illustrate connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, according to some embodiments of the invention wherein at least a portion of a heterogeneous hierarchical network environment and/or associated network nodes/components shown in some of the Figures previously discussed may be implemented in a virtualized environment. In particular,FIG.12Ashows NDs1200A-H, which may be representative of various servers, database nodes, OSS components, external storage nodes, as well as other network elements of a network environment (e.g., management nodes, BBUs, (m)RRUs, and the like), wherein example connectivity is illustrated by way of lines between A-B, B-C, C-D, D-E, E-F, F-G, and A-G, as well as between H and each of A, C, D, and G. As noted elsewhere in the patent application, such NDs may be provided as physical devices, and the connectivity between these NDs can be wireless or wired (often referred to as a link). An additional line extending from NDs1200A, E, and F illustrates that these NDs may act as ingress and egress nodes for the network (and thus, these NDs are sometimes referred to as edge NDs; while the other NDs may be called core NDs). Two of the exemplary ND implementations inFIG.12Aare: (1) a special-purpose network device1202that uses custom application-specific integrated-circuits (ASICs) and a proprietary operating system (OS); and (2) a general purpose network device1204that uses common off-the-shelf (COTS) processors and a standard OS. 
The special-purpose network device1202includes appropriate hardware1210(e.g., custom or application-specific hardware) comprising compute resource(s)1212(which typically include a set of one or more processors), forwarding resource(s)1214(which typically include one or more ASICs and/or network processors), and physical network interfaces (NIs)1216(sometimes called physical ports), as well as non-transitory machine readable storage media1218having stored therein suitable application-specific software or program instructions1220(e.g., CS generation and/or cell allocation/optimization1221, etc.). A physical NI is a piece of hardware in an ND through which a network connection (e.g., wirelessly through a wireless network interface controller (WNIC) or through plugging in a cable to a physical port connected to a network interface controller (NIC)) is made, such as those shown by the connectivity between NDs1200A-H. During operation, the application software1220may be executed by the hardware1210to instantiate a set of one or more application-specific or custom software instance(s)1222. Each of the custom software instance(s)1222, and that part of the hardware1210that executes that application software instance (be it hardware dedicated to that application software instance and/or time slices of hardware temporally shared by that application software instance with others of the application software instance(s)1222), form a separate virtual network element1230A-R. Each of the virtual network element(s) (VNEs)1230A-R includes a control communication and configuration module1232A-R (sometimes referred to as a local control module or control communication module) and forwarding table(s)1234A-R with respect to suitable application/service instances1233A-R, such that a given virtual network element (e.g.,1230A) includes the control communication and configuration module (e.g.,1232A), a set of one or more forwarding table(s) (e.g.,1234A), and that portion of the application hardware1210that executes the virtual network element (e.g.,1230A) for supporting the application instance1233A (e.g., collecting RAN data, performing CS generation and/or cell allocation/optimization, and the like in relation to a CS generation and/or cell allocation/optimization subsystem virtualization). Software1220can include code such as CS generation, cell allocation and optimization module1221, which when executed by networking hardware1210, causes the special-purpose network device1202to perform operations of one or more embodiments of the present invention as part of networking software instances1222. In an example implementation, the special-purpose network device1202is often physically and/or logically considered to include: (1) a ND control plane1224(sometimes referred to as a control plane) comprising the compute resource(s)1212that execute the control communication and configuration module(s)1232A-R; and (2) a ND forwarding plane1226(sometimes referred to as a forwarding plane, a data plane, or a bearer plane) comprising the forwarding resource(s)1214that utilize the forwarding or destination table(s)1234A-R and the physical NIs1216. By way of example, where the ND is a virtual OSS node, the ND control plane1224(the compute resource(s)1212executing the control communication and configuration module(s)1232A-R) is typically responsible for participating in determining the allocation and optimization of cells to BBUs/hubs. 
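A minimal sketch of this control-plane/forwarding-plane split, with the control communication and configuration module populating forwarding tables and the forwarding plane performing lookups toward the physical NIs (as also described for ND forwarding plane1226below), is given here; the class and field names are hypothetical illustrations only.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class VirtualNetworkElement:
    """Illustrative VNE with a control role and a forwarding role
    (reference numerals appear only in comments, not as an API)."""
    name: str                                     # e.g., "1230A"
    forwarding_table: Dict[str, str] = field(default_factory=dict)

    def configure_route(self, destination: str, out_port: str) -> None:
        # Control plane (cf. 1224): the control communication and
        # configuration module populates the forwarding table (cf. 1234A)
        self.forwarding_table[destination] = out_port

    def forward(self, destination: str) -> str:
        # Forwarding plane (cf. 1226): look up the outgoing physical NI (cf. 1216)
        return self.forwarding_table.get(destination, "drop")

vne = VirtualNetworkElement("1230A")
vne.configure_route("10.0.0.0/24", "ni-1")
assert vne.forward("10.0.0.0/24") == "ni-1"
```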
Likewise, ND forwarding plane1226is responsible for receiving that data on the physical NIs1216(e.g., similar to I/Fs inFIGS.8and9) and forwarding that data out the appropriate ones of the physical NIs1216based on the forwarding information. FIG.12Billustrates an exemplary way to implement the special-purpose network device1202according to some embodiments of the invention, wherein an example special-purpose network device includes one or more cards1238(typically hot pluggable) coupled to an interconnect mechanism. While in some embodiments the cards1238are of two types (one or more that operate as the ND forwarding plane1226(sometimes called line cards), and one or more that operate to implement the ND control plane1224(sometimes called control cards)), alternative embodiments may combine functionality onto a single card and/or include additional card types (e.g., one additional type of card is called a service card, resource card, or multi-application card). A service card can provide specialized processing (e.g., Layer 4 to Layer 7 services (e.g., firewall, Internet Protocol Security (IPsec) (RFC 4301 and 4309), Secure Sockets Layer (SSL)/Transport Layer Security (TLS), Intrusion Detection System (IDS), peer-to-peer (P2P), Voice over IP (VoIP) Session Border Controller, Mobile Wireless Gateways (Gateway General Packet Radio Service (GPRS) Support Node (GGSN), Evolved Packet Core (EPC) Gateway), etc.). By way of example, a service card may be used to terminate IPsec tunnels and execute the attendant authentication and encryption algorithms. These cards may be coupled together through one or more interconnect mechanisms illustrated as backplane1236(e.g., a first full mesh coupling the line cards and a second full mesh coupling all of the cards). Returning toFIG.12A, an example embodiment of the general purpose network device1204includes hardware1240comprising a set of one or more processor(s)1242(which are often COTS processors) and network interface controller(s)1244(NICs; also known as network interface cards) (which include physical NIs1246), as well as non-transitory machine readable storage media1248having stored therein software1250. During operation, the processor(s)1242execute the software1250to instantiate one or more sets of one or more applications1264A-R with respect to facilitating OSS functionalities. While one embodiment does not implement virtualization, alternative embodiments may use different forms of virtualization represented by a virtualization layer1254and software containers1262A-R. For example, one such alternative embodiment implements operating system-level virtualization, in which case the virtualization layer1254represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple software containers1262A-R that may each be used to execute one of the sets of applications1264A-R. In this embodiment, the multiple software containers1362A-R (also called virtualization engines, virtual private servers, or jails) are each a user space instance (typically a virtual memory space); these user space instances are separate from each other and separate from the kernel space in which the operating system is run; the set of applications running in a given user space, unless explicitly allowed, cannot access the memory of the other processes. 
Another such alternative embodiment implements full virtualization, in which case: (1) the virtualization layer1254represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM) as noted elsewhere in the present patent application) or a hypervisor executing on top of a host operating system; and (2) the software containers1262A-R each represent a tightly isolated form of software container called a virtual machine that is run by the hypervisor and may include a guest operating system. A virtual machine is a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine; and applications generally do not know they are running on a virtual machine as opposed to running on a “bare metal” host electronic device, though some systems provide para-virtualization which allows an operating system or application to be aware of the presence of virtualization for optimization purposes. The instantiation of the one or more sets of one or more applications1264A-R, as well as the virtualization layer1254and software containers1262A-R if implemented are collectively referred to as software instance(s)1252. Each set of applications1264A-R, corresponding software container1262A-R if implemented, and that part of the hardware1240that executes them (be it hardware dedicated to that execution and/or time slices of hardware temporally shared by software containers1262A-R), forms a separate virtual network element(s)1260A-R. The virtual network element(s)1260A-R perform similar functionality to the virtual network element(s)1230A-R e.g., similar to the control communication and configuration module(s)1232A and forwarding table(s)1234A (this virtualization of the hardware1240is sometimes referred to as Network Function Virtualization (NFV) architecture, as set forth previously. Thus, NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which could be located in data centers, NDs, and customer premise equipment (CPE). However, different embodiments of the invention may implement one or more of the software container(s)1262A-R differently. For example, while embodiments of the invention may be practiced in an arrangement wherein each software container1262A-R corresponds to one VNE1260A-R, alternative embodiments may implement this correspondence at a finer level granularity (e.g., line card virtual machines virtualize line cards, control card virtual machine virtualize control cards, etc.); it should be understood that the techniques described herein with reference to a correspondence of software containers1262A-R to VNEs also apply to embodiments where such a finer level of granularity is used. In certain embodiments, the virtualization layer1254may include a virtual switch that provides similar forwarding services as a physical Ethernet switch. Specifically, this virtual switch forwards traffic between software containers1262A-R and the NIC(s)1244, as well as optionally between the software containers1262A-R. In addition, this virtual switch may enforce network isolation between the VNEs1260A-R that by policy are not permitted to communicate with each other (e.g., by honoring virtual local area networks (VLANs)). 
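The isolation behavior just described can be sketched as a simple policy check; the function below is a hypothetical toy model, not an implementation of any particular virtual switch, in which a frame is delivered between software containers only when both endpoints belong to the same VLAN or the pair is explicitly allowed to communicate.

```python
def vswitch_deliver(frames, vlan_of, allowed_pairs=None):
    """Forward (src, dst, payload) frames between software containers only
    when both endpoints share a VLAN or the pair is explicitly allowed,
    mimicking the isolation enforced between VNEs."""
    allowed_pairs = allowed_pairs or set()
    delivered = []
    for src, dst, payload in frames:
        same_vlan = src in vlan_of and vlan_of.get(src) == vlan_of.get(dst)
        if same_vlan or (src, dst) in allowed_pairs:
            delivered.append((dst, payload))
        # frames between isolated VNEs are dropped here
    return delivered

# Example: containers "1262A" and "1262B" share VLAN 10; "1262C" is isolated
vlans = {"1262A": 10, "1262B": 10, "1262C": 20}
frames = [("1262A", "1262B", b"ok"), ("1262A", "1262C", b"blocked")]
assert vswitch_deliver(frames, vlans) == [("1262B", b"ok")]
```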
Software1250can include code such as CS generation and/or cell allocation/optimization1253, which when executed by networking hardware1240, causes the general-purpose network device1204to perform operations of one or more embodiments of the present invention as part of software instances1253. The third exemplary ND implementation inFIG.12Ais a hybrid network device1206, which may include both custom ASICs/proprietary OS and COTS processors/standard OS in a single ND or a single card within an ND. In certain embodiments of such a hybrid network device, a platform VM (i.e., a VM that implements the functionality of the special-purpose network device1202) could provide for para-virtualization to the application-specific hardware present in the hybrid network device1206for effectuating one or more components, blocks, modules, and functionalities of an OSS platform. Regardless of the above exemplary implementations of an ND, when a single one of multiple VNEs implemented by an ND is being considered (e.g., only one of the VNEs is part of a given virtual network) or where only a single VNE is currently being implemented by an ND, the shortened term network element (NE) is sometimes used to refer to that VNE. Also in all of the above exemplary implementations, each of the VNEs (e.g., VNE(s)1230A-R, VNEs1260A-R, and those in the hybrid network device1206) receives data on the physical NIs (e.g.,1216,1246) and forwards that data out the appropriate ones of the physical NIs (e.g.,1216,1246). Accordingly, various hardware and software blocks configured for effectuating an example management node including those associated with CS generation and/or cell allocation/optimization functionality may be embodied in NDs, NEs, NFs, VNE/VNF/VND, virtual appliances, virtual machines, and the like, as well as electronic devices and machine-readable media, which may be configured as any of the apparatuses described herein. One skilled in the art will therefore recognize that various apparatuses and systems with respect to the foregoing embodiments, as well as the underlying network infrastructures set forth above may be architected in a virtualized environment according to a suitable NFV architecture in additional or alternative embodiments of the present patent disclosure as noted above. In view of the foregoing, it will be appreciated that one or more embodiments of the present patent disclosure may be implemented in a virtualized heterogeneous network environment including a C-RAN architecture, wherein the network virtualization contains a group of virtual nodes and virtual links. Further, multiple virtual networks can coexist on the same physical substrate. Deploying the virtual networks for the heterogeneous network architecture promotes flexible control, low cost, efficient resource usage, and diversified applications, all of which may be particularly leveraged by an example embodiment of the present patent disclosure. In the context of BBU pooling, it will be realized that network virtualization separates not only data storage but also applications, operating systems and management control. BBU pools may be configured to operate over respective sets of hardware platforms including CPU, memory, NICs and so on, as described above. The virtualization may be implemented via suitable operating systems (e.g., as host or guest machines), wherein the functions of a base station are realized as software instances, which may be referred to as Virtual Base Stations (VBSs) or Virtual BBUs. 
Within a VBS/VBBU pool, several virtual operators may share a common network environment, allowing virtual BBU partnerships for E-RAN features such as CA and CoMP, as well as allocating cells to virtual BBUs in accordance with the teachings herein. Accordingly, at least a portion of an example network architecture disclosed herein may be virtualized as set forth above and architected in a cloud-computing environment comprising a shared pool of configurable virtual resources. Various pieces of hardware/software associated with hub selection, cell allocation to selected hub(s), cell allocation to BBU(s) in selected hub(s), CS generation and BBU assignment, and the like, as well as BBUs and RRUs, may be implemented in a service-oriented architecture, e.g., Software as a Service (SaaS), Platform as a Service (PaaS), infrastructure as a Service (IaaS) etc., with multiple entities providing different features of an example embodiment of the present invention, wherein one or more layers of virtualized environments may be instantiated on commercial off the shelf (COTS) hardware. Skilled artisans will also appreciate that such a cloud-computing environment may comprise one or more of private clouds, public clouds, hybrid clouds, community clouds, distributed clouds, multiclouds and interclouds (e.g., “cloud of clouds”), and the like. In the above-description of various embodiments of the present disclosure, it is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and may not be interpreted in an idealized or overly formal sense expressly so defined herein. At least some example embodiments are described herein with reference to block diagrams and/or flowchart illustrations of computer-implemented methods, apparatus (systems and/or devices) and/or computer program products. It is understood that a block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions that are performed by one or more computer circuits. Such computer program instructions may be provided to a processor circuit of a general purpose computer circuit, special purpose computer circuit, and/or other programmable data processing circuit to produce a machine, so that the instructions, which execute via the processor of the computer and/or other programmable data processing apparatus, transform and control transistors, values stored in memory locations, and other hardware components within such circuitry to implement the functions/acts specified in the block diagrams and/or flowchart block or blocks, and thereby create means (functionality) and/or structure for implementing the functions/acts specified in the block diagrams and/or flowchart block(s). 
Additionally, the computer program instructions may also be stored in a tangible computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions which implement the functions/acts specified in the block diagrams and/or flowchart block or blocks. As pointed out previously, tangible, non-transitory computer-readable medium may include an electronic, magnetic, optical, electromagnetic, or semiconductor data storage system, apparatus, or device. More specific examples of the computer-readable medium would include the following: a portable computer diskette, a random access memory (RAM) circuit, a read-only memory (ROM) circuit, an erasable programmable read-only memory (EPROM or Flash memory) circuit, a portable compact disc read-only memory (CD-ROM), and a portable digital video disc read-only memory (DVD/Blu-ray). The computer program instructions may also be loaded onto or otherwise downloaded to a computer and/or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer and/or other programmable apparatus to produce a computer-implemented process. Accordingly, embodiments of the present invention may be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.) that runs on a processor or controller, which may collectively be referred to as “circuitry,” “a module” or variants thereof. Further, an example processing unit may include, by way of illustration, a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit (IC), and/or a state machine. As can be appreciated, an example processor unit may employ distributed processing in certain embodiments. Further, in at least some additional or alternative implementations, the functions/acts described in the blocks may occur out of the order shown in the flowcharts. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Moreover, the functionality of a given block of the flowcharts and/or block diagrams may be separated into multiple blocks and/or the functionality of two or more blocks of the flowcharts and/or block diagrams may be at least partially integrated. Furthermore, although some of the diagrams include arrows on communication paths to show a primary direction of communication, it is to be understood that communication may occur in the opposite direction relative to the depicted arrows. Finally, other blocks may be added/inserted between the blocks that are illustrated. It should therefore be clearly understood that the order or sequence of the acts, steps, functions, components or blocks illustrated in any of the flowcharts depicted in the drawing Figures of the present disclosure may be modified, altered, replaced, customized or otherwise rearranged within a particular flowchart, including deletion or omission of a particular act, step, function, component or block. 
Moreover, the acts, steps, functions, components or blocks illustrated in a particular flowchart may be inter-mixed or otherwise inter-arranged or rearranged with the acts, steps, functions, components or blocks illustrated in another flowchart in order to effectuate additional variations, modifications and configurations with respect to one or more processes for purposes of practicing the teachings of the present patent disclosure. Although various embodiments have been shown and described in detail, the claims are not limited to any particular embodiment or example. None of the above Detailed Description should be read as implying that any particular component, element, step, act, or function is essential such that it must be included in the scope of the claims. Reference to an element in the singular is not intended to mean “one and only one” unless explicitly so stated, but rather “one or more.” All structural and functional equivalents to the elements of the above-described embodiments that are known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the present claims. Accordingly, those skilled in the art will recognize that the exemplary embodiments described herein can be practiced with various modifications and alterations within the spirit and scope of the claims appended below.
116,896
11863475
DETAILED DESCRIPTION Hereinafter, embodiments of the present disclosure are described in detail with reference to the accompanying drawings. It should be noted that the same elements will be designated by the same reference numerals although they are shown in different drawings. In the following description, specific details such as detailed configurations and components are merely provided to assist with the overall understanding of the embodiments of the present disclosure. Therefore, it should be apparent to those skilled in the art that various changes and modifications of the embodiments described herein may be made without departing from the scope of the present disclosure. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness. The terms described below are terms defined in consideration of the functions in the present disclosure, and may be different according to users, intentions of the users, or customs. Therefore, the definitions of the terms should be determined based on the contents throughout this specification. The present disclosure may have various modifications and various embodiments, among which embodiments are described below in detail with reference to the accompanying drawings. However, it should be understood that the present disclosure is not limited to the embodiments, but includes all modifications, equivalents, and alternatives within the scope of the present disclosure. Although the terms including an ordinal number such as first, second, etc. may be used for describing various elements, the structural elements are not restricted by the terms. The terms are only used to distinguish one element from another element. For example, without departing from the scope of the present disclosure, a first structural element may be referred to as a second structural element. Similarly, the second structural element may also be referred to as the first structural element. As used herein, the term “and/or” includes any and all combinations of one or more associated items. The terms used herein are merely used to describe various embodiments of the present disclosure but are not intended to limit the present disclosure. Singular forms are intended to include plural forms unless the context clearly indicates otherwise. In the present disclosure, it should be understood that the terms “include” or “have” indicate existence of a feature, a number, a step, an operation, a structural element, parts, or a combination thereof, and do not exclude the existence or probability of the addition of one or more other features, numerals, steps, operations, structural elements, parts, or combinations thereof. Unless defined differently, all terms used herein have the same meanings as those understood by a person skilled in the art to which the present disclosure belongs. Terms such as those defined in a generally used dictionary are to be interpreted to have the same meanings as the contextual meanings in the relevant field of art, and are not to be interpreted to have ideal or excessively formal meanings unless clearly defined in the present disclosure. The electronic device according to an embodiment may be one of various types of electronic devices. An electronic device may include a portable communication device (e.g., a smart phone), a computer, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance. However, an electronic device is not limited to those described above. 
The terms used in the present disclosure are not intended to limit the present disclosure but are intended to include various changes, equivalents, or replacements for a corresponding embodiment. With regard to the descriptions of the accompanying drawings, similar reference numerals may be used to refer to similar or related elements. A singular form of a noun corresponding to an item may include one or more of the things, unless the relevant context clearly indicates otherwise. As used herein, each of such phrases as “A or B,” “at least one of A and B,” “at least one of A or B,” “A, B, or C,” “at least one of A, B, and C,” and “at least one of A, B, or C,” may include all possible combinations of the items enumerated together in a corresponding one of the phrases. As used herein, terms such as “1st,” “2nd,” “first,” and “second” may be used to distinguish a corresponding component from another component, but are not intended to limit the components in other aspects (e.g., importance or order). It is intended that if an element (e.g., a first element) is referred to, with or without the term “operatively” or “communicatively”, as “coupled with,” “coupled to,” “connected with,” or “connected to” another element (e.g., a second element), it indicates that the element may be coupled with the other element directly (e.g., wired), wirelessly, or via a third element. As used herein, the term “module” may include a unit implemented in hardware, software, or firmware, and may interchangeably be used with other terms, for example, “logic,” “logic block,” “part,” and “circuitry.” A module may be a single integral component, or a minimum unit or part thereof, adapted to perform one or more functions. For example, according to one embodiment, a module may be implemented in a form of an application-specific integrated circuit (ASIC). As described above, when partial transmission of a PDSCH DMRS occurs, CE for the PDSCH, noise and interference measurement for PDSCH demodulation, etc., may be negatively affected. Accordingly, the present disclosure utilizes special handling at a UE side for partial DMRS transmission, limits the number of RM patterns incurring partial DMRS transmission, and indicates which RM pattern potentially incurs such partial DMRS transmission to a UE. Certain RM patterns may not always incur partial DMRS transmission because the partial DMRS transmission will also depend on PDSCH allocation itself. However, from a UE implementation viewpoint, knowing which RM patterns can incur partial DMRS transmission is still important to prepare for potentially special handling for those RM patterns. In the current NR specification, there are 2 types of RM patterns depending on granularity. The first is resource block (RB) symbol level, and the other is resource element (RE) level. In addition to RB symbol level and RE level RM patterns, a synchronization signal (SS)/physical broadcast channel (PBCH) block is also considered as a type of RM pattern. Because a UE applies different handling for different types of RM patterns, applicability of partial DMRS transmission as well as the number of those patterns, if applicable, should depend on the type of RM pattern. For example, a DMRS may always be transmitted if RM with RE level granularity is applied. RM patterns may be further categorized in the current NR specification. 
For example, an RE level RM pattern can be a common reference signal (CRS) or a zero power channel state information reference signal (ZP-CSI-RS), and applicability of partial DMRS transmission should depend on the type. Because a CRS is an LTE signal, a UE supporting NR should additionally acknowledge LTE operation. Due to such special handling, applicability of partial DMRS transmission may not be allowed for an RE level RM pattern corresponding to a CRS. In the current NR specification, downlink control information (DCI) can be used to select the RM pattern, i.e., DCI-based, to be applied among multiple high layer configured patterns. Alternatively, a high layer configured RM pattern, i.e., RRC-based, may be directly applied. Applicability of partial DMRS transmission should depend on whether the RM pattern is DCI-based or RRC-based. A UE may have more difficulty handling DCI-based RM pattern selection due to its dynamic nature. Therefore, DCI-based RM may be excluded or limited to a certain number for partial DMRS transmission. In the current NR specification, there are a number of configurations related to a PDSCH, and there is a need for differentiating the applicability of an RM pattern incurring partial DMRS transmission as well as the number of those patterns, if applicable, depending on those configurations. In accordance with an embodiment of the present disclosure, for different types of PDSCHs, one or more of the following options may be utilized. Option 1) A PDSCH can be transmitted in an RRC idle/inactive/connected state, and differentiation can be made among them. For example, for a PDSCH for remaining minimum system information (RMSI) and/or other system information (OSI), paging, etc., DMRS may always be transmitted. This may simplify UE operation before/without RRC connection. There can also be differentiation between RRC idle/inactive/connected regarding allowing partial DMRS transmission. Option 2) There are 2 processing capabilities defined in the current NR specification. Capability 1 is for normal processing, and capability 2 is for fast processing. According to an embodiment of the disclosure, differentiation can be made between capability 1 and capability 2. For capability 2, partial transmission of DMRS may cause an issue due to its challenging fast processing. Hence, a DMRS may always be transmitted for a PDSCH with processing capability 2, for example. Option 3) There are 2 mapping types A and B of PDSCH in the current NR specification, and according to an embodiment of the disclosure, differentiation can be made between mapping types A and B. For example, for mapping type B, a DMRS may always be transmitted since this mapping type can be challenging for a UE due to its flexible nature. The aforementioned differentiation of applicability of RM patterns incurring partial DMRS transmission as well as the number of those patterns, if applicable, can explicitly be indicated by a UE as a capability. There is also a concept of precoding granularity, i.e., a certain number of consecutive RB's should use the same precoding for PDSCH transmission, which may improve a UE's CE quality. However, partial DMRS transmission, which occurs within RB's with the same precoding granularity, can create more complications at a UE. Hence, a DMRS is either entirely transmitted or is entirely absent within RB's with the same precoding. Allowing partial transmission within RB's with the same precoding may be declared as a UE capability. 
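Purely as an illustration of how the conditions discussed above could be combined (the function, its parameters and the specific rule choices below are hypothetical; the disclosure describes them as options that may be applied, not as mandatory behavior), an applicability check for partial DMRS transmission might be sketched as follows:

```python
def partial_dmrs_transmission_allowed(rm_type: str,             # "RB-symbol", "RE" or "SS/PBCH"
                                      re_pattern_source: str,   # "CRS" or "ZP-CSI-RS" for RE level
                                      dci_based: bool,
                                      rrc_state: str,           # "idle", "inactive" or "connected"
                                      pdsch_kind: str,          # "RMSI/OSI", "paging" or "unicast"
                                      processing_capability: int,  # 1 = normal, 2 = fast
                                      mapping_type: str) -> bool:
    """Illustrative combination of the RM-pattern rules and Options 1-3."""
    if rm_type == "RE" and re_pattern_source == "CRS":
        return False   # CRS-based RE-level RM: DMRS always transmitted
    if dci_based:
        return False   # dynamic (DCI-based) selection excluded in this sketch
    if rrc_state != "connected" or pdsch_kind in ("RMSI/OSI", "paging"):
        return False   # Option 1: simplify operation before/without RRC connection
    if processing_capability == 2:
        return False   # Option 2: fast processing, DMRS always transmitted
    if mapping_type == "B":
        return False   # Option 3: mapping type B, DMRS always transmitted
    return True
```

Whether a UE actually supports any of these cases would, as stated above, be signaled as a UE capability.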
Accordingly, a total number of rate matching patterns that can incur partial DMRS transmission is limited, and such special rate matching patterns may be indicated by a network to a UE. Further, applicability of rate matching patterns, which can incur partial DMRS transmission, depends on certain conditions such as a type of a rate matching pattern, an RRC state of a UE, or a characteristic of a PDSCH such as the type, processing time, DMRS mapping type, etc. Additionally, support of an RM pattern incurring partial DMRS transmission is explicitly indicated by a UE as a capability. FIG.1is a flow chart illustrating an operation of a base station allocating a PDSCH according to one embodiment. Referring toFIG.1, in step105, the base station transmits, to a UE, a schedule for a PDSCH associated with a DMRS. In step110, the base station determines whether the PDSCH is rate matched. In response to determining that the PDSCH is not rate matched in step110, the base station allocates the PDSCH with a full transmission of the DMRS. However, in response to determining that the PDSCH is rate matched in step110, the base station determines whether a rate matching pattern of the PDSCH is applicable for a partial transmission of the DMRS in step120. For example, the determination as to whether the rate-matching pattern of the PDSCH is applicable for the partial transmission of the DMRS may be based on a radio resource control state of the UE and/or a characteristic of the PDSCH such as a type of the PDSCH, a processing time, and/or a DMRS mapping type. In response to determining that the rate matching pattern of the PDSCH is applicable for the partial transmission of the DMRS in step120, the base station allocates, to the UE, the PDSCH with a partial transmission of the DMRS. FIG.2is a flow chart illustrating an operation of a UE decoding a PDSCH according to one embodiment. Referring toFIG.2, in step205, the UE receives, from a base station, a schedule for a PDSCH associated with a DMRS. In step210, the UE determines whether the PDSCH is rate matched. In response to determining that the PDSCH is not rate matched in step210, the UE decodes the PDSCH with a full transmission of the DMRS. However, in response to determining that the PDSCH is rate matched in step210, the UE determines whether a rate matching pattern of the PDSCH is applicable for a partial transmission of the DMRS in step220. For example, the determination as to whether the rate-matching pattern of the PDSCH is applicable for the partial transmission of the DMRS may be based on a radio resource control state of the UE and/or a characteristic of the PDSCH such as a type of the PDSCH, a processing time, and/or a DMRS mapping type. In response to determining that the rate matching pattern of the PDSCH is applicable for the partial transmission of the DMRS in step220, the UE decodes the PDSCH with a partial transmission of the DMRS. FIG.3illustrates a block diagram of an electronic device301in a network environment300, according to one embodiment. Referring toFIG.3, the electronic device301in the network environment300may communicate with another electronic device302via a first network398(e.g., a short-range wireless communication network), or another electronic device304or a server308via a second network399(e.g., a long-range wireless communication network). The electronic device301may also communicate with the electronic device304via the server308. 
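The branching shared by the flowcharts ofFIGS.1and2can be summarized in a short sketch; the function names below are hypothetical and the applicability test is assumed to be provided externally (for example, by a check such as the one sketched above).

```python
def bs_allocate_pdsch(is_rate_matched: bool, pattern_applicable: bool) -> str:
    """Base-station side (cf. FIG. 1, steps 110/120), illustrative only."""
    if not is_rate_matched:
        return "allocate PDSCH with full DMRS transmission"
    if pattern_applicable:
        return "allocate PDSCH with partial DMRS transmission"
    return "allocate PDSCH with full DMRS transmission"

def ue_decode_pdsch(is_rate_matched: bool, pattern_applicable: bool) -> str:
    """UE side (cf. FIG. 2, steps 210/220) mirrors the same branching."""
    if not is_rate_matched:
        return "decode PDSCH assuming full DMRS transmission"
    if pattern_applicable:
        return "decode PDSCH assuming partial DMRS transmission"
    return "decode PDSCH assuming full DMRS transmission"
```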
The electronic device301may include a processor320, a memory330, an input device350, a sound output device355, a display device360, an audio module370, a sensor module376, an interface377, a haptic module379, a camera module380, a power management module388, a battery389, a communication module390, a subscriber identification module (SIM)396, or an antenna module397. In one embodiment, at least one (e.g., the display device360or the camera module380) of the components may be omitted from the electronic device301, or one or more other components may be added to the electronic device301. In one embodiment, some of the components may be implemented as a single integrated circuit (IC). For example, the sensor module376(e.g., a fingerprint sensor, an iris sensor, or an illuminance sensor) may be embedded in the display device360(e.g., a display). The processor320may execute, for example, software (e.g., a program340) to control at least one other component (e.g., a hardware or a software component) of the electronic device301coupled with the processor320, and may perform various data processing or computations. As at least part of the data processing or computations, the processor320may load a command or data received from another component (e.g., the sensor module376or the communication module390) in volatile memory332, process the command or the data stored in the volatile memory332, and store resulting data in non-volatile memory334. The processor320may include a main processor321(e.g., a central processing unit (CPU) or an application processor (AP)), and an auxiliary processor323(e.g., a graphics processing unit (GPU), an image signal processor (ISP), a sensor hub processor, or a communication processor (CP)) that is operable independently from, or in conjunction with, the main processor321. Additionally or alternatively, the auxiliary processor323may be adapted to consume less power than the main processor321, or execute a particular function. The auxiliary processor323may be implemented as being separate from, or a part of, the main processor321. The auxiliary processor323may control at least some of the functions or states related to at least one component (e.g., the display device360, the sensor module376, or the communication module390) among the components of the electronic device301, instead of the main processor321while the main processor321is in an inactive (e.g., sleep) state, or together with the main processor321while the main processor321is in an active state (e.g., executing an application). According to one embodiment, the auxiliary processor323(e.g., an ISP or a CP) may be implemented as part of another component (e.g., the camera module380or the communication module390) functionally related to the auxiliary processor323. The memory330may store various data used by at least one component (e.g., the processor320or the sensor module376) of the electronic device301. The various data may include, for example, software (e.g., the program340) and input data or output data for a command related thereto. The memory330may include the volatile memory332or the non-volatile memory334. The program340may be stored in the memory330as software, and may include, for example, an operating system (OS)342, middleware344, or an application346. The input device350may receive a command or data to be used by another component (e.g., the processor320) of the electronic device301, from the outside (e.g., a user) of the electronic device301. 
The input device350may include, for example, a microphone, a mouse, or a keyboard. The sound output device355may output sound signals to the outside of the electronic device301. The sound output device355may include, for example, a speaker or a receiver. The speaker may be used for general purposes, such as playing multimedia or recording, and the receiver may be used for receiving an incoming call. According to one embodiment, the receiver may be implemented as being separate from, or a part of, the speaker. The display device360may visually provide information to the outside (e.g., a user) of the electronic device301. The display device360may include, for example, a display, a hologram device, or a projector and control circuitry to control a corresponding one of the display, hologram device, and projector. According to one embodiment, the display device360may include touch circuitry adapted to detect a touch, or sensor circuitry (e.g., a pressure sensor) adapted to measure the intensity of force incurred by the touch. The audio module370may convert a sound into an electrical signal and vice versa. According to one embodiment, the audio module370may obtain the sound via the input device350, or output the sound via the sound output device355or a headphone of an external electronic device302directly (e.g., wired) or wirelessly coupled with the electronic device301. The sensor module376may detect an operational state (e.g., power or temperature) of the electronic device301or an environmental state (e.g., a state of a user) external to the electronic device301, and then generate an electrical signal or data value corresponding to the detected state. The sensor module376may include, for example, a gesture sensor, a gyro sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor. The interface377may support one or more specified protocols to be used for the electronic device301to be coupled with the external electronic device302directly (e.g., wired) or wirelessly. According to one embodiment, the interface377may include, for example, a high definition multimedia interface (HDMI), a universal serial bus (USB) interface, a secure digital (SD) card interface, or an audio interface. A connecting terminal378may include a connector via which the electronic device301may be physically connected with the external electronic device302. According to one embodiment, the connecting terminal378may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (e.g., a headphone connector). The haptic module379may convert an electrical signal into a mechanical stimulus (e.g., a vibration or a movement) or an electrical stimulus which may be recognized by a user via tactile sensation or kinesthetic sensation. According to one embodiment, the haptic module379may include, for example, a motor, a piezoelectric element, or an electrical stimulator. The camera module380may capture a still image or moving images. According to one embodiment, the camera module380may include one or more lenses, image sensors, ISPs, or flashes. The power management module388may manage power supplied to the electronic device301. The power management module388may be implemented as at least part of, for example, a power management integrated circuit (PMIC). 
The battery389may supply power to at least one component of the electronic device301. According to one embodiment, the battery389may include, for example, a primary cell which is not rechargeable, a secondary cell which is rechargeable, or a fuel cell. The communication module390may support establishing a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device301and the external electronic device (e.g., the electronic device302, the electronic device304, or the server308) and performing communication via the established communication channel. The communication module390may include one or more CPs that are operable independently from the processor320(e.g., the AP) and supports a direct (e.g., wired) communication or a wireless communication. According to an embodiment, the communication module390may include a wireless communication module392(e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module394(e.g., a local area network (LAN) communication module or a power line communication (PLC) module). A corresponding one of these communication modules may communicate with the external electronic device via the first network398(e.g., a short-range communication network, such as Bluetooth™, wireless-fidelity (Wi-Fi) direct, or a standard of the Infrared Data Association (IrDA)) or the second network399(e.g., a long-range communication network, such as a cellular network, the Internet, or a computer network (e.g., LAN or wide area network (WAN)). These various types of communication modules may be implemented as a single component (e.g., a single IC), or may be implemented as multiple components (e.g., multiple ICs) that are separate from each other. The wireless communication module392may identify and authenticate the electronic device301in a communication network, such as the first network398or the second network399, using subscriber information (e.g., international mobile subscriber identity (IMSI)) stored in the subscriber identification module396. The antenna module397may transmit or receive a signal or power to or from the outside (e.g., the external electronic device) of the electronic device301. The antenna module397may include one or more antennas, and, therefrom, at least one antenna appropriate for a communication scheme used in the communication network, such as the first network398or the second network399, may be selected, for example, by the communication module390(e.g., the wireless communication module392). The signal or the power may then be transmitted or received between the communication module390and the external electronic device via the selected at least one antenna. At least some of the above-described components may be mutually coupled and communicate signals (e.g., commands or data) therebetween via an inter-peripheral communication scheme (e.g., a bus, a general purpose input and output (GPIO), a serial peripheral interface (SPI), or a mobile industry processor interface (MIPI)). According to one embodiment, commands or data may be transmitted or received between the electronic device301and the external electronic device304via the server308coupled with the second network399. Each of the electronic devices302and304may be a device of a same type as, or a different type, from the electronic device301. 
All or some of the operations to be executed at the electronic device301may be executed at one or more of the external electronic devices302,304, or308. For example, if the electronic device301should perform a function or a service automatically, or in response to a request from a user or another device, the electronic device301, instead of, or in addition to, executing the function or the service, may request the one or more external electronic devices to perform at least part of the function or the service. The one or more external electronic devices receiving the request may perform the at least part of the function or the service requested, or an additional function or an additional service related to the request, and transfer an outcome of the performing to the electronic device301. The electronic device301may provide the outcome, with or without further processing of the outcome, as at least part of a reply to the request. To that end, a cloud computing, distributed computing, or client-server computing technology may be used, for example. One embodiment may be implemented as software (e.g., the program340) including one or more instructions that are stored in a storage medium (e.g., internal memory336or external memory338) that is readable by a machine (e.g., the electronic device301). For example, a processor of the electronic device301may invoke at least one of the one or more instructions stored in the storage medium, and execute it, with or without using one or more other components under the control of the processor. Thus, a machine may be operated to perform at least one function according to the at least one instruction invoked. The one or more instructions may include code generated by a compiler or code executable by an interpreter. A machine-readable storage medium may be provided in the form of a non-transitory storage medium. The term “non-transitory” indicates that the storage medium is a tangible device, and does not include a signal (e.g., an electromagnetic wave), but this term does not differentiate between where data is semi-permanently stored in the storage medium and where the data is temporarily stored in the storage medium. FIG.4illustrates a base station according to one embodiment. Referring toFIG.4, the base station, e.g., a gNB, includes a transceiver410, a controller420, and a memory430. The controller420may be defined as a circuit, an ASIC, or a processor. The transceiver410may transmit/receive a signal to/from another network entity. The transceiver410may transmit system information to, e.g., the UE, and may transmit a synchronization signal or a reference signal. Further, the transceiver may transmit and receive information related to an initial access operation, a random access operation, and a handover operation to and from the UE. The controller420may control the overall operation of the base station. The controller420may control the base station to perform the operations according to the above-described flowchart ofFIG.1. The memory430may store at least one piece of information transmitted/received through the transceiver410and information generated through the controller420. For example, the memory430may store information related to a schedule for a downlink associated with a reference signal. The memory430may store a basic program for the operation of a communication processor, an application, and data such as configuration information. 
Further, the memory430may include at least one storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., an SD memory, an extreme digital (XD) memory, etc.), a magnetic memory, a magnetic disk, an optical disk, a random access memory (RAM), a static RAM (SRAM), a read only memory (ROM), a programmable ROM (PROM), and an electrically erasable PROM (EEPROM). The controller420may perform various operations using a variety of programs, content, and data stored in the memory. According to one embodiment, a method of the disclosure may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., a compact disc ROM (CD-ROM)), or be distributed (e.g., downloaded or uploaded) online via an application store (e.g., Play Store™), or between two user devices (e.g., smart phones) directly. If distributed online, at least part of the computer program product may be temporarily generated or at least temporarily stored in the machine-readable storage medium, such as memory of the manufacturer's server, a server of the application store, or a relay server. According to one embodiment, each component (e.g., a module or a program) of the above-described components may include a single entity or multiple entities. One or more of the above-described components may be omitted, or one or more other components may be added. Alternatively or additionally, a plurality of components (e.g., modules or programs) may be integrated into a single component. In this case, the integrated component may still perform one or more functions of each of the plurality of components in the same or similar manner as they are performed by a corresponding one of the plurality of components before the integration. Operations performed by the module, the program, or another component may be carried out sequentially, in parallel, repeatedly, or heuristically, or one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added. According to the above-described embodiments, a system and method are provided, which utilize defined conditions for applicability of rate matching patterns, which can incur partial DMRS transmission. Although certain embodiments of the present disclosure have been described in the detailed description of the present disclosure, the present disclosure may be modified in various forms without departing from the scope of the present disclosure. Thus, the scope of the present disclosure shall not be determined merely based on the described embodiments, but rather determined based on the accompanying claims and equivalents thereto.
30,626
11863476
BEST MODE The embodiments of the present disclosure described below are combinations of elements and features of the present disclosure in specific forms. The elements or features may be considered selective unless otherwise mentioned. Each element or feature may be practiced without being combined with other elements or features. Further, an embodiment of the present disclosure may be constructed by combining parts of the elements and/or features. Operation orders described in embodiments of the present disclosure may be rearranged. Some constructions or elements of any one embodiment may be included in another embodiment and may be replaced with corresponding constructions or features of another embodiment. In the description of the attached drawings, a detailed description of known procedures or steps of the present disclosure will be avoided lest it should obscure the subject matter of the present disclosure. In addition, procedures or steps that could be understood to those skilled in the art will not be described either. Throughout the specification, when a certain portion “includes” or “comprises” a certain component, this indicates that other components are not excluded and may be further included unless otherwise noted. The terms “unit”, “-or/er” and “module” described in the specification indicate a unit for processing at least one function or operation, which may be implemented by hardware, software or a combination thereof. In addition, the terms “a or an”, “one”, “the” etc. may include a singular representation and a plural representation in the context of the present disclosure (more particularly, in the context of the following claims) unless indicated otherwise in the specification or unless context clearly indicates otherwise. In the embodiments of the present disclosure, a description is mainly made of a data transmission and reception relationship between a Base Station (BS) and a User Equipment (UE). A BS refers to a UE node of a network, which directly communicates with a UE. A specific operation described as being performed by the BS may be performed by an upper node of the BS. Namely, it is apparent that, in a network comprised of a plurality of network nodes including a BS, various operations performed for communication with a UE may be performed by the BS, or network nodes other than the BS. The term ‘BS’ may be replaced with a fixed station, a Node B, an evolved Node B (eNode B or eNB), gNode B (gNB), an Advanced Base Station (ABS), an access point, etc. In the embodiments of the present disclosure, the term UE may be replaced with a UE, a Mobile Station (MS), a Subscriber Station (SS), a Mobile Subscriber Station (MSS), a mobile UE, an Advanced Mobile Station (AMS), etc. A transmission end is a fixed and/or mobile node that provides a data service or a voice service and a reception end is a fixed and/or mobile node that receives a data service or a voice service. Therefore, a UE may serve as a transmission end and a BS may serve as a reception end, on an UpLink (UL). Likewise, the UE may serve as a reception end and the BS may serve as a transmission end, on a DownLink (DL). The embodiments of the present disclosure may be supported by standard specifications disclosed for at least one of wireless access systems including an Institute of Electrical and Electronics Engineers (IEEE) 802.xx system, a 3rd Generation Partnership Project (3GPP) system, a 3GPP Long Term Evolution (LTE) system, 3GPP 5G NR system and a 3GPP2 system. 
In particular, the embodiments of the present disclosure may be supported by the standard specifications, 3GPP TS 38.211, 3GPP TS 38.212, 3GPP TS 38.213, 3GPP TS 38.321 and 3GPP TS 38.331. That is, the steps or parts, which are not described to clearly reveal the technical idea of the present disclosure, in the embodiments of the present disclosure may be explained by the above standard specifications. All terms used in the embodiments of the present disclosure may be explained by the standard specifications. Reference will now be made in detail to the embodiments of the present disclosure with reference to the accompanying drawings. The detailed description, which will be given below with reference to the accompanying drawings, is intended to explain exemplary embodiments of the present disclosure, rather than to show the only embodiments that can be implemented according to the disclosure. The following detailed description includes specific terms in order to provide a thorough understanding of the present disclosure. However, it will be apparent to those skilled in the art that the specific terms may be replaced with other terms without departing the technical spirit and scope of the present disclosure. Hereinafter, 3GPP NR system is explained, which are examples of wireless access systems. Technology described below may be applied to various wireless access systems such as code division multiple access (CDMA), frequency division multiple access (FDMA), time division multiple access (TDMA), orthogonal frequency division multiple access (OFDMA), and single carrier frequency division multiple access (SC-FDMA). To clarify technical features of the present disclosure, embodiments of the present disclosure are described focusing upon a 3GPP NR system. However, the embodiments proposed in the present disclosure may be equally applied to other wireless systems (e.g., 3GPP LTE, IEEE 802.16, and IEEE 802.11). 1. NR System 1.1. Physical Channels and General Signal Transmission In a wireless access system, a UE receives information from a base station on a DL and transmits information to the base station on a UL. The information transmitted and received between the UE and the base station includes general data information and various types of control information. There are many physical channels according to the types/usages of information transmitted and received between the base station and the UE. FIG.1illustrates physical channels and a general signal transmission method using the physical channels, which may be used in embodiments of the present disclosure. A UE performs initial cell search such as synchronization establishment with a BS in step S11when the UE is powered on or enters a new cell. To this end, the UE may receive a primary synchronization channel (P-SCH) and a secondary synchronization channel (S-SCH) from the BS, establish synchronization with the BS, and acquire information such as a cell identity (ID). Thereafter, the UE may receive a physical broadcast channel (PBCH) from the BS to acquire broadcast information in the cell. Meanwhile, the UE may receive a DL reference signal (RS) in the initial cell search step to confirm a DL channel state. Upon completion of initial cell search, the UE may receive a physical downlink control channel (PDCCH) and a physical downlink shared channel (PDSCH) according to information included in the PDCCH to acquire more detailed system information in step S12. 
Next, the UE may perform a random access procedure such as steps S13 to S16 to complete access to the BS. To this end, the UE may transmit a preamble through a physical random access channel (PRACH) (S13) and receive a random access response (RAR) to the preamble through the PDCCH and the PDSCH corresponding to the PDCCH (S14). The UE may transmit a physical uplink shared channel (PUSCH). In the case of contention-based random access, a contention resolution procedure including transmission of a PRACH signal (S15) and reception of a PDCCH signal and a PDSCH signal corresponding to the PDCCH signal (S16) may be additionally performed.

The UE which has performed the above procedures may receive a PDCCH signal and/or a PDSCH signal (S17) and transmit a PUSCH signal and/or a physical uplink control channel (PUCCH) signal (S18) as a general UL/DL signal transmission procedure.

Control information that the UE transmits to the BS is referred to as uplink control information (UCI). The UCI includes a hybrid automatic repeat and request (HARQ) acknowledgement (ACK)/negative ACK (NACK) signal, a scheduling request (SR), a channel quality indicator (CQI), a precoding matrix index (PMI), a rank indicator (RI), or beam indication (BI) information. In an NR system, the UCI is generally periodically transmitted on the PUCCH. However, according to an embodiment (if control information and traffic data should be transmitted simultaneously), the control information and traffic data may be transmitted on the PUSCH. In addition, the UCI may be transmitted aperiodically on the PUSCH, upon receipt of a request/command from a network.

1.2. Radio Frame Structure

FIG. 2 is a diagram illustrating a radio frame structure in an NR system to which embodiments of the present disclosure are applicable. In the NR system, UL and DL transmissions are based on a frame as illustrated in FIG. 2. One radio frame is 10 ms in duration, defined by two 5-ms half-frames. One half-frame is defined by five 1-ms subframes. One subframe is divided into one or more slots, and the number of slots in a subframe depends on an SCS. Each slot includes 12 or 14 OFDM(A) symbols according to a CP. Each slot includes 14 symbols in a normal CP case, and 12 symbols in an extended CP case. Herein, a symbol may include an OFDM symbol (or a CP-OFDM symbol) and an SC-FDMA symbol (or a DFT-s-OFDM symbol).

Table 1 lists the number of symbols per slot, the number of slots per frame, and the number of slots per subframe in the normal CP case, and Table 2 lists the number of symbols per slot, the number of slots per frame, and the number of slots per subframe in the extended CP case.

TABLE 1
μ    Nsymb^slot    Nslot^frame,μ    Nslot^subframe,μ
0    14            10               1
1    14            20               2
2    14            40               4
3    14            80               8
4    14            160              16
5    14            320              32

TABLE 2
μ    Nsymb^slot    Nslot^frame,μ    Nslot^subframe,μ
2    12            40               4

In the above tables, Nsymb^slot represents the number of symbols in a slot, Nslot^frame,μ represents the number of slots in a frame, and Nslot^subframe,μ represents the number of slots in a subframe.

In the NR system to which the present disclosure is applicable, different OFDM(A) numerologies (e.g., SCSs, CP length, and so on) may be configured for a plurality of cells aggregated for a UE. Therefore, the (absolute) duration of a time resource (e.g., an SF, slot, or TTI) (for the convenience of description, generically referred to as a time unit (TU)) including the same number of symbols may be different between the aggregated cells.
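For readers who prefer an executable summary, the slot-count relationships in Tables 1 and 2 can be expressed with a few helper functions. The sketch below is illustrative only (the function names are ours, not from the disclosure) and simply encodes that a 1-ms subframe contains 2^μ slots and a 10-ms frame contains ten subframes.

```python
# Minimal sketch (not from the disclosure): slot-related quantities per numerology mu,
# matching the values listed in Tables 1 and 2 above.

def slots_per_subframe(mu: int) -> int:
    # One 1-ms subframe contains 2**mu slots.
    return 2 ** mu

def slots_per_frame(mu: int) -> int:
    # One 10-ms radio frame contains ten subframes.
    return 10 * slots_per_subframe(mu)

def symbols_per_slot(extended_cp: bool = False) -> int:
    # 14 OFDM(A) symbols per slot with a normal CP, 12 with an extended CP.
    return 12 if extended_cp else 14

def slot_duration_ms(mu: int) -> float:
    # The absolute slot duration shrinks as the subcarrier spacing grows.
    return 1.0 / slots_per_subframe(mu)

if __name__ == "__main__":
    for mu in range(6):          # reproduces the rows of Table 1
        print(mu, symbols_per_slot(), slots_per_frame(mu), slots_per_subframe(mu))
    print(2, symbols_per_slot(extended_cp=True), slots_per_frame(2), slots_per_subframe(2))  # Table 2
```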
FIG.3is a diagram illustrating a slot structure in an NR system to which embodiments of the present disclosure are applicable. One slot includes a plurality of symbols in the time domain. For example, one slot includes 7 symbols in a normal CP case and 6 symbols in an extended CP case. A carrier includes a plurality of subcarriers in the frequency domain. An RB is defined by a plurality of (e.g., 12) consecutive subcarriers in the frequency domain. A bandwidth part (BWP), which is defined by a plurality of consecutive (P)RBs in the frequency domain, may correspond to one numerology (e.g., SCS, CP length, and so on). A carrier may include up to N (e.g., 5) BWPs. Data communication may be conducted in an activated BWP, and only one BWP may be activated for one UE. In a resource grid, each element is referred to as an RE, to which one complex symbol may be mapped. FIG.4is a diagram illustrating a self-contained slot structures in an NR system to which embodiments of the present disclosure are applicable. InFIG.4, the hatched area (e.g., symbol index=0) indicates a DL control region, and the black area (e.g., symbol index=13) indicates a UL control region. The remaining area (e.g., symbol index=1 to 12) may be used for DL or UL data transmission. Based on this structure, a base station and a UE may sequentially perform DL transmission and UL transmission in one slot. That is, the base station and UE may transmit and receive not only DL data but also a UL ACK/NACK for the DL data in one slot. Consequently, this structure may reduce a time required until data retransmission when a data transmission error occurs, thereby minimizing the latency of a final data transmission. In this self-contained slot structure, a predetermined length of time gap is required to allow the base station and UE to switch from transmission mode to reception mode and vice versa. To this end, in the self-contained slot structure, some OFDM symbols at the time of switching from DL to UL may be configured as a guard period (GP). Although it has been described above that the self-contained slot structure includes both DL and UL control regions, these control regions may be selectively included in the self-contained slot structure. In other words, the self-contained slot structure according to the present disclosure may include either the DL control region or the UL control region as well as both the DL and UL control regions as illustrated inFIG.5. Further, the order of the regions in one slot may vary according to embodiments. For example, one slot may be configured in the order of DL control region, DL data region, UL control region, and UL data region, or UL control region, UL data region, DL control region, and DL data region. A PDCCH may be transmitted in the DL control region, and a PDSCH may be transmitted in the DL data region. A PUCCH may be transmitted in the UL control region, and a PUSCH may be transmitted in the UL data region. The PDCCH may deliver downlink control information (DCI), for example, DL data scheduling information, UL data scheduling information, and so on. The PUCCH may deliver uplink control information (UCI), for example, an ACK/NACK for DL data, channel state information (CSI), a scheduling request (SR), and so on. The PDSCH conveys DL data (e.g., DL-shared channel transport block (DL-SCH TB)) and uses a modulation scheme such as quadrature phase shift keying (QPSK), 16-ary quadrature amplitude modulation (16QAM), 64QAM, or 256QAM. A TB is encoded into a codeword. 
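As a quick illustration of the PDSCH modulation orders mentioned above, the following sketch (not part of the disclosure; the helper name and the example figures are ours) maps each scheme to its bits per modulation symbol and computes a raw, pre-coding bit capacity for a block of resource elements.

```python
# Minimal sketch (illustration only): bits carried per modulation symbol for the PDSCH
# modulation schemes listed above, and the raw bit capacity of a set of resource elements.

MOD_ORDER = {"QPSK": 2, "16QAM": 4, "64QAM": 6, "256QAM": 8}

def raw_bits(num_res: int, scheme: str, layers: int = 1) -> int:
    # Capacity before channel coding: REs * bits-per-symbol * spatial layers.
    return num_res * MOD_ORDER[scheme] * layers

# e.g. one PRB over one slot with a normal CP: 12 subcarriers * 14 symbols = 168 REs
print(raw_bits(168, "256QAM", layers=2))   # -> 2688 raw bits
```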
The PDSCH may deliver up to two codewords. Scrambling and modulation mapping are performed on a codeword basis, and modulation symbols generated from each codeword are mapped to one or more layers (layer mapping). Each layer together with a demodulation reference signal (DMRS) is mapped to resources, generated as an OFDM symbol signal, and transmitted through a corresponding antenna port.

The PDCCH carries DCI and uses QPSK as a modulation scheme. One PDCCH includes 1, 2, 4, 8, or 16 control channel elements (CCEs) according to an aggregation level (AL). One CCE includes 6 resource element groups (REGs). One REG is defined by one OFDM symbol by one (P)RB.

FIG. 5 is a diagram illustrating the structure of one REG in an NR system to which embodiments of the present disclosure are applicable. In FIG. 5, D represents an RE to which DCI is mapped, and R represents an RE to which a DMRS is mapped. The DMRS is mapped to REs #1, #5, and #9 along the frequency axis in one symbol.

The PDCCH is transmitted in a control resource set (CORESET). A CORESET is defined as a set of REGs having a given numerology (e.g., SCS, CP length, and so on). A plurality of CORESETs for one UE may overlap with each other in the time/frequency domain. A CORESET may be configured by system information (e.g., a master information block (MIB)) or by UE-specific higher layer (RRC) signaling. Specifically, the number of RBs and the number of symbols (up to 3 symbols) included in a CORESET may be configured by higher-layer signaling.

The PUSCH delivers UL data (e.g., a UL-shared channel transport block (UL-SCH TB)) and/or UCI based on a CP-OFDM waveform or a DFT-s-OFDM waveform. When the PUSCH is transmitted in the DFT-s-OFDM waveform, the UE transmits the PUSCH by transform precoding. For example, when transform precoding is impossible (e.g., disabled), the UE may transmit the PUSCH in the CP-OFDM waveform, while when transform precoding is possible (e.g., enabled), the UE may transmit the PUSCH in the CP-OFDM or DFT-s-OFDM waveform. A PUSCH transmission may be dynamically scheduled by a UL grant in DCI, or semi-statically scheduled by higher-layer (e.g., RRC) signaling (and/or layer 1 (L1) signaling such as a PDCCH) (configured grant). The PUSCH transmission may be performed in a codebook-based or non-codebook-based manner.

The PUCCH delivers UCI, an HARQ-ACK, and/or an SR and is classified as a short PUCCH or a long PUCCH according to the transmission duration of the PUCCH. Table 3 lists exemplary PUCCH formats.

TABLE 3
PUCCH format   Length in OFDM symbols (Nsymb^PUCCH)   Number of bits   Usage             Etc.
0              1-2                                    ≤2               HARQ, SR          Sequence selection
1              4-14                                   ≤2               HARQ, [SR]        Sequence modulation
2              1-2                                    >2               HARQ, CSI, [SR]   CP-OFDM
3              4-14                                   >2               HARQ, CSI, [SR]   DFT-s-OFDM (no UE multiplexing)
4              4-14                                   >2               HARQ, CSI, [SR]   DFT-s-OFDM (Pre DFT OCC)

PUCCH format 0 conveys UCI of up to 2 bits and is mapped in a sequence-based manner for transmission. Specifically, the UE transmits specific UCI to the base station by transmitting one of a plurality of sequences on a PUCCH of PUCCH format 0. Only when the UE transmits a positive SR, the UE transmits the PUCCH of PUCCH format 0 in a PUCCH resource for a corresponding SR configuration.

PUCCH format 1 conveys UCI of up to 2 bits and modulation symbols of the UCI are spread with an OCC (which is configured differently depending on whether frequency hopping is performed) in the time domain. The DMRS is transmitted in a symbol in which a modulation symbol is not transmitted (i.e., transmitted in time division multiplexing (TDM)).
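The PUCCH format properties summarized in Table 3 can be collected into a small lookup. The sketch below is illustrative only: it restates the table rows and adds a trivial helper distinguishing short formats (1-2 symbols) from long formats (4-14 symbols).

```python
# Illustrative restatement of Table 3: PUCCH format -> (symbol-length range, UCI payload,
# multiplexing/waveform note). Values are copied from the table above, not computed.

PUCCH_FORMATS = {
    0: ((1, 2),  "<=2 bits", "sequence selection"),
    1: ((4, 14), "<=2 bits", "sequence modulation"),
    2: ((1, 2),  ">2 bits",  "CP-OFDM"),
    3: ((4, 14), ">2 bits",  "DFT-s-OFDM, no UE multiplexing"),
    4: ((4, 14), ">2 bits",  "DFT-s-OFDM, pre-DFT OCC"),
}

def is_long_pucch(fmt: int) -> bool:
    # Long PUCCH formats span 4-14 symbols; short formats span 1-2 symbols.
    return PUCCH_FORMATS[fmt][0][0] >= 4

print(is_long_pucch(1), is_long_pucch(2))   # True False
```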
PUCCH format 2 conveys UCI of more than 2 bits and modulation symbols of the UCI are transmitted in frequency division multiplexing (FDM) with the DMRS. The DMRS is located in symbols #1, #4, #7, and #10 of a given RB with a density of 1/3. A pseudo noise (PN) sequence is used for a DMRS sequence. For 1-symbol PUCCH format 2, frequency hopping may be activated.

PUCCH format 3 does not support UE multiplexing in the same PRBs, and conveys UCI of more than 2 bits. In other words, PUCCH resources of PUCCH format 3 do not include an OCC. Modulation symbols are transmitted in TDM with the DMRS.

PUCCH format 4 supports multiplexing of up to 4 UEs in the same PRBs, and conveys UCI of more than 2 bits. In other words, PUCCH resources of PUCCH format 4 include an OCC. Modulation symbols are transmitted in TDM with the DMRS.

1.3. Analog Beamforming

In a millimeter wave (mmW) system, since a wavelength is short, a plurality of antenna elements can be installed in the same area. That is, considering that the wavelength at the 30 GHz band is 1 cm, a total of 100 antenna elements can be installed in a 5*5 cm panel at intervals of 0.5 lambda (wavelength) in the case of a 2-dimensional array. Therefore, in the mmW system, it is possible to improve the coverage or throughput by increasing the beamforming (BF) gain using multiple antenna elements.

In this case, each antenna element can include a transceiver unit (TXRU) to enable adjustment of transmit power and phase per antenna element. By doing so, each antenna element can perform independent beamforming per frequency resource. However, installing TXRUs in all of the about 100 antenna elements is less feasible in terms of cost. Therefore, a method of mapping a plurality of antenna elements to one TXRU and adjusting the direction of a beam using an analog phase shifter has been considered. However, this method is disadvantageous in that frequency selective beamforming is impossible because only one beam direction is generated over the full band.

To solve this problem, as an intermediate form of digital BF and analog BF, hybrid BF with B TXRUs that are fewer than Q antenna elements can be considered. In the case of the hybrid BF, the number of beam directions that can be transmitted at the same time is limited to B or less, which depends on how B TXRUs and Q antenna elements are connected.

FIGS. 6 and 7 are diagrams illustrating representative methods for connecting TXRUs to antenna elements. Here, the TXRU virtualization model represents the relationship between TXRU output signals and antenna element output signals.

FIG. 6 shows a method for connecting TXRUs to sub-arrays. In FIG. 6, one antenna element is connected to one TXRU. Meanwhile, FIG. 7 shows a method for connecting all TXRUs to all antenna elements. In FIG. 7, all antenna elements are connected to all TXRUs. In this case, separate addition units are required to connect all antenna elements to all TXRUs as shown in FIG. 7.

In FIGS. 6 and 7, W indicates a phase vector weighted by an analog phase shifter. That is, W is a major parameter determining the direction of the analog beamforming. In this case, the mapping relationship between CSI-RS antenna ports and TXRUs may be 1:1 or 1-to-many.

The configuration shown in FIG. 6 has a disadvantage in that it is difficult to achieve beamforming focusing but has an advantage in that all antennas can be configured at low cost. On the contrary, the configuration shown in FIG. 7 is advantageous in that beamforming focusing can be easily achieved.
However, since all antenna elements are connected to the TXRU, it has a disadvantage of high cost. When a plurality of antennas is used in the NR system to which the present disclosure is applicable, a hybrid beamforming (BF) scheme in which digital BF and analog BF are combined may be applied. In this case, analog BF (or radio frequency (RF) BF) means an operation of performing precoding (or combining) at an RF stage. In hybrid BF, each of a baseband stage and the RF stage perform precoding (or combining) and, therefore, performance approximating to digital BF can be achieved while reducing the number of RF chains and the number of a digital-to-analog (D/A) (or analog-to-digital (A/D) converters. For convenience of description, a hybrid BF structure may be represented by N transceiver units (TXRUs) and M physical antennas. In this case, digital BF for L data layers to be transmitted by a transmission end may be represented by an N-by-L matrix. N converted digital signals obtained thereafter are converted into analog signals via the TXRUs and then subjected to analog BF, which is represented by an M-by-N matrix. FIG.8is a diagram schematically illustrating an exemplary hybrid BF structure from the perspective of TXRUs and physical antennas according to the present disclosure. InFIG.8, the number of digital beams is L and the number analog beams is N. Additionally, in the NR system to which the present disclosure is applicable, a BS designs analog BF to be changed in units of symbols to provide more efficient BF support to a UE located in a specific area. Furthermore, as illustrated inFIG.11, when N specific TXRUs and M RF antennas are defined as one antenna panel, the NR system according to the present disclosure considers introducing a plurality of antenna panels to which independent hybrid BF is applicable. In the case in which the BS utilizes a plurality of analog beams as described above, the analog beams advantageous for signal reception may differ according to a UE. Therefore, in the NR system to which the present disclosure is applicable, a beam sweeping operation is being considered in which the BS transmits signals (at least synchronization signals, system information, paging, and the like) by applying different analog beams in a specific subframe (SF) or slot on a symbol-by-symbol basis so that all UEs may have reception opportunities. FIG.9is a diagram schematically illustrating an exemplary beam sweeping operation for a synchronization signal and system information in a DL transmission procedure according to the present disclosure. InFIG.9below, a physical resource (or physical channel) on which the system information of the NR system to which the present disclosure is applicable is transmitted in a broadcasting manner is referred to as an xPBCH. Here, analog beams belonging to different antenna panels within one symbol may be simultaneously transmitted. As illustrated inFIG.9, in order to measure a channel for each analog beam in the NR system to which the present disclosure is applicable, introducing a beam RS (BRS), which is a reference signal (RS) transmitted by applying a single analog beam (corresponding to a specific antenna panel), is being discussed. The BRS may be defined for a plurality of antenna ports and each antenna port of the BRS may correspond to a single analog beam. In this case, unlike the BRS, a synchronization signal or the xPBCH may be transmitted by applying all analog beams in an analog beam group such that any UE may receive the signal well. 1.4. 
Demodulation Reference Signal (DMRS) In the NR system to which the present disclosure is applicable, a DMRS may be transmitted and received in a front-loaded structure. Alternatively, an additional DMRS may be transmitted and received in addition to the front-loaded DMRS. The front-loaded DMRS may support fast decoding. The first OFDM symbol in which the front-loaded DMRS is carried may be determined as the third (e.g., 1=2) or fourth (e.g., 1=3) OFDM symbol. The first OFDM symbol position may be indicated by a PBCH. The number of OFDM symbols in which the front-loaded DMRS is occupied may be indicated by a combination of DCI and radio resource control (RRC) signaling. The additional DMRS may be configured for a high-speed UE. The additional DMRS may be positioned in the middle/last symbol(s) in a slot. If one front-loaded DMRS is configured, the additional DMRS may be allocated to 0 to 3 OFDM symbols. If two front-loaded DMRS symbols are configured, the additional DMRS may be allocated to 0 to 2 OFDM symbols. The front-loaded DMRS may be divided into two types and one of the two types may be indicated through higher layer signaling (e.g., RRC signaling). FIG.8is a diagram schematically illustrating two DMRS configuration types applicable to the present disclosure. InFIG.8, P0to P11may correspond to port numbers 1000 to 1011, respectively. Among of the two DMRS configuration types, a DMRS configuration type that is actually configured for a UE may be indicated by higher layer signaling (e.g., RRC signaling). DMRS configuration type 1 may be subdivided as follows depending on the number of OFDM symbols allocated for the front-loaded DMRS. DMRS Configuration Type 1 and Number of OFDM Symbols to which the Front-Loaded DMRS is Allocated=1 Up to 4 ports (e.g., P0to P3) may be multiplexed based on length-2 frequency code division multiplexing (F-CDM) and frequency division multiplexing (FDM) schemes. RS density may be set to 6 REs per port in a resource block (RB). DMRS Configuration Type 1 and Number of OFDM Symbols to which the Front-Loaded DMRS is Allocated=2 Up to 8 ports (e.g., P0to P7) may be multiplexed based on length-2 F-CDM, length-2 time CDM (T-CDM), and FDM schemes. If presence of a PT-RS is configured by higher layer signaling, T-CDM may be fixed to [1 1]. RS density may be set to 12 REs per port in the RB. DMRS configuration type 2 may be classified as follows according to the number of OFDM symbols to which the front-loaded DMRS is allocated. DMRS Configuration Type 2 and Number of OFDM Symbols to which the Front-Loaded DMRS is Allocated=1 Up to 6 ports (e.g., P0to P5) may be multiplexed based on length-2 F-CDM and FDM schemes. RS density may be set to 4 REs per port in the RB. DMRS Configuration Type 2 and Number of OFDM Symbols to which the Front-Loaded DMRS is Allocated=2 Up to 12 ports (e.g., P0to P11) may be multiplexed based on length-2 F-CDM, length-2 T-CDM, and FDM schemes. If presence of the PT-RS is configured by higher layer signaling, T-CDM may be fixed to [1 1]. RS density may be set to 8 REs per port in the RB. FIG.10is a diagram schematically illustrating an example of a front loaded DMRS of a first DMRS configuration type applicable to the present disclosure. More specifically,FIG.10(a)illustrates a front-loaded DMRS with one symbol andFIG.10(b)illustrates a front-loaded DMRS with two symbols. InFIG.10, A represents a DMRS offset value on the frequency axis. 
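Assuming the port counts and RS densities stated in this subsection, the front-loaded DMRS capacity can be captured in a small lookup; the sketch below is an illustration, not a normative mapping.

```python
# Minimal lookup sketch of the front-loaded DMRS capacities listed above:
# (DMRS configuration type, number of front-loaded DMRS symbols)
# -> (maximum number of multiplexed ports, DMRS REs per port per RB).

DMRS_CAPACITY = {
    (1, 1): (4, 6),    # type 1, 1 symbol:  up to 4 ports, 6 REs/port/RB
    (1, 2): (8, 12),   # type 1, 2 symbols: up to 8 ports, 12 REs/port/RB
    (2, 1): (6, 4),    # type 2, 1 symbol:  up to 6 ports, 4 REs/port/RB
    (2, 2): (12, 8),   # type 2, 2 symbols: up to 12 ports, 8 REs/port/RB
}

def dmrs_capacity(config_type: int, frontloaded_symbols: int):
    return DMRS_CAPACITY[(config_type, frontloaded_symbols)]

print(dmrs_capacity(1, 2))  # -> (8, 12)
```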
In this case, DMRS ports having the same DMRS offset A may be subjected to code division multiplexing in the frequency domain (CDM-F) or code division multiplexing in the time domain (CDM-T). In addition, DMRS ports having different DMRS offsets A may be subjected to CDM-F. A UE may obtain DMRS port configuration information configured by a BS from DCI. 1.5. DMRS Port Group In the present disclosure, a DMRS port group may refer to a set of DMRS ports that are quasi co-located (QCL) or partially QCL with each other. Herein, quasi co-location (QCL) may mean that long-term channel parameters such as a Doppler spread, a Doppler shift, an average delay, a delay spread, etc. are assumed to be the same, and partial QCL may mean that some of the long-term channel parameters are assumed to be the same. 1.6. DCI Format In the NR system to which the present disclosure is applicable, the following DCI formats may be supported. First, the NR system may support DCI format 0_0 and DCI format 0_1 as a DCI format for PUSCH scheduling and support DCI format 1_0 and DCI format 1_1 as a DCI format for PDSCH scheduling. In addition, as DCI formats usable for other purposes, the NR system may additionally support DCI format 2_0, DCI format 2_1, DCI format 2_2, and DCI format 2_3. Herein, DCI format 0_0 is used to schedule a transmission block (TB)-based (or TB-level) PUSCH. DCI format 0_1 may be used to schedule a TB-based (or TB-level) PUSCH or code block group (CBG)-based (or CBG-level) PUSCH (in the case in which CBG-based signal transmission and reception is configured). In addition, DCI format 1_0 may be used to schedule TB-based (or TB-level) PDSCH. DCI format 1_1 may be used to schedule TB-based (or TB-level) PDSCH or CBG-based (or CBG-level) PDSCH (in the case in which CBG-based signal transmission and reception is configured). In addition, DCI format 2_0 may be used to notify UEs of a slot format. DCI format 2_1 may be used to notify UEs of PRB(s) and OFDM symbol(s) in which a specific UE assumes that no transmission is intended therefor. DCI format 2_2 may be used to transmit transmission power control (TPC) commands for a PUCCH and a PUSCH. DCI format 2_3 may be used to transmit a group of TPC commands for SRS transmission by one or more UEs. Detailed features of the DCI formats may be supported by 3GPP TS 38.212. That is, obvious steps or parts which are not explained by DCI format-related features may be explained with reference to the above document. In addition, all terms disclosed in the present document may be explained by the above standard document. 1.7. Control Resource Set (CORESET) One CORESET includes NCORESETRBRBs in the frequency domain and NCORESETsymbsymbols (having a value of 1, 2, or 3) in the time domain. One control channel element (CCE) includes 6 resource element groups (REGs) and one REG is equal to one RB in one OFDM symbol. REGs in the CORESET are numbered in a time-first manner. Specifically, the REGs are numbered starting with ‘0’ for the first OFDM symbol and the lowest-numbered RB in the CORESET. A plurality of CORESETs may be configured for one UE. Each CORESET is related only to one CCE-to-REG mapping. CCE-to-REG mapping for one CORESET may be interleaved or non-interleaved. Configuration information for the CORESET may be configured by a higher layer parameter ControlResourceSet IE. In addition, configuration information for CORESET 0 (e.g., common CORESET) may be configured by a higher layer parameter ControlResourceSetZero IE. 1.8. 
Antenna Port Quasi Co-Location One UE may be configured with a list of up to M transmission configuration indicator (TCI) state configurations. The M TCI-state configurations may be configured by a higher layer parameter PDSCH-Config to decode a PDSCH (by the UE) according to a detected PDCCH with DCI intended for the UE and the given serving cell. Herein, M may be determined depending on the capability of the UE. Each TCI state contains parameters for configuring a quasi co-location (QCL) relationship between one or two DL reference signals and the DMRS ports of the PDSCH. The QCL relationship is configured by the higher layer parameter qcl-Type1 for a first DL RS and a higher layer parameter qcl-Type2 for a second DL RS (if configured). For the case of two DL RSs, the QCL types should not be the same, regardless of whether the RSs are the same DL RS or different DL RSs. The QCL type corresponding to each DL RS is given by a higher layer parameter qcl-Type within a higher layer parameter QCL-Info and may have one of the following values.‘QCL-TypeA’: {Doppler shift, Doppler spread, average delay, delay spread}‘QCL-TypeB’: {Doppler shift, Doppler spread}‘QCL-TypeC’: {Doppler shift, average delay}‘QCL-TypeD’: {Spatial Rx parameter} The UE receives an activation command used to map up to 8 TCI states to codepoints of a TCI field in the DCI. When a HARQ-ACK signal corresponding to the PDSCH carrying the activation command is transmitted in slot #n, mapping between the TCI states and codepoints of the TCI field in the DCI may be applied starting from slot #(n+3*Nsubframe,μslot+1) In this case, Nsubframe,μslotis determined based on Table 1 or Table 2 described above. After the UE receives initial higher layer configuration of TCI states and before the UE receives the activation command, the UE assumes that DM-RS port(s) of a PDSCH of a serving cell are quasi co-located with an SS/PBCH block determined in the initial access procedure with respect to ‘QCL-TypeA’. Additionally, the UE may assume that the DM-RS port(s) of the PDSCH of the serving cell are quasi co-located with the SS/PBCH block determined in the initial access procedure also with respect to ‘QCL-TypeD’ at the above timing. If a higher layer parameter tci-PresentInDCI is set as ‘enabled’ for a CORESET scheduling the PDSCH, the UE assumes that the TCI field is present in a PDCCH of DCI format 1_1 transmitted on the CORESET. If the higher layer parameter tci-PresentInDCI is not configured for the CORESET scheduling the PDSCH or the PDSCH is scheduled by DCI format 1_0 and if a time offset between the reception of the DL DCI and the reception of the corresponding PDSCH is equal to or greater than a threshold Threshold-Sched-Offset (where the threshold is based on UE capability), for determining PDSCH antenna port QCL, the UE assumes that a TCI state or QCL assumption for the PDSCH is identical to a TCI state or QCL assumption applied to a CORESET used for PDCCH transmission. If the higher layer parameter tci-PresentInDCI is set as ‘enabled’, the TCI field in the DCI scheduling a component carrier (CC) points to activated TCI states in the scheduled CC or a DL BW, and the PDSCH is scheduled by DCI format 1_1, the UE uses a TCI-state according to the TCI field in the DCI in a detected PDCCH to determine PDSCH antenna port QCL. 
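The activation timing rule quoted above (the new TCI-state-to-codepoint mapping applies from slot n + 3*Nslot^subframe,μ + 1 after the HARQ-ACK in slot n) can be written as a one-line helper; the sketch below is illustrative and reuses the 2^μ slots-per-subframe relationship from Table 1.

```python
# Minimal sketch of the TCI activation timing rule described above: if the HARQ-ACK for
# the PDSCH carrying the activation command is transmitted in slot n, the updated
# TCI-state-to-codepoint mapping applies starting from slot n + 3*Nslot^subframe,mu + 1.

def tci_activation_slot(harq_ack_slot: int, mu: int) -> int:
    slots_per_subframe = 2 ** mu        # Nslot^subframe,mu from Tables 1 and 2
    return harq_ack_slot + 3 * slots_per_subframe + 1

print(tci_activation_slot(harq_ack_slot=8, mu=1))   # -> 8 + 3*2 + 1 = 15
```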
The UE may assume that DMRS ports of the PDSCH of a serving cell are quasi co-located with RS(s) in the TCI state with respect to QCL type parameter(s) given by an indicated TCI state if the time offset between the reception of the DL DCI and the reception of the corresponding PDSCH is equal to or greater than the threshold Threshold-Sched-Offset (where the threshold is determined based on reported UE capability). When the UE is configured with a single slot PDSCH, the indicated TCI state should be based on the activated TCI states in a slot with the scheduled PDSCH. When the UE is configured with CORESET associated with a search space set for cross-carrier scheduling, the UE expects that the higher layer parameter tci-PresentInDci is set as ‘enabled’ for the CORESET. If one or more of the TCI states configured for the serving cell scheduled by the search space set contains ‘QCL-TypeD’, the UE expects the time offset between the reception of the detected PDCCH in the search space set and the reception of the corresponding PDSCH is greater than or equal to the threshold timeDurationForQCL. For both the cases when higher layer parameter tci-PresentInDCI is set to ‘enabled’ and the higher layer parameter tci-PresentInDCI is not configured in RRC connected mode, if the offset between the reception of the DL DCI and the reception of the corresponding PDSCH is less than the threshold Threshold-Sched-Offset, the UE makes the following assumptions. (i) DM-RS ports of a PDSCH of a serving cell are quasi co-located with the RS(s) in a TCI state with respect to QCL parameter(s). (ii) In this case, the QCL parameter(s) are used for PDCCH QCL indication of the CORESET associated with a monitored search space with the lowest CORESET-ID in the latest slot in which one or more CORESETs within an active BWP of the serving cell are monitored by the UE. In this case, if the ‘QCL-TypeD’ of a PDSCH DM-RS is different from ‘QCL-TypeD’ of a PDCCH DM-RS with which overlapping occurs in at least one symbol, the UE is expected to prioritize the reception of the ePDCCH associated with the corresponding CORESET. This operation may also be applied to an intra-band CA case (when the PDSCH and the CORESET are in different CCs). If none of configured TCI states contains ‘QCL-TypeD’, the UE obtains the other QCL assumptions from the indicated TCI states for a scheduled PDSCH irrespective of the time offset between the reception of the DL DCI and the reception of the corresponding PDSCH. 
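The time-offset behaviour described in this subsection boils down to a simple decision: follow the indicated TCI state when the DCI-to-PDSCH offset is at least the UE-capability threshold, and otherwise fall back to the QCL of the lowest-ID CORESET monitored in the latest slot. The sketch below is a hedged illustration; the argument names are placeholders, not spec-defined parameters.

```python
# Illustrative decision sketch (names are placeholders, not spec APIs) for the PDSCH QCL
# behaviour described above: if the offset between the scheduling DCI and the PDSCH is at
# least the threshold reported as UE capability, follow the indicated TCI state; otherwise
# fall back to the QCL of the CORESET with the lowest ID monitored in the latest slot.

def pdsch_qcl_source(offset_symbols: int, threshold_symbols: int,
                     indicated_tci: str, lowest_coreset_qcl: str) -> str:
    if offset_symbols >= threshold_symbols:
        return indicated_tci
    return lowest_coreset_qcl

print(pdsch_qcl_source(28, 14, "TCI-state #3", "QCL of CORESET #0"))  # indicated TCI
print(pdsch_qcl_source(7, 14, "TCI-state #3", "QCL of CORESET #0"))   # CORESET fallback
```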
For a periodic CSI-RS resource in an NZP-CSI-RS-ResourceSet configured with a higher layer parameter trs-Info, the UE should assume that that a TCI state indicates one of the following QCL type(s):‘QCL-TypeC’ with an SS/PBCH block and, when (QCL-TypeD) is applicable, ‘QCL-TypeD’ with the same SS/PBCH block, or‘QCL-TypeC’ with an SS/PBCH block and, when (QCL-TypeD) is applicable, ‘QCL-TypeD’ with a periodic CSI-RS resource in a higher layer parameter NZPCSI-RS-ResourceSet configured with higher layer parameter repetition,For a CSI-RS resource in the higher layer parameter NZP-CSI-RS-ResourceSet configured with the higher layer parameter trs-Info and without the higher layer parameter repetition, the UE should assume that a TCI state indicates one of the following QCL type(s):‘QCL-TypeA’ with a CSI-RS resource in the higher layer parameter NZP-CSI-RS-ResourceSet configured with higher layer parameter trs-Info and, when (QCL-TypeD) is applicable, ‘QCL-TypeD’ with the same CSI-RS resource, or‘QCL-TypeA’ with a CSI-RS resource in the higher layer parameter NZP-CSI-RS-ResourceSet configured with higher layer parameter trs-Info and, when (QCL-TypeD) is applicable, ‘QCL-TypeD’ with an SS/PBCH, or‘QCL-TypeA’ with a CSI-RS resource in the higher layer parameter NZP-CSI-RS-ResourceSet configured with the higher layer parameter trs-Info and, when (QCL-TypeD is) applicable, ‘QCL-TypeD’ with a periodic CSI-RS resource in the higher layer parameter NZP-CSI-RS-ResourceSet configured with the higher layer parameter repetition, or‘QCL-TypeB’ with a CSI-RS resource in the higher layer parameter NZP-CSI-RS-ResourceSet configured with the higher layer parameter trs-Info when ‘QCL-TypeD’ is not applicable. For a CSI-RS resource in the higher layer parameter NZP-CSI-RS-ResourceSet configured with the higher layer parameter repetition, the UE should assume that a TCI state indicates one of the following QCL type(s):‘QCL-TypeA’ with a CSI-RS resource in the higher layer parameter NZP-CSI-RS-ResourceSet configured with the higher layer parameter trs-Info and, when (‘QCL-TypeD) is applicable, ‘QCL-TypeD’ with the same CSI-RS resource, or‘QCL-TypeA’ with a CSI-RS resource in the higher layer parameter NZP-CSI-RS-ResourceSet configured with the higher layer parameter trs-Info and, when (‘QCL-TypeD’ is) applicable, ‘QCL-TypeD’ with a CSI-RS resource in the higher layer parameter NZP-CSI-RS-ResourceSet configured with higher layer parameter repetition, or‘QCL-TypeC’ with an SS/PBCH block and, when (QCL-TypeD) is applicable, ‘QCL-TypeD’ with the same SS/PBCH block. For the DM-RS of PDCCH, the UE should assume that a TCI state indicates one of the following QCL type(s):‘QCL-TypeA’ with a CSI-RS resource in the higher layer parameter NZP-CSI-RS-ResourceSet configured with the higher layer parameter trs-Info and, when (QCL-TypeD) is applicable, ‘QCL-TypeD’ with the same CSI-RS resource, or‘QCL-TypeA’ with a CSI-RS resource in the higher layer parameter NZP-CSI-RS-ResourceSet configured with higher layer parameter trs-Info and, when (QCL-TypeD) is applicable, ‘QCL-TypeD’ with a CSI-RS resource in the higher layer parameter NZP-CSI-RS-ResourceSet configured with the higher layer parameter repetition, or‘QCL-TypeA’ with a CSI-RS resource in the higher layer parameter NZP-CSI-RS-ResourceSet configured without higher layer parameter trs-Info and without the higher layer parameter repetition and, when (QCL-TypeD) is applicable, ‘QCL-TypeD’ with the same CSI-RS resource. 
For the DM-RS of the PDSCH, the UE should assume that a TCI state indicates one of the following QCL type(s):‘QCL-TypeA’ with a CSI-RS resource in the higher layer parameter NZP-CSI-RS-ResourceSet configured with the higher layer parameter trs-Info and, when (QCL-TypeD) is applicable, ‘QCL-TypeD’ with the same CSI-RS resource, or‘QCL-TypeA’ with a CSI-RS resource in the higher layer parameter NZP-CSI-RS-ResourceSet configured with the higher layer parameter trs-Info and, when (QCL-TypeD) is applicable, ‘QCL-TypeD’ with a CSI-RS resource in the higher layer parameter NZP-CSI-RS-ResourceSet configured with the higher layer parameter repetition, orQCL-TypeA’ with a CSI-RS resource in the higher layer parameter NZP-CSI-RS-ResourceSet configured without the higher layer parameter trs-Info and without the higher layer parameter repetition and, when (QCL-TypeD) is applicable, ‘QCL-TypeD’ with the same CSI-RS resource. Additionally, the UE and BS according to the present disclosure may operate as follows. TABLE 4QCL linkage for above 6 GHz after RRCsignallingSSB → TRS w.r.t average delay, Doppler shift, spatial RX parametersQCL type: C + DTRS → CSI-RS for BM w.r.t. average delay, Doppler shift, delay spread,QCL type: A + DDoppler spread estimationTRS → CSI-RS for CSI w.r.t. average delay, Doppler shift, delay spread,QCL type: ADoppler spread estimationTRS → DMRS for PDCCH w.r.t. average delay, Doppler shift, delayQCL type: A + Dspread, Doppler spread estimationTRS → DMRS for PDSCH w.r.t. average delay, Doppler shift, delayQCL type: A + Dspread, Doppler spread estimationSSB → CSI-RS for BM w.r.t average delay, Doppler shift, spatial RXQCL type: C + DparametersSSB → CSI-RS for CSI w.r.t, spatial RX parametersQCL type: DSSB → DMRS for PDCCH (before TRS is configured) w.r.t. average delay,QCL type: A + DDoppler shift, delay spread, Doppler spread, spatial RX parametersSSB → DMRS for PDSCH (before TRS is configured) w.r.t. average delay,QCL type: A + DDoppler shift, delay spread, Doppler spread, spatial RX parametersCSI-RS for BM → DMRS for PDCCH w.r.t. spatial RX parametersQCL type: DCSI-RS for BM → DMRS for PDSCH w.r.t., spatial RX parametersQCL type: DCSI-RS for CSI → DMRS for PDSCH w.r.t. average delay, Doppler shift,QCL type: A + Ddelay spread, Doppler spread, spatial RX parameters; Note: QCLparameters may not be derived directly from CSI-RS for CSICSI-RS for BM → CSI-RS for TRS/BM/CSI w.r.t. spatial RX parametersQCL type: D Specifically, the QCL linkage and signaling shown in Table 4 may be applied between the UE and BS according to the present disclosure after the UE establishes an RRC connection. In the present disclosure, the above operations may be applied not only to bands above 6 GHz but also to bands below 6 GHz. In the following description, if one row in the tables below has the same RS type, the same RS ID may be assumed for the row. In the present disclosure, when a CSI-RS resource is included in the higher layer parameter NZP-CSI-RS-ResourceSet in which the higher layer parameter trs-Info is configured, the UE expects the following two possible configurations for a higher layer parameter TCI-state. TABLE 5Valid TCI stateDL RS 2qcl-Type2ConfigurationDL RS 1qd-Type1(if configured)(if configured)1*SS/PBCH BlockQCL-TypeCSS/PBCH BlockQCL-TypeD2*SS/PBCH BlockQCL-TypeCCSI-RS (BM)QCL-TypeDIn Table 5, *represents a case in which QCL type-D is applicable. When QCL type-D is applicable, DL RS 2 and QCL type-2 need to be configured for the UE. 
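Table 5 admits only two valid TCI-state configurations when QCL Type-D is applicable. A minimal validity check, restating those two rows, is sketched below for illustration.

```python
# Hedged sketch: a simple validity check for the TCI-state configurations that Table 5
# above allows for a periodic CSI-RS resource in an NZP-CSI-RS-ResourceSet configured
# with trs-Info (the two rows marked as valid when QCL Type-D is applicable).

VALID_TRS_TCI = {
    (("SS/PBCH", "QCL-TypeC"), ("SS/PBCH", "QCL-TypeD")),
    (("SS/PBCH", "QCL-TypeC"), ("CSI-RS(BM)", "QCL-TypeD")),
}

def is_valid_trs_tci(dl_rs1: str, qcl_type1: str, dl_rs2: str, qcl_type2: str) -> bool:
    return ((dl_rs1, qcl_type1), (dl_rs2, qcl_type2)) in VALID_TRS_TCI

print(is_valid_trs_tci("SS/PBCH", "QCL-TypeC", "CSI-RS(BM)", "QCL-TypeD"))  # True
print(is_valid_trs_tci("SS/PBCH", "QCL-TypeC", "TRS", "QCL-TypeD"))         # False
```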
In the present disclosure, when a CSI-RS resource is included in the higher layer parameter NZP-CSI-RS-ResourceSet in which the higher layer parameter trs-Info and the higher layer parameter repetition are not configured, the UE expects the following three possible configurations for the higher layer parameter TCI-state. TABLE 6Valid TCI stateDL RS 2qcl-Type2ConfigurationDL RS 1qcl-Type1(if configured)(if configured)1**TRSQCL-TypeATRSQCL-TypeD2**TRSQCL-TypeASS/PBCH BlockQCL-TypeD3**TRSQCL-TypeACSI-RS (BM)QCL-TypeD4*TRSQCL-TypeBIn Table 6, *represents a case in which QCL type-D is not applicable.In Table 6, **represents a case in which QCL type-D is applicable. When QCL type-D is applicable, DL RS 2 and QCL type-2 need to be configured for the UE. In the present disclosure, when a CSI-RS resource is included in the higher layer parameter NZP-CSI-RS-ResourceSet in which the higher layer parameter repetition is configured, the UE expects the following three possible configurations for the higher layer parameter TCI-state. TABLE 7Valid TCI stateDL RS 2qcl-Type2ConfigurationDL RS 1qcl-Type1(if configured)(if configured)1TRSQCL-TypeATRSQCL-TypeD2TRSQCL-TypeACSI-RS (BM)QCL-TypeD3SS/PBCH BlockQCL-TypeCSS/PBCH BlockQCL-TypeD In Tables 8 and 9 below, if QCL type-D is applicable, DL RS 2 and QLC type-2 need to be configured for the UE except a default case (e.g., the fourth row in Tables 8 and 9). When a TRS for DL is used for QCL type-D, the TRS is a source RS for QCL type-D and thus needs to have an SS/PBCH block or CSI-RS. For a PDCCH DMRS, the UE expects the following three possible configurations for the higher layer parameter TCI-state. The fourth configuration is a default configuration and valid before the TRS is configured. TABLE 8Valid TCI stateDL RS 2qcl-Type2ConfigurationDL RS 1qcl-Type1(if configured)(if configured)1TRSQCL-TypeATRSQCL-TypeD2TRSQCL-TypeACSI-RS (BM)QCL-TypeD3**CSI-RS (CSI)QCL-TypeACSI-RS (CSI)QCL-TypeD4*SS/PBCHQCL-TypeASS/PBCHQCL-TypeDBlock*Block*In Table 8, * represents that the TRS is not configured yet. In this case, the configuration may be a valid QCL assumption rather than a TCI state.In Table 8, ** represents that QCL parameters may not be directly derived fro, CSI-RS(s) (CSI). For a PDSCH DMRS, the UE expects the following three possible configurations for the higher layer parameter TCI-state. The fourth configuration is a default configuration and valid before the TRS is configured. TABLE 9Valid TCI stateDL RS 2qd-Type2ConfigurationDL RS 1qcl-Type1(if configured)(if configured)1TRSQCL-TypeATRSQCL-TypeD2TRSQCL-TypeACSI-RS (BM)QCL-TypeD3**CSI-RS (CSI)QCL-TypeACSI-RS (CSI)QCL-TypeD4*SS/PBCHQCL-TypeASS/PBCHQCL-TypeDBlock*Block*In Table 9, * represents that the TRS is not configured yet. In this case, the configuration may correspond to a valid QCL assumption rather than a TCI state.In Table 9, ** represents that QCL parameters may not be directly derived from CSI-RS(s) (CSI). Hereinafter, a description will be given of how higher layer parameters used for the above operations are configured. A higher layer parameter CSI-ResourceConfig applicable to the present disclosure may be configured as follows. The parameter may include at least one higher layer parameter NZP-CSI-RS-ResourceSet, at least one higher layer parameter CSI-IM-ResourceSet and/or at least one higher layer parameter CSI-SSB-ResourceSet. 
TABLE 10CSI-ResourceConfig information element-- ASN1START-- TAG-CSI-RESOURCECONFIG-STARTCSI-ResourceConfig ::=SEQUENCE {csi-ResourceConfigIdCSI-ResourceConfigId,csi-RS-ResourceSetIdCHOICE {nzp-CSI-RS-SSBSEQUENCE {nzp-CSI-RS-ResourceSetListSEQUENCE (SIZE (1..maxNrofNZP-CSI-RSResourceSetsPerConfig)) OF NZP-CSI-RS-ResourceSetIdOPTIONAL, -- Need Rcsi-SSB-ResourceSetListSEQUENCE (SIZE(1..maxNrofCSI-SSBResourceSetsPerConfig)) OF CSI-SSB-ResourceSetIdOPTIONAL -- Need R},csi-IM-ResourceSetListSEQUENCE (SIZE (1..maxNrofCSI-IMResourceSetsPerConfig)) OF CSI-IM-ResourceSetId},bwp-IdBWP-Id,resourceTypeENUMERATED { aperiodic, semiPersistent, periodic },...}-- TAG-CSI-RESOURCECONFIG-STOP-- ASN1STOP Each field included in the parameter may be defined as follows. TABLE 11CSI-ResourceConfig field descriptionsbwp-IdThe DL BWP which the CSI-RS associated with this CSI-ResourceConfigare located in (see TS 38.214 [19], clause 5.2.1.2.csi-IM-ResourceSetListList of references to CSI-IM resources used for CSI measurement andreporting in a CSI-RS resource set. Contains up to maxNrofCSI-IM-ResourceSetsPerConfig resource sets if resourceType is ‘aperiodic’and 1 otherwise (see TS 38.214 [19], clause 5.2.1.2).csi-ResourceConfigIdUsed in CSI-ReportConfig to refer to an instance of CSI-ResourceConfig.csi-SSB-ResourceSetListList of references to SSB resources used for CSI measurment andreporting in a CSI-RS resource set (see TS 38.214 [19], clause 5.2.1.2).nzp-CSI-RS-ResourceSetListList of references to NZP CSI-RS resource used for beam measurementand reporting in a CSI-RS resource set. Contains up to maxNrofNZP-CSI-RS-ResourceSetsPerConfig resource sets if resourceType is‘aperiodic’ and 1 otherwise (se TS 38.214 [19], clause 5.2.1.2).resourceTypeTime domain behavior of resource configuration (see TS 38.214 [19],clause 5.2.1.2). It does not apply to resources provided in the csi-SSB-ResourceSetList. The higher layer parameter NZP-CSI-RS-ResourceSet applicable to the present disclosure may be configured as follows. The parameter may include at least one higher layer parameter NZP-CSI-RS-Resource. TABLE 12NZP-CSI-RS-ResourceSet information element-- ASN1START-- TAG-NZP-CSI-RS-RESOURCESET-STARTNZP-CSI-RS-ResourceSet ::=SEQUENCE {nzp-CSI-ResourceSetIdNZP-CSI-RS-ResourceSetId,nzp-CSI-RS-ResourcesSEQUENCE (SIZE (1..maxNrofNZP-CSI-RSResourcesPerSet)) OF NZP-CSI-RS-ResourceId,repetitionENUMERATED ( on, off )OPTIONAL, -- Need SaperiodicTriggeringOffsetINTEGER(0..6)OPTIONAL, -- Need Strs-InfoENUMERATED (true)OPTIONAL, -- Need R...,}-- TAG-NZP-CSI-RS-RESOURCESET-STOP-- ASN1STOP Each field included in the parameter may be defined as follows. TABLE 13NZP-CSI-RS-ResourceSet field descriptionsaperiodicTriggeringOffset, aperiodicTriggeringOffset-r16Offset X between the slot containing the DCI that triggers aset of aperiodicNZP CSI-RS resources and the slot in which the CSI-RSresource set is transmitted. For aperiodicTriggeringOffset.the value 0corresponds to 0 slots, value 1 corresponds to 1 slot, value 2 correspondsto 2 slots, value 3 corresponds to 3 slots, value 4 corresponds to 4slots, value 5 corresponds to 16 slots, value 6 corresponds to 24 slots.For aperiodic-TriggeringOffset-r16, the value indicates thenumber of slots. The network configures only one of the fields.When neither field is included, the UE applies the value 0.nzp-CSI-RS-ResourcesNZP-CSI-RS-Resources associated with this NZP-CSI-RS resource set(see TS 38.214 [19], clause 5.2). 
For CSI, there are at most 8 NZP CSIRS resources per resource setrepetitionIndicates whether repetition is on/off. If the field is set to off or if the feildis absent, the UE may not assume that the NZP-CSI-RS resources withinthe resource set are transmitted with the same downlink spatial domaintransmission filter (see TS 38.214 [19], clauses 5.2.2.3.1 and 5.1.6.1.2).It can only be configured for CSI-RS resource sets which are associatedwith CSI-ReportConfig with report of L1 RSRP, L1 SINR or “noreport”.trs-InfoIndicates that the antenna port for all NZP-CSI-RS resources in theCSI-RS resource set is same. If the field is absent or released the UEapplies the value false (see TS 38.214 [19], clause 5.2.2.3.1). The higher layer parameter NZP-CSI-RS-Resource applicable to the present disclosure may be configured as follows. TABLE 14NZP-CSI-RS-Resource information element-- ASN1START-- TAG-NZP-CSI-RS-RESOURCE-STARTNZP-CSI-RS-Resource ::=SEQUENCE {nzp-CSI-RS-ResourceIdNZP-CSI-RS-ResourceId,resourceMappingCSI-RS-ResourceMapping,powerControlOffsetINTERGER (−8..15),powerControlOffsetSSENUMERATED(db−3, db0, db3, db6)OPTIONAL, -- Need RscramblingIDScramblingIdperiodicityAndOffsetCSI-ResourcePeriodicityAndOffsetOPTIONAL, -- Cond PeriodicOrSemiPersistentqcl-InfoPeriodicityCSI-RSTCI-StateIdOPTIONAL, -- CondPeriodic..}-- TAG-NZP-CSI-RS-RESOURCE-STOP-- ASN1STOP Each field included in the parameter may be defined as follows. TABLE 15NZP-CSI-RS-Resource field descriptionsperiodicityAndOffsetPeriodicity and slot offset sl1 corresponds to a periodicity of 1 slot, sl2 to aperiodicity of two slots, and so on. The corresponding offset is also given innumber of slots (see TS 38.214 [19], clause 5.2.2.3.1). Network always configuresthe UE with a value for this field for periodic and semi-persistent NZP-CSI-RS-Resource (as indicated in CSI-ResourceConfig).powerControlOffsetPower offset of PDSCH RE to NZP CSI-RS RE. Value in dB(see TS 38.214 [19], clauses 5.2.2.3.1 and 4.1).powerControlOffsetSSPower offset of NZP CSI-RS RE to SSS RE. Value in dB (see TS 38.214 [19],clause 5.2.2.3.1).qcl-InfoPeriodicCSI-RSFor a target periodic CSI-RS, contains a reference to one TCI-State in TCI-Statesfor providing the QCL source and QCL type. For periodic CSI-RS, the sourcecan be SSB or another periodic-CSI-RS. Refers to the TCI-State which has this valuefor tci-StateId and is defined in tci-StatesToAddModList in the PDSCH-Configin the BWP-Downlink corresponding to the serving cell and to the DL BWPto which the resource belongs to (see TS 38.214 [19], clause 5.2.2.3.1).resourceMappingOFDM symbol location(s) in a slot and subcarrier occupancy in a PRB ofthe CSI-RS resource.scramblingIDScrambling ID (see TS 38.214 [19], clause 5.2.2.3.1). In the parameter, conditional presence may be defined as follows. TABLE 16Conditional PresenceExplanationPeriodicThe field is optionally present, Need M, for periodic NZP-CSI-RS-Resources (as indicated in CSI-ResourceConfig).The field is absent otherwise.PeriodicOrSemiPersistentThe field is optionally present, Need M, for periodic andsemi-persistent NZP-CSI-RS-Resources (as indicatedin CSI-ResourceConfig). The field is absent otherwise. The higher layer parameter CSI-IM-ResourceSet applicable to the present disclosure may be configured as follows. The parameter may include at least one higher layer parameter CSI-IM-resources IE. 
TABLE 17CSI-IM-ResourceSet information element-- ASN1START-- TAG-CSI-IM-RESOURCESET-STARTCSI-IM-ResourceSet ::=SEQUENCE {csi-IM-ResourceSetIdCSI-IM-ResourceSetId,csi-IM-ResourcesSEQUENCE (SIZE(1..maxNrofCSI-IM-ResourcesPerSet)) OF CSI-IM-ResourceId,...}-- TAG-CSI-IM-RESOURCESET-STOP-- ASN1STOP Each field included in the parameter may be defined as follows. TABLE 18CSI-IM-ResourceSet field descriptionscsi-IM-ResourcesCSI-IM-Resources associated with this CSI-IM-ResourceSet(see TS 38.214 [19], clause 5.2). A higher layer parameter CSI-IM-Resource applicable to the present disclosure may be configured as follows. TABLE 19CSI-IM-Resource information element-- ASN1START-- TAG-CSI-IM-RESOURCE-STARTCSI-IM-Resource ::=SEQUENCE {csi-IM-ResourceIdCSI-IM-ResourceId,csi-IM-resourceElementPatternCHOICE {pattern0SEQUENCE {subcarrierLocation-p0ENUMERATED ( s0, s2, s4, s6, s8, s10 ),symbolLocation-p0INTEGER (0..12)},pattern1SEQUENCE{subcarrierLocation-p1ENUMERATED ( s0, s4, s6 ),symbolLocation-p1INTEGER (0..13)}}OPTIONAL, --Need MfreqBandCSI-FrequencyOccupationOPTIONAL, -- Need MperiodicityAndOffsetCSI-ResourcePeriodicityAndOffsetOPTIONAL, -- CondPeriodicOrSemiPersistent...}-- TAG-CSI-IM-RESOURCE-STOP-- ASN1STOP Each field included in the parameter may be defined as follows. TABLE 20CSI-IM-Resource field descriptionscsi-IM-ResourceElementPatternThe resource element pattern (Pattern0 (2.2) or Pattern1 (4.1)) withcorresponding parameters (see TS 38.214 [19], clause 5.2.2.4)freqBandFrequency-occupancy of CSI-IM (see TS 38.214 [19], clause 5.2.2.4)periodicityAndOffsetPeriodicity and slot offset for periodic/semi-persistent CSI-IM. Networkalwasy configures the UE with a value for this field for periodic andsemi-persistent CSI-IM-Resources (as indicated in CSI-ResourceConfig).A change of configuration between periodic or semi-persistentand aperiodic for a CSI-IM-Resource is not supportedwithout a release and add.subcarrierLocation-p0OFDM subcarrier occupancy of the CSI-IM resource for Pattern0 (see TS38.214 [19], clause 5.2.2.4)subcarrierLocation-p1OFDM subcarrier occupancy of the CSI-IM resource for pattern1 (see TS38.214 [19], clause 5.2.2.4)symbolLocation-p0OFDM symbol location of the CSI-IM resource for Pattern0(see TS 38.214 [19], clause 5.2.2.4)symbolLocation-p1OFDM symbol location of the CSI-IM resource for Pattern1(see TS 38.214 [19], clause 5.2.2.4) In the parameter, conditional presence may be defined as follows. TABLE 21Conditional PresenceExplanationPeriodicOrSemiPersistentThe field is optionally present, Need M, for periodicand semi-persistent CSI-IM-Resources (as indicatedin CSI-ResourceConfig). The field is absent otherwise. A higher layer parameter CSI-RS-ResourceConfigMobility applicable to the present disclosure may be configured as follows. 
TABLE 22CSI-RS-ResourceConfigMobility element-- ASN1START-- TAG-CSI-RS-RESOURCECONFIGMOBILITY-STARTCSI-RS-ResourceConfigMobility :=SEQUENCE {subcarrierSpacingSubcarrierSpacing,csi-RS-CellList-MobilitySEQUENCE (SIZE (1..maxNro(CSI-RS-CellsRPM)) OF CSI-RS-CellMobility,...,}CSI-RS-CellMobility ::=SEQUENCE {cellIdPhysCellId,csi-rs-MeasurementBWSEQUENTCE {nrofPRBsENUMERATED ( size24, size48, size96, size192, size264 ),startPRBINTEGER(0..2169)},densityENUMERATED {d1,d3}OPTIONAL, -- Need Rcsi-rs-ResourceList-MobilitySEQUENCE (SIZE (1..maxNrofCSI-RS-ResourceRRM)) OF CSI-RS-Resource-Mobility}CSI-RS-Resource-Mobility ::=SEQUENCE {csi-RS-IndexCSI-RS-IndexslotConfigCHOICE {ms4INTEGER (0..31),ms5INTEGER (0..39),ms10INTEGER (0..79),ms20INTEGER (0..159),ms40INTEGER (0..319)},associatedSSBSEQUENCE {ssb-IndexSSB-Index,isQuasiColocatedBOOLEAN}OPTIONAL, -- Need RfrequencyDomainAllocationCHOICE {row1BIT STRING (SIZE (4)),row2BIT STRING (SIZE (12))}, TABLE 23firstOFDMSymbolInTimeDomainINTEGER (0..13),sequenceGenerationConfigINTEGER (0..1023),...}CSI-RS-Index ::=INTEGER (0..maxNrofCSI-RS-ResourceRRM-1)-- TAG_CSI-RS-RESOURCECINFIGMOBILITY-STOP-- ASN1STOP Each field included in the parameter may be defined as follows. TABLE 24CSI-RS-CellMobility field descriptionscsi-rs-ResourceList-MobilityList of CSI-RS resources for mobility. The maximum number of CSI-RSresources that can be configured per measObjectNR depends on theconfiguration of associatedSSB and the support ofincreasedNumberofCSIRSPerMO capability (see TS 38.214 [19],clause 5.1.6.1.3).densityFrequency domain density for the 1-port CSI-RS for L3 mobility.See TS 38.211 [16], clause 7.4.1.nrofPRBsAllowed size of measurement BW in PRBs.See TS 38.211 [16], clause 7.4.1.startPRBStarting PRB index of the measurement bandwidth. See TS 38.211 [16],clause 7.4.1. TABLE 25CSI-RS-ResourceConfigMobility field descriptionscsi-RS-CellList-MobilityList of cells for CSI-RS based RRM measurements.refServCellIndexIndicates the serving cell providing the timing reference for CSI-RS resources withoutassociatedSSB. The field may be present only if there is at least one CSI-RS resourceconfigured without associatedSSB. If this field is absent, the UE shall use the timing of thePCell for measurements on the CSI-RS resources without associatedSSB. The CSI-RSresources and the serving cell indicated by refServCellIndex for timing reference should belocated in the same band.subcarrierSpacingSubcarrier spacing of CSI-RS. Only the values 15, 30 kHz or 60 kHz (FR1), and 60 or120 kHz (FR2) are applicable. TABLE 26CSI-RS-Resource-Mobility field descriptionsassociatedSSBIf this field is present, the UE may base the timing of the CSI-RS resourceindicated in CSI-RS-Resource-Mobility on the timing of the cell indicatedby the cellId in the CSI-RS-CellMobility. In this case, the UE is not requiredto monitor that CSI-RS resource if the UE cannot detect the SS/PBCH blockindicated by this associatedSSB and CellId. If this field is absent, the UEshall base the timing of the CSI-RS resource indicated in CSI-RS-Resource-Mobilityon the timing of the serving cell indicated by refServCellIndex. In this case, the UE isrequired to measure the CSI-RS resource even if SS/PBCH block(s) with cellId in theCSI-RS-CellMobility are not detected. 
CSI-RS resources with and without associatedSSBmay be configured in accordance with the rules in TS 38.214 [19], clause 5.1.6.1.3.csi-RS-IndexCSI-RS resource index associated to the CSI-RS resource to be measured (and usedfor reporting).firstOFDMSymbolInTimeDomainTime domain allocation within a physical resource block. The field indicates the firstOFDM symbol in the PRB used for CSI-RS, see TS 38.211 [16], clause 7.4.1.5.3.Value 2 is supported only when dmrs-TypeA-Position equals pos3.frequencyDomainAllocationFrequency domain allocation within a physcial resource block in accordance withTS 38.211 [16], clause 7.4.1.5.3 including table 7.4.1.5.2-1. The number of bits thatmay be set to one depend on the chosen row in that table.isQuasiColocatedIndicates that the CSI-RS resource is quasi co-located with the associated SS/PBCHblock, see TS 38.214 [19], clause 5.1.6.1.3.sequenceGenerationConfigScrambling ID for CSI-RS (see TS 38.211 [16], clause 7.4.1.5.2).slotConfigIndicates the CSI-RS periodicity (in miliseconds) and for each periodicity the offset(in number of slots). When subcarrierSpacingCSI-RS is set to kHz15, the maximumoffset values for periodicities ms4/ms5/ms10/ms20/ms40 are 3/4/9/19/39 slots. WhensubcarrierSpacingCSI-RS is set to kHz30, the maximum offset values for periodicitiesms4/ms5/ms10/ms10/ms20/ms40 are 7/9/19/39/79 slots. When subcarrierSpacingCSI-RSis set to kHz60, the maximum offset values for periodicities ms4/ms5/ms10/ms20/ms40are 15/19/39/79/159 slots. When subcarrierSpacingCSI-RS is set kHz120, the maximumoffset values for periodicities ms4/ms5/ms10/ms20/ms40 are 31/39/79/159/319 slots. A higher layer parameter CSI-ReportConfig applicable to the present disclosure may be configured as follows. TABLE 27CSAReportConfig information element-- ASN1START-- TAG-CSI-REPORTCONFIG-STARTCSI-ReportConfig ::=SEQUENCE {reportConfigIdCSI-ReportConfigId,carrierServCellIndexOPTIONAL, -- Need SresourcesForChannelMeasurementCSI-ResourceConfigId,csi-IM-ResourcesForInterferenceCSI-ResourceConfigIdOPTIONAL, -- Need Rnzp-CSI-RS-ResourcesForInterferenceCSI-ResourceConfigIdOPTIONAL, -- Need RreportConfigTypeCHOICE {periodicSEQUENCE {reportSlotConfigCSI-ReportPeriodicityAndOffset,pucch-CSI-ResourceListSEQUENCE (SIZE(1..maxNrofBWPs)) OFPUCCH-CSI-Resource},semiPersistentOnPUCCHSEQUENCE {reportSlotConfigCSI-ReportPeriodicityAndOffset,pucch-CSI-ResourceListSEQUENCE (SIZE(1..maxNrofBWPs)) OFPUCCH-CSI-Resource},semiPersistentOnPUSCHSEQUENCE {reportSlotConfigENUMERATED {sl5, sl10,sl20, sl40, sl80, sl160,sl320},reportSlotOffsetListSEQUENCE (SIZE (1..maxNrofUL-Allocations)) OFINTEGER(0..32),p0alphaP0-PUSCH-AlphaSetId},aperiodicSEQUENCE {reportSlotOffsetListSEQUENCE (SIZE(1..maxNrofUL-Allocations)) OFINTEGER(0..32)}}, TABLE 28reportQuantityCHOICE {noneNULL,cri-RI-PMI-CQINULL,cri-RI-i1NULL,cri-RI-i1-CQISEQUENCE {pdsch-BundleSizeForCSIENUMERATED {n2, n4}OPTIONAL -- Need S},cri-RI-CQINULL,cri-RSRPNULL,ssb-Index-RSRPNULL,cri-RI-LI-PMI-CQINULL}, TABLE 29reportFreqConfigurationSEQUENCE {cqi-FormatIndicatorENUMERATED { widebandCQI, subbandCQI }OPTIONAL, -- Need Rpmi-FormatIndicatorENUMERATED { widebandPMI, subbandPMI }OPTIONAL, -- Need Rcsi-ReportingBandCHOICE {subbands3BIT STRING(SIZE(3)),subbands4BIT STRING(SIZE(4)),subbands5BIT STRING(SIZE(5)),subbands6BIT STRING(SIZE(6)),subbands7BIT STRING(SIZE(7)),subbands8BIT STRING(SIZE(8)),subbands9BIT STRING(SIZE(9)),subbands10BIT STRING(SIZE(10)),subbands11BIT STRING(SIZE(11)),subbands12BIT STRING(SIZE(12)),subbands13BIT STRING(SIZE(13)),subbands14BIT 
STRING(SIZE(14)),subbands15BIT STRING(SIZE(15)),subbands16BIT STRING(SIZE(16)),subbands17BIT STRING(SIZE(17)),subbands18BIT STRING(SIZE(18)),...,subbands19-v1530BIT STRING(SIZE(19))}  OPTIONAL -- Need S}OPTIONAL, -- Need RtimeRestrictionForChannelMeasurementsENUMERATED {configured, notConfigured},timeRestrictionForInterferenceMeasurementsENUMERATED {configured, notConfigured}codebookConfigCodebookConfigOPTIONAL, -- Need RdummyENUMERATED {n1, n2}OPTIONAL, -- Need RgroupBasedBeamReportingCHOICE {enabledNULL,disabledSEQUENCE {nrofReportedRSENUMERATED {n1, n2, n3, n4}OPTIONAL -- Need S}},cqi-TableENUMERATED {table1, table2, table3, spare1}OPTIONAL, -- Need RsubbandSizeENUMERATED {value1, value2},non-PMI-PortIndicationSEQUENCE (SIZE (1 maxNrofNZP-CSI-RS-ResourcesPerConfig)) OF PortIndexFor8Ranks OPTIONAL, -- Need R...,[[semiPersistentOnPUSCH-v1530SEQUENCE {reportSlotConfig-v1530ENUMERATED {sl4, sl8, sl16}}OPTIONAL, -- Need R In Table 28, reportQuantity denotes CSI-related quantity to be reported by the UE. Each field included in the parameter may be defined as shown in the following tables. TABLE 30CSI-ReportConfig field descriptionscarrierIndicates in which serving cell the CSI-ResourceConfig indicated beloware to be found. If the field is absent, the resources are on the same servingcell as this report configuration.codebookConfigCodebook configuration for Type-1 or Type-2 including codebook subset restriction.Network does not configure codebookConfig and codebookConfig-r16simultaneously to a UEcqi-FormatIndicatorIndicates whether the UE shall report a single (wideband) or multiple (subband)CQI (see TS 38.214 [19], clause 5.2.1.4).cqi-TableWhich CQI table to use for CQI calculation (see TS 38.214 [19], clause 5.2.2.1).csi-IM-ResourcesForInterferenceCSI-IM resources for interference measurement. csi-ResourceConfigId of aCSI-ResourceConfig included in the configuration of the serving cell indicated withthe field “carrier” above. The CSI-ResourceConfig indicated here contains onlyCSI-IM resources. The bwp-Id in that CSI-ResourceConfig is the same value as thebwp-Id in the CSI-ResourceConfig indicated by resourcesForChannelMeasurement.csi-ReportingBandIndicates a contiguous or non-contigous subset of subbands in the bandwidth partwhich CSI shall be reported for. Each bit in the bit-string represents one subband.The right-most bit in the bit string represents the lowest subband in the BWP.The choice determines the number of subbands (subbands3 for 3 subbands, subbands4for 4 subbands, and so on) (see TS 38.214 [19], clause 5.2.1.4). This field is absentif there are less than 24 PRBs (no sub band) and present otherwise (see TS 38.214[19], clause 5.2.1.4).dummyThis field is not used in the specification. If received it shall be ignored by the UE.groupBasedBeamReportingTurning on/off group beam based reporting (see TS 38.214 [19], clause 5.2.1.4).non-PMI-PortIndicationPort indication for RI/CQI calculation. For each CSI-RS resource in the linkedResourceConfig for channel measurement, a port indication for each rank R,indicating which R ports to use. 
Applicable only for non-PMI feedback (seeTS 38.214 [19], clause 5.2.1.4.2).The first entry in non-PMI-PortIndication corresponds to the NZP-CSI-RS-Resourceindicated by the first entry in nzp-CSI-RS-Resources in the NZP-CSI-RS-ResourceSetindicated in the first entry of nzp-CSI-RS-ResourcesetList of the CSI-ResourceConfigwhose CSI-ResourceConfigId is indicated in a CSI-MeasId together with the aboveCSI-ReportConfigId, the second entry in non-PMI-PortIndication corresponds tothe NZP-CSI-RS-Resource indicated by the second entry in nzp-CSI-RS-Resourcesin the NZP-CSI-RS-ResourceSet indicated in the first entry ofnzp-CSI-RS-ResourceSetList of the same CSI-ResourceConfig, and so on until theNZP-CSI-RS-Resource indicated by the last entry in nzp-CSI-RS-Resources in thein the NZP-CSI-RS-ResourceSet indicated in the first entry of nzp-CSI-RS-ResourceSetListof the same CSI-ResourceConfig. Then the next entry corresponds to theNZP-CSI-RS-Resource indicated by the first entry in nzp-CSI-RS-ResourceSet indicated inthe second entry of nzp-CSI-RS-ResourceSetList of the same CSI-ResourceConfig and so on.nrofReportedRSThe number (N) of measured RS resources to be reported per report setting in a non-group-based report. N <= N max, where N max is either 2 or 4 depending on UE capability.(see TS 38.214 [19], clause 5.2.1.4) When the field is absent the UE applies the value 1.nzp-CSI-RS-ResourcesForInterferenceNZP CSI RS resources for interference measurement csi-ResourceConfigId of aCSI-ResourceConfig included in the configuration of the serving cell indicated with thefield “carrier” above. The CSI-ResourceConfig indicated here contains onlyNZP-CSI-RS resources. The bwp-Id in that CSI-ResourceConfig is the same value as thebwp-Id in the CSI-ResourceConfig indicated by resourcesForChannelMeasurement. TABLE 31p0alphaIndex of the p0-alpha set determining the power control for this CSI reporttransmission (see TS 38.214 [19], clause 6.2.1.2).pdsch-BundleSizeForCSIPRB bundling size to assume for CQI calculation when reportQuantity is CRt/RI/t1/CQI. If the field is absent, the UE assumes that no PRB bundling isapplied (see TS 38.214 [19], clause 5.2.1.4.2)pmi-FormatIndicatorIndicates whether the UE shall report a single (wideband) or multiple(subband) PMI. (see TS 38.214 [19], clause 5.2.1.4).pucch-CSI-ResourceListIndicates which PUCCH resource to use for reporting PUCCH.reportConfigTypeTime domain behavior of reporting configuration.reportFreqConfigurationReporting configuration in the frequency domain. (see TS 38.214 [19],clause 5.2.1.4).reportQuantityThe CSI related quantities to report. see TS 38.214 [19], clause 5.2.1. If thefield reportQuantity-r16 is present, UE shall ignore reportQuantity (withoutsuffix).reportSlotConfigPeriodicity and slot offset (see TS 38.214 [19], clause 5.2.1.4). If the fieldreportSlotConfig-v1530 is present, the UE shall ignore the value provided inreportSlotConfig (without suffix).reportSlotOffsetList, reportSlotOffsetListDCI-0-1, reportSlotOffsetListDCI-0-2Timing offset Y for semi persistent reporting using PUSCH. This fieldlists the allowed offset values. The list must have the same number of entriesas the pusch-TimeDomainAllocationList in PUSCH-Config. A particularvalue is indicated in DCI. The network indicates in the DCI field of the ULgrant, which of the configured report slot offsets the UE shall apply. The DCIvalue 0 corresponds to the first report slot offset in this list, the DCI value 1corresponds to the second report slot offset in this list, and so on. 
The firstreport is transmitted in slot n + Y, second report in n + Y + P, where P isthe configured periodicity.Timing offset Y for aperiodic reporting using PUSCH. This field lists the allowedoffset values. This list must have the same number of entries as the pusch-TimeDomainAllocationList in PUSCH-Config. A particular value is indicatedin DCI. The network indicates in the DCI field of the UL grant, which of theconfigured report slot offsets the UE shall apply. The DCI value 0 correspondsto the first report slot offset in this list, the DCI value 1 corresponds to the secondreport slot offfset in this list, and so on (see TS 38.214 [19], clause 6.1.2.1).The field reportSlotOffsetListDCI-0-1 applies to DCI format 0_1 and the fieldreportSlotOffsetListDCI-0-2 applies to DCI format 0_2 (see TS 38.214 [19],clause 6.1.2.1).resourcesForChannelMeasurementResources for channel measurement. csi-ResourceConfigId of aCSI-Resource-Config included in the configuration of the serving cell indicatedwith the field “carrier” above.The CSI-ResourceConfig indicated here contains only NZP-CSI-RS resourcesand/or SSB resources. This CSI-ReportConfig is associated with the DL BWPindicated by bwp-Id in that CSI-ResourceConfig.subbandSizeIndicates one out of two possible BWP-dependent values for the subbandsize as indicated in TS 38.214 [19], table 5.2.1.4-2. If csi-ReportingBand isabsent, the UE shall ignore this field.timeRestrictionForChannelMeasurementsTime domain measurement restriction for the channel (signal) measurements(see TS 38.214 [19], clause 5.2.1.1).timeRestrictionForInterferenceMeasurementsTime domain measurement restriction for intereference measurements(see TS 38.214 [19], clause 5.2.1.1). 1.9. Asynchronous Multiple Cells FIG.11is a diagram schematically illustrating radio frame structures of two cells (BSs, carriers, etc.) applicable to the present disclosure. InFIG.11, a region represented by #n refers to an n-th slot (or subframe). As shown inFIG.11, cell #0 and cell #1 may have different radio frame boundaries. In other words, the radio frame boundary of cell #0 may not be aligned with that of cell #1. These two cells may be regarded as asynchronous cells in terms of time. Thus, when a UE is allocated slot #2, the transmission and reception time of the UE may depend on which cell the UE is associated with. Accordingly, a CSI-RS resource timing configured for neighboring cell measurement may need to be synchronized with the timing of a cell in which a CSI-RS resource is transmitted rather than the timing of a serving cell. In the present disclosure, when it is said the timings of two cells are asynchronous, it may mean that the time difference between the two cells is at least one OFDM symbol unit (for example, the time synchronization difference between the two cells is one OFDM symbol) or at least one sample unit. 1.10. White/Blacklisted Cell In the present disclosure, a whitelisted cell may refer to a neighboring cell that the UE needs to measure. For example, the BS may inform the UE of the identifier of a neighboring cell which corresponds to a measurement target (in the form of a whitelisted cell). In addition, even when the neighboring cell that the UE needs to measure is not specified, the UE may measure cells on a frequency corresponding to the measurement target. In the present disclosure, a blacklisted cell may refer to a cell that the UE should not measure or a cell that the UE should not report even though the UE performs measurement therefor. 
For example, the network may instruct the UE not to perform event evaluation for a specific cell or not to send a measurement report. By doing so, the network may prevent the UE from being handed over to the specific cell. When the specific cell has a lot of loads, the blacklisted cell may be used to prevent the UE, which is currently served by another cell, from being handed over to the specific cell. 2. Proposed Embodiments Hereinafter, the configurations according to the present disclosure will be described in detail based on the above-described technical features. FIG.12is a diagram schematically illustrating a relationship between a UE and BSs applicable to the present disclosure. Referring toFIG.12, when the UE is capable of reporting reference signal received power (RSRP) of CSI-RS resources #10 and #11 to the BS, the BS or network may operate as follows. For example, if a serving cell (or a serving transmission reception point (TRP)) provides services to the UE, a neighboring cell may transmit no signals on the resource (or beam) in the direction of CSI-RS resource #10, thereby reducing inter-cell interference. As another example, when the UE is provided with services on the resources (or beams) in the directions of both CSI-RS resource #00 of the serving cell and CSI-RS resource #10 of the neighboring cell (that is, when the UE is in a coordinated multi-point (CoMP) environment), the BS or network may improve the reception performance of the UE. Consequently, when the UE is capable of measuring and reporting a CSI-RS transmitted from the neighboring cell, the overall system throughput may be improved in terms of inter-cell interference management and/or CoMP operation. It is assumed herein that serving and neighboring cells are asynchronous for convenience of description. However, the configurations of the present disclosure may be equally applied when the serving and neighboring cells are synchronized. In this case, the UE may determine the timing of a CSI-RS (e.g., CSI-RS resource #10 or #11) of the neighboring cell with respect to the timing of the neighboring cell and perform CSI reporting based thereon. However, according to the current NR specifications, the timing of the higher layer parameter NZP-CSI-RS-resource is configured to follow the timing of the serving cell. In other words, a method of setting the timing of the higher layer parameter NZP-CSI-RS-Resource with respect to the timing of a neighboring cell is not defined in the current NR specifications. As a result, there is no choice but to determine the timings of CSI-RS resources with respect to a serving cell in the current NR technology. Regarding a CSI-RS for mobility (or CSI-RS-Resource-Mobility) defined in the recently discussed 5G specifications, the BS or network may instruct the UE to configure the timing of CSI-RS-Resource-Mobility with respect to the timing of another cell rather than the serving cell based on a cell ID configured in CSI-RS-CellMobility. However, since the CSI-RS for mobility has no connection with the aforementioned CSI-ReportConfig IE (that is, since a connection between the CSI-RS for mobility and the aforementioned CSI-ReportConfig IE is not defined), the UE may measure and report the CSI-RS for mobility (e.g., L3 reporting). Here, reporting based on the CSI-RS for mobility includes only L3 reporting and does not include L1 reporting. 
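To make the timing aspect of this problem concrete, the following Python sketch (illustrative only and not part of the disclosure) estimates how large the timing error becomes, in OFDM symbols, if a UE were to receive a neighboring-cell CSI-RS using the serving cell's slot timing. The 40 us frame-boundary offset is an assumed example value, and cyclic-prefix lengths are ignored for simplicity.

# Illustrative sketch (not part of the disclosure): it quantifies the error a UE
# would make if it received a neighboring-cell CSI-RS using the serving cell's
# timing. The 40 us frame-boundary offset is an assumed example value, and
# cyclic-prefix lengths are ignored for simplicity.

def ofdm_symbol_duration_us(scs_khz: int) -> float:
    """Approximate OFDM symbol duration for a given subcarrier spacing."""
    mu = {15: 0, 30: 1, 60: 2, 120: 3}[scs_khz]   # NR numerology index
    slot_duration_us = 1000.0 / (2 ** mu)          # 1 ms subframe, 2**mu slots
    return slot_duration_us / 14                   # 14 symbols per slot

def timing_error_symbols(frame_offset_us: float, scs_khz: int) -> float:
    """Timing error, in OFDM symbols, if serving-cell timing is reused."""
    return frame_offset_us / ofdm_symbol_duration_us(scs_khz)

if __name__ == "__main__":
    offset_us = 40.0   # assumed frame-boundary offset between the two cells
    for scs in (15, 30, 60, 120):
        err = timing_error_symbols(offset_us, scs)
        print(f"SCS {scs} kHz: error = {err:.2f} OFDM symbols")
    # An error of one symbol or more corresponds to the asynchronous case
    # described in section 1.9, so the neighboring cell's timing must be used.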
According to the current specifications, CSI-RS-Resource-Mobility may not be used for L1 beam measurement (i.e., L1 measurement and/or L1 reporting) due to the two issues. Accordingly, the present disclosure proposes a method for solving such problems. Specifically, the present disclosure describes a signaling method capable of using CSI-RS-Resource-Mobility for L1 beam measurement (or L1 measurement). According to the present disclosure, the BS or network may configure a CSI-RS resource for beam management of a neighboring cell for the UE with the minimum impact on the recently discussed 5G specifications. In the present disclosure, CSI-RS resource Type I denotes a CSI-RS resource defined in a CSI framework. For example, CSI-RS resource Type I may include a CSI-RS for (beam) measurement, CSI acquisition, and/or tracking. Alternatively, CSI-RS resource Type I may include a CSI-RS determined based on the higher layer parameter NZP-CSI-RS-Resource IE or CSI-IM resource IE described above. In the present disclosure, CSI-RS resource Type II denotes a CSI-RS resource for mobility. Alternatively, CSI-RS resource Type II may include a CSI-RS determined based on the higher layer parameter CSI-RS-resource-Mobility IE described above. In the present disclosure, a higher layer parameter may refer to a parameter defined by radio resource control (RRC), a medium access control-control element (MAC-CE), and/or a combination thereof. In the present disclosure, a synchronization signal block (SSB) ID refers to an SSB (time) index or an SSB (time) identification. Herein, a network may include a BS. In some embodiments, the network may be replaced with the BS. 2.1. First Proposal The network may configure a CSI-RS resource ID for mobility (CSI-RS-Resource-Mobility or csi-RS-Index of CSI-RS-Resource-Mobility) as a QCL source for CSI-RS resource Type I (NZP-CSI-RS-Resource or CSI-IM-resource) and at least one QCL type. The UE may configure the time synchronization (e.g., average delay or timing), frequency synchronization (Doppler shift or carrier frequency offset (CFO)), and/or spatial Rx information of CSI-RS resource Type I based on the time synchronization (e.g., average delay or timing), frequency synchronization (Doppler shift or CFO), and/or spatial Rx information of a CSI-RS resource for mobility, respectively. Specifically, the network may configure the CSI-RS resource ID for mobility (CSI-RS-Resource-Mobility or csi-RS-Index in CSI-RS-Resource-Mobility) as a QCL source of a CSI-RS resource for (beam) measurement (NZP-CSI-RS-Resource or CSI-IM-Resource). In this case, the QCL type may be set to QCL-Type A, QCL-Type B, QCL-Type C, and/or QCL-Type D. In this configuration, assuming that the network sets the QCL type to QCL-Type C+D, the UE may operate as follows.The configuration of QCL-Type C+D may mean that two RSs are QCL in terms of the average delay (time synchronization and/or timing), Doppler shift (CFO or frequency synchronization), and/spatial Rx parameter (Rx beam). Thus, to receive a CSI-RS for (beam) measurement, the UE may use the time/frequency synchronization and Rx beam information provided by CSI-RS-Resource-Mobility.In this case, if associatedSSB of CSI-RS-Resource-Mobility is configured, the UE may expect that QCL-Type C will be configured (or assume that the QCL-Type C has been configured). The reason for this is that the timing of CSI-RS-Resource-Mobility needs to follow the timing of a cell with a cell ID provided by CSI-RS-CellMobility. 
In this case, the UE may obtain the time and frequency synchronization of the cell from associatedSSB regardless of the timing. In some embodiments, the time synchronization may mean only the average delay. Alternatively, the time synchronization may refer to a configuration including not only the average delay but the timing.Alternatively, QCL-type C may be configured independently in the above configuration. For example, when QCL-Type D (e.g., spatial Rx parameter) is not applicable, QCL-type D may not be configured. Assuming that the network sets the QCL type to only QCL-Type D unlike the above assumption, the UE may operate as follows.Based on the fact that two RSs are QCL in terms of the spatial Rx parameter, the UE may use Rx beam information provided by CSI-RS-Resource-Mobility when receiving a CSI-RS for (beam) measurement.In the above configuration, when determining the timing of NZP-CSI-RS-Resource including CSI-RS-Resource-Mobility as a QCL source, the UE may determine the timing of NZP-CSI-RS-Resource with respect to the timing of a serving cell (or PCell)Alternatively, when CSI-RS-Resource-Mobility is configured without the higher layer parameter associatedSSB and a higher layer parameter refServCellIndex is not configured in CSI-RS-ResourceConfigMobility, the UE may determine the timing of NZP-CSI-RS-Resource with respect to the timing of the serving cell, regardless of whether QCL-Type C is configured or not.In this case, if associatedSSB of CSI-RS-Resource-Mobility is not configured, the UE may expect that QCL-type C will not be configured. The reason for this is that when associatedSSB is not configured, the UE is incapable of configuring time/frequency synchronization based on an SSB.Alternatively, when QCL-Type D is not applicable in the above configuration, no QCL types may be configured. To support the above-described operations, the network may configure for the UE csi-RS-Index of CSI-RS-Resource-Mobility as a QCL source of NZP-CSI-RS-Resource. In this case, csi-RS-Index may have a value from 0 to 95. In the present disclosure, NZP-CSI-RS-ResourceId may have a value from 0 to 191. According to the current NR specifications, NZP-CSI-RS-Resource may be set to the QCL source of NZP-CSI-RS-Resource, but there is a restriction that csi-RS-Index is not configured. Thus, the present disclosure proposes a method of setting csi-RS-Index to a QCL source using CSI-RS-Resource-Mobility. When not only NZP-CSI-RS-Resource but csi-RS-Index are set to the QCL source of NZP-CSI-RS-Resource as proposed by the present disclosure, the UE may be incapable of distinguishing whether the QCL source with a value from 0 to 95 is for NZP-CSI-RS-Resource or CSI-RS-Resource-Mobility. To solve such a problem, the present disclosure proposes to include a higher layer parameter csi-rs-mobility for CSI-RS-Resource-Mobility in the higher layer parameter QCL-Info as follows. By doing so, the ambiguity between NZP-CSI-RS-Resource and CSI-RS-Resource-Mobility may be resolved. TABLE 32QCL-Info :: = SEQUENCE {cellservCellIndex,bwp-IdBWP-Id OPTIONALreferenceSignalCHOICE {csi-rsNZP-CSI-RS-ResourceId,ssbSSB-Indexcsi-rs-mobilitycsi-RS-Index},qcl-Type ENUMERATED {typeA, typeB, typeC, typeD},} As described above, some parameters in the NZP-CSI-RS-Resource IE may overlap with those in the CSI-RS-Resource-Mobility IE (see Tables 12 and 22). Thus, the UE operation may need to be clarified regarding the overlapping parameters. 
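As a rough illustration of how the added csi-rs-mobility choice removes this ambiguity, the following Python sketch (an illustrative model with assumed names, not actual ASN.1 or a mandated UE implementation) resolves a QCL source from the extended referenceSignal choice; the same numeric index maps to different reference signals depending on the chosen alternative.

# Illustrative model (assumed Python names, not 3GPP ASN.1): the extended
# referenceSignal choice of QCL-Info proposed above. Because "csi-rs" and
# "csi-rs-mobility" are distinct choice alternatives, an index in 0..95 is
# unambiguous even though the NZP-CSI-RS-ResourceId and csi-RS-Index value
# ranges overlap.

from dataclasses import dataclass
from typing import Literal

@dataclass
class QclInfo:
    rs_kind: Literal["csi-rs", "ssb", "csi-rs-mobility"]  # chosen alternative
    rs_id: int       # NZP-CSI-RS-ResourceId, SSB-Index, or csi-RS-Index
    qcl_type: Literal["typeA", "typeB", "typeC", "typeD"]

def resolve_qcl_source(qcl: QclInfo) -> str:
    """Describe which reference signal the UE should use as the QCL source."""
    if qcl.rs_kind == "csi-rs":
        assert 0 <= qcl.rs_id <= 191, "NZP-CSI-RS-ResourceId range"
        return f"NZP-CSI-RS-Resource #{qcl.rs_id} (serving-cell CSI framework)"
    if qcl.rs_kind == "csi-rs-mobility":
        assert 0 <= qcl.rs_id <= 95, "csi-RS-Index range"
        return (f"CSI-RS-Resource-Mobility #{qcl.rs_id} "
                f"(timing of the cell given by CSI-RS-CellMobility)")
    return f"SSB #{qcl.rs_id}"

# The same numeric index now maps to two different, clearly separated sources.
print(resolve_qcl_source(QclInfo("csi-rs", 7, "typeD")))
print(resolve_qcl_source(QclInfo("csi-rs-mobility", 7, "typeC")))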
According to the present disclosure, since CSI-RS-Resource-Mobility is used as the QCL source, the UE may (preferentially) obtain the time/frequency location, period, and/or scrambling ID for an RS from IEs included in NZP-CSI-RS-Resource (e.g., resourceMapping, periodicAndOffset, scramblingID, etc.). Alternatively, the UE may (preferentially) obtain the time/frequency location, period, and/or scrambling ID for an RS from IE(s) included in CSI-RS-Resource-Mobility. Alternatively, the UE according to the present disclosure may expect that periodicAndOffset and scramblingID are always equal to slotConfig and sequenceGenerationConfig, respectively. The higher layer parameter CSI-RS-Resource-Mobility includes no frequency information (e.g., BW information, frequency density, etc.). However, since the parameter is included in a higher IE, CSI-RS-CellMobility, the UE may obtain frequency information for CSI-RS-Resource-Mobility, which is used as the QCL source, from CSI-RS-CellMobility. The UE may obtain information about the subcarrier spacing (or numerology) of NZP-CSI-RS-Resource having CSI-RS-Resource-Mobility as the QCL source from subcarrierSpacing in a higher layer parameter CSI-RS-ResourceConfigMobility IE (which is a higher IE than CSI-RS-CellMobility). According to the method, even when the serving and neighboring cells have different time/frequency synchronization, the network may simply resolve such a synchronization problem by setting the CSI-RS resource ID for mobility to the QCL source of CSI-RS resource Type I, and at the same time, the network may use the conventional (L1) CSI reporting method as it is. When the serving and neighboring cells have different numerologies, the network may provide/configure information about a subcarrier spacing to/for the UE through the CSI-RS-ResourceConfigMobility IE. In summary, the network and UE may use a CSI-RS from the neighboring cell for (L1 beam) measurement while minimizing the impact on the current 5G NR specifications. When CSI-RS resource Type I is QCL with the CSI-RS resource for mobility (or when CSI-RS-Resource-Mobility is set to the QCL source of NZP-CSI-RS-Resource), the UE may know which cell transmits CSI-RS resource Type I since the higher IE CSI-RS-CellMobility including the higher layer parameter CSI-RS-Resource-Mobility contains cell ID information. FIG.13is a diagram schematically illustrating the operations of a UE and a BS applicable to the present disclosure. As shown inFIG.13, the BS may configure CSI-RS-Resource-Mobility as the QCL source of NZP-CSI-RS-Resource for the UE. In this case, the BS may configure QCL-Type A, QCL-Type B, QCL-Type C, and/or QCL-Type D as QCL type information or may not configure any QCL types. Next, the BS transmits a CSI-RS corresponding to NZP-CSI-RS-Resource (or CSI-RS-Resource-Mobility) to the UE. In this case, if the BS does not configure QCL-Type C, the UE may receive the CSI-RS by configuring the time and/or frequency synchronization of NZP-CSI-RS-Resource with respect to a serving cell. In other words, the UE may receive the CSI-RS from the BS based on the time and/or frequency synchronization of the serving cell. If the BS configures QCL-Type C, the UE may receive the CSI-RS by configuring the time and/or frequency synchronization of NZP-CSI-RS-Resource with respect to a cell with a cell ID indicated by CSI-RS-CellMobility. Alternatively, the UE may receive the CSI-RS by configuring the time and/or frequency synchronization with respect to associatedSSB of CSI-RS-Resource-Mobility. 
In other words, the UE may receive the CSI-RS from the BS based on the time and/or frequency synchronization of a cell indicated by associatedSSB of CSI-RS-Resource-Mobility or CSI-RS-Resource-Mobility.
2.2. Second Method
A BS may inform a UE that CSI-RS resource Type I is to be transmitted from a neighboring cell rather than a serving cell through higher layer signaling (e.g., higher layer parameter) and/or DCI. In this case, the UE may interpret/consider the value of N_ID (e.g., ScramblingID) configured in a CSI-RS resource as a cell ID. In addition, the UE may interpret/consider an SSB ID configured in the CSI-RS resource as the SSB ID of a cell with the cell ID. Further, the UE may configure the timing of the CSI-RS resource based on the timing of the cell with the cell ID. For example, the BS may inform the UE that a CSI-RS resource for (beam) measurement is to be transmitted from a cell that is not the serving cell through a higher layer parameter. The UE may interpret/consider N_ID and the SSB ID configured for the CSI-RS resource as the cell ID and the SSB ID of the cell with the cell ID, respectively, and determine the timing of the cell with respect to an indicated SSB. According to this method, the BS may inform the UE that CSI-RS resource Type I configured for the UE is to be transmitted from the neighboring cell through a single higher layer parameter. Accordingly, since other preconfigured parameters may be reevaluated based on the signaling, the existing parameters may be used (without defining new parameters). That is, according to the proposed method, the impact on the current 5G specification may be minimized. Further, according to this method, the BS may inform the UE of CSI-RS resource Type I of the neighboring cell without defining a CSI-RS resource for mobility. Additionally, in the case of CSI-RS resource Type I, the BS may configure a higher layer parameter QuasiColocatedforType1 for the UE. The BS may inform the UE whether the CSI-RS resource is QCL with the configured SSB in terms of QCL-Type D through the parameter. In other words, the parameter may perform the same functionality as a higher layer parameter QuasiColocated in a CSI-RS for mobility.
2.3. Third Method
A BS may directly indicate a neighboring cell ID to a UE to instruct the UE to measure a CSI-RS resource transmitted in a cell with the cell ID. In this case, the UE may apply various Rx beams to receive the CSI-RS resource. However, the UE may consume a large amount of resources to search for the best Rx beam for receiving the CSI-RS. If the BS configures the cell ID with respect to whitelisted cells, the UE may know the Rx beam for receiving signals from the cell and thus minimize the resource consumption. Specifically, when the BS informs the UE that CSI-RS resource Type I is to be transmitted from a neighboring cell rather than a serving cell through a higher layer parameter or DCI, the UE may expect that the cell ID configured for the CSI-RS resource is included in the whitelisted cells. Alternatively, the UE may not expect that a cell ID not included in the whitelisted cells is configured for the CSI-RS resource.
2.4. Fourth Method
InFIG.12, the RSRP of CSI-RS resource #00 may be different from that of CSI-RS resource #10 depending on the Rx beam of the UE (e.g., Rx #0 or #1). Thus, if the UE reports the RSRP of CSI-RS resource #00 with respect to Rx #0 and reports the RSRP of CSI-RS resource #10 with respect to Rx #1, the following problems may occur.
For example, it is assumed that each of the RSRP reported by the UE based on Rx #0 and the RSRP reported by the UE based on Rx #1 have a large value (or is more than or equal to a predetermined threshold). In this case, the serving cell may provide services to the UE using a resources (or beam) based on CSI-RS resource #00, and the neighboring cell may not use the same time/frequency resource as the serving cell to avoid interference, that is, the neighboring cell may provide services to the UE in the neighboring cell using a resource (or beam) based on CSI-RS resource #10. However, in this case, the UE may receive a signal by selecting Rx #0 and thus avoid most of signals (or interference) transmitted from the neighboring cell. That is, the interference avoidance performed by the neighboring cell may be unnecessary. Consequently, the network may perform inefficient scheduling. To solve such a problem, when a specific UE reports RSRP for two CSI-RS resources, the network may instruct the UE to report the RSRP with respect to the same Rx beam Specifically, when reporting of a CSI-RS resource indicator (CRI) and RSRP is configured to be performed based on the measurement of a plurality of CSI-RS resources included in different resource sets or settings, the network may instruct the UE to perform the measurement and reporting with respect to the same UE Rx beam (through a higher layer parameter and/or DCI). In this case, if the number of CSI resources that can be reported by the UE is less than the total number of CSI-RS resources, the network may instruct the UE to report at least one CSI-RS resource for each resource set or setting (through a higher layer parameter and/or DCI). FIG.14is a diagram schematically illustrating a method of transmitting and receiving CSI between a UE and a BS according to the present disclosure,FIG.15is a flowchart illustrating a method for the UE to report the CSI according to the present disclosure, andFIG.16is a flowchart illustrating a method for the BS to receive the CSI from the UE according to the present disclosure. In the following description, the BS may refer to a configuration including both the serving and neighboring cells shown inFIG.12. That is, “BS” may be replaced with “network”. According to the present disclosure, the UE receives configuration information related to a first CSI-RS resource for measurement from the BS (S1410and S1510). The BS transmits the configuration information related to the first CSI-RS resource for measurement to the UE (S1410and S1610). In this case, the BS may transmit the configuration information to the UE through higher layer signaling. In addition, the BS may transmit the configuration information to the UE through a serving cell that provides services to the UE. In the present disclosure, the configuration information may include QCL information between the first CSI-RS resource and a second CSI-RS resource related to a neighboring cell. 
Specifically, the QCL information may include at least one of the following information: QCL type A information indicating that the first CSI-RS resource and the second CSI-RS resource are QCL in terms of the Doppler shift, the Doppler spread, the average delay, and the delay spread; QCL type B information indicating that the first CSI-RS resource and the second CSI-RS resource are QCL in terms of the Doppler shift and the Doppler spread; QCL type C information indicating that the first CSI-RS resource and the second CSI-RS resource are QCL in terms of the Doppler shift and the average delay; and QCL type D information indicating that the first CSI-RS resource and the second CSI-RS resource are QCL in terms of the spatial Rx parameter. The UE receives a CSI-RS transmitted from the neighboring cell based on the configuration information (S1420 and S1520). The BS transmits the CSI-RS to the UE through the neighboring cell based on the configuration information (S1420 and S1620). Specifically, the UE may receive the CSI-RS from the neighboring cell as follows depending on the QCL type information included in the received QCL information. For example, when the QCL information includes the QCL type C information, the UE may receive the CSI-RS transmitted from the neighboring cell based on Doppler shift information and average delay information related to the second CSI-RS resource. As another example, when the QCL information includes the QCL type D information, the UE may receive the CSI-RS transmitted from the neighboring cell based on Rx beam information related to the second CSI-RS resource. As a further example, when the QCL information includes the QCL type C information and the QCL type D information, the UE may receive the CSI-RS transmitted from the neighboring cell based on the Doppler shift information, the average delay information, and the Rx beam information related to the second CSI-RS resource. The UE measures CSI based on the received CSI-RS (S1430 and S1530). In the present disclosure, the CSI may include at least one of channel quality information (CQI), a precoding matrix indicator (PMI), a CRI, an SS/PBCH resource block indicator (SSBRI), a layer indicator (LI), and an RI. The UE transmits the measured CSI to the BS (S1440 and S1540). Specifically, the UE transmits the measured CSI to the serving cell. The BS receives the measured CSI from the UE through the serving cell (S1440 and S1630). Through the above processes, the UE may measure and report the CSI for the neighboring cell, and the BS may receive the CSI for the neighboring cell from the UE. Additionally, the timing of the serving cell may not be aligned with that of the neighboring cell as described above. In other words, the serving cell and the neighboring cell may be in an asynchronous state. Here, the asynchronous state may mean that a frame boundary difference between the serving and neighboring cells is at least one (OFDM) symbol. That is, when the serving and neighboring cells are in the asynchronous state, the frame boundary difference therebetween may be at least one symbol or at least one slot. In this case, the UE may receive the CSI-RS from the neighboring cell as follows. Specifically, the UE may receive the CSI-RS from the neighboring cell based on the timing of the CSI-RS, which is determined based on the QCL information and the configuration information.
The timing of the CSI-RS transmitted from the neighboring cell may be determined as follows.When SSB information related to the second CSI-RS resource is configured, the timing of the CSI-RS is determined with respect to a cell configured in relation to the second CSI-RS resourceWhen SSB information related to the second CSI-RS resource is not configured and reference serving cell information related to the second CSI-RS resource is configured, the timing of the CSI-RS is determined with respect to a cell determined based on the reference serving cell information.When SSB information related to the second CSI-RS resource is not configured and reference serving cell information related to the second CSI-RS resource is not configured, the timing of the CSI-RS is determined with respect to the serving cell connected to the UE. In the above-described configuration, a resource for receiving the CSI-RS from the neighboring cell may be determined in various ways. For example, the resource for receiving the CSI-RS from the neighboring cell may be determined based on a resource configuration related to the first CSI-RS resource. Thus, the UE may receive the CSI-RS from the neighboring cell based on the resource configuration related to the first CSI-RS resource. In particular, the location of the resource for transmitting the CSI-RS may be determined based on a higher layer parameter NZP-CSI-RS-Resource related to the first CSI-RS resource. As another example, the resource for receiving the CSI-RS from the neighboring cell may be determined based on a resource configuration related to the second CSI-RS resource. Thus, the UE may receive the CSI-RS from the neighboring cell based on the resource configuration related to the second CSI-RS resource. In the present disclosure, the resource configuration related to the second CSI-RS resource may include at least one of the following configurations.A time resource configuration related to the second CSI-RS resourceA frequency resource configuration related to the second CSI-RS resourceA numerology configuration related to the second CSI-RS resource In particular, the time/frequency resource for transmitting the CSI-RS maybe determined based on a higher layer parameter CSI-RS-Resource-Mobility related to the second CSI-RS resource. As a further example, the resource for receiving the CSI-RS from the neighboring cell may be determined based on a resource configuration that satisfies both a first resource configuration related to the first CSI-RS resource and a second resource configuration related to the second CSI-RS resource. Thus, the UE may receive the CSI-RS from the neighboring cell based on the resource configuration satisfying both the first resource configuration related to the first CSI-RS resource and the second resource configuration related to the second CSI-RS resource. In particular, the UE may receive the CSI-RS from the neighboring cell based on a frequency resource in which a first frequency resource related to the first CSI-RS resource included in the first resource configuration overlaps with a second frequency resource related to the second CSI-RS resource included in the second resource configuration. In the above-described configuration, the first CSI-RS resource may be a non-zero power (NZP) CSI-RS resource or a channel state information interference measurement (CSI-IM) resource. The second CSI-RS resource may be a CSI-RS resource configured for radio resource management (RRM). 
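A minimal sketch of the timing-reference selection rule listed above is given below. The field names mirror the higher layer parameters mentioned in the text (associatedSSB, refServCellIndex), while the function itself and the example cell IDs are illustrative assumptions rather than a mandated implementation.

# Minimal sketch of the timing-reference selection rule listed above. The field
# names mirror the higher layer parameters in the text (associatedSSB,
# refServCellIndex); the function and the example cell IDs are illustrative
# assumptions, not a mandated implementation.

from typing import Optional

def csi_rs_timing_reference(associated_ssb_cell_id: Optional[int],
                            ref_serv_cell_index: Optional[int],
                            serving_cell_id: int) -> int:
    """Return the cell whose timing the UE uses to receive the CSI-RS."""
    if associated_ssb_cell_id is not None:
        # SSB information is configured for the second CSI-RS resource:
        # follow the timing of the cell configured for that resource.
        return associated_ssb_cell_id
    if ref_serv_cell_index is not None:
        # No associatedSSB, but a reference serving cell is configured.
        return ref_serv_cell_index
    # Neither is configured: fall back to the serving cell connected to the UE.
    return serving_cell_id

# Assumed example cell IDs: neighboring cell 101, reference cell 5, serving cell 1.
print(csi_rs_timing_reference(101, None, 1))   # -> 101
print(csi_rs_timing_reference(None, 5, 1))     # -> 5
print(csi_rs_timing_reference(None, None, 1))  # -> 1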
Since each of the examples of the proposed methods may be included as one method for implementing the present disclosure, it is apparent that each example may be regarded as a proposed method. Although the proposed methods may be implemented independently, some of the proposed methods may be combined (or merged) for implementation. In addition, it may be regulated that information on whether the proposed methods are applied (or information on rules related to the proposed methods) should be transmitted from a BS to a UE through a predefined signal (e.g., a physical layer signal, a higher layer signal, etc.). 3. Device Configuration FIG.17is a diagram illustrating configurations of a UE and a BS by which proposed embodiments can be implemented. The UE and the BS illustrated inFIG.19operate to implement the embodiments of the above-described DL signal transmission and reception method between the UE and the BS. The UE1001may operate as a transmission end on UL and as a reception end on DL. The BS (eNB or gNB)1100may operate as a reception end on UL and as a transmission end on DL That is, the UE and the BS may include transmitters1010and1110and receivers1020and1120, respectively, to control transmission and reception of information, data and/or messages and may include antennas1030and1130, respectively, to transmit and receive information, data, and/or messages. The UE and the BS further include processors1040and1140, respectively, for performing the above-described embodiments of the present disclosure. The processors1040and1140control memories1050and1150, the transmitters1010and1110, and/or the receivers1020and1120, respectively, to implement the above-described/proposed procedures and/or methods. For example, the processors1040and1140include communication modems designed to implement radio communication technology (e.g., LTE or NR). The memories1050and1150are connected to the processors1040and1140and store various information related to operations of the processors1040and1140. As an example, the memories1050and1150may perform a part or all of processes controlled by the processors1040and1140or store software code including the above-described/proposed procedures and/or methods. The transmitters1010and1110and/or the receivers1020and1120are connected to the processors1040and1140and transmit and/or receive radio signals. The processors1040and1140and the memories1050and1150may be a part of a processing chip (e.g., system-on-chip (SoC)). The transmitters and receivers included in the UE and the BS may perform a packet modulation and demodulation function, a high-speed packet channel coding function, an OFDMA packet scheduling function, and/or a channelization function, for data transmission. The UE and the BS ofFIG.17may further include low-power radio frequency (RF)/intermediate frequency (IF) units. FIG.18is a block diagram of a communication device by which proposed embodiments can be implemented. The device illustrated inFIG.18may be a UE and/or a BS (e.g., an eNB or a gNB) adapted to perform the above mechanism or may be any device for performing the same operation. As illustrated inFIG.18, the device may include a digital signal processor (DSP)/microprocessor2210and an RF module (transceiver)2235. The DSP/microprocessor2210is electrically connected to the transceiver2235to control the transceiver2235. 
The device may further include a power management module2205, a battery2255, a display2215, a keypad2220, a SIM card2225, a memory device2230, a speaker2245, and an input device2250, according to the selection of a designer. Specifically,FIG.18illustrates a UE including the receiver2235configured to receive a request message from a network and the transmitter2235configured to transmit transmission or reception timing information to the network. The receiver and the transmitter may constitute the transceiver2235. The UE may further include the processor2210connected to the transceiver2235(receiver and transmitter). In addition,FIG.18illustrates a network device including the transmitter2235configured to transmit a request message to the UE and the receiver2235configured to receive transmission or reception timing information from the UE. These transmitter and receiver may constitute the transceiver2235. The network further includes the processor2210connected to the transmitter and the receiver. This processor2210may be configured to calculate latency based on the transmission or reception timing information. Thus, the processor included in the UE (or a communication device included in the UE) according to the present disclosure and the processor included in the BS (or a communication device included in the BS) according to the present disclosure may control the corresponding memories and operate as follows. In the present disclosure, the UE may include at least one radio frequency (RF) module; at least one processor; and at least one memory operably connected to the at least one processor, for storing instructions for causing the at least one processor to perform a specific operation when the at least one processor is executed. In this case, the communication device included in the UE may be configured to include the at least one processor and the at least one memory. The communication device may be configured to include that at least one RF module or may be configured to be connected to at least one RF module without including the at least one RF module. The at least one processor included in the UE (or the at least one processor of the communication device included in the UE) may be configured to receive configuration information related to a first CSI-RS resource for measurement by controlling the at least one RF module. In this case, the configuration information may include QCL information between the first CSI-RS resource and a second CSI-RS resource related to a neighboring cell. The at least one processor may be configured to receive a CSI-RS transmitted from the neighboring cell based on the configuration information by controlling the at least one RF module. The at least one processor may be configured to report CSI measured based on the received CSI-RS to a serving cell by controlling the at least one RF module. The UE (or the communication device included in the UE) may be configured to communicate with at least one of a mobile terminal, a network, or a self-driving vehicle other than a vehicle in which the UE is included. In the present disclosure, the BS may include at least one radio frequency (RF) module; at least one processor; and at least one memory operably connected to the at least one processor, for storing instructions for causing the at least one processor to perform a specific operation when the at least one processor is executed. 
In this case, the communication device included in the BS may be configured to include the at least one processor and the at least one memory. The communication device may be configured to include that at least one RF module or may be configured to be connected to at least one RF module without including the at least one RF module. The at least one processor included in the BS (or the at least one processor of the communication device included in the BS) may be configured to transmit the configuration information related to the first CSI-RS resource for measurement to the UE by controlling the at least one RF module. In this case, the configuration information may include the QCL information between the first CSI-RS resource and the second CSI-RS resource related to the neighboring cell. The at least one processor may be configured to receive the CSI measured by the UE by controlling the at least one RF module. In this case, the CSI may include measurement information for the CSI-RS transmitted from the neighboring cell to the UE based on the configuration information. The UE in the present disclosure may use a personal digital assistant (PDA), a cellular phone, a personal communication service (PCS) phone, a global system for mobile (GSM) phone, a wideband code division multiple access (WCDMA) phone, a mobile broadband system (MBS) phone, a hand-held PC, a laptop PC, a smartphone, or a multi-mode multi-band (MM-MB) terminal. In this case, the smartphone refers to a terminal taking the advantages of both a mobile communication terminal and a PDA and may be a terminal which incorporates functions of the PDA, i.e., a scheduling function and a data communication function such as fax transmission and reception and Internet connection, into the mobile communication terminal. The MM-MB terminal refers to a terminal which has a multi-modem chip therein and which can operate in any of a mobile Internet system and other mobile communication systems (e.g. a code division multiple access (CDMA) 2000 system, a WCDMA system, etc.). Embodiments of the present disclosure may be implemented by various means, for example, hardware, firmware, software, or a combination thereof. In a hardware implementation, methods according to the embodiments of the present disclosure may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, etc. In a firmware or software implementation, the methods according to the embodiments of the present disclosure may be implemented in the form of a module, a procedure, a function, etc. for performing the above-described functions or operations. For example, software code may be stored in the memory11050or1150and executed by the processor1040or1140. The memory is located at the interior or exterior of the processor and may transmit and receive data to and from the processor via various known means. The above-described communication device may be a BS, a network node, a transmission terminal, a wireless device, a wireless communication device, a vehicle, a vehicle having a self-driving function, an unmanned aerial vehicle (UAV), an artificial intelligence (AI) module, a robot, an augmented reality (AR) device, a virtual reality (VR) device, or the like. 
For example, the UE may include a cellular phone, a smartphone, a laptop computer, a digital broadcast terminal, a PDA, a portable multimedia player (PMP), a navigation device, a slate PC, a tablet PC, an ultrabook, or a wearable device (e.g., a smartwatch, smartglasses, or a head mounted display (HMD)). For example, the UAV may be an unmanned aircraft flying according to a wireless control signal. For example, the HMD is a display device wearable on the head, which may be used to implement VR or AR. Those skilled in the art will appreciate that the present disclosure may be carried out in other specific ways than those set forth herein without departing from the spirit and essential characteristics of the present disclosure. The above embodiments are therefore to be construed in all aspects as illustrative and not restrictive. The scope of the disclosure should be determined by the appended claims and their legal equivalents, not by the above description, and all changes coming within the meaning and equivalency range of the appended claims are intended to be embraced therein. It is obvious to those skilled in the art that claims that are not explicitly cited in each other in the appended claims may be presented in combination as an embodiment of the present disclosure or included as a new claim by a subsequent amendment after the application is filed. INDUSTRIAL APPLICABILITY The present disclosure is applicable to various wireless access systems including a 3GPP system, and/or a 3GPP2 system. Besides these wireless access systems, the embodiments of the present disclosure are applicable to all technical fields in which the wireless access systems find their applications. Moreover, the proposed method can also be applied to mmWave communication using an ultra-high frequency band. Additionally, the embodiments of the present disclosure are applicable to various applications such as a self-driving vehicle, a UAV, etc.
114,073
11863477
DETAILED DESCRIPTION
The following detailed description refers to the accompanying drawings. The same reference numbers may be used in different drawings to identify the same or similar elements. In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular structures, architectures, interfaces, techniques, etc. in order to provide a thorough understanding of the various aspects of various embodiments. However, it will be apparent to those skilled in the art having the benefit of the present disclosure that the various aspects of the various embodiments may be practiced in other examples that depart from these specific details. In certain instances, descriptions of well-known devices, circuits, and methods are omitted so as not to obscure the description of the various embodiments with unnecessary detail. For the purposes of the present document, the phrase “A or B” means (A), (B), or (A and B). Various embodiments herein provide techniques for radio link monitoring (RLM) evaluation periods for in-sync and out-of-sync detection in new radio unlicensed (NR-U) spectrums. Additionally, embodiments provide techniques for reference signal time difference (RSTD) timing uncertainty configuration.
Radio Link Monitoring (RLM) Evaluation Periods for Out-of-Sync Detection in New Radio-Unlicensed (NR-U) Spectrums
New Radio Unlicensed (NR-U) targets efficient spectrum sharing between 5G NR and WI-FI (e.g., 802.11AC). To allow that, 5G NR needs to be enhanced to support a carrier-sense multiple access (CSMA)-like access scheme, so as to coexist with WI-FI within the same spectrum in a fair manner. One key functionality to enable CSMA for NR-U is listen-before-talk (LBT). It means that an NR-U gNB needs to detect the channel occupancy (the channel can be occupied by WI-FI) immediately before transmitting a 5G downlink (DL) signal. When the shared channel is already occupied, the 5G DL signal needs to be skipped or postponed. On the other hand, similar to NR and LTE, Radio Link Monitoring (RLM) is still required by NR-U so as to monitor the link quality within unlicensed bands. RLM in NR-U is based on hypothetical physical downlink control channel (PDCCH) Block Error Rate (BLER) estimation, which is further mapped from downlink (DL) signal-to-interference-plus-noise ratio (SINR) measurements of the received discovery reference signals (DRS), so as to make correct In-sync (IS) and Out-of-sync (OOS) detections. The RLM in NR-U is impacted by listen before talk (LBT) procedures. When LBT fails on the next-generation NodeB (gNB) side, a pre-allocated RLM DRS (or synchronization signal block (SSB)) could be skipped by the gNB. Since a user equipment (UE) does not know about the DRS skipping on the gNB side, this could lead to a very pessimistic hypothetical PDCCH BLER estimation on the UE side, which may cause false alarms for IS/OOS detection. Hence, for NR-U, RLM evaluation period design needs to consider the DRS mis-detection error at the UE receive (Rx) side due to LBT failure, especially under low SINR conditions. To address that, one simple existing method is to extend the legacy RLM evaluation latency requirements for NR in 3GPP Rel-15 with the additional non-available DRS occasions.
For example, one existing proposal is to scale the evaluation time for IS detection based on the following table:

TABLE 1
Existing method for evaluation period scaling for IS detection for NR-U RLM

Configuration       TEvaluate_in_SSB (ms)
no DRX              max(100, ceil((5 + Lin) * P) * Tssb)
DRX cycle ≤ 320     max(100, ceil((7.5 + Lin) * P) * max(TDRX, Tssb))
DRX cycle > 320     ceil((5 + Lin) * P) * Tssb

Note 1: Tssb is the periodicity of the SSB RLM-RS configured for RLM; TDRX is the DRX cycle length.

However, if the same method is applied for OOS detection, it would lead to a very long evaluation latency. Among other things, embodiments of the present disclosure are directed to optimizations for defining the RLM OOS evaluation period for NR-U, so as to optimize the trade-off between evaluation latency and OOS detection accuracy. For example, for OOS detection in NR-U RLM, in some embodiments a DRS occasion with a skipped DRS transmission (due to LBT failure on the gNB side) could be viewed as a normal OOS event. In such cases, the UE may measure a very low SINR for such occasions, similar to the low SINRs due to OOS. Based on that, for OOS detection, the impacts of DRS mis-detection at the UE side could be neglected. Accordingly, the significant extension of the OOS evaluation period due to DRS mis-detection could be avoided. Instead, only the measurement gain loss due to the reduced DRS measurements needs to be compensated for by proper scaling. For some OOS detection scenarios in NR-U RLM, the SINR side condition for OOS detection can be too low to ensure reliable DRS detection on the UE side. Because of that, there may need to be a significant extension of the OOS evaluation period if DRS mis-detection needs to be considered in OOS scenarios and needs to be compensated for. This would result in significantly increased RLM OOS detection latency. Accordingly, embodiments of the present disclosure help optimize the trade-off between OOS evaluation latency and OOS detection accuracy for OOS detection in NR-U RLM. In some embodiments, a DRS occasion with a skipped DRS transmission (due to LBT failure by the gNB) could be viewed as a normal OOS event. One reason behind this is that the UE may still measure a very low SINR value for such an occasion (with an empty DRS), similar to other low SINRs due to OOS. Based on that, for OOS detection, the impacts of DRS mis-detection at the UE side could be neglected, and the UE does not have to explicitly detect the existence of a DRS for OOS detection. In other embodiments, though the UE does not have to detect the existence of a DRS in an OOS scenario (as explained above), a DRS skipped by the gNB (due to LBT failure) can still introduce measurement accuracy loss on the UE side. Hence, embodiments herein provide compensation by proper scaling of the RLM OOS evaluation period to overcome that. In some embodiments, the OOS evaluation period for NR-U RLM may be as shown in the following two tables:

TABLE 2
Proposed evaluation period scaling for OOS detection for NR-U RLM in FR1
Table 8.1.2.2-1: Evaluation period TEvaluate_out for FR1

Configuration       TEvaluate_out (ms)
no DRX              [Sf] * max(200, ceil(10 * P) * Tsmtc)
DRX cycle ≤ 320     [Sf] * max(200, ceil(15 * P) * max(TDRX, Tsmtc))
DRX cycle > 320     ceil(10 * P) * TDRX

NOTE: TSSB is the periodicity of the SSB configured for RLM; TDRX is the DRX cycle length.
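Table 3 below gives the corresponding scaling for FR2. Purely as a non-normative illustration of how the FR1 entries of Table 2 might be evaluated, the following Python sketch computes TEvaluate_out from a scaling factor P, the SMTC periodicity, and an optional DRX cycle; the function name, the default value of the scaling factor Sf, and the example parameter values are hypothetical and are not taken from any specification.

```python
import math

def t_evaluate_out_fr1_ms(P, T_smtc_ms, T_drx_ms=None, Sf=1.0):
    """Illustrative-only evaluation of the FR1 entries of Table 2 above.
    P, T_smtc_ms, T_drx_ms and the scaling factor Sf are treated as plain
    numeric inputs; their allowed values are not defined by this sketch."""
    if T_drx_ms is None:
        # "no DRX" row: [Sf] * max(200, ceil(10 * P) * Tsmtc)
        return Sf * max(200, math.ceil(10 * P) * T_smtc_ms)
    if T_drx_ms <= 320:
        # "DRX cycle <= 320" row: [Sf] * max(200, ceil(15 * P) * max(TDRX, Tsmtc))
        return Sf * max(200, math.ceil(15 * P) * max(T_drx_ms, T_smtc_ms))
    # "DRX cycle > 320" row: ceil(10 * P) * TDRX
    return math.ceil(10 * P) * T_drx_ms

# Hypothetical example: P = 1, 20 ms SMTC periodicity, no DRX -> the 200 ms floor applies.
print(t_evaluate_out_fr1_ms(P=1, T_smtc_ms=20))  # 200.0
```

As shown in Table 3 below, the FR2 entries differ in the additional factor N, the use of TSSB in place of Tsmtc, and the presence of [Sf] in the DRX cycle > 320 row.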
TABLE 3
Proposed evaluation period scaling for OOS detection for NR-U RLM in FR2
Table 8.1.2.2-2: Evaluation period TEvaluate_out and TEvaluate_in for FR2

Configuration       TEvaluate_out (ms)
no DRX              [Sf] * max(200, ceil(10 * P * N) * TSSB)
DRX cycle ≤ 320     [Sf] * max(200, ceil(15 * P * N) * max(TDRX, TSSB))
DRX cycle > 320     [Sf] * ceil(10 * P * N) * TDRX

NOTE: TSSB is the periodicity of the SSB configured for RLM; TDRX is the DRX cycle length.

Reference Signal Time Difference (RSTD) Timing Uncertainty Configuration Cellular technology based user equipment (UE) positioning is a multilateration method wherein the serving base station estimates the UE location, based on UE-reported downlink (DL) reference signal (RS) measurements (e.g., timing, angle, cell ID, etc.), or based on direct measurement of UE-transmitted uplink (UL) reference signals, which are received by the base stations. In 3GPP, 4G LTE based positioning technology has been developed since release 9, while 5G NR based positioning technology is currently under development for release 16. For DL based positioning methods, one method is known as observed time difference of arrival (OTDOA), which is further based on reference signal time difference (RSTD) measurements reported from the UE side. For RSTD measurement, the UE needs to estimate the received timing differences of different base stations, based on the DL positioning reference signals (PRSs) received from the different base stations, and then report the RSTD results to the serving base station. Currently, RSTD measurement accuracy testing for 5G NR is not yet defined by RAN4 for release 16. As part of RSTD accuracy testing, the RSTD timing uncertainty is one parameter within the OTDOA assistance data, which helps the UE to optimize the Rx timing measurement. Some embodiments of this disclosure are directed to optimizing the RSTD timing uncertainty configuration for 5G NR RSTD accuracy testing. Among other things, some embodiments of this disclosure are directed to adapting the RSTD timing uncertainty configuration based on the timing offset estimation capture range of the DL RS which is used for RSTD measurement, which can be further determined based on the frequency spacing of the adjacent reference resource elements (ref. REs) within a same reference OFDM symbol in the frequency domain. In particular, for SSB based RSTD measurement, the RSTD timing uncertainty can be adapted based on the sub-carrier spacing (SCS) of the SSB. For PRS based RSTD measurement, the RSTD uncertainty can be adapted jointly based on the PRS SCS as well as the RE density of the PRS. Embodiments of the present disclosure may further introduce different SNR side conditions for RSTD accuracy testing depending on the RSTD timing uncertainties, so as to optimize the UE processing complexity and NR positioning performance. For legacy RSTD measurements in LTE, the PRS has a fixed configuration (constant sub-carrier spacing and constant reference resource element density), so that, when applying RSTD accuracy testing, the same RSTD timing uncertainty of 5 us can be configured for all RSTD performance tests. However, for 5G NR, the waveform configuration of the DL RS which can be used for RSTD measurement is much more flexible.
The variants include: (1) DL RS type (an RSTD can be measured by a PRS and/or a synchronization signal block (SSB)); (2) SCS (the PRS or SSB can be configured with different SCSs by different positioning cells); and (3) PRS reference resource element (RE) density (the number of PRS REs within a resource block (RB) can be different). Different waveform configurations can result in different capabilities for RSTD measurement. As a result, when applying a constant timing uncertainty to test RSTD accuracy performance using different DL RS waveform configurations, the comparison is unfair. In order to solve this issue and to optimize the trade-off between UE complexity and NR positioning performance, embodiments of this disclosure may adapt the RSTD timing uncertainty configuration based on the structures of the reference signals which are used for RSTD measurement. In some embodiments, the RSTD timing uncertainty configuration may be adapted based on the timing offset estimation capture range of the DL RS which is used for RSTD measurement, which can be further determined based on the frequency spacing of the adjacent reference resource elements within a same reference OFDM symbol in the frequency domain. Note that the timing offset estimation capture range can be formulated as: maxTimingOffsetCaptureRange = ±(1/(SCS * (12/D)))/2, wherein SCS is the sub-carrier spacing, and D is the reference RE density, which is defined as the number of reference REs within an RB (so that SCS * (12/D) is the frequency spacing of the adjacent reference REs). In some embodiments, for SSB based RSTD measurements, the maximal RSTD timing uncertainty can be adapted based on the sub-carrier spacing (SCS) of the SSB. Table 4 shows one example of the proposed maximal RSTD timing uncertainties for RSTD measurement using SSBs with different SCSs.

TABLE 4
Example of maximal RSTD timing uncertainties for SSB based RSTD accuracy testing

SSB SCS     Proposed maximal RSTD timing uncertainty configurations for RSTD accuracy testing
15 kHz      ±30 us
30 kHz      ±15 us
120 kHz     ±4 us
240 kHz     ±2 us

Similarly, for PRS based RSTD measurements, the RSTD uncertainty can be adapted jointly based on the PRS SCS as well as the RE density of the PRS. Table 5 shows one example of the proposed RSTD timing uncertainties for RSTD measurement using PRSs with different SCSs and reference RE densities.

TABLE 5
Example of maximal RSTD timing uncertainties for PRS based RSTD accuracy testing

PRS SCS     PRS ref. RE density     Proposed maximal RSTD timing uncertainty configurations for RSTD accuracy testing
15 kHz      1                       ±2.5 us
15 kHz      3                       ±7.5 us
30 kHz      1                       ±1.25 us
30 kHz      3                       ±2.5 us
120 kHz     1                       ±0.3 us
120 kHz     3                       ±1 us
240 kHz     1                       ±0.15 us
240 kHz     3                       ±0.45 us

Embodiments of the present disclosure may further introduce different SNR side conditions for RSTD accuracy testing by comparing the actually configured RSTD timing uncertainty with the maximal RSTD timing uncertainties (e.g., defined based on Table 4 and/or Table 5), so as to optimize the UE processing complexity and NR positioning performance. FIG.1shows an example process100in accordance with some embodiments. The process may be performed, for example, by a UE and/or a test apparatus (TE), or a portion thereof. At102, the maximal RSTD timing uncertainty is determined (e.g., using Table 4 or Table 5). At104, it is determined whether the actual RSTD timing uncertainty is less than or equal to the maximal RSTD timing uncertainty. If yes, then at106, the SNR condition for the PRS is set up as SNR1, wherein SNR1 is less than SNR2. If no, then at108, the SNR condition for the PRS is set up as SNR2.
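As a non-normative illustration of the adaptation described above and of the selection made at blocks 104-108 of FIG. 1, the following Python sketch derives a capture-range-based maximal RSTD timing uncertainty from the SCS and the reference RE density and then picks an SNR side condition. The function names and the default SNR values are hypothetical placeholders, and the computed values only approximate the rounded entries proposed in Tables 4 and 5.

```python
def max_rstd_timing_uncertainty_us(scs_khz, re_density):
    """Hypothetical helper: timing-offset estimation capture range (in us) of a
    DL RS whose adjacent reference REs are spaced scs_khz * (12 / re_density)
    apart in frequency. For an SSB every subcarrier carries a reference RE,
    so re_density = 12; for a PRS, re_density is the number of PRS REs per RB."""
    adjacent_re_spacing_khz = scs_khz * (12 / re_density)
    return 1e3 / (2 * adjacent_re_spacing_khz)  # +/- capture range in microseconds


def select_prs_snr_condition_db(actual_uncertainty_us, scs_khz, re_density,
                                snr1_db=-6.0, snr2_db=-3.0):
    """Sketch of blocks 104-108 of FIG. 1: if the actually configured RSTD timing
    uncertainty stays within the maximal value, use the lower side condition SNR1
    (SNR1 < SNR2); otherwise use SNR2. The default SNR values are placeholders."""
    if actual_uncertainty_us <= max_rstd_timing_uncertainty_us(scs_khz, re_density):
        return snr1_db
    return snr2_db


# SSB at 15 kHz SCS: capture range ~ +/-33.3 us (Table 4 proposes +/-30 us).
print(round(max_rstd_timing_uncertainty_us(15, 12), 1))
# PRS at 30 kHz SCS with 1 RE/RB: ~ +/-1.4 us (Table 5 proposes +/-1.25 us).
print(round(max_rstd_timing_uncertainty_us(30, 1), 2))
```

A narrower capture range (higher SCS or sparser reference REs) thus maps to a tighter maximal timing uncertainty, consistent with the trend across the entries of Tables 4 and 5.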
Systems and Implementations FIG.4illustrates an example architecture of a system400of a network, in accordance with various embodiments. The following description is provided for an example system400that operates in conjunction with the LTE system standards and 5G or NR system standards as provided by 3GPP technical specifications. However, the example embodiments are not limited in this regard and the described embodiments may apply to other networks that benefit from the principles described herein, such as future 3GPP systems (e.g., Sixth Generation (6G)) systems, IEEE 802.16 protocols (e.g., WMAN, WiMAX, etc.), or the like. As shown byFIG.4, the system400includes UE401aand UE401b(collectively referred to as “UEs401” or “UE401”). In this example, UEs401are illustrated as smartphones (e.g., handheld touchscreen mobile computing devices connectable to one or more cellular networks), but may also comprise any mobile or non-mobile computing device, such as consumer electronics devices, cellular phones, smartphones, feature phones, tablet computers, wearable computer devices, personal digital assistants (PDAs), pagers, wireless handsets, desktop computers, laptop computers, in-vehicle infotainment (IVI), in-car entertainment (ICE) devices, an Instrument Cluster (IC), head-up display (HUD) devices, onboard diagnostic (OBD) devices, dashtop mobile equipment (DME), mobile data terminals (MDTs), Electronic Engine Management System (EEMS), electronic/engine control units (ECUs), electronic/engine control modules (ECMs), embedded systems, microcontrollers, control modules, engine management systems (EMS), networked or “smart” appliances, MTC devices, M2M, IoT devices, and/or the like. In some embodiments, any of the UEs401may be IoT UEs, which may comprise a network access layer designed for low-power IoT applications utilizing short-lived UE connections. An IoT UE can utilize technologies such as M2M or MTC for exchanging data with an MTC server or device via a PLMN, ProSe or D2D communication, sensor networks, or IoT networks. The M2M or MTC exchange of data may be a machine-initiated exchange of data. An IoT network describes interconnecting IoT UEs, which may include uniquely identifiable embedded computing devices (within the Internet infrastructure), with short-lived connections. The IoT UEs may execute background applications (e.g., keep-alive messages, status updates, etc.) to facilitate the connections of the IoT network. The UEs401may be configured to connect, for example, communicatively couple, with an or RAN410. In embodiments, the RAN410may be an NG RAN or a 5G RAN, an E-UTRAN, or a legacy RAN, such as a UTRAN or GERAN. As used herein, the term “NG RAN” or the like may refer to a RAN410that operates in an NR or 5G system400, and the term “E-UTRAN” or the like may refer to a RAN410that operates in an LTE or 4G system400. The UEs401utilize connections (or channels)403and404, respectively, each of which comprises a physical communications interface or layer (discussed in further detail below). In this example, the connections403and404are illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols, such as a GSM protocol, a CDMA network protocol, a PTT protocol, a POC protocol, a UMTS protocol, a 3GPP LTE protocol, a 5G protocol, a NR protocol, and/or any of the other communications protocols discussed herein. In embodiments, the UEs401may directly exchange communication data via a ProSe interface405. 
The ProSe interface405may alternatively be referred to as a SL interface405and may comprise one or more logical channels, including but not limited to a PSCCH, a PSSCH, a PSDCH, and a PSBCH. The UE401bis shown to be configured to access an AP406(also referred to as “WLAN node406,” “WLAN406,” “WLAN Termination406,” “WT406” or the like) via connection407. The connection407can comprise a local wireless connection, such as a connection consistent with any IEEE 802.11 protocol, wherein the AP406would comprise a wireless fidelity (Wi-Fi®) router. In this example, the AP406is shown to be connected to the Internet without connecting to the core network of the wireless system (described in further detail below). In various embodiments, the UE401b, RAN410, and AP406may be configured to utilize LWA operation and/or LWIP operation. The LWA operation may involve the UE401bin RRC_CONNECTED being configured by a RAN node411a-bto utilize radio resources of LTE and WLAN. LWIP operation may involve the UE401busing WLAN radio resources (e.g., connection407) via IPsec protocol tunneling to authenticate and encrypt packets (e.g., IP packets) sent over the connection407. IPsec tunneling may include encapsulating the entirety of original IP packets and adding a new packet header, thereby protecting the original header of the IP packets. The RAN410can include one or more AN nodes or RAN nodes411aand411b(collectively referred to as “RAN nodes411” or “RAN node411”) that enable the connections403and404. As used herein, the terms “access node,” “access point,” or the like may describe equipment that provides the radio baseband functions for data and/or voice connectivity between a network and one or more users. These access nodes can be referred to as BS, gNBs, RAN nodes, eNBs, NodeBs, RSUs, TRxPs or TRPs, and so forth, and can comprise ground stations (e.g., terrestrial access points) or satellite stations providing coverage within a geographic area (e.g., a cell). As used herein, the term “NG RAN node” or the like may refer to a RAN node411that operates in an NR or 5G system400(for example, a gNB), and the term “E-UTRAN node” or the like may refer to a RAN node411that operates in an LTE or 4G system400(e.g., an eNB). According to various embodiments, the RAN nodes411may be implemented as one or more of a dedicated physical device such as a macrocell base station, and/or a low power (LP) base station for providing femtocells, picocells or other like cells having smaller coverage areas, smaller user capacity, or higher bandwidth compared to macrocells. In some embodiments, all or parts of the RAN nodes411may be implemented as one or more software entities running on server computers as part of a virtual network, which may be referred to as a CRAN and/or a virtual baseband unit pool (vBBUP). In these embodiments, the CRAN or vBBUP may implement a RAN function split, such as a PDCP split wherein RRC and PDCP layers are operated by the CRAN/vBBUP and other L2 protocol entities are operated by individual RAN nodes411; a MAC/PHY split wherein RRC, PDCP, RLC, and MAC layers are operated by the CRAN/vBBUP and the PHY layer is operated by individual RAN nodes411; or a “lower PHY” split wherein RRC, PDCP, RLC, MAC layers and upper portions of the PHY layer are operated by the CRAN/vBBUP and lower portions of the PHY layer are operated by individual RAN nodes411. This virtualized framework allows the freed-up processor cores of the RAN nodes411to perform other virtualized applications. 
In some implementations, an individual RAN node411may represent individual gNB-DUs that are connected to a gNB-CU via individual F1 interfaces (not shown byFIG.4). In these implementations, the gNB-DUs may include one or more remote radio heads or RFEMs (see, e.g.,FIG.5), and the gNB-CU may be operated by a server that is located in the RAN410(not shown) or by a server pool in a similar manner as the CRAN/vBBUP. Additionally or alternatively, one or more of the RAN nodes411may be next generation eNBs (ng-eNBs), which are RAN nodes that provide E-UTRA user plane and control plane protocol terminations toward the UEs401, and are connected to a 5GC via an NG interface (discussed infra). In V2X scenarios one or more of the RAN nodes411may be or act as RSUs. The term “Road Side Unit” or “RSU” may refer to any transportation infrastructure entity used for V2X communications. An RSU may be implemented in or by a suitable RAN node or a stationary (or relatively stationary) UE, where an RSU implemented in or by a UE may be referred to as a “UE-type RSU,” an RSU implemented in or by an eNB may be referred to as an “eNB-type RSU,” an RSU implemented in or by a gNB may be referred to as a “gNB-type RSU,” and the like. In one example, an RSU is a computing device coupled with radio frequency circuitry located on a roadside that provides connectivity support to passing vehicle UEs401(vUEs401). The RSU may also include internal data storage circuitry to store intersection map geometry, traffic statistics, media, as well as applications/software to sense and control ongoing vehicular and pedestrian traffic. The RSU may operate on the 5.9 GHz Direct Short Range Communications (DSRC) band to provide very low latency communications required for high speed events, such as crash avoidance, traffic warnings, and the like. Additionally or alternatively, the RSU may operate on the cellular V2X band to provide the aforementioned low latency communications, as well as other cellular communications services. Additionally or alternatively, the RSU may operate as a Wi-Fi hotspot (2.4 GHz band) and/or provide connectivity to one or more cellular networks to provide uplink and downlink communications. The computing device(s) and some or all of the radiofrequency circuitry of the RSU may be packaged in a weatherproof enclosure suitable for outdoor installation, and may include a network interface controller to provide a wired connection (e.g., Ethernet) to a traffic signal controller and/or a backhaul network. Any of the RAN nodes411can terminate the air interface protocol and can be the first point of contact for the UEs401. In some embodiments, any of the RAN nodes411can fulfill various logical functions for the RAN410including, but not limited to, radio network controller (RNC) functions such as radio bearer management, uplink and downlink dynamic radio resource management and data packet scheduling, and mobility management. In embodiments, the UEs401can be configured to communicate using OFDM communication signals with each other or with any of the RAN nodes411over a multicarrier communication channel in accordance with various communication techniques, such as, but not limited to, an OFDMA communication technique (e.g., for downlink communications) or a SC-FDMA communication technique (e.g., for uplink and ProSe or sidelink communications), although the scope of the embodiments is not limited in this respect. The OFDM signals can comprise a plurality of orthogonal subcarriers. 
In some embodiments, a downlink resource grid can be used for downlink transmissions from any of the RAN nodes411to the UEs401, while uplink transmissions can utilize similar techniques. The grid can be a time-frequency grid, called a resource grid or time-frequency resource grid, which is the physical resource in the downlink in each slot. Such a time-frequency plane representation is a common practice for OFDM systems, which makes it intuitive for radio resource allocation. Each column and each row of the resource grid corresponds to one OFDM symbol and one OFDM subcarrier, respectively. The duration of the resource grid in the time domain corresponds to one slot in a radio frame. The smallest time-frequency unit in a resource grid is denoted as a resource element. Each resource grid comprises a number of resource blocks, which describe the mapping of certain physical channels to resource elements. Each resource block comprises a collection of resource elements; in the frequency domain, this may represent the smallest quantity of resources that currently can be allocated. There are several different physical downlink channels that are conveyed using such resource blocks. According to various embodiments, the UEs401and the RAN nodes411communicate data (for example, transmit and receive) data over a licensed medium (also referred to as the “licensed spectrum” and/or the “licensed band”) and an unlicensed shared medium (also referred to as the “unlicensed spectrum” and/or the “unlicensed band”). The licensed spectrum may include channels that operate in the frequency range of approximately 400 MHz to approximately 3.8 GHz, whereas the unlicensed spectrum may include the 5 GHz band. To operate in the unlicensed spectrum, the UEs401and the RAN nodes411may operate using LAA, eLAA, and/or feLAA mechanisms. In these implementations, the UEs401and the RAN nodes411may perform one or more known medium-sensing operations and/or carrier-sensing operations in order to determine whether one or more channels in the unlicensed spectrum is unavailable or otherwise occupied prior to transmitting in the unlicensed spectrum. The medium/carrier sensing operations may be performed according to a listen-before-talk (LBT) protocol. LBT is a mechanism whereby equipment (for example, UEs401RAN nodes411, etc.) senses a medium (for example, a channel or carrier frequency) and transmits when the medium is sensed to be idle (or when a specific channel in the medium is sensed to be unoccupied). The medium sensing operation may include CCA, which utilizes at least ED to determine the presence or absence of other signals on a channel in order to determine if a channel is occupied or clear. This LBT mechanism allows cellular/LAA networks to coexist with incumbent systems in the unlicensed spectrum and with other LAA networks. ED may include sensing RF energy across an intended transmission band for a period of time and comparing the sensed RF energy to a predefined or configured threshold. Typically, the incumbent systems in the 5 GHz band are WLANs based on IEEE 802.11 technologies. WLAN employs a contention-based channel access mechanism, called CSMA/CA. Here, when a WLAN node (e.g., a mobile station (MS) such as UE401, AP406, or the like) intends to transmit, the WLAN node may first perform CCA before transmission. Additionally, a backoff mechanism is used to avoid collisions in situations where more than one WLAN node senses the channel as idle and transmits at the same time. 
The backoff mechanism may be a counter that is drawn randomly within the CWS, which is increased exponentially upon the occurrence of collision and reset to a minimum value when the transmission succeeds. The LBT mechanism designed for LAA is somewhat similar to the CSMA/CA of WLAN. In some implementations, the LBT procedure for DL or UL transmission bursts including PDSCH or PUSCH transmissions, respectively, may have an LAA contention window that is variable in length between X and Y ECCA slots, where X and Y are minimum and maximum values for the CWSs for LAA. In one example, the minimum CWS for an LAA transmission may be 9 microseconds (μs); however, the size of the CWS and a MCOT (for example, a transmission burst) may be based on governmental regulatory requirements. The LAA mechanisms are built upon CA technologies of LTE-Advanced systems. In CA, each aggregated carrier is referred to as a CC. A CC may have a bandwidth of 1.4, 3, 5, 10, 15 or 20 MHz and a maximum of five CCs can be aggregated, and therefore, a maximum aggregated bandwidth is 100 MHz. In FDD systems, the number of aggregated carriers can be different for DL and UL, where the number of UL CCs is equal to or lower than the number of DL component carriers. In some cases, individual CCs can have a different bandwidth than other CCs. In TDD systems, the number of CCs as well as the bandwidths of each CC is usually the same for DL and UL. CA also comprises individual serving cells to provide individual CCs. The coverage of the serving cells may differ, for example, because CCs on different frequency bands will experience different pathloss. A primary service cell or PCell may provide a PCC for both UL and DL, and may handle RRC and NAS related activities. The other serving cells are referred to as SCells, and each SCell may provide an individual SCC for both UL and DL. The SCCs may be added and removed as required, while changing the PCC may require the UE401to undergo a handover. In LAA, eLAA, and feLAA, some or all of the SCells may operate in the unlicensed spectrum (referred to as “LAA SCells”), and the LAA SCells are assisted by a PCell operating in the licensed spectrum. When a UE is configured with more than one LAA SCell, the UE may receive UL grants on the configured LAA SCells indicating different PUSCH starting positions within a same subframe. The PDSCH carries user data and higher-layer signaling to the UEs401. The PDCCH carries information about the transport format and resource allocations related to the PDSCH channel, among other things. It may also inform the UEs401about the transport format, resource allocation, and HARQ information related to the uplink shared channel. Typically, downlink scheduling (assigning control and shared channel resource blocks to the UE401bwithin a cell) may be performed at any of the RAN nodes411based on channel quality information fed back from any of the UEs401. The downlink resource assignment information may be sent on the PDCCH used for (e.g., assigned to) each of the UEs401. The PDCCH uses CCEs to convey the control information. Before being mapped to resource elements, the PDCCH complex-valued symbols may first be organized into quadruplets, which may then be permuted using a sub-block interleaver for rate matching. Each PDCCH may be transmitted using one or more of these CCEs, where each CCE may correspond to nine sets of four physical resource elements known as REGs. Four Quadrature Phase Shift Keying (QPSK) symbols may be mapped to each REG. 
The PDCCH can be transmitted using one or more CCEs, depending on the size of the DCI and the channel condition. There can be four or more different PDCCH formats defined in LTE with different numbers of CCEs (e.g., aggregation level, L=1, 2, 4, or 8). Some embodiments may use concepts for resource allocation for control channel information that are an extension of the above-described concepts. For example, some embodiments may utilize an EPDCCH that uses PDSCH resources for control information transmission. The EPDCCH may be transmitted using one or more ECCEs. Similar to above, each ECCE may correspond to nine sets of four physical resource elements known as an EREGs. An ECCE may have other numbers of EREGs in some situations. The RAN nodes411may be configured to communicate with one another via interface412. In embodiments where the system400is an LTE system (e.g., when CN420is an EPC), the interface412may be an X2 interface412. The X2 interface may be defined between two or more RAN nodes411(e.g., two or more eNBs and the like) that connect to EPC420, and/or between two eNBs connecting to EPC420. In some implementations, the X2 interface may include an X2 user plane interface (X2-U) and an X2 control plane interface (X2-C). The X2-U may provide flow control mechanisms for user data packets transferred over the X2 interface, and may be used to communicate information about the delivery of user data between eNBs. For example, the X2-U may provide specific sequence number information for user data transferred from a MeNB to an SeNB; information about successful in sequence delivery of PDCP PDUs to a UE401from an SeNB for user data; information of PDCP PDUs that were not delivered to a UE401; information about a current minimum desired buffer size at the SeNB for transmitting to the UE user data; and the like. The X2-C may provide intra-LTE access mobility functionality, including context transfers from source to target eNBs, user plane transport control, etc.; load management functionality; as well as inter-cell interference coordination functionality. In embodiments where the system400is a 5G or NR system (e.g., when CN420is an 5GC), the interface412may be an Xn interface412. The Xn interface is defined between two or more RAN nodes411(e.g., two or more gNBs and the like) that connect to 5GC420, between a RAN node411(e.g., a gNB) connecting to 5GC420and an eNB, and/or between two eNBs connecting to 5GC420. In some implementations, the Xn interface may include an Xn user plane (Xn-U) interface and an Xn control plane (Xn-C) interface. The Xn-U may provide non-guaranteed delivery of user plane PDUs and support/provide data forwarding and flow control functionality. The Xn-C may provide management and error handling functionality, functionality to manage the Xn-C interface; mobility support for UE401in a connected mode (e.g., CM-CONNECTED) including functionality to manage the UE mobility for connected mode between one or more RAN nodes411. The mobility support may include context transfer from an old (source) serving RAN node411to new (target) serving RAN node411; and control of user plane tunnels between old (source) serving RAN node411to new (target) serving RAN node411. A protocol stack of the Xn-U may include a transport network layer built on Internet Protocol (IP) transport layer, and a GTP-U layer on top of a UDP and/or IP layer(s) to carry user plane PDUs. 
The Xn-C protocol stack may include an application layer signaling protocol (referred to as Xn Application Protocol (Xn-AP)) and a transport network layer that is built on SCTP. The SCTP may be on top of an IP layer, and may provide the guaranteed delivery of application layer messages. In the transport IP layer, point-to-point transmission is used to deliver the signaling PDUs. In other implementations, the Xn-U protocol stack and/or the Xn-C protocol stack may be same or similar to the user plane and/or control plane protocol stack(s) shown and described herein. The RAN410is shown to be communicatively coupled to a core network—in this embodiment, core network (CN)420. The CN420may comprise a plurality of network elements422, which are configured to offer various data and telecommunications services to customers/subscribers (e.g., users of UEs401) who are connected to the CN420via the RAN410. The components of the CN420may be implemented in one physical node or separate physical nodes including components to read and execute instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium). In some embodiments, NFV may be utilized to virtualize any or all of the above-described network node functions via executable instructions stored in one or more computer-readable storage mediums (described in further detail below). A logical instantiation of the CN420may be referred to as a network slice, and a logical instantiation of a portion of the CN420may be referred to as a network sub-slice. NFV architectures and infrastructures may be used to virtualize one or more network functions, alternatively performed by proprietary hardware, onto physical resources comprising a combination of industry-standard server hardware, storage hardware, or switches. In other words, NFV systems can be used to execute virtual or reconfigurable implementations of one or more EPC components/functions. Generally, the application server430may be an element offering applications that use IP bearer resources with the core network (e.g., UMTS PS domain, LTE PS data services, etc.). The application server430can also be configured to support one or more communication services (e.g., VoIP sessions, PTT sessions, group communication sessions, social networking services, etc.) for the UEs401via the EPC420. In embodiments, the CN420may be a 5GC (referred to as “5GC420” or the like), and the RAN410may be connected with the CN420via an NG interface413. In embodiments, the NG interface413may be split into two parts, an NG user plane (NG-U) interface414, which carries traffic data between the RAN nodes411and a UPF, and the S1 control plane (NG-C) interface415, which is a signaling interface between the RAN nodes411and AMFs. In embodiments, the CN420may be a 5G CN (referred to as “5GC420” or the like), while in other embodiments, the CN420may be an EPC). Where CN420is an EPC (referred to as “EPC420” or the like), the RAN410may be connected with the CN420via an S1 interface413. In embodiments, the S1 interface413may be split into two parts, an S1 user plane (S1-U) interface414, which carries traffic data between the RAN nodes411and the S-GW, and the S1-MME interface415, which is a signaling interface between the RAN nodes411and MMES. FIG.5illustrates an example of infrastructure equipment500in accordance with various embodiments. 
The infrastructure equipment500(or “system500”) may be implemented as a base station, radio head, RAN node such as the RAN nodes411and/or AP406shown and described previously, application server(s)430, and/or any other element/device discussed herein. In other examples, the system500could be implemented in or by a UE. The system500includes application circuitry505, baseband circuitry510, one or more radio front end modules (RFEMs)515, memory circuitry520, power management integrated circuitry (PMIC)525, power tee circuitry530, network controller circuitry535, network interface connector540, satellite positioning circuitry545, and user interface550. In some embodiments, the device500may include additional elements such as, for example, memory/storage, display, camera, sensor, or input/output (I/O) interface. In other embodiments, the components described below may be included in more than one device. For example, said circuitries may be separately included in more than one device for CRAN, vBBU, or other like implementations. Application circuitry505includes circuitry such as, but not limited to one or more processors (or processor cores), cache memory, and one or more of low drop-out voltage regulators (LDOs), interrupt controllers, serial interfaces such as SPI, I2C or universal programmable serial interface module, real time clock (RTC), timer-counters including interval and watchdog timers, general purpose input/output (I/O or IO), memory card controllers such as Secure Digital (SD) MultiMediaCard (MMC) or similar, Universal Serial Bus (USB) interfaces, Mobile Industry Processor Interface (MIPI) interfaces and Joint Test Access Group (JTAG) test access ports. The processors (or cores) of the application circuitry505may be coupled with or may include memory/storage elements and may be configured to execute instructions stored in the memory/storage to enable various applications or operating systems to run on the system500. In some implementations, the memory/storage elements may be on-chip memory circuitry, which may include any suitable volatile and/or non-volatile memory, such as DRAM, SRAM, EPROM, EEPROM, Flash memory, solid-state memory, and/or any other type of memory device technology, such as those discussed herein. The processor(s) of application circuitry505may include, for example, one or more processor cores (CPUs), one or more application processors, one or more graphics processing units (GPUs), one or more reduced instruction set computing (RISC) processors, one or more Acorn RISC Machine (ARM) processors, one or more complex instruction set computing (CISC) processors, one or more digital signal processors (DSP), one or more FPGAs, one or more PLDs, one or more ASICs, one or more microprocessors or controllers, or any suitable combination thereof. In some embodiments, the application circuitry505may comprise, or may be, a special-purpose processor/controller to operate according to the various embodiments herein. As examples, the processor(s) of application circuitry505may include one or more Intel Pentium®, Core®, or Xeon® processor(s); Advanced Micro Devices (AMD) Ryzen® processor(s), Accelerated Processing Units (APUs), or Epyc® processors; ARM-based processor(s) licensed from ARM Holdings, Ltd. such as the ARM Cortex-A family of processors and the ThunderX2 ® provided by Cavium™, Inc.; a MIPS-based design from MIPS Technologies, Inc. such as MIPS Warrior P-class processors; and/or the like. 
In some embodiments, the system500may not utilize application circuitry505, and instead may include a special-purpose processor/controller to process IP data received from an EPC or 5GC, for example. In some implementations, the application circuitry505may include one or more hardware accelerators, which may be microprocessors, programmable processing devices, or the like. The one or more hardware accelerators may include, for example, computer vision (CV) and/or deep learning (DL) accelerators. As examples, the programmable processing devices may be one or more a field-programmable devices (FPDs) such as field-programmable gate arrays (FPGAs) and the like; programmable logic devices (PLDs) such as complex PLDs (CPLDs), high-capacity PLDs (HCPLDs), and the like; ASICs such as structured ASICs and the like; programmable SoCs (PSoCs); and the like. In such implementations, the circuitry of application circuitry505may comprise logic blocks or logic fabric, and other interconnected resources that may be programmed to perform various functions, such as the procedures, methods, functions, etc. of the various embodiments discussed herein. In such embodiments, the circuitry of application circuitry505may include memory cells (e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, static memory (e.g., static random access memory (SRAM), anti-fuses, etc.)) used to store logic blocks, logic fabric, data, etc. in look-up-tables (LUTs) and the like. The baseband circuitry510may be implemented, for example, as a solder-down substrate including one or more integrated circuits, a single packaged integrated circuit soldered to a main circuit board or a multi-chip module containing two or more integrated circuits. The various hardware electronic elements of baseband circuitry510are discussed infra with regard toFIG.7. User interface circuitry550may include one or more user interfaces designed to enable user interaction with the system500or peripheral component interfaces designed to enable peripheral component interaction with the system500. User interfaces may include, but are not limited to, one or more physical or virtual buttons (e.g., a reset button), one or more indicators (e.g., light emitting diodes (LEDs)), a physical keyboard or keypad, a mouse, a touchpad, a touchscreen, speakers or other audio emitting devices, microphones, a printer, a scanner, a headset, a display screen or display device, etc. Peripheral component interfaces may include, but are not limited to, a nonvolatile memory port, a universal serial bus (USB) port, an audio jack, a power supply interface, etc. The radio front end modules (RFEMs)515may comprise a millimeter wave (mmWave) RFEM and one or more sub-mmWave radio frequency integrated circuits (RFICs). In some implementations, the one or more sub-mmWave RFICs may be physically separated from the mmWave RFEM. The RFICs may include connections to one or more antennas or antenna arrays (see e.g., antenna array711ofFIG.7infra), and the RFEM may be connected to multiple antennas. In alternative implementations, both mmWave and sub-mmWave radio functions may be implemented in the same physical RFEM515, which incorporates both mmWave antennas and sub-mmWave. 
The memory circuitry520may include one or more of volatile memory including dynamic random access memory (DRAM) and/or synchronous dynamic random access memory (SDRAM), and nonvolatile memory (NVM) including high-speed electrically erasable memory (commonly referred to as Flash memory), phase change random access memory (PRAM), magnetoresistive random access memory (MRAM), etc., and may incorporate the three-dimensional (3D) cross-point (XPOINT) memories from Intel® and Micron®. Memory circuitry520may be implemented as one or more of solder down packaged integrated circuits, socketed memory modules and plug-in memory cards. The PMIC525may include voltage regulators, surge protectors, power alarm detection circuitry, and one or more backup power sources such as a battery or capacitor. The power alarm detection circuitry may detect one or more of brown out (under-voltage) and surge (over-voltage) conditions. The power tee circuitry530may provide for electrical power drawn from a network cable to provide both power supply and data connectivity to the infrastructure equipment500using a single cable. The network controller circuitry535may provide connectivity to a network using a standard network interface protocol such as Ethernet, Ethernet over GRE Tunnels, Ethernet over Multiprotocol Label Switching (MPLS), or some other suitable protocol. Network connectivity may be provided to/from the infrastructure equipment500via network interface connector540using a physical connection, which may be electrical (commonly referred to as a “copper interconnect”), optical, or wireless. The network controller circuitry535may include one or more dedicated processors and/or FPGAs to communicate using one or more of the aforementioned protocols. In some implementations, the network controller circuitry535may include multiple controllers to provide connectivity to other networks using the same or different protocols. The positioning circuitry545includes circuitry to receive and decode signals transmitted/broadcasted by a positioning network of a global navigation satellite system (GNSS). Examples of navigation satellite constellations (or GNSS) include United States' Global Positioning System (GPS), Russia's Global Navigation System (GLONASS), the European Union's Galileo system, China's BeiDou Navigation Satellite System, a regional navigation system or GNSS augmentation system (e.g., Navigation with Indian Constellation (NAVIC), Japan's Quasi-Zenith Satellite System (QZSS), France's Doppler Orbitography and Radio-positioning Integrated by Satellite (DORIS), etc.), or the like. The positioning circuitry545comprises various hardware elements (e.g., including hardware devices such as switches, filters, amplifiers, antenna elements, and the like to facilitate OTA communications) to communicate with components of a positioning network, such as navigation satellite constellation nodes. In some embodiments, the positioning circuitry545may include a Micro-Technology for Positioning, Navigation, and Timing (Micro-PNT) IC that uses a master timing clock to perform position tracking/estimation without GNSS assistance. The positioning circuitry545may also be part of, or interact with, the baseband circuitry510and/or RFEMs515to communicate with the nodes and components of the positioning network. The positioning circuitry545may also provide position data and/or time data to the application circuitry505, which may use the data to synchronize operations with various infrastructure (e.g., RAN nodes411, etc.), or the like. 
The components shown byFIG.5may communicate with one another using interface circuitry, which may include any number of bus and/or interconnect (IX) technologies such as industry standard architecture (ISA), extended ISA (EISA), peripheral component interconnect (PCI), peripheral component interconnect extended (PCIx), PCI express (PCIe), or any number of other technologies. The bus/IX may be a proprietary bus, for example, used in a SoC based system. Other bus/IX systems may be included, such as an I2C interface, an SPI interface, point to point interfaces, and a power bus, among others. FIG.6illustrates an example of a platform600(or “device600”) in accordance with various embodiments. In embodiments, the computer platform600may be suitable for use as UEs401, application servers430, and/or any other element/device discussed herein. The platform600may include any combinations of the components shown in the example. The components of platform600may be implemented as integrated circuits (ICs), portions thereof, discrete electronic devices, or other modules, logic, hardware, software, firmware, or a combination thereof adapted in the computer platform600, or as components otherwise incorporated within a chassis of a larger system. The block diagram ofFIG.6is intended to show a high level view of components of the computer platform600. However, some of the components shown may be omitted, additional components may be present, and different arrangement of the components shown may occur in other implementations. Application circuitry605includes circuitry such as, but not limited to one or more processors (or processor cores), cache memory, and one or more of LDOs, interrupt controllers, serial interfaces such as SPI, I2C or universal programmable serial interface module, RTC, timer-counters including interval and watchdog timers, general purpose I/O, memory card controllers such as SD MMC or similar, USB interfaces, MIPI interfaces, and JTAG test access ports. The processors (or cores) of the application circuitry605may be coupled with or may include memory/storage elements and may be configured to execute instructions stored in the memory/storage to enable various applications or operating systems to run on the system600. In some implementations, the memory/storage elements may be on-chip memory circuitry, which may include any suitable volatile and/or non-volatile memory, such as DRAM, SRAM, EPROM, EEPROM, Flash memory, solid-state memory, and/or any other type of memory device technology, such as those discussed herein. The processor(s) of application circuitry505may include, for example, one or more processor cores, one or more application processors, one or more GPUs, one or more RISC processors, one or more ARM processors, one or more CISC processors, one or more DSP, one or more FPGAs, one or more PLDs, one or more ASICs, one or more microprocessors or controllers, a multithreaded processor, an ultra-low voltage processor, an embedded processor, some other known processing element, or any suitable combination thereof. In some embodiments, the application circuitry505may comprise, or may be, a special-purpose processor/controller to operate according to the various embodiments herein. 
As examples, the processor(s) of application circuitry605may include an Intel® Architecture Core™ based processor, such as a Quark™, an Atom™, an i3, an i5, an i7, or an MCU-class processor, or another such processor available from Intel® Corporation, Santa Clara, CA The processors of the application circuitry605may also be one or more of Advanced Micro Devices (AMD) Ryzen® processor(s) or Accelerated Processing Units (APUs); A5-A9 processor(s) from Apple® Inc., Snapdragon™ processor(s) from Qualcomm® Technologies, Inc., Texas Instruments, Inc.® Open Multimedia Applications Platform (OMAP)™ processor(s); a MIPS-based design from MIPS Technologies, Inc. such as MIPS Warrior M-class, Warrior I-class, and Warrior P-class processors; an ARM-based design licensed from ARM Holdings, Ltd., such as the ARM Cortex-A, Cortex-R, and Cortex-M family of processors; or the like. In some implementations, the application circuitry605may be a part of a system on a chip (SoC) in which the application circuitry605and other components are formed into a single integrated circuit, or a single package, such as the Edison™ or Galileo™ SoC boards from Intel® Corporation. Additionally or alternatively, application circuitry605may include circuitry such as, but not limited to, one or more a field-programmable devices (FPDs) such as FPGAs and the like; programmable logic devices (PLDs) such as complex PLDs (CPLDs), high-capacity PLDs (HCPLDs), and the like; ASICs such as structured ASICs and the like; programmable SoCs (PSoCs); and the like. In such embodiments, the circuitry of application circuitry605may comprise logic blocks or logic fabric, and other interconnected resources that may be programmed to perform various functions, such as the procedures, methods, functions, etc. of the various embodiments discussed herein. In such embodiments, the circuitry of application circuitry605may include memory cells (e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, static memory (e.g., static random access memory (SRAM), anti-fuses, etc.)) used to store logic blocks, logic fabric, data, etc. in look-up tables (LUTs) and the like. The baseband circuitry610may be implemented, for example, as a solder-down substrate including one or more integrated circuits, a single packaged integrated circuit soldered to a main circuit board or a multi-chip module containing two or more integrated circuits. The various hardware electronic elements of baseband circuitry610are discussed infra with regard toFIG.7. The RFEMs615may comprise a millimeter wave (mmWave) RFEM and one or more sub-mmWave radio frequency integrated circuits (RFICs). In some implementations, the one or more sub-mmWave RFICs may be physically separated from the mmWave RFEM. The RFICs may include connections to one or more antennas or antenna arrays (see e.g., antenna array711ofFIG.7infra), and the RFEM may be connected to multiple antennas. In alternative implementations, both mmWave and sub-mmWave radio functions may be implemented in the same physical RFEM615, which incorporates both mmWave antennas and sub-mmWave. The memory circuitry620may include any number and type of memory devices used to provide for a given amount of system memory. 
As examples, the memory circuitry620may include one or more of volatile memory including random access memory (RAM), dynamic RAM (DRAM) and/or synchronous dynamic RAM (SDRAM), and nonvolatile memory (NVM) including high-speed electrically erasable memory (commonly referred to as Flash memory), phase change random access memory (PRAM), magnetoresistive random access memory (MRAM), etc. The memory circuitry620may be developed in accordance with a Joint Electron Devices Engineering Council (JEDEC) low power double data rate (LPDDR)-based design, such as LPDDR2, LPDDR3, LPDDR4, or the like. Memory circuitry620may be implemented as one or more of solder down packaged integrated circuits, single die package (SDP), dual die package (DDP) or quad die package (Q17P), socketed memory modules, dual inline memory modules (DIMMs) including microDIMMs or MiniDIMMs, and/or soldered onto a motherboard via a ball grid array (BGA). In low power implementations, the memory circuitry620may be on-die memory or registers associated with the application circuitry605. To provide for persistent storage of information such as data, applications, operating systems and so forth, memory circuitry620may include one or more mass storage devices, which may include, inter alia, a solid state disk drive (SSDD), hard disk drive (HDD), a micro HDD, resistance change memories, phase change memories, holographic memories, or chemical memories, among others. For example, the computer platform600may incorporate the three-dimensional (3D) cross-point (XPOINT) memories from Intel® and Micron®. Removable memory circuitry623may include devices, circuitry, enclosures/housings, ports or receptacles, etc. used to couple portable data storage devices with the platform600. These portable data storage devices may be used for mass storage purposes, and may include, for example, flash memory cards (e.g., Secure Digital (SD) cards, microSD cards, xD picture cards, and the like), and USB flash drives, optical discs, external HDDs, and the like. The platform600may also include interface circuitry (not shown) that is used to connect external devices with the platform600. The external devices connected to the platform600via the interface circuitry include sensor circuitry621and electro-mechanical components (EMCs)622, as well as removable memory devices coupled to removable memory circuitry623. The sensor circuitry621include devices, modules, or subsystems whose purpose is to detect events or changes in its environment and send the information (sensor data) about the detected events to some other a device, module, subsystem, etc. Examples of such sensors include, inter alia, inertia measurement units (IMUS) comprising accelerometers, gyroscopes, and/or magnetometers; microelectromechanical systems (MEMS) or nanoelectromechanical systems (NEMS) comprising 3-axis accelerometers, 3-axis gyroscopes, and/or magnetometers; level sensors; flow sensors; temperature sensors (e.g., thermistors); pressure sensors; barometric pressure sensors; gravimeters; altimeters; image capture devices (e.g., cameras or lensless apertures); light detection and ranging (LiDAR) sensors; proximity sensors (e.g., infrared radiation detector and the like), depth sensors, ambient light sensors, ultrasonic transceivers; microphones or other like audio capture devices; etc. EMCs622include devices, modules, or subsystems whose purpose is to enable platform600to change its state, position, and/or orientation, or move or control a mechanism or (sub)system. 
Additionally, EMCs622may be configured to generate and send messages/signalling to other components of the platform600to indicate a current state of the EMCs622. Examples of the EMCs622include one or more power switches, relays including electromechanical relays (EMRs) and/or solid state relays (SSRs), actuators (e.g., valve actuators, etc.), an audible sound generator, a visual warning device, motors (e.g., DC motors, stepper motors, etc.), wheels, thrusters, propellers, claws, clamps, hooks, and/or other like electro-mechanical components. In embodiments, platform600is configured to operate one or more EMCs622based on one or more captured events and/or instructions or control signals received from a service provider and/or various clients. In some implementations, the interface circuitry may connect the platform600with positioning circuitry645. The positioning circuitry645includes circuitry to receive and decode signals transmitted/broadcasted by a positioning network of a GNSS. Examples of navigation satellite constellations (or GNSS) include United States' GPS, Russia's GLONASS, the European Union's Galileo system, China's BeiDou Navigation Satellite System, a regional navigation system or GNSS augmentation system (e.g., NAVIC), Japan's QZSS, France's DORIS, etc.), or the like. The positioning circuitry645comprises various hardware elements (e.g., including hardware devices such as switches, filters, amplifiers, antenna elements, and the like to facilitate OTA communications) to communicate with components of a positioning network, such as navigation satellite constellation nodes. In some embodiments, the positioning circuitry645may include a Micro-PNT IC that uses a master timing clock to perform position tracking/estimation without GNSS assistance. The positioning circuitry645may also be part of, or interact with, the baseband circuitry510and/or RFEMs615to communicate with the nodes and components of the positioning network. The positioning circuitry645may also provide position data and/or time data to the application circuitry605, which may use the data to synchronize operations with various infrastructure (e.g., radio base stations), for turn-by-turn navigation applications, or the like In some implementations, the interface circuitry may connect the platform600with Near-Field Communication (NFC) circuitry640. NFC circuitry640is configured to provide contactless, short-range communications based on radio frequency identification (RFID) standards, wherein magnetic field induction is used to enable communication between NFC circuitry640and NFC-enabled devices external to the platform600(e.g., an “NFC touchpoint”). NFC circuitry640comprises an NFC controller coupled with an antenna element and a processor coupled with the NFC controller. The NFC controller may be a chip/IC providing NFC functionalities to the NFC circuitry640by executing NFC controller firmware and an NFC stack. The NFC stack may be executed by the processor to control the NFC controller, and the NFC controller firmware may be executed by the NFC controller to control the antenna element to emit short-range RF signals. The RF signals may power a passive NFC tag (e.g., a microchip embedded in a sticker or wristband) to transmit stored data to the NFC circuitry640, or initiate data transfer between the NFC circuitry640and another active NFC device (e.g., a smartphone or an NFC-enabled POS terminal) that is proximate to the platform600. 
The driver circuitry646may include software and hardware elements that operate to control particular devices that are embedded in the platform600, attached to the platform600, or otherwise communicatively coupled with the platform600. The driver circuitry646may include individual drivers allowing other components of the platform600to interact with or control various input/output (I/O) devices that may be present within, or connected to, the platform600. For example, driver circuitry646may include a display driver to control and allow access to a display device, a touchscreen driver to control and allow access to a touchscreen interface of the platform600, sensor drivers to obtain sensor readings of sensor circuitry621and control and allow access to sensor circuitry621, EMC drivers to obtain actuator positions of the EMCs622and/or control and allow access to the EMCs622, a camera driver to control and allow access to an embedded image capture device, and audio drivers to control and allow access to one or more audio devices. The power management integrated circuitry (PMIC)625(also referred to as “power management circuitry625”) may manage power provided to various components of the platform600. In particular, with respect to the baseband circuitry610, the PMIC625may control power-source selection, voltage scaling, battery charging, or DC-to-DC conversion. The PMIC625may often be included when the platform600is capable of being powered by a battery630, for example, when the device is included in a UE401. In some embodiments, the PMIC625may control, or otherwise be part of, various power saving mechanisms of the platform600. For example, if the platform600is in an RRC_Connected state, where it is still connected to the RAN node as it expects to receive traffic shortly, then it may enter a state known as Discontinuous Reception Mode (DRX) after a period of inactivity. During this state, the platform600may power down for brief intervals of time and thus save power. If there is no data traffic activity for an extended period of time, then the platform600may transition off to an RRC_Idle state, where it disconnects from the network and does not perform operations such as channel quality feedback, handover, etc. The platform600goes into a very low power state and performs paging, in which it periodically wakes up to listen to the network and then powers down again. The platform600may not receive data in this state; in order to receive data, it must transition back to the RRC_Connected state. An additional power saving mode may allow a device to be unavailable to the network for periods longer than a paging interval (ranging from seconds to a few hours). During this time, the device is totally unreachable to the network and may power down completely. Any data sent during this time incurs a large delay, and it is assumed that the delay is acceptable. A battery630may power the platform600, although in some examples the platform600may be deployed in a fixed location and may have a power supply coupled to an electrical grid. The battery630may be a lithium ion battery, a metal-air battery, such as a zinc-air battery, an aluminum-air battery, a lithium-air battery, and the like. In some implementations, such as in V2X applications, the battery630may be a typical lead-acid automotive battery. In some implementations, the battery630may be a “smart battery,” which includes or is coupled with a Battery Management System (BMS) or battery monitoring integrated circuitry. 
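Returning to the power saving mechanisms described above, the RRC_Connected, DRX, and RRC_Idle behaviour can be pictured as a simple state machine. The sketch below only illustrates those transitions; the inactivity timer values are hypothetical placeholders rather than values from any specification, and a real PMIC/modem implementation is considerably more involved.

```python
# A minimal sketch of the power-saving behaviour described above, modelled as
# a simple state machine. State names follow the text (RRC_Connected, DRX,
# RRC_Idle); the timer thresholds are illustrative placeholders only.

from enum import Enum, auto


class PowerState(Enum):
    RRC_CONNECTED = auto()   # connected, expecting traffic shortly
    DRX = auto()             # connected, but powering down for brief intervals
    RRC_IDLE = auto()        # disconnected, wakes periodically for paging only


class PlatformPowerManager:
    DRX_AFTER_S = 0.2        # hypothetical inactivity timer before entering DRX
    IDLE_AFTER_S = 10.0      # hypothetical extended-inactivity timer

    def __init__(self) -> None:
        self.state = PowerState.RRC_CONNECTED
        self.idle_time_s = 0.0

    def on_traffic(self) -> None:
        # Data can only be received in RRC_Connected, so any traffic (or the
        # need to receive it) forces a transition back to that state.
        self.state = PowerState.RRC_CONNECTED
        self.idle_time_s = 0.0

    def tick(self, elapsed_s: float) -> None:
        self.idle_time_s += elapsed_s
        if self.idle_time_s >= self.IDLE_AFTER_S:
            self.state = PowerState.RRC_IDLE      # disconnect; paging wake-ups only
        elif self.idle_time_s >= self.DRX_AFTER_S:
            self.state = PowerState.DRX           # brief power-downs to save power


if __name__ == "__main__":
    pm = PlatformPowerManager()
    pm.tick(0.5)
    print(pm.state)      # PowerState.DRX
    pm.tick(10.0)
    print(pm.state)      # PowerState.RRC_IDLE
    pm.on_traffic()
    print(pm.state)      # PowerState.RRC_CONNECTED
```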
The BMS may be included in the platform600to track the state of charge (SoCh) of the battery630. The BMS may be used to monitor other parameters of the battery630to provide failure predictions, such as the state of health (SoH) and the state of function (SoF) of the battery630. The BMS may communicate the information of the battery630to the application circuitry605or other components of the platform600. The BMS may also include an analog-to-digital (ADC) converter that allows the application circuitry605to directly monitor the voltage of the battery630or the current flow from the battery630. The battery parameters may be used to determine actions that the platform600may perform, such as transmission frequency, network operation, sensing frequency, and the like. A power block, or other power supply coupled to an electrical grid may be coupled with the BMS to charge the battery630. In some examples, the power block XS30 may be replaced with a wireless power receiver to obtain the power wirelessly, for example, through a loop antenna in the computer platform600. In these examples, a wireless battery charging circuit may be included in the BMS. The specific charging circuits chosen may depend on the size of the battery630, and thus, the current required. The charging may be performed using the Airfuel standard promulgated by the Airfuel Alliance, the Qi wireless charging standard promulgated by the Wireless Power Consortium, or the Rezence charging standard promulgated by the Alliance for Wireless Power, among others. User interface circuitry650includes various input/output (I/O) devices present within, or connected to, the platform600, and includes one or more user interfaces designed to enable user interaction with the platform600and/or peripheral component interfaces designed to enable peripheral component interaction with the platform600. The user interface circuitry650includes input device circuitry and output device circuitry. Input device circuitry includes any physical or virtual means for accepting an input including, inter alia, one or more physical or virtual buttons (e.g., a reset button), a physical keyboard, keypad, mouse, touchpad, touchscreen, microphones, scanner, headset, and/or the like. The output device circuitry includes any physical or virtual means for showing information or otherwise conveying information, such as sensor readings, actuator position(s), or other like information. Output device circuitry may include any number and/or combinations of audio or visual display, including, inter alia, one or more simple visual outputs/indicators (e.g., binary status indicators (e.g., light emitting diodes (LEDs)) and multi-character visual outputs), or more complex outputs such as display devices or touchscreens (e.g., Liquid Crystal Displays (LCD), LED displays, quantum dot displays, projectors, etc.), with the output of characters, graphics, multimedia objects, and the like being generated or produced from the operation of the platform600. The output device circuitry may also include speakers or other audio emitting devices, printer(s), and/or the like. In some embodiments, the sensor circuitry621may be used as the input device circuitry (e.g., an image capture device, motion capture device, or the like) and one or more EMCs may be used as the output device circuitry (e.g., an actuator to provide haptic feedback or the like). 
In another example, NFC circuitry comprising an NFC controller coupled with an antenna element and a processing device may be included to read electronic tags and/or connect with another NFC-enabled device. Peripheral component interfaces may include, but are not limited to, a non-volatile memory port, a USB port, an audio jack, a power supply interface, etc. Although not shown, the components of platform600may communicate with one another using a suitable bus or interconnect (IX) technology, which may include any number of technologies, including ISA, EISA, PCI, PCIx, PCIe, a Time-Trigger Protocol (TTP) system, a FlexRay system, or any number of other technologies. The bus/IX may be a proprietary bus/IX, for example, used in a SoC based system. Other bus/IX systems may be included, such as an I2C interface, an SPI interface, point-to-point interfaces, and a power bus, among others. FIG.7illustrates example components of baseband circuitry710and radio front end modules (RFEM)715in accordance with various embodiments. The baseband circuitry710corresponds to the baseband circuitry510and610ofFIGS.5and6, respectively. The RFEM715corresponds to the RFEM515and615ofFIGS.5and6, respectively. As shown, the RFEMs715may include Radio Frequency (RF) circuitry706, front-end module (FEM) circuitry708, antenna array711coupled together at least as shown. The baseband circuitry710includes circuitry and/or control logic configured to carry out various radio/network protocol and radio control functions that enable communication with one or more radio networks via the RF circuitry706. The radio control functions may include, but are not limited to, signal modulation/demodulation, encoding/decoding, radio frequency shifting, etc. In some embodiments, modulation/demodulation circuitry of the baseband circuitry710may include Fast-Fourier Transform (FFT), precoding, or constellation mapping/demapping functionality. In some embodiments, encoding/decoding circuitry of the baseband circuitry710may include convolution, tail-biting convolution, turbo, Viterbi, or Low Density Parity Check (LDPC) encoder/decoder functionality. Embodiments of modulation/demodulation and encoder/decoder functionality are not limited to these examples and may include other suitable functionality in other embodiments. The baseband circuitry710is configured to process baseband signals received from a receive signal path of the RF circuitry706and to generate baseband signals for a transmit signal path of the RF circuitry706. The baseband circuitry710is configured to interface with application circuitry505/605(seeFIGS.5and6) for generation and processing of the baseband signals and for controlling operations of the RF circuitry706. The baseband circuitry710may handle various radio control functions. The aforementioned circuitry and/or control logic of the baseband circuitry710may include one or more single or multi-core processors. For example, the one or more processors may include a 3G baseband processor704A, a 4G/LTE baseband processor704B, a 5G/NR baseband processor704C, or some other baseband processor(s)704D for other existing generations, generations in development or to be developed in the future (e.g., sixth generation (6G), etc.). In other embodiments, some or all of the functionality of baseband processors704A-D may be included in modules stored in the memory704G and executed via a Central Processing Unit (CPU)704E. 
In other embodiments, some or all of the functionality of baseband processors704A-D may be provided as hardware accelerators (e.g., FPGAs, ASICs, etc.) loaded with the appropriate bit streams or logic blocks stored in respective memory cells. In various embodiments, the memory704G may store program code of a real-time OS (RTOS), which, when executed by the CPU704E (or other baseband processor), is to cause the CPU704E (or other baseband processor) to manage resources of the baseband circuitry710, schedule tasks, etc. Examples of the RTOS may include Operating System Embedded (OSE)™ provided by Enea®, Nucleus RTOS™ provided by Mentor Graphics®, Versatile Real-Time Executive (VRTX) provided by Mentor Graphics®, ThreadX™ provided by Express Logic®, FreeRTOS, REX OS provided by Qualcomm®, OKL4 provided by Open Kernel (OK) Labs®, or any other suitable RTOS, such as those discussed herein. In addition, the baseband circuitry710includes one or more audio digital signal processor(s) (DSP)704F. The audio DSP(s)704F include elements for compression/decompression and echo cancellation and may include other suitable processing elements in other embodiments. In some embodiments, each of the processors704A-704E includes respective memory interfaces to send/receive data to/from the memory704G. The baseband circuitry710may further include one or more interfaces to communicatively couple to other circuitries/devices, such as an interface to send/receive data to/from memory external to the baseband circuitry710; an application circuitry interface to send/receive data to/from the application circuitry505/605ofFIGS.5-6; an RF circuitry interface to send/receive data to/from RF circuitry706ofFIG.7; a wireless hardware connectivity interface to send/receive data to/from one or more wireless hardware elements (e.g., Near Field Communication (NFC) components, Bluetooth®/Bluetooth® Low Energy components, Wi-Fi® components, and/or the like); and a power management interface to send/receive power or control signals to/from the PMIC625. In alternate embodiments (which may be combined with the above described embodiments), baseband circuitry710comprises one or more digital baseband systems, which are coupled with one another via an interconnect subsystem and to a CPU subsystem, an audio subsystem, and an interface subsystem. The digital baseband subsystems may also be coupled to a digital baseband interface and a mixed-signal baseband subsystem via another interconnect subsystem. Each of the interconnect subsystems may include a bus system, point-to-point connections, network-on-chip (NOC) structures, and/or some other suitable bus or interconnect technology, such as those discussed herein. The audio subsystem may include DSP circuitry, buffer memory, program memory, speech processing accelerator circuitry, data converter circuitry such as analog-to-digital and digital-to-analog converter circuitry, analog circuitry including one or more of amplifiers and filters, and/or other like components. In an aspect of the present disclosure, baseband circuitry710may include protocol processing circuitry with one or more instances of control circuitry (not shown) to provide control functions for the digital baseband circuitry and/or radio frequency circuitry (e.g., the radio front end modules715). 
Although not shown byFIG.7, in some embodiments, the baseband circuitry710includes individual processing device(s) to operate one or more wireless communication protocols (e.g., a “multi-protocol baseband processor” or “protocol processing circuitry”) and individual processing device(s) to implement PHY layer functions. In these embodiments, the PHY layer functions include the aforementioned radio control functions. In these embodiments, the protocol processing circuitry operates or implements various protocol layers/entities of one or more wireless communication protocols. In a first example, the protocol processing circuitry may operate LTE protocol entities and/or 5G/NR protocol entities when the baseband circuitry710and/or RF circuitry706are part of mmWave communication circuitry or some other suitable cellular communication circuitry. In the first example, the protocol processing circuitry would operate MAC, RLC, PDCP, SDAP, RRC, and NAS functions. In a second example, the protocol processing circuitry may operate one or more IEEE-based protocols when the baseband circuitry710and/or RF circuitry706are part of a Wi-Fi communication system. In the second example, the protocol processing circuitry would operate Wi-Fi MAC and logical link control (LLC) functions. The protocol processing circuitry may include one or more memory structures (e.g.,704G) to store program code and data for operating the protocol functions, as well as one or more processing cores to execute the program code and perform various operations using the data. The baseband circuitry710may also support radio communications for more than one wireless protocol. The various hardware elements of the baseband circuitry710discussed herein may be implemented, for example, as a solder-down substrate including one or more integrated circuits (ICs), a single packaged IC soldered to a main circuit board or a multi-chip module containing two or more ICs. In one example, the components of the baseband circuitry710may be suitably combined in a single chip or chipset, or disposed on a same circuit board. In another example, some or all of the constituent components of the baseband circuitry710and RF circuitry706may be implemented together in, for example, a system on a chip (SoC) or a System-in-Package (SiP). In another example, some or all of the constituent components of the baseband circuitry710may be implemented as a separate SoC that is communicatively coupled with the RF circuitry706(or multiple instances of RF circuitry706). In yet another example, some or all of the constituent components of the baseband circuitry710and the application circuitry505/605may be implemented together as individual SoCs mounted to a same circuit board (e.g., a “multi-chip package”). In some embodiments, the baseband circuitry710may provide for communication compatible with one or more radio technologies. For example, in some embodiments, the baseband circuitry710may support communication with an E-UTRAN or other WMAN, a WLAN, or a WPAN. Embodiments in which the baseband circuitry710is configured to support radio communications of more than one wireless protocol may be referred to as multi-mode baseband circuitry. RF circuitry706may enable communication with wireless networks using modulated electromagnetic radiation through a non-solid medium. In various embodiments, the RF circuitry706may include switches, filters, amplifiers, etc. to facilitate the communication with the wireless network. 
RF circuitry706may include a receive signal path, which may include circuitry to down-convert RF signals received from the FEM circuitry708and provide baseband signals to the baseband circuitry710. RF circuitry706may also include a transmit signal path, which may include circuitry to up-convert baseband signals provided by the baseband circuitry710and provide RF output signals to the FEM circuitry708for transmission. In some embodiments, the receive signal path of the RF circuitry706may include mixer circuitry706a, amplifier circuitry706band filter circuitry706c. In some embodiments, the transmit signal path of the RF circuitry706may include filter circuitry706cand mixer circuitry706a. RF circuitry706may also include synthesizer circuitry706dfor synthesizing a frequency for use by the mixer circuitry706aof the receive signal path and the transmit signal path. In some embodiments, the mixer circuitry706aof the receive signal path may be configured to down-convert RF signals received from the FEM circuitry708based on the synthesized frequency provided by synthesizer circuitry706d. The amplifier circuitry706bmay be configured to amplify the down-converted signals and the filter circuitry706cmay be a low-pass filter (LPF) or band-pass filter (BPF) configured to remove unwanted signals from the down-converted signals to generate output baseband signals. Output baseband signals may be provided to the baseband circuitry710for further processing. In some embodiments, the output baseband signals may be zero-frequency baseband signals, although this is not a requirement. In some embodiments, mixer circuitry706aof the receive signal path may comprise passive mixers, although the scope of the embodiments is not limited in this respect. In some embodiments, the mixer circuitry706aof the transmit signal path may be configured to up-convert input baseband signals based on the synthesized frequency provided by the synthesizer circuitry706dto generate RF output signals for the FEM circuitry708. The baseband signals may be provided by the baseband circuitry710and may be filtered by filter circuitry706c. In some embodiments, the mixer circuitry706aof the receive signal path and the mixer circuitry706aof the transmit signal path may include two or more mixers and may be arranged for quadrature downconversion and upconversion, respectively. In some embodiments, the mixer circuitry706aof the receive signal path and the mixer circuitry706aof the transmit signal path may include two or more mixers and may be arranged for image rejection (e.g., Hartley image rejection). In some embodiments, the mixer circuitry706aof the receive signal path and the mixer circuitry706aof the transmit signal path may be arranged for direct downconversion and direct upconversion, respectively. In some embodiments, the mixer circuitry706aof the receive signal path and the mixer circuitry706aof the transmit signal path may be configured for super-heterodyne operation. In some embodiments, the output baseband signals and the input baseband signals may be analog baseband signals, although the scope of the embodiments is not limited in this respect. In some alternate embodiments, the output baseband signals and the input baseband signals may be digital baseband signals. In these alternate embodiments, the RF circuitry706may include analog-to-digital converter (ADC) and digital-to-analog converter (DAC) circuitry and the baseband circuitry710may include a digital baseband interface to communicate with the RF circuitry706. 
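As a rough, idealized illustration of the receive signal path described above (mixing the received RF signal with the synthesized LO frequency and low-pass filtering to obtain output baseband signals), the following numpy sketch models quadrature down-conversion numerically. The sample rate, the carrier and LO frequencies, and the moving-average filter are arbitrary choices for the illustration, not parameters of the RF circuitry706.

```python
# Idealized numpy sketch of quadrature down-conversion: an RF signal is mixed
# with a synthesized local-oscillator (LO) frequency and low-pass filtered to
# yield a baseband signal. Sample rate, frequencies, and the filter length are
# arbitrary illustration values, not parameters of any real RF front end.

import numpy as np

fs = 1_000_000            # sample rate (Hz)
f_rf = 100_000            # "RF" carrier frequency (Hz) - kept low for simulation
f_lo = 99_000             # synthesized LO frequency (Hz)
t = np.arange(2000) / fs

rf_signal = np.cos(2 * np.pi * f_rf * t)            # received RF signal

# Mixer: multiply by a complex exponential at the LO frequency (quadrature mix).
mixed = rf_signal * np.exp(-2j * np.pi * f_lo * t)

# Low-pass filter: a simple moving average removes the high-frequency image
# at f_rf + f_lo, leaving the down-converted component at f_rf - f_lo = 1 kHz.
taps = np.ones(101) / 101
baseband = np.convolve(mixed, taps, mode="same")

# The strongest spectral component of the baseband signal should sit at 1 kHz.
spectrum = np.abs(np.fft.rfft(baseband.real))
freqs = np.fft.rfftfreq(len(baseband), d=1 / fs)
print(f"dominant baseband frequency: {freqs[np.argmax(spectrum)]:.0f} Hz")
```

The down-converted component appears at the difference frequency f_rf − f_lo, while the image at f_rf + f_lo is removed by the low-pass filter, which mirrors the role of the filter circuitry706cin the receive path.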
In some dual-mode embodiments, a separate radio IC circuitry may be provided for processing signals for each spectrum, although the scope of the embodiments is not limited in this respect. In some embodiments, the synthesizer circuitry706dmay be a fractional-N synthesizer or a fractional N/N+1 synthesizer, although the scope of the embodiments is not limited in this respect as other types of frequency synthesizers may be suitable. For example, synthesizer circuitry706dmay be a delta-sigma synthesizer, a frequency multiplier, or a synthesizer comprising a phase-locked loop with a frequency divider. The synthesizer circuitry706dmay be configured to synthesize an output frequency for use by the mixer circuitry706aof the RF circuitry706based on a frequency input and a divider control input. In some embodiments, the synthesizer circuitry706dmay be a fractional N/N+1 synthesizer. In some embodiments, frequency input may be provided by a voltage controlled oscillator (VCO), although that is not a requirement. Divider control input may be provided by either the baseband circuitry710or the application circuitry505/605depending on the desired output frequency. In some embodiments, a divider control input (e.g., N) may be determined from a look-up table based on a channel indicated by the application circuitry505/605. Synthesizer circuitry706dof the RF circuitry706may include a divider, a delay-locked loop (DLL), a multiplexer and a phase accumulator. In some embodiments, the divider may be a dual modulus divider (DMD) and the phase accumulator may be a digital phase accumulator (DPA). In some embodiments, the DMD may be configured to divide the input signal by either N or N+1 (e.g., based on a carry out) to provide a fractional division ratio. In some example embodiments, the DLL may include a set of cascaded, tunable, delay elements, a phase detector, a charge pump and a D-type flip-flop. In these embodiments, the delay elements may be configured to break a VCO period up into Nd equal packets of phase, where Nd is the number of delay elements in the delay line. In this way, the DLL provides negative feedback to help ensure that the total delay through the delay line is one VCO cycle. In some embodiments, synthesizer circuitry706dmay be configured to generate a carrier frequency as the output frequency, while in other embodiments, the output frequency may be a multiple of the carrier frequency (e.g., twice the carrier frequency, four times the carrier frequency) and used in conjunction with quadrature generator and divider circuitry to generate multiple signals at the carrier frequency with multiple different phases with respect to each other. In some embodiments, the output frequency may be a LO frequency (fLO). In some embodiments, the RF circuitry706may include an IQ/polar converter. FEM circuitry708may include a receive signal path, which may include circuitry configured to operate on RF signals received from antenna array711, amplify the received signals and provide the amplified versions of the received signals to the RF circuitry706for further processing. FEM circuitry708may also include a transmit signal path, which may include circuitry configured to amplify signals for transmission provided by the RF circuitry706for transmission by one or more of antenna elements of antenna array711. 
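Returning to the fractional N/N+1 synthesizer and the dual modulus divider described above, the toy model below shows how alternating division by N and N+1 over many reference cycles yields an average (fractional) division ratio, and hence an output frequency between integer multiples of the reference. The accumulator-based control and all numeric values are illustrative assumptions, not the structure of the synthesizer circuitry706d.

```python
# Toy model of fractional-N division with a dual-modulus divider: dividing by
# N on some cycles and by N+1 on others (steered here by a simple phase
# accumulator) gives an average division ratio of N + K/F, so a PLL-based
# synthesizer locks to f_ref * (N + K/F). Values are illustrative only.

def fractional_n_ratio(n: int, k: int, f: int, cycles: int = 10_000) -> float:
    """Average division ratio achieved over `cycles` reference cycles."""
    accumulator = 0
    total_divide = 0
    for _ in range(cycles):
        accumulator += k
        if accumulator >= f:          # carry out -> divide by N+1 this cycle
            accumulator -= f
            total_divide += n + 1
        else:                         # no carry -> divide by N
            total_divide += n
    return total_divide / cycles


if __name__ == "__main__":
    f_ref = 19.2e6                    # hypothetical reference frequency (Hz)
    n, k, f = 100, 1, 4               # target ratio N + K/F = 100.25
    ratio = fractional_n_ratio(n, k, f)
    print(f"average division ratio: {ratio:.4f}")               # ~100.2500
    print(f"synthesized frequency : {f_ref * ratio / 1e6:.3f} MHz")
```

With k=1 and f=4, one reference cycle in four divides by N+1, giving the fractional ratio N+0.25.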
In various embodiments, the amplification through the transmit or receive signal paths may be done solely in the RF circuitry706, solely in the FEM circuitry708, or in both the RF circuitry706and the FEM circuitry708. In some embodiments, the FEM circuitry708may include a TX/RX switch to switch between transmit mode and receive mode operation. The FEM circuitry708may include a receive signal path and a transmit signal path. The receive signal path of the FEM circuitry708may include an LNA to amplify received RF signals and provide the amplified received RF signals as an output (e.g., to the RF circuitry706). The transmit signal path of the FEM circuitry708may include a power amplifier (PA) to amplify input RF signals (e.g., provided by RF circuitry706), and one or more filters to generate RF signals for subsequent transmission by one or more antenna elements of the antenna array711. The antenna array711comprises one or more antenna elements, each of which is configured to convert electrical signals into radio waves to travel through the air and to convert received radio waves into electrical signals. For example, digital baseband signals provided by the baseband circuitry710are converted into analog RF signals (e.g., a modulated waveform) that will be amplified and transmitted via the antenna elements of the antenna array711including one or more antenna elements (not shown). The antenna elements may be omnidirectional, directional, or a combination thereof. The antenna elements may be formed in a multitude of arrangements as are known and/or discussed herein. The antenna array711may comprise microstrip antennas or printed antennas that are fabricated on the surface of one or more printed circuit boards. The antenna array711may be formed as a patch of metal foil (e.g., a patch antenna) in a variety of shapes, and may be coupled with the RF circuitry706and/or FEM circuitry708using metal transmission lines or the like. Processors of the application circuitry505/605and processors of the baseband circuitry710may be used to execute elements of one or more instances of a protocol stack. For example, processors of the baseband circuitry710, alone or in combination, may be used to execute Layer 3, Layer 2, or Layer 1 functionality, while processors of the application circuitry505/605may utilize data (e.g., packet data) received from these layers and further execute Layer 4 functionality (e.g., TCP and UDP layers). As referred to herein, Layer 3 may comprise an RRC layer, described in further detail below. As referred to herein, Layer 2 may comprise a MAC layer, an RLC layer, and a PDCP layer, described in further detail below. As referred to herein, Layer 1 may comprise a PHY layer of a UE/RAN node, described in further detail below. FIG.8is a block diagram illustrating components, according to some example embodiments, able to read instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically,FIG.8shows a diagrammatic representation of hardware resources800including one or more processors (or processor cores)810, one or more memory/storage devices820, and one or more communication resources830, each of which may be communicatively coupled via a bus840. For embodiments where node virtualization (e.g., NFV) is utilized, a hypervisor802may be executed to provide an execution environment for one or more network slices/sub-slices to utilize the hardware resources800. 
The processors810may include, for example, a processor812and a processor814. The processor(s)810may be, for example, a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a DSP such as a baseband processor, an ASIC, an FPGA, a radio-frequency integrated circuit (RFIC), another processor (including those discussed herein), or any suitable combination thereof. The memory/storage devices820may include main memory, disk storage, or any suitable combination thereof. The memory/storage devices820may include, but are not limited to, any type of volatile or nonvolatile memory such as dynamic random access memory (DRAM), static random access memory (SRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), Flash memory, solid-state storage, etc. The communication resources830may include interconnection or network interface components or other suitable devices to communicate with one or more peripheral devices804or one or more databases806via a network808. For example, the communication resources830may include wired communication components (e.g., for coupling via USB), cellular communication components, NFC components, Bluetooth® (or Bluetooth® Low Energy) components, Wi-Fi® components, and other communication components. Instructions850may comprise software, a program, an application, an applet, an app, or other executable code for causing at least any of the processors810to perform any one or more of the methodologies discussed herein. The instructions850may reside, completely or partially, within at least one of the processors810(e.g., within the processor's cache memory), the memory/storage devices820, or any suitable combination thereof. Furthermore, any portion of the instructions850may be transferred to the hardware resources800from any combination of the peripheral devices804or the databases806. Accordingly, the memory of processors810, the memory/storage devices820, the peripheral devices804, and the databases806are examples of computer-readable and machine-readable media. Example Procedures In some embodiments, the electronic device(s), network(s), system(s), chip(s) or component(s), or portions or implementations thereof, ofFIGS.4-8, or some other figure herein, may be configured to perform one or more processes, techniques, or methods as described herein, or portions thereof. One such process200is depicted inFIG.2. For example, the process200may include, at202, measuring discovery reference signals (DRSs) transmitted within DRS occasions. The process further includes, at204, updating an out-of-sync (OOS) counter in response to a listen-before-talk failure associated with the DRS transmissions. In embodiments, the process200may be performed by a UE and/or test equipment, or a portion thereof. FIG.3illustrates another process300in accordance with various embodiments. For example, the process300may include, at302, receiving a first downlink (DL) reference signal and a second DL reference signal. The process further includes, at304, receiving a message that includes a configured timing uncertainty associated with the first DL reference signal and the second DL reference signal. The process further includes, at306, performing a reference signal timing difference (RSTD) measurement based on the configured timing uncertainty. 
In embodiments, the process300may be performed by a UE and/or test equipment, or a portion thereof. For one or more embodiments, at least one of the components set forth in one or more of the preceding figures may be configured to perform one or more operations, techniques, processes, and/or methods as set forth in the example section below. For example, the baseband circuitry as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below. For another example, circuitry associated with a UE, base station, network element, etc. as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below in the example section. EXAMPLES Example 1 may include a method wherein a user equipment applies out-of-sync (OOS) detection within an NR-U band, by measuring discovery reference signals (DRS), which are transmitted by a base station, within pre-defined DRS occasions; the UE updates an OOS counter even though a DRS is not transmitted by the base station within a pre-defined DRS occasion due to listen-before-talk failure. Example 2 may include the method of example 1 or some other example herein, wherein the UE updates the OOS counter by receiving the signals from one DRS occasion, measuring a DRS quality metric, mapping the measured DRS quality metric into a hypothetical PDCCH BLER value, and incrementing the OOS counter if the mapped hypothetical PDCCH BLER value is higher than a pre-defined threshold. Example 3 may include the method of example 2 or some other example herein, wherein the pre-defined hypothetical PDCCH BLER threshold could be 10%. Example 4 may include the method of examples 1 and 2 or some other example herein, wherein the UE updates the OOS counter by assuming that a DRS is transmitted in all pre-defined DRS occasions. Example 5 may include the method of examples 1 to 4 or some other example herein, wherein the maximal OOS evaluation period can be determined based on the maximal value between a pre-configured SMTC period and the DRX cycle length, wherein the maximal value is scaled by a scaling factor. Example 6 includes a method comprising: measuring discovery reference signals (DRSs) transmitted within DRS occasions; and updating an out-of-sync (OOS) counter in response to a listen-before-talk failure associated with the DRS transmissions. Example 7 includes the method of example 6 or some other example herein, wherein the DRSs are received from a base station or portion thereof. Example 8 includes the method of example 6 or some other example herein, wherein the OOS counter is updated when a DRS is not transmitted within a DRS occasion. Example 9 includes the method of example 6 or some other example herein, further comprising: measuring a DRS quality metric associated with signals from one DRS occasion; mapping the measured DRS quality metric into a hypothetical physical downlink control channel (PDCCH) block error rate (BLER) value; and incrementing the OOS counter in response to the hypothetical PDCCH BLER value exceeding a predetermined threshold. Example 10 includes the method of example 9 or some other example herein, wherein the predetermined threshold is ten percent or higher. Example 11 includes the method of any of examples 6-10 or some other example herein, wherein the method is performed by a user equipment (UE) or portion thereof. 
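As an illustration of the OOS evaluation in Examples 1-11, the sketch below evaluates each pre-defined DRS occasion, including occasions in which the DRS was not actually transmitted because of a listen-before-talk failure, and increments the counter when a hypothetical PDCCH BLER derived from the DRS quality metric exceeds the threshold. The quality-to-BLER mapping, the placeholder metric used for missed occasions, and the scaling factor are invented for illustration only.

```python
# Simplified sketch of the out-of-sync (OOS) evaluation of Examples 1-11:
# the UE evaluates every pre-defined DRS occasion (even occasions in which no
# DRS was transmitted because of listen-before-talk failure), maps a measured
# DRS quality metric to a hypothetical PDCCH BLER, and increments an OOS
# counter when that BLER exceeds a threshold (e.g., 10%). The quality-to-BLER
# mapping and the scaling factor below are illustrative placeholders.

from typing import Optional

BLER_THRESHOLD = 0.10  # pre-defined hypothetical PDCCH BLER threshold (10%)


def hypothetical_pdcch_bler(drs_sinr_db: float) -> float:
    # Placeholder mapping from a DRS quality metric (SINR in dB) to a
    # hypothetical PDCCH BLER; a real UE would use an internal link model.
    return min(1.0, max(0.0, 0.5 - 0.05 * drs_sinr_db))


def evaluate_drs_occasion(oos_counter: int, drs_sinr_db: Optional[float]) -> int:
    """Return the updated OOS counter for one pre-defined DRS occasion.

    drs_sinr_db is None when no DRS could be received in the occasion
    (e.g., the base station's listen-before-talk attempt failed); the UE
    still evaluates the occasion as if a DRS had been transmitted.
    """
    measured = drs_sinr_db if drs_sinr_db is not None else -10.0  # very poor
    if hypothetical_pdcch_bler(measured) > BLER_THRESHOLD:
        return oos_counter + 1
    return oos_counter


def max_oos_evaluation_period(smtc_period_ms: int, drx_cycle_ms: int,
                              scaling_factor: int = 5) -> int:
    # Example 5: the maximal OOS evaluation period is the larger of the
    # configured SMTC period and the DRX cycle length, scaled by a factor.
    return scaling_factor * max(smtc_period_ms, drx_cycle_ms)


if __name__ == "__main__":
    counter = 0
    counter = evaluate_drs_occasion(counter, drs_sinr_db=12.0)   # good DRS
    counter = evaluate_drs_occasion(counter, drs_sinr_db=None)   # LBT failure
    print("OOS counter:", counter)                               # 1
    print("max evaluation period:", max_oos_evaluation_period(160, 320), "ms")
```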
Example 12 may include a method wherein a test equipment (TE) transmits at least two downlink reference signals (DL RSs) to a user equipment (UE), and configures a timing uncertainty to the UE through a higher layer message for the UE to apply reference signal timing difference (RSTD) measurement by receiving the DL RSs, wherein the configured timing uncertainty of the two DL RSs is determined based on the waveform patterns of the two DL RSs. Example 13 may include the method of example 12 or some other example herein, wherein the waveform pattern could be the type of the DL RS, which can be either SSB or PRS, which is used for RSTD measurement. Example 14 may include the method of example 12 or some other example herein, wherein the waveform pattern could be the sub-carrier-spacing (SCS) of the DL RS, which is used for RSTD measurement. Example 15 may include the method of example 12 or some other example herein, wherein the waveform pattern could be the reference resource element density, D, of a PRS, which is used for RSTD measurement. Example 16 may include the method of examples 12-15 or some other example herein, wherein the TE further determines a first SNR side condition for transmitting the DL RSs for the UE to apply RSTD measurement, if the actually configured RSTD timing uncertainty between two DL RSs is lower than a pre-defined threshold. Example 17 may include the method of examples 12-15 or some other example herein, wherein the TE further determines a second SNR side condition for transmitting the DL RSs for the UE to apply RSTD measurement, if the actually configured RSTD timing uncertainty between two DL RSs is higher than a pre-defined threshold. Example 18 may include the method of examples 16-17 or some other example herein, wherein the first SNR side condition is lower than the second SNR side condition. Example 19 includes a method comprising: receiving a first downlink (DL) reference signal and a second DL reference signal; receiving a message that includes a configured timing uncertainty associated with the first DL reference signal and the second DL reference signal; and performing a reference signal timing difference (RSTD) measurement based on the configured timing uncertainty. Example 20 includes the method of example 19 or some other example herein, wherein the RSTD measurement is a synchronization signal block (SSB) based RSTD measurement. Example 21 includes the method of example 20 or some other example herein, wherein the configured timing uncertainty is based on a subcarrier spacing (SCS) of an SSB. Example 22 includes the method of example 19 or some other example herein, wherein the configured timing uncertainty is based on positioning reference signal (PRS) subcarrier spacing (SCS) or PRS resource element (RE) density. Example 23 includes the method of any of examples 19-22 or some other example herein, wherein the method is performed by a user equipment (UE) or a portion thereof. Example 24 includes the method of any of examples 19-22 or some other example herein, wherein the method is performed by a test equipment or a portion thereof. Example 25 may include an apparatus comprising means to perform one or more elements of a method described in or related to any of examples 1-24, or any other method or process described herein. 
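To make Examples 12-24 concrete, the sketch below models how a test equipment might derive a configured timing uncertainty from the waveform pattern of the DL RSs and choose between two SNR side conditions; all numeric values (the uncertainty rule, the threshold, and the SNRs) are invented placeholders, not requirements taken from the examples or from any test specification.

```python
# Illustrative model of Examples 12-24: the test equipment derives a configured
# RSTD timing uncertainty from the waveform pattern of the two DL reference
# signals (type SSB/PRS, sub-carrier spacing, PRS resource-element density) and
# then selects one of two SNR side conditions depending on whether the
# configured uncertainty is below or above a pre-defined threshold. All numbers
# are invented placeholders for illustration.

from dataclasses import dataclass


@dataclass
class DlRsPattern:
    rs_type: str             # "SSB" or "PRS"
    scs_khz: int             # sub-carrier spacing of the DL RS
    prs_re_density: int = 1  # reference RE density D (only meaningful for PRS)


def configured_timing_uncertainty_us(pattern: DlRsPattern) -> float:
    # Placeholder rule: a wider SCS and a denser PRS give a tighter uncertainty.
    base_us = 10.0 if pattern.rs_type == "SSB" else 5.0
    base_us *= 15.0 / pattern.scs_khz
    if pattern.rs_type == "PRS":
        base_us /= pattern.prs_re_density
    return base_us


def snr_side_condition_db(uncertainty_us: float,
                          threshold_us: float = 5.0) -> float:
    # Examples 16-18: a lower (first) SNR side condition applies when the
    # configured uncertainty is below the threshold, and a higher (second)
    # SNR side condition applies otherwise.
    first_snr_db, second_snr_db = -6.0, -3.0
    return first_snr_db if uncertainty_us < threshold_us else second_snr_db


if __name__ == "__main__":
    prs = DlRsPattern(rs_type="PRS", scs_khz=30, prs_re_density=2)
    u = configured_timing_uncertainty_us(prs)
    print(f"configured uncertainty: {u:.2f} us, SNR side condition: "
          f"{snr_side_condition_db(u):.1f} dB")
```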
Example 26 may include one or more non-transitory computer-readable media comprising instructions to cause an electronic device, upon execution of the instructions by one or more processors of the electronic device, to perform one or more elements of a method described in or related to any of examples 1-11, or any other method or process described herein. Example 27 may include an apparatus comprising logic, modules, or circuitry to perform one or more elements of a method described in or related to any of examples 1-24, or any other method or process described herein. Example 28 may include a method, technique, or process as described in or related to any of examples 1-24, or portions or parts thereof. Example 29 may include an apparatus comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples 1-24, or portions thereof. Example 30 may include a signal as described in or related to any of examples 1-24, or portions or parts thereof. Example 31 may include a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-24, or portions or parts thereof, or otherwise described in the present disclosure. Example 32 may include a signal encoded with data as described in or related to any of examples 1-24, or portions or parts thereof, or otherwise described in the present disclosure. Example 33 may include a signal encoded with a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-24, or portions or parts thereof, or otherwise described in the present disclosure. Example 34 may include an electromagnetic signal carrying computer-readable instructions, wherein execution of the computer-readable instructions by one or more processors is to cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples 1-24, or portions thereof. Example 35 may include a computer program comprising instructions, wherein execution of the program by a processing element is to cause the processing element to carry out the method, techniques, or process as described in or related to any of examples 1-24, or portions thereof. Example 36 may include a signal in a wireless network as shown and described herein. Example 37 may include a method of communicating in a wireless network as shown and described herein. Example 38 may include a system for providing wireless communication as shown and described herein. Example 39 may include a device for providing wireless communication as shown and described herein. Any of the above-described examples may be combined with any other example (or combination of examples), unless explicitly stated otherwise. The foregoing description of one or more implementations provides illustration and description, but is not intended to be exhaustive or to limit the scope of embodiments to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments. Terminology For the purposes of the present document, the following terms and definitions are applicable to the examples and embodiments discussed herein. 
The term “circuitry” as used herein refers to, is part of, or includes hardware components such as an electronic circuit, a logic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group), an Application Specific Integrated Circuit (ASIC), a field-programmable device (FPD) (e.g., a field-programmable gate array (FPGA), a programmable logic device (PLD), a complex PLD (CPLD), a high-capacity PLD (HCPLD), a structured ASIC, or a programmable SoC), digital signal processors (DSPs), etc., that are configured to provide the described functionality. In some embodiments, the circuitry may execute one or more software or firmware programs to provide at least some of the described functionality. The term “circuitry” may also refer to a combination of one or more hardware elements (or a combination of circuits used in an electrical or electronic system) with the program code used to carry out the functionality of that program code. In these embodiments, the combination of hardware elements and program code may be referred to as a particular type of circuitry. The term “processor circuitry” as used herein refers to, is part of, or includes circuitry capable of sequentially and automatically carrying out a sequence of arithmetic or logical operations, or recording, storing, and/or transferring digital data. The term “processor circuitry” may refer to one or more application processors, one or more baseband processors, a physical central processing unit (CPU), a single-core processor, a dual-core processor, a triple-core processor, a quad-core processor, and/or any other device capable of executing or otherwise operating computer-executable instructions, such as program code, software modules, and/or functional processes. The terms “application circuitry” and/or “baseband circuitry” may be considered synonymous to, and may be referred to as, “processor circuitry.” The term “interface circuitry” as used herein refers to, is part of, or includes circuitry that enables the exchange of information between two or more components or devices. The term “interface circuitry” may refer to one or more hardware interfaces, for example, buses, I/O interfaces, peripheral component interfaces, network interface cards, and/or the like. The term “user equipment” or “UE” as used herein refers to a device with radio communication capabilities and may describe a remote user of network resources in a communications network. The term “user equipment” or “UE” may be considered synonymous to, and may be referred to as, client, mobile, mobile device, mobile terminal, user terminal, mobile unit, mobile station, mobile user, subscriber, user, remote station, access agent, user agent, receiver, radio equipment, reconfigurable radio equipment, reconfigurable mobile device, etc. Furthermore, the term “user equipment” or “UE” may include any type of wireless/wired device or any computing device including a wireless communications interface. The term “network element” as used herein refers to physical or virtualized equipment and/or infrastructure used to provide wired or wireless communication network services. The term “network element” may be considered synonymous to and/or referred to as a networked computer, networking hardware, network equipment, network node, router, switch, hub, bridge, radio network controller, RAN device, RAN node, gateway, server, virtualized VNF, NFVI, and/or the like. 
The term “computer system” as used herein refers to any type of interconnected electronic devices, computer devices, or components thereof. Additionally, the term “computer system” and/or “system” may refer to various components of a computer that are communicatively coupled with one another. Furthermore, the term “computer system” and/or “system” may refer to multiple computer devices and/or multiple computing systems that are communicatively coupled with one another and configured to share computing and/or networking resources. The term “appliance,” “computer appliance,” or the like, as used herein refers to a computer device or computer system with program code (e.g., software or firmware) that is specifically designed to provide a specific computing resource. A “virtual appliance” is a virtual machine image to be implemented by a hypervisor-equipped device that virtualizes or emulates a computer appliance or otherwise is dedicated to provide a specific computing resource. The term “resource” as used herein refers to a physical or virtual device, a physical or virtual component within a computing environment, and/or a physical or virtual component within a particular device, such as computer devices, mechanical devices, memory space, processor/CPU time, processor/CPU usage, processor and accelerator loads, hardware time or usage, electrical power, input/output operations, ports or network sockets, channel/link allocation, throughput, memory usage, storage, network, database and applications, workload units, and/or the like. A “hardware resource” may refer to compute, storage, and/or network resources provided by physical hardware element(s). A “virtualized resource” may refer to compute, storage, and/or network resources provided by virtualization infrastructure to an application, device, system, etc. The term “network resource” or “communication resource” may refer to resources that are accessible by computer devices/systems via a communications network. The term “system resources” may refer to any kind of shared entities to provide services, and may include computing and/or network resources. System resources may be considered as a set of coherent functions, network data objects or services, accessible through a server where such system resources reside on a single host or multiple hosts and are clearly identifiable. The term “channel” as used herein refers to any transmission medium, either tangible or intangible, which is used to communicate data or a data stream. The term “channel” may be synonymous with and/or equivalent to “communications channel,” “data communications channel,” “transmission channel,” “data transmission channel,” “access channel,” “data access channel,” “link,” “data link,” “carrier,” “radiofrequency carrier,” and/or any other like term denoting a pathway or medium through which data is communicated. Additionally, the term “link” as used herein refers to a connection between two devices through a RAT for the purpose of transmitting and receiving information. The terms “instantiate,” “instantiation,” and the like as used herein refers to the creation of an instance. An “instance” also refers to a concrete occurrence of an object, which may occur, for example, during execution of program code. The terms “coupled,” “communicatively coupled,” along with derivatives thereof are used herein. 
The term “coupled” may mean two or more elements are in direct physical or electrical contact with one another, may mean that two or more elements indirectly contact each other but still cooperate or interact with each other, and/or may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other. The term “directly coupled” may mean that two or more elements are in direct contact with one another. The term “communicatively coupled” may mean that two or more elements may be in contact with one another by a means of communication including through a wire or other interconnect connection, through a wireless communication channel or link, and/or the like. The term “information element” refers to a structural element containing one or more fields. The term “field” refers to individual contents of an information element, or a data element that contains content. The term “SMTC” refers to an SSB-based measurement timing configuration configured by SSB-MeasurementTimingConfiguration. The term “SSB” refers to an SS/PBCH block. The term “Primary Cell” refers to the MCG cell, operating on the primary frequency, in which the UE either performs the initial connection establishment procedure or initiates the connection re-establishment procedure. The term “Primary SCG Cell” refers to the SCG cell in which the UE performs random access when performing the Reconfiguration with Sync procedure for DC operation. The term “Secondary Cell” refers to a cell providing additional radio resources on top of a Special Cell for a UE configured with CA. The term “Secondary Cell Group” refers to the subset of serving cells comprising the PSCell and zero or more secondary cells for a UE configured with DC. The term “Serving Cell” refers to the primary cell for a UE in RRC_CONNECTED not configured with CA/DC; there is only one serving cell, comprising the primary cell. The term “serving cell” or “serving cells” refers to the set of cells comprising the Special Cell(s) and all secondary cells for a UE in RRC_CONNECTED configured with CA/DC. The term “Special Cell” refers to the PCell of the MCG or the PSCell of the SCG for DC operation; otherwise, the term “Special Cell” refers to the PCell.
112,244
11863478
DESCRIPTION OF EMBODIMENTS [Circumstances Leading to the Present Disclosure] First, the circumstances leading to the present disclosure will be described. Consideration is being given to a “DL data self-contained” operation for realizing low delay in downlink communication, and a “UL data self-contained” operation for realizing low delay in uplink communication, using the aforementioned time unit. In a DL data self-contained operation, a base station transmits a control signal (a DL assignment or a DL grant) that is required for a terminal to receive downlink data, and downlink data (DL data) assigned by means of the control signal, in a downlink transmission region. The terminal then transmits a response signal for the downlink data and an uplink control signal (a UCI: uplink control indicator) in an uplink transmission region. Furthermore, in a UL data self-contained operation, the base station transmits a control signal (a UL assignment or a UL grant) that is required for the terminal to transmit uplink data, in a downlink transmission region. The terminal then transmits uplink data (UL data) assigned by means of the control signal and a UCI, in an uplink transmission region. Furthermore, in NR, as a time unit configuration that realizes low delay, it is necessary for the time interval from the transmission of a response signal to the transmission of retransmission data to also be reduced as much as possible (for example, see NPL 3). Furthermore, in NR, similar to a subframe of LTE, it has been agreed that a time unit configuration that includes 14 symbols (OFDM symbols) per 1 ms with a subcarrier interval of 15 kHz is to be considered as a basis (for example, see NPL 4). As a time unit configuration that enables a self-contained operation in a TDD (time division duplex) system, consideration is being given to the configurations depicted inFIG.1AandFIG.1B(for example, see NPL 3).FIG.1Adepicts a time unit configuration that enables a DL data self-contained operation, andFIG.1Bdepicts a time unit configuration that enables a UL data self-contained operation. A gap period (the gap arranged first within each time unit of 1 ms inFIG.1AandFIG.1B; hereinafter, referred to as “gap #1”) between a downlink transmission region (the period depicted as “DL” inFIG.1AandFIG.1B) and an uplink transmission region (the period depicted as “UL” inFIG.1AandFIG.1B) is set with consideration being given to a propagation delay time between the base station and the terminal and the processing time of the terminal (UE processing time). It should be noted that there is a possibility of the length of the gap period changing in a dynamic or semi-static manner (for example, see NPL 5). Here, the processing time of the terminal indicates the processing time for the terminal to decode downlink data (DL data) and generate a response signal (an ACK inFIG.1AandFIG.1B) in the case of a DL data self-contained operation, and indicates the processing time for the terminal to decode a control signal (a UL assignment) and generate UL data in the case of a UL data self-contained operation. Furthermore, a gap period (the gap arranged second within each time unit of 1 ms inFIG.1AandFIG.1B; hereinafter referred to as “gap #2”) at the end of a time unit, after the uplink transmission region, is set with consideration being given to the processing time of the base station (eNB processing time). 
Here, the processing time of the base station indicates the processing time for the base station to decode a response signal and generate scheduling for the next time unit and a control signal (a DL assignment) in the case of a DL data self-contained operation, and indicates the processing time for the base station to decode UL data and generate scheduling for the next time unit and a control signal (a UL assignment) in the case of a UL data self-contained operation. In the time unit configurations ofFIG.1AandFIG.1B, a gap period for which consideration has been given to the processing time of the base station is provided at the end of a time unit, thereby enabling data retransmission in the next time unit, and therefore a delay in data communication can be reduced. However, in the time unit configurations for the self-contained operations depicted inFIG.1AandFIG.1B, there are a plurality of gap periods. The gap periods therefore need to be lengthened as the processing times of the base station and the terminal increase, and the utilization efficiency of radio resources deteriorates as a result. Thus, an aspect of the present disclosure provides a base station that can suppress a decline in the utilization efficiency of radio resources caused by gap periods, by transmitting a signal/channel for which a delay is tolerated (hereinafter, referred to as a “delay tolerant signal”), at the end of a downlink transmission region or an uplink transmission region within a time unit, in a case where a self-contained operation is employed. Hereinafter, embodiments of the present disclosure will be described in detail with reference to the drawings. Embodiment 1 [Overview of Communication System] A communication system that carries out a DL data self-contained operation according to the present embodiment is provided with a base station100and a terminal200. Furthermore, a communication system that carries out a UL data self-contained operation according to each embodiment of the present disclosure is provided with a base station300and a terminal400. It should be noted that, hereinafter, a description will be given based on the premise of a TDD system. However, an aspect of the present disclosure can be similarly applied to an FDD system, as described hereinafter. Furthermore, one base station may have the configurations of both the base station100and the base station300, or may have the configuration of either one. Similarly, one terminal may have the configurations of both the terminal200and the terminal400, or may have the configuration of either one. FIG.2is a block diagram depicting a main configuration of the base stations100and300according to each embodiment of the present disclosure. In the base stations100and300depicted inFIG.2, a transmitter109transmits a downlink signal in a downlink transmission region, in a time unit that includes the downlink transmission region, an uplink transmission region, and a gap period that is a switching point between the downlink transmission region and the uplink transmission region. A receiver111receives an uplink signal in the uplink transmission region, in the time unit. Furthermore, a delay tolerant signal for which a delay is tolerated more than for the downlink signal and the uplink signal is mapped to within the gap period. FIG.3is a block diagram depicting a main configuration of the terminals200and400according to each embodiment of the present disclosure. 
In the terminals200and400depicted inFIG.3, a receiver202receives a downlink signal in a downlink transmission region, in a time unit that includes a downlink transmission region, an uplink transmission region, and a gap period that is a switching point between the downlink transmission region and the uplink transmission region. A transmitter213transmits an uplink signal in the uplink transmission region, in the time unit. A delay tolerant signal for which a delay is tolerated more than for the downlink signal and the uplink signal is mapped to within the gap period. [Configuration of Base Station (During DL Data Self-Contained Operation)] FIG.4is a block diagram depicting a configuration of the base station100that carries out a DL data self-contained operation according to the present embodiment. InFIG.4, the base station100has a scheduler101, a delay tolerant signal controller102, a control signal generator103, a control signal encoder/modulator104, a data encoder105, a retransmission controller106, a data modulator107, a signal assignment unit108, the transmitter109, an antenna110, the receiver111, a signal extraction unit112, a delay tolerant signal demodulator/decoder113, a delay tolerant signal determination unit114, a demodulator/decoder115, and a determination unit116. The base station100depicted inFIG.4transmits a downlink signal that includes a control signal (a DL assignment) or downlink data (DL data) in a downlink transmission region of a time unit (DL data self-contained time unit) that includes the “downlink transmission region”, an “uplink transmission region”, and a “gap period”. Furthermore, the base station100receives an uplink signal that includes a response signal (and may also include a delay tolerant signal or a UCI) that is transmitted from the terminal200in the uplink transmission region of the time unit. In the base station100, the scheduler101determines scheduling information (for example, the ID of an assigned terminal, assigned resource information for the terminal200(a frequency, a time, and a coding resource), data demodulation reference signal information, a modulation/encoding scheme, assigned resource information for a response signal (a frequency, a time, and a coding resource), or the like) relating to a delay tolerant signal (described hereinafter), a control signal (a DL assignment), and downlink data (DL data) in the time unit, with respect to the terminal200. The scheduler101outputs the determined scheduling information to the control signal generator103, the data encoder105, and the signal assignment unit108. The delay tolerant signal controller102determines information regarding a signal (for example, the signal type) that is generated as a delay tolerant signal, which is a signal or a channel that is transmitted from the terminal200at the end of an uplink transmission region within a time unit, and outputs information indicating the determined content to the control signal generator103. The delay tolerant signal is, for example, a signal or a channel for which a delay is tolerated more than for a downlink signal that is transmitted in a downlink transmission region and an uplink signal that is transmitted in an uplink transmission region within a time unit. Furthermore, a signal for which a delay is tolerated is, for example, a signal for which it is not necessary to carry out reception/decoding processing or the like by the time unit that is subsequent to the time unit in which the signal has been transmitted. 
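The following minimal Python sketch is offered purely as an illustration of the time unit configuration and of the delay tolerance notion described above; the class, the particular split of 14 symbols, and the signal names are assumptions introduced here for the example and are not part of the disclosure. It models a time unit of 14 symbols per 1 ms with a downlink transmission region, gap #1, an uplink transmission region, and a trailing period into which a delay tolerant signal may be mapped, and it expresses the criterion that a delay tolerant signal is one that need not be processed by the subsequent time unit.

    # Illustrative sketch only; names and the symbol split are assumptions.
    SYMBOLS_PER_UNIT = 14          # 14 OFDM symbols per 1 ms, as described above
    UNIT_MS = 1.0
    SYMBOL_MS = UNIT_MS / SYMBOLS_PER_UNIT  # roughly 0.0714 ms per symbol

    class TimeUnit:
        def __init__(self, dl_symbols, gap1_symbols, ul_symbols):
            # Whatever remains at the end of the unit corresponds to gap #2.
            gap2_symbols = SYMBOLS_PER_UNIT - dl_symbols - gap1_symbols - ul_symbols
            assert gap2_symbols >= 0, "regions must fit within one time unit"
            self.dl, self.gap1, self.ul, self.gap2 = dl_symbols, gap1_symbols, ul_symbols, gap2_symbols
            self.delay_tolerant_payload = None  # e.g., SRS, CSI, SR, BSR, ...

        def map_delay_tolerant(self, payload):
            # The payload occupies the period otherwise left as gap #2.
            if self.gap2 == 0:
                raise ValueError("no room at the end of the time unit")
            self.delay_tolerant_payload = payload

    def must_be_processed_by_next_unit(signal_type):
        # A signal is treated as delay tolerant when it is NOT in this set,
        # i.e., when it does not have to be decoded before the next time unit.
        return signal_type in {"ACK/NACK", "UL data"}

    unit = TimeUnit(dl_symbols=9, gap1_symbols=1, ul_symbols=2)
    unit.map_delay_tolerant("SRS")
    print(round(SYMBOL_MS, 4), unit.gap2, must_be_processed_by_next_unit("SRS"))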
It should be noted that the details of the delay tolerant signal that is transmitted at the end of an uplink transmission region within a time unit will be described hereinafter. Furthermore, the delay tolerant signal controller102outputs information indicating that the transmission of the delay tolerant signal is a retransmission, to the control signal generator103in a case where the delay tolerant signal is a retransmission signal, on the basis of information indicating a delay tolerant signal reception error, which is input from the delay tolerant signal determination unit114. The control signal generator103generates a control signal (a DL assignment) for the terminal200on the basis of information that is input from each of the scheduler101and the delay tolerant signal controller102. Control signals include a signal of a cell-specific higher layer, a signal of a group or RAT-specific higher layer, a signal of a terminal-specific higher layer, assigned resource information for downlink data, assigned resource information for a delay tolerant signal, information instructing the transmission of a delay tolerant signal (hereinafter, referred to as “delay tolerant signal instruction information”), assigned resource information for a response signal, or the like. An assigned resource for a delay tolerant signal is assumed to be at the end of an uplink transmission region within a time unit (namely, the gap period at the end of a time unit). Furthermore, in a case where the base station100requests the terminal200for the retransmission of a delay tolerant signal, the control signal generator103may include retransmission request information for a delay tolerant signal in the delay tolerant signal instruction information. The control signal generator103generates a control information bit string using such control information, and outputs the generated control information bit string to the control signal encoder/modulator104. It should be noted that the details of the delay tolerant signal instruction information will be described hereinafter. It should be noted that assigned resource information for a delay tolerant signal may be notified in advance by means of a higher layer notification from the base station100to the terminal200. In this case, assigned resource information for a delay tolerant signal is not included in a control signal (a DL assignment). The control signal encoder/modulator104encodes and modulates the control signal (a bit string) received from the control signal generator103, and outputs a modulated control signal to the signal assignment unit108. The data encoder105carries out error correction encoding on transmission data (downlink data) in accordance with an encoding scheme received from the scheduler101, and outputs an encoded data signal to the retransmission controller106. The retransmission controller106, at the time of the first transmission, retains the encoded data signal received from the data encoder105and also outputs the encoded data signal to the data modulator107. Furthermore, the retransmission controller106, at the time of a retransmission, controls the retained data on the basis of a determination result (an ACK/NACK) from the determination unit116. Specifically, the retransmission controller106, upon receiving a NACK with respect to the data signal, outputs the corresponding retained data to the data modulator107. 
Furthermore, the retransmission controller106, upon receiving an ACK with respect to the data signal, discards the corresponding retained data and ends the transmission of downlink data. The data modulator107modulates a data signal received from the retransmission controller106, and outputs a modulated data signal (symbol string) to the signal assignment unit108. The signal assignment unit108maps a control signal received from the control signal encoder/modulator104and a data signal received from the data modulator107to a radio resource instructed from the scheduler101. The signal assignment unit108outputs a downlink signal for which signal mapping has been carried out, to the transmitter109. The transmitter109carries out RF (radio frequency) processing such as D/A (digital-to-analog) conversion and up-conversion on the signal received from the signal assignment unit108, and transmits a radio signal to the terminal200via the antenna110. The receiver111carries out RF processing such as down-conversion or A/D (analog-to-digital) conversion with respect to the signal waveform of an uplink from the terminal200received via the antenna110, and outputs an obtained reception signal to the signal extraction unit112. The signal extraction unit112extracts a radio resource portion in which an uplink response signal from the terminal200has been transmitted, from the reception signal, and outputs a reception response signal to the demodulator/decoder115. Furthermore, the signal extraction unit112extracts a radio resource portion in which a delay tolerant signal from the terminal200has been transmitted, from the reception signal, and outputs the delay tolerant signal to the delay tolerant signal demodulator/decoder113. The delay tolerant signal demodulator/decoder113carries out equalization, demodulation, and error correction decoding for the delay tolerant signal that is input from the signal extraction unit112, and outputs a decoded bit sequence to the determination unit116and the delay tolerant signal determination unit114. The delay tolerant signal determination unit114determines whether or not the delay tolerant signal (a bit sequence) that is input from the delay tolerant signal demodulator/decoder113has been correctly received. The delay tolerant signal determination unit114, when having determined that the delay tolerant signal has been correctly received, outputs the delay tolerant signal. However, the delay tolerant signal determination unit114, when having determined that the delay tolerant signal has not been correctly received and is a signal for which it is necessary to request a retransmission of the delay tolerant signal, outputs information indicating a reception error for the delay tolerant signal to the delay tolerant signal controller102. The demodulator/decoder115carries out equalization, demodulation, and decoding on the reception response signal that is received from the signal extraction unit112, and outputs a decoded bit sequence to the determination unit116. The determination unit116determines whether a response signal for downlink data, transmitted from the terminal200, indicates an ACK or NACK with respect to the downlink data, on the basis of the bit sequence that is input from the demodulator/decoder115.
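As one possible reading of the interplay between the determination unit and the retransmission controller described above, the following Python sketch shows retained data being re-output on a NACK and discarded on an ACK; the class and function names are illustrative assumptions and do not represent the disclosed implementation.

    # Illustrative sketch of base-station-side ACK/NACK handling; names are assumptions.
    class RetransmissionController:
        def __init__(self):
            self.retained = {}  # process ID -> encoded data retained at first transmission

        def first_transmission(self, pid, encoded_data):
            self.retained[pid] = encoded_data
            return encoded_data  # forwarded to the data modulator

        def on_feedback(self, pid, ack):
            if ack:
                # ACK: discard the retained data and end transmission of this packet.
                self.retained.pop(pid, None)
                return None
            # NACK: output the retained data again for retransmission.
            return self.retained.get(pid)

    def determine_ack(decoded_bit):
        # Simplified stand-in for the determination unit.
        return decoded_bit == 1

    ctrl = RetransmissionController()
    ctrl.first_transmission(pid=0, encoded_data=b"\x01\x02")
    print(ctrl.on_feedback(pid=0, ack=determine_ack(0)))  # NACK case: retained data is re-output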
It should be noted that the determination unit116may carry out the determination for the response signal with consideration also being given to a bit sequence (for example, some or all of the response signal) that is input from the delay tolerant signal demodulator/decoder113. The determination unit116outputs a determination result (an ACK or NACK) to the retransmission controller106. [Configuration of Terminal (During DL Data Self-Contained Operation)] FIG.5is a block diagram depicting a configuration of the terminal200that carries out a DL data self-contained operation according to the present embodiment. InFIG.5, the terminal200has an antenna201, the receiver202, a signal extraction unit203, a control signal demodulator/decoder204, a data demodulator205, a data decoder206, an error detector207, a response signal generator208, an encoder/modulator209, a delay tolerant signal generator210, a delay tolerant signal encoder/modulator211, a signal assignment unit212, and the transmitter213. The terminal200depicted inFIG.5receives a downlink signal that includes a control signal (a DL assignment) or downlink data (DL data) transmitted from the base station100in a downlink transmission region of a time unit (self-contained time unit) that includes the “downlink transmission region”, a “gap period”, and an “uplink transmission region”. Furthermore, the terminal200transmits an uplink signal that includes a response signal for downlink data (and may also include a delay tolerant signal or a UCI), in the uplink transmission region of the time unit. In the terminal200, the receiver202receives, via the antenna201, a control signal and downlink data transmitted from the base station100, carries out RF processing such as down-conversion or AD conversion with respect to a radio reception signal, and obtains a baseband signal. The receiver202outputs the baseband signal to the signal extraction unit203. The signal extraction unit203extracts a signal portion that includes the control signal, from the baseband signal received from the receiver202, and outputs the signal portion to the control signal demodulator/decoder204. Furthermore, the signal extraction unit203extracts a signal portion that includes the downlink data from the baseband signal, and outputs the signal portion to the data demodulator205. The control signal demodulator/decoder204carries out blind decoding on the control signal received from the signal extraction unit203, and attempts decoding for a control signal addressed thereto. The control signal demodulator/decoder204, when having determined as a result of the blind decoding that the control signal is a control signal addressed thereto, outputs assigned resource information for downlink data included in the control signal (the ID of an assigned terminal, assigned resource information (a frequency, a time, and a coding resource), data demodulation reference signal information, a modulation/encoding scheme, or the like) to the data demodulator205, outputs assigned resource information for a response signal and assigned resource information for a delay tolerant signal to the signal assignment unit212, and outputs delay tolerant signal instruction information to the delay tolerant signal generator210. The data demodulator205demodulates downlink data received from the signal extraction unit203, on the basis of the assigned resource information for downlink data, received from the control signal demodulator/decoder204, and outputs demodulated downlink data to the data decoder206. 
The data decoder206decodes the downlink data received from the data demodulator205, and outputs decoded downlink data to the error detector207. The error detector207carries out error detection by means of a CRC, for example, with respect to the downlink data received from the data decoder206, and outputs an error detection result (an ACK or NACK) to the response signal generator208. Furthermore, the error detector207outputs, as reception data, downlink data determined as having no errors as a result of the error detection. The response signal generator208, using the error detection result (an ACK or NACK) received from the error detector207, generates a response signal (a bit sequence) for the received downlink data, and outputs the response signal to the encoder/modulator209. The encoder/modulator209carries out error correction encoding on the response signal (a bit sequence) received from the response signal generator208, modulates an encoded bit sequence, and outputs a modulated symbol sequence to the signal assignment unit212. The delay tolerant signal generator210generates a delay tolerant signal on the basis of delay tolerant signal instruction information that has been input from the control signal demodulator/decoder204, information that is predetermined by the system, information that is preset in the terminal200by means of a higher layer notification from the base station100, or the like. The delay tolerant signal generator210outputs the generated delay tolerant signal (a bit sequence) to the delay tolerant signal encoder/modulator211. Furthermore, the delay tolerant signal generator210determines whether the transmission of the delay tolerant signal is the first transmission or a retransmission on the basis of whether or not retransmission request information is included in the delay tolerant signal instruction information that is input from the control signal demodulator/decoder204. The delay tolerant signal generator210retains the delay tolerant signal at the time of the first transmission, and outputs a corresponding retained signal to the delay tolerant signal encoder/modulator211at the time of a retransmission. The delay tolerant signal encoder/modulator211carries out encoding processing and modulation processing on the bit sequence that is input from the delay tolerant signal generator210, and outputs a modulated delay tolerant signal to the signal assignment unit212. The signal assignment unit212maps a signal received from the encoder/modulator209and a signal received from the delay tolerant signal encoder/modulator211to a resource (a time, a frequency, and a coding resource) within a time unit for a self-contained operation, instructed from the control signal demodulator/decoder204. It should be noted that a radio resource to which a delay tolerant signal is mapped may be notified in advance by means of a higher layer notification from the base station100to the terminal200without notification being performed by means of a control signal (a DL assignment). The transmitter213carries out RF processing such as D/A conversion and up-conversion on the signal received from the signal assignment unit212, and transmits a radio signal to the base station100via the antenna201. [Configuration of Base Station (During UL Data Self-Contained Operation)] FIG.6is a block diagram depicting a configuration of the base station300that carries out a UL data self-contained operation according to the present embodiment. 
InFIG.6, the base station300has a scheduler301, a delay tolerant signal controller302, a control signal generator303, a control signal encoder/modulator304, a signal assignment unit305, the transmitter109, the antenna110, the receiver111, a signal extraction unit306, a delay tolerant signal demodulator/decoder307, a delay tolerant signal determination unit308, a data demodulator309, a retransmission synthesis decoder310, and an error detector311. The base station300depicted inFIG.6transmits a downlink signal that includes a UL assignment in a downlink transmission region of a time unit (UL data self-contained time unit) that includes the “downlink transmission region”, a “gap period”, and an “uplink transmission region”. Furthermore, the base station300receives an uplink signal that includes uplink data (and may also include a delay tolerant signal or a UCI) that has been transmitted from the terminal400in the uplink transmission region of the time unit. In the base station300, the scheduler301schedules the retransmission of uplink data in a case where an error detection result indicating that there is an error in the previous uplink data is input from the error detector311. Furthermore, the scheduler301schedules a new packet for the terminal400in a case where an error detection result indicating that there are no errors in the previous uplink data is input from the error detector311. For example, the scheduler301determines scheduling information (for example, the ID of an assigned terminal, assigned resource information for the terminal400(a frequency, a time, and a coding resource), data demodulation reference signal information, a modulation/encoding scheme for uplink data, or the like) relating to a delay tolerant signal, a control signal (a UL assignment), and uplink data (UL data) in a time unit, with respect to the terminal400. The scheduler301outputs the determined scheduling information to the control signal generator303and the signal assignment unit305. The delay tolerant signal controller302determines information (for example, the type of delay tolerant signal) relating to a signal that is generated as a delay tolerant signal, which is a signal or a channel that is transmitted from the terminal400at the end of an uplink transmission region within a time unit, and outputs information indicating the determined content to the control signal generator303. It should be noted that the details of the delay tolerant signal that is transmitted at the end of an uplink transmission region within a time unit will be described hereinafter. Furthermore, the delay tolerant signal controller302outputs information indicating that the transmission of the delay tolerant signal is a retransmission, to the control signal generator303in a case where the delay tolerant signal is a retransmission signal, on the basis of information indicating a delay tolerant signal reception error, which is input from the delay tolerant signal determination unit308. The control signal generator303generates a control signal (a UL assignment) for the terminal400on the basis of information that is input from each of the scheduler301and the delay tolerant signal controller302. 
Control signals include a signal of a cell-specific higher layer, a signal of a group or RAT-specific higher layer, a signal of a terminal-specific higher layer, assigned resource information for uplink data, information instructing a retransmission or a new transmission of uplink data, assigned resource information for a delay tolerant signal, delay tolerant signal instruction information, or the like. Furthermore, in a case where the base station300requests the terminal400for the retransmission of a delay tolerant signal, the control signal generator303may include retransmission request information for a delay tolerant signal in the delay tolerant signal instruction information. The control signal generator303generates a control information bit string using such control information, and outputs the control information bit string to the control signal encoder/modulator304. It should be noted that the details of the delay tolerant signal instruction information will be described hereinafter. It should be noted that assigned resource information for a delay tolerant signal may be notified in advance by means of a higher layer notification from the base station300to the terminal400. In this case, assigned resource information for a delay tolerant signal is not included in a control signal (a UL assignment). The control signal encoder/modulator304encodes and modulates a control signal received from the control signal generator303, and outputs a modulated control signal to the signal assignment unit305. The signal assignment unit305maps a control signal received from the control signal encoder/modulator304to a radio resource (an assigned time/frequency/coding resource) instructed from the scheduler301. The signal assignment unit305outputs a downlink signal for which signal mapping has been carried out, to the transmitter109. The transmitter109, the antenna110, and the receiver111operate in a manner similar to the transmitter109, the antenna110, and the receiver111provided in the base station100. The signal extraction unit306extracts a radio resource portion in which uplink data from the terminal400has been transmitted, from a reception signal that is input from the receiver111, and outputs the radio resource portion to the data demodulator309. Furthermore, the signal extraction unit306extracts a radio resource portion in which a delay tolerant signal from the terminal400has been transmitted, from the reception signal, and outputs the delay tolerant signal to the delay tolerant signal demodulator/decoder307. The delay tolerant signal demodulator/decoder307carries out equalization, demodulation, and error correction decoding for the delay tolerant signal that is input from the signal extraction unit306, and outputs a decoded bit sequence to the delay tolerant signal determination unit308and the retransmission synthesis decoder310. The delay tolerant signal determination unit308determines whether or not the delay tolerant signal (a bit sequence) that is input from the delay tolerant signal demodulator/decoder307has been correctly received. The delay tolerant signal determination unit308, when having determined that the delay tolerant signal has been correctly received, outputs the delay tolerant signal. 
However, the delay tolerant signal determination unit308, when having determined that the delay tolerant signal has not been correctly received and is a signal for which it is necessary to request a retransmission of the delay tolerant signal, outputs information indicating a reception error for the delay tolerant signal, to the delay tolerant signal controller302. The data demodulator309carries out equalization and demodulation processing on uplink data received from the signal extraction unit306, and outputs demodulated uplink data (a bit sequence) to the retransmission synthesis decoder310. The retransmission synthesis decoder310, in a case where uplink data to be decoded of the terminal400is retained (a case where the uplink data is retransmission data), synthesizes the retained uplink data and uplink data that has been output from the data demodulator309, and carries out decoding processing on the synthesized uplink data. The retransmission synthesis decoder310, in a case where uplink data of the terminal400is not retained (a case where the uplink data is the first packet), carries out decoding processing without carrying out synthesis processing for uplink data. It should be noted that the retransmission synthesis decoder310may carry out retransmission synthesis and decoding processing with consideration also being given to a bit sequence (for example, some or all of the uplink data) that is input from the delay tolerant signal demodulator/decoder307. The retransmission synthesis decoder310then outputs decoded uplink data to the error detector311. Furthermore, the retransmission synthesis decoder310, in a case where a detection result from the error detector311indicates that there are no errors, deletes the retained uplink data of the terminal400. The error detector311carries out error detection by means of a CRC, for example, with respect to uplink data received from the retransmission synthesis decoder310, and outputs an error detection result (an ACK or NACK) to the scheduler301and the retransmission synthesis decoder310. Furthermore, the error detector311outputs, as reception data, uplink data determined as having no errors as a result of the error detection. [Configuration of Terminal (During UL Data Self-Contained Operation)] FIG.7is a block diagram depicting a configuration of the terminal400that carries out a UL data self-contained operation according to the present embodiment. InFIG.7, the terminal400has the antenna201, the receiver202, a signal extraction unit401, a control signal demodulator/decoder402, a data encoder403, a retransmission controller404, a data modulator405, a delay tolerant signal generator406, a delay tolerant signal encoder/modulator407, a signal assignment unit408, and the transmitter213. The terminal400depicted inFIG.7receives a downlink signal that includes a control signal (a UL assignment) transmitted from the base station300in a downlink transmission region of a time unit (UL data self-contained time unit) that includes the “downlink transmission region”, a “gap period”, and an “uplink transmission region”. Furthermore, the terminal400transmits an uplink signal that includes uplink data (and may also include a delay tolerant signal or a UCI) in the uplink transmission region of the time unit. In the terminal400, the antenna201and the receiver202operate in a manner similar to the antenna201and the receiver202provided in the terminal200. 
The signal extraction unit401extracts a control signal from a baseband signal received from the receiver202, and outputs the control signal to the control signal demodulator/decoder402. The control signal demodulator/decoder402carries out blind decoding on the control signal received from the signal extraction unit401, and attempts decoding for a control signal addressed thereto. The control signal demodulator/decoder402, when having determined as a result of the blind decoding that the control signal is a control signal addressed thereto, outputs, to the signal assignment unit408, assigned resource information for uplink data (the ID of an assigned terminal, assigned resource information (a frequency, a time, and a coding resource), data demodulation reference signal information, a modulation/encoding scheme, or the like) and assigned resource information for a delay tolerant signal, included in the control signal, outputs information instructing a retransmission or a new transmission of uplink data to the retransmission controller404, and outputs delay tolerant signal instruction information to the delay tolerant signal generator406. The data encoder403carries out error correction encoding on transmission data (uplink data), and outputs an encoded data signal to the retransmission controller404. The retransmission controller404determines whether the uplink data is the first packet or a retransmission packet on the basis of information received from the control signal demodulator/decoder402. In the case of the first packet, the retransmission controller404retains the encoded uplink data received from the data encoder403and also outputs the encoded uplink data to the data modulator405. Furthermore, in the case of the first packet, the retransmission controller404determines that the transmission and reception of the previous transmission packet have been successful and discards the retained data. However, in the case of a retransmission packet, the retransmission controller404outputs the corresponding retained data to the data modulator405. The data modulator405modulates the uplink data received from the retransmission controller404, and outputs modulated uplink data to the signal assignment unit408. The delay tolerant signal generator406generates a delay tolerant signal on the basis of delay tolerant signal instruction information that has been input from the control signal demodulator/decoder402, information that is predetermined by the system, information that is preset in the terminal400by means of a higher layer notification from the base station300, or the like. The delay tolerant signal generator406outputs the generated delay tolerant signal (a bit sequence) to the delay tolerant signal encoder/modulator407. Furthermore, the delay tolerant signal generator406determines whether the transmission of the delay tolerant signal is the first transmission or a retransmission on the basis of whether or not retransmission request information is included in the delay tolerant signal instruction information that is input from the control signal demodulator/decoder402. The delay tolerant signal generator406retains the delay tolerant signal at the time of the first transmission, and outputs a corresponding retained signal to the delay tolerant signal encoder/modulator407at the time of a retransmission.
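The first-transmission/retransmission decision of the delay tolerant signal generator just described could be sketched as follows; this is a hedged illustration under the assumption that the retransmission request is carried as a simple flag in the instruction information, and all names are hypothetical.

    # Illustrative sketch of the terminal-side delay tolerant signal generator; names are assumptions.
    class DelayTolerantSignalGenerator:
        def __init__(self):
            self.retained = None  # bit sequence kept from the first transmission

        def generate(self, instruction_info, new_bits):
            if instruction_info.get("retransmission_request"):
                # Retransmission: output the signal retained at the first transmission.
                return self.retained
            # First transmission: retain the freshly generated bit sequence and output it.
            self.retained = new_bits
            return new_bits

    gen = DelayTolerantSignalGenerator()
    print(gen.generate({"type": "BSR"}, [1, 0, 1]))                              # first transmission
    print(gen.generate({"type": "BSR", "retransmission_request": True}, None))   # retransmission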
The delay tolerant signal encoder/modulator407carries out encoding processing and modulation processing on the bit sequence that is input from the delay tolerant signal generator406, and outputs a modulated delay tolerant signal to the signal assignment unit408. The signal assignment unit408maps uplink data received from the data modulator405and a delay tolerant signal received from the delay tolerant signal encoder/modulator407to a resource (a time, a frequency, and a coding resource) within a time unit for a self-contained operation, instructed from the control signal demodulator/decoder402. The signal assignment unit408outputs an uplink signal for which signal mapping has been carried out, to the transmitter213. The transmitter213operates in a manner similar to the transmitter213provided in the terminal200. [Operation of Base Stations100and300and Terminals200and400] A detailed description will be given regarding an operation in the base stations100and300and the terminals200and400having the above configurations. FIG.8depicts an example of a transmission sequence in each of a base station (eNB) and a terminal (UE) during the DL data self-contained operation ofFIG.1A. Furthermore,FIG.9depicts an example of a transmission sequence in each of the base station100and the terminal200during the DL data self-contained operation according to the present embodiment. InFIG.8, in each time unit, gap #1 that takes into consideration a propagation delay time and the processing time of the terminal is arranged between a downlink transmission region and an uplink transmission region (at the end of the downlink transmission region), and gap #2 that takes into consideration the processing time of the base station is arranged after the uplink transmission region (at the end of the uplink transmission region). For example, the base station schedules downlink data that is transmitted in the downlink transmission region of the next time unit, on the basis of a determination result for a response signal (depicted as an ACK inFIG.8) received in the uplink transmission region, in the period of gap #2 depicted inFIG.8. However, in the present embodiment, as depicted inFIG.9, in a DL data self-contained operation, a delay tolerant signal, for which a delay can be tolerated more than for a response signal (uplink signal) or uplink data (UL data) that is mapped in the uplink transmission region, is mapped to within gap #2 that is arranged at the end of the uplink transmission region depicted inFIG.8, that is, within a period that takes into consideration the processing time of the base station100. That is, the terminal200transmits a delay tolerant signal that has been mapped to a period corresponding to gap #2 arranged after the uplink transmission region, and the base station100receives the delay tolerant signal that has been mapped to the period corresponding to gap #2 arranged after the uplink transmission region. In this case also, the base station100, upon receiving the response signal (ACK) in the uplink transmission region, can schedule downlink data that is transmitted in the downlink transmission region of the next time unit, on the basis of a determination result for the response signal, in a transmission period for a delay tolerant signal (corresponding to gap #2 inFIG.8). 
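To make the timing relationship just described more concrete, the short sketch below shows, under assumed names, how the base station can use the period that corresponds to gap #2 (now carrying the delay tolerant signal) to schedule the next time unit from the response signal received in the uplink transmission region, while decoding of the delay tolerant signal itself may be deferred.

    # Illustrative timeline sketch; names and the task representation are assumptions.
    def base_station_gap2_period(unit_index, ack_received, delay_tolerant_signal):
        # Scheduling of the next time unit uses only the ACK/NACK from the UL region.
        tasks = [("schedule_unit", unit_index + 1, "new data" if ack_received else "retransmission")]
        # The delay tolerant signal received at the end of this unit can be decoded later.
        if delay_tolerant_signal is not None:
            tasks.append(("decode_in_unit", unit_index + 1, delay_tolerant_signal))
        return tasks

    print(base_station_gap2_period(unit_index=5, ack_received=True, delay_tolerant_signal="CSI"))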
Furthermore, the base station100, upon receiving the delay tolerant signal transmitted from the terminal200at the end of the uplink transmission region, carries out predetermined processing (demodulation/decoding processing or the like) on the delay tolerant signal. However, as mentioned above, a delay tolerant signal is a signal for which it is not always necessary to carry out reception/decoding processing or the like by the time unit that is subsequent to the time unit in which the delay tolerant signal has been received by the base station100. That is, since a delay is tolerated for a delay tolerant signal, the base station100, for example, can carry out demodulation/decoding processing for a delay tolerant signal in a period corresponding to the next time unit. It should be noted that, althoughFIG.9relates to a DL data self-contained operation, in a UL data self-contained operation it is also sufficient to similarly configure a time unit in which a delay tolerant signal is mapped to a period corresponding to gap #2 depicted inFIG.1B. It is thereby possible to reduce the overhead for gaps while maintaining the average delay time from data being generated in the transmission buffer of the base station100to the base station100receiving a response signal to downlink data from the terminal200, and the average delay time from data being generated in the transmission buffer of the terminal400to the terminal400completing transmission of the first uplink data. It should be noted that it is not always necessary for the terminals200and400to transmit a delay tolerant signal in each time unit. In a case where the terminal200does not transmit a delay tolerant signal, the time resource for a delay tolerant signal (the end of an uplink transmission region) becomes a gap period as inFIG.1AandFIG.1B. It is thereby possible to reduce power consumption by not carrying out excessive transmissions. [Types of Delay Tolerant Signals] Next, the types of delay tolerant signals that are generated in the delay tolerant signal generators210and406of the terminals200and400will be described in detail. Hereinafter, descriptions will be given regarding the types of delay tolerant signals common to the DL data self-contained operation and the UL data self-contained operation (common delay tolerant signal types), the type of delay tolerant signal that is generated in only the DL data self-contained operation (a DL data self-contained operation delay tolerant signal type), and the type of delay tolerant signal that is generated in only the UL data self-contained operation (a UL data self-contained operation delay tolerant signal type). First, common delay tolerant signal types 1 to 6 will be described. <Common Delay Tolerant Signal Type 1> A delay tolerant signal in common delay tolerant signal type 1 is a reference signal (an SRS: sounding reference signal) for estimating a propagation path for an uplink. An SRS has no effect on the retransmission control of downlink data or uplink data even if the base stations100and300do not complete reception/decoding processing by the next time unit. That is, an SRS is a signal for which a delay can be tolerated compared to a response signal or uplink data transmitted in an uplink transmission region. In this way, due to the terminals200and400transmitting an SRS in a gap period (gap #2) at the end of an uplink transmission region, in addition to the aforementioned effects, it is possible to increase the opportunities for the base stations100and300to estimate a propagation path for an uplink.
Therefore, channel estimation accuracy for an uplink improves, and uplink throughput can be improved. It should be noted that, in the case of a TDD system, a channel estimation value estimated from a propagation path for an uplink using an SRS can be applied also to a downlink, and therefore downlink throughput can also be improved. <Common Delay Tolerant Signal Type 2> A delay tolerant signal in common delay tolerant signal type 2 is information indicating a plurality of beam patterns. Specifically, the terminals200and400transmit a reference signal including at least one of a plurality of beam patterns as a delay tolerant signal. The base stations100and300then detect the optimum beam pattern in an uplink from among the beam patterns corresponding to the reference signals transmitted from the terminals200and400. A beam pattern has no effect on the retransmission control of downlink data or uplink data even if the base stations100and300do not complete reception/decoding processing by the next time unit. That is, a beam pattern is a signal for which a delay can be tolerated compared to a response signal or uplink data transmitted in an uplink transmission region. In this way, due to the terminals200and400transmitting a reference signal for a predetermined beam pattern in a gap period (gap #2) at the end of an uplink transmission region, in addition to the aforementioned effects, it is possible to increase the opportunities for the base stations100and300to estimate the optimum beam pattern for an uplink. Therefore, beam pattern estimation accuracy for an uplink improves, and uplink throughput can be improved. <Common Delay Tolerant Signal Type 3> A delay tolerant signal in common delay tolerant signal type 3 is CSI, which is channel quality information of a downlink. CSI includes one or more out of a CQI (channel quality indicator), a PMI (precoding matrix indicator), an RI (rank indicator), and a CRI (CSI-RS resource indicator). CSI has no effect on the retransmission control of downlink data or uplink data even if the base stations100and300do not complete reception/decoding processing by the next time unit. That is, CSI is a signal for which a delay can be tolerated compared to a response signal or uplink data transmitted in an uplink transmission region. In this way, due to the terminals200and400transmitting CSI in a gap period (gap #2) at the end of an uplink transmission region, in addition to the aforementioned effects, it is possible to increase the opportunities for the terminals200and400to notify quality information of a downlink to the base stations100and300. Therefore, the accuracy of adaptive modulation for a downlink improves, and downlink throughput can be improved. <Common Delay Tolerant Signal Type 4> A delay tolerant signal in common delay tolerant signal type 4 is a scheduling request (SR) with which the assignment of a radio resource for an uplink is requested. An SR has no effect on the retransmission control of downlink data or uplink data even if the base stations100and300do not complete reception/decoding processing by the next time unit. That is, an SR is a signal for which a delay can be tolerated compared to a response signal or uplink data transmitted in an uplink transmission region. 
In this way, due to the terminals200and400transmitting an SR in a gap period (gap #2) at the end of an uplink transmission region, in addition to the aforementioned effects, it becomes possible for the terminals200and400to notify a resource assignment request for an uplink to the base stations100and300at an early timing. Uplink throughput therefore improves. <Common Delay Tolerant Signal Type 5> A delay tolerant signal in common delay tolerant signal type 5 is a BSR (buffer status report) that notifies a buffer state of the terminals200and400. A BSR is any of a regular BSR that is notified when data is generated, a periodic BSR that is transmitted periodically, and a padding BSR that is transmitted in a case where the number of redundant bits of a MAC PDU (medium access control protocol data unit) is greater than the number of bits required for storage. A BSR has no effect on the retransmission control of downlink data or uplink data even if the base stations100and300do not complete reception/decoding processing by the next time unit. That is, a BSR is a signal for which a delay can be tolerated compared to a response signal or uplink data transmitted in an uplink transmission region. In this way, due to the terminals200and400transmitting a BSR in a gap period (gap #2) at the end of an uplink transmission region, in addition to the aforementioned effects, the terminals200and400can notify a buffer state to the base stations100and300at an early timing. Therefore, the timing at which scheduling for uplink data is carried out becomes earlier, and uplink throughput improves. <Common Delay Tolerant Signal Type 6> A delay tolerant signal in common delay tolerant signal type 6 is a TCP ACK/SYN. A TCP ACK is a higher layer notification for notifying a base station that the reception of a signal of a TCP (transmission control protocol) layer has been completed. Furthermore, a TCP SYN is a higher layer notification for a terminal to notify a base station when a connection is established with a TCP layer. A TCP ACK/SYN has no effect on the retransmission control of downlink data or uplink data even if the base stations100and300do not complete reception/decoding processing by the next time unit. That is, a TCP ACK/SYN is a signal for which a delay can be tolerated compared to a response signal or uplink data transmitted in an uplink transmission region. It should be noted that since a TCP ACK/SYN is a higher layer notification, there is a possibility of the terminals200and400not being able to determine whether or not the signal in question is a TCP ACK/SYN in a MAC/PHY layer. In this case, the terminals200and400may determine that the signal in question is a TCP ACK/SYN in a case where the size of the signal in question is small (for example, a case where the signal size is less than a predetermined value). Furthermore, a retransmission for a TCP ACK/SYN transmitted as a delay tolerant signal may be carried out or may not be carried out. In a case where a TCP ACK/SYN is to be retransmitted, the terminals200and400are not able to retransmit the TCP ACK/SYN in the next time unit, but a problem does not arise since the TCP ACK/SYN is a signal for which a delay is tolerated as mentioned above. In this way, the terminals200and400transmit a TCP ACK in a gap period (gap #2) at the end of an uplink transmission region.
Thus, in addition to the aforementioned effects, it becomes possible for a TCP ACK to be fed back at an early timing, in a slow start phase in which the number of TCP segments in TCP congestion control is increased exponentially. The TCP layer throughput can therefore be improved. Furthermore, by transmitting a TCP SYN, a TCP connection can be established at an early timing, and TCP layer throughput can be improved. Hereinabove, common delay tolerant signal types 1 to 6 have been described. <DL Data Self-Contained Delay Tolerant Signal Type> Next, a description will be given regarding a DL data self-contained delay tolerant signal type with which a performance improvement or the like can be expected due to a transmission being carried out in a DL data self-contained operation. A delay tolerant signal in the DL data self-contained delay tolerant signal type is some or all of a response signal (ACK) that has already been transmitted in the same time unit as the delay tolerant signal. FIG.10is a drawing depicting an example of a transmission sequence in a case where the terminal200transmits a response signal as a delay tolerant signal. A response signal (ACK #2 depicted inFIG.10) that is transmitted as a delay tolerant signal is transmitted in a period (the processing time of the base station) corresponding to gap #2 inFIG.1A. Therefore, there is a high possibility of it being difficult to demodulate/decode ACK #2 within the processing time of the base station100(that is, by the next time unit). Meanwhile, a response signal (ACK #1 depicted inFIG.10) that is transmitted in an uplink transmission region can be demodulated/decoded within the processing time of the base station100. However, in a case where the base station100makes an error in the reception determination of ACK #1 and determines a NACK as an ACK, a retransmission packet is not transmitted to the terminal200, and therefore a packet timeout occurs and a large delay occurs. In order to prevent this determination error, the base station100(the determination unit116depicted inFIG.4) synthesizes ACK #2 transmitted as a delay tolerant signal and ACK #1 transmitted in the same time unit. The base station100(determination unit116) then determines whether or not there is a determination error for the response signal on the basis of the synthesized response signal. In this way, the reception quality for a response signal improves due to the synthesizing, and therefore the reception determination accuracy for a response signal can be improved. For example, as depicted inFIG.10, the processing of the base station100is carried out in the period of the time unit that is subsequent to the time unit in which a response signal (ACK #1 or ACK #2) has been transmitted. Then, the base station100, when having determined that there has been a determination error for the response signal, additionally transmits retransmission data in the next time unit. Thus, a delay for a retransmission packet can be suppressed to one time unit, and a large delay occurring can be prevented. As mentioned above, some or all of a response signal transmitted as a delay tolerant signal has no effect on the retransmission control of downlink data or uplink data even if the base station100does not complete reception/decoding processing by the next time unit.
That is, some or all of a response signal transmitted as a delay tolerant signal is a signal for which a delay can be tolerated compared to a response signal that is transmitted in another uplink transmission region. In this way, the terminal200transmits some or all of a response signal transmitted in the same time unit in a gap period (gap #2) at the end of an uplink transmission region. Thus, in addition to the aforementioned effects, the possibility of a determination error for a response signal occurring in the base station100can be reduced, and downlink throughput can be improved. <UL Data Self-Contained Delay Tolerant Signal Type> Next, a description will be given regarding a UL data self-contained delay tolerant signal type with which a performance improvement or the like can be expected due to a transmission being carried out in a UL data self-contained operation. A delay tolerant signal in the UL data self-contained delay tolerant signal type is some of the uplink data that has already been transmitted in the same time unit as the delay tolerant signal. In a case where an IR (incremental redundancy) scheme is applied in retransmission control, uplink data is transmitted with an RV (redundancy version), which indicates a transmission start position in an encoded data sequence, being altered in accordance with the number of times transmission has been carried out. Some of the uplink data transmitted as a delay tolerant signal may be some of a data sequence having the same RV as uplink data that has already been transmitted in the same time unit, or may be some of a data sequence having a different RV. Some of the uplink data that is transmitted as a delay tolerant signal is transmitted in a period (the processing time of the base station) corresponding to gap #2 inFIG.1B. Therefore, there is a high possibility of it being difficult to demodulate/decode this data within the processing time of the base station300(that is, by the next time unit). Therefore, uplink data that is transmitted as a delay tolerant signal is used when uplink data that is retransmitted in the next time unit is received. That is, the base station300(the retransmission synthesis decoder310depicted inFIG.6) synthesizes retransmission data of the uplink data, and some of the uplink data that has been received in the previous time unit as a delay tolerant signal, and decodes the synthesized data. It should be noted that the delay tolerant signal is discarded in a case where a retransmission does not occur. Some of the uplink data transmitted as a delay tolerant signal has no effect on the retransmission control of downlink data or uplink data even if the base station300does not complete reception/decoding processing by the next time unit. That is, some of the uplink data transmitted as a delay tolerant signal is a signal for which a delay can be tolerated compared to uplink data that is transmitted in another uplink transmission region. In this way, the terminal400transmits some of the uplink data transmitted in the same time unit in a gap period (gap #2) at the end of an uplink transmission region. Thus, in addition to the aforementioned effects, it is possible to improve the reception success probability for the next item of uplink data in the base station300, when a retransmission has occurred. Hereinabove, the types of delay tolerant signals have been described in detail.
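As a simple illustration of the synthesis described for the DL and UL data self-contained delay tolerant signal types, the following Python sketch combines soft values (for example, log-likelihood ratios) of a copy received as a delay tolerant signal with the soft values of the corresponding bits received elsewhere before making a decision. This is one assumed realization of the synthesis, not the disclosed receiver, and the sign convention and values are illustrative.

    # Illustrative soft-combining sketch; names, values, and the LLR convention are assumptions.
    def combine_llrs(primary_llrs, delay_tolerant_llrs):
        # Element-wise addition of soft values improves reliability; if no
        # delay tolerant copy was received, the primary values are used as-is.
        if delay_tolerant_llrs is None:
            return list(primary_llrs)
        return [a + b for a, b in zip(primary_llrs, delay_tolerant_llrs)]

    def decide_ack(llrs):
        # Positive combined LLR is taken to mean ACK in this example.
        return all(llr > 0 for llr in llrs)

    ack1 = [0.4]   # weak ACK #1 from the uplink transmission region
    ack2 = [1.1]   # ACK #2 received as the delay tolerant signal
    print(decide_ack(combine_llrs(ack1, ack2)))  # True: the combined decision is ACK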
It should be noted that the types of delay tolerant signals are not restricted to the aforementioned signals, and it is sufficient for the types of delay tolerant signals to be signals for which a delay is tolerated in communication using a time unit configuration. [Delay Tolerant Signal Selection Methods] Next, methods for selecting a delay tolerant signal to be generated in the aforementioned delay tolerant signal controllers102and302of the base stations100and300will be described. <Selection Method 1> In selection method 1, the base stations100and300signal the type of delay tolerant signal to be generated by the terminals200and400, using delay tolerant signal instruction information and using a downlink control signal (a PDCCH including a DL assignment or a UL assignment). FIG.11depicts an example of delay tolerant signal instruction information during a DL data self-contained operation, andFIG.12depicts an example of delay tolerant signal instruction information during a UL data self-contained operation. It should be noted that in a case where a delay tolerant signal to be transmitted does not fit within the resource that has been set, such as in a case where the size of the processing time (gap #2) of the base stations100and300is small, the base stations100and300may notify that there is no delay tolerant signal (delay tolerant signal instruction information=0). In the terminals200and400, the control signal demodulator/decoders204and402acquire notified delay tolerant signal instruction information, and the delay tolerant signal generators210and406, on the basis of the delay tolerant signal instruction information, determine which delay tolerant signal is to be generated, and generate the delay tolerant signal. In this way, the base stations100and300signal the type of delay tolerant signal to be generated, to the terminals200and400by means of a downlink control signal, and it is thereby possible to dynamically switch the information that is transmitted as a delay tolerant signal. <Selection Method 2> In selection method 2, similar to selection method 1, the base station100signals the type of delay tolerant signal to be generated by the terminal200, using a downlink control signal. In selection method 2, in addition, the type of delay tolerant signal to be transmitted by the terminal200is altered in accordance with the size of the radio resource (the frequency domain or the time domain) used to transmit the delay tolerant signal. FIG.13andFIG.14depict an example of delay tolerant signal instruction information during a DL data self-contained operation. The delay tolerant signal instruction information depicted inFIG.13is an example in which, in accordance with the resource size, the type of the delay tolerant signal does not change but the content of the information that is transmitted as a delay tolerant signal changes. For example, in a case where the delay tolerant signal instruction information indicates “3” inFIG.13, a CQI, a PMI, and an RI are transmitted as CSI constituting a delay tolerant signal when the resource size is large, whereas only a CQI is transmitted as CSI constituting a delay tolerant signal when the resource size is small. Similarly, in a case where the delay tolerant signal instruction information indicates “5” inFIG.13, a long BSR is transmitted as a delay tolerant signal when the resource size is large, and a short BSR is transmitted as a delay tolerant signal when the resource size is small. 
It should be noted that a long BSR is information that notifies the amount of data in a plurality of logical channel groups, and a short BSR is information that notifies the amount of data in one logical channel group. Meanwhile, the delay tolerant signal instruction information depicted inFIG.14is an example in which the type of delay tolerant signal changes in accordance with the resource size. As depicted inFIG.14, in a case where the resource size is small, an SR, an ACK, a TCP ACK/SYN, no delay tolerant signal, or the like constituting information having a comparatively low number of transmission bits is transmitted as a delay tolerant signal, and in a case where the resource size is large, an SRS, CSI, a transmission beam pattern, a BSR, or the like constituting information having a comparatively high number of transmission bits is transmitted as a delay tolerant signal. It should be noted that a configuration may be adopted in which the delay tolerant signal instruction information is instructed from a higher layer, and the terminal200changes the delay tolerant signal to be transmitted in accordance with the resource size. In this way, by changing the type or content of the delay tolerant signal to be transmitted, in accordance with the resource size, it becomes possible for the terminal200to select the type or content of a large number of delay tolerant signals with a small amount of signaling. <Selection Method 3> In selection method 3, the base stations100and300signal delay tolerant signal instruction information that is similar to that of selection method 1 to the terminals200and400by means of a higher layer. In this way, the base stations100and300notify the type of delay tolerant signal to be generated, to the terminals200and400by means of a higher layer notification, and it is thereby possible to reduce the overhead caused by signaling in a downlink. <Selection Method 4> In selection method 4, the base stations100and300notify delay tolerant signal instruction information that indicates the priority levels of delay tolerant signals to be generated, to the terminals200and400by means of a higher layer. The terminals200and400transmit one or more transmittable delay tolerant signals on the basis of the priority levels instructed by the delay tolerant signal instruction information. FIG.15depicts an example of delay tolerant signal instruction information that indicates the priority levels of delay tolerant signals to be generated by the terminals200and400. For example, in a case where the delay tolerant signal instruction information indicates "0", the terminals200and400preferentially select signals to be transmitted as delay tolerant signals, in the order of an SR, CSI, and a BSR. In this way, the base stations100and300notify the priority levels of delay tolerant signals to be generated, to the terminals200and400by means of a higher layer, and it thereby becomes possible for the terminals200and400to select the types of a large number of delay tolerant signals with a small amount of signaling. Furthermore, even in a case where a signal having a high priority level instructed by the delay tolerant signal instruction information cannot be transmitted as a delay tolerant signal, the terminals200and400can once again select a signal having a lower priority level, and can therefore select transmission signals in a flexible manner.
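The priority-based selection of selection method 4 could be expressed, purely as a sketch, by the following Python example. The priority table is an assumed stand-in for the kind of association depicted inFIG.15 (the actual priority levels are signaled by a higher layer or stipulated in the specification), and all names are hypothetical.

    # Illustrative sketch of priority-based delay tolerant signal selection; the table is an assumption.
    PRIORITY_TABLE = {
        0: ["SR", "CSI", "BSR"],           # example ordering: prefer SR, then CSI, then BSR
        1: ["SRS", "BSR", "TCP ACK/SYN"],  # hypothetical alternative ordering
    }

    def select_delay_tolerant_signals(instruction_value, transmittable, resource_slots):
        # Pick up to resource_slots signals in the signaled priority order, skipping
        # any signal that cannot currently be transmitted (falling back to lower ones).
        selected = []
        for signal in PRIORITY_TABLE.get(instruction_value, []):
            if signal in transmittable and len(selected) < resource_slots:
                selected.append(signal)
        return selected

    # An SR cannot be transmitted here, so the terminal falls back to CSI and a BSR.
    print(select_delay_tolerant_signals(0, transmittable={"CSI", "BSR"}, resource_slots=2))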
It should be noted that a configuration may be adopted in which the terminals200and400transmit delay tolerant signals according to priority levels that are stipulated as a specification in advance, rather than the base stations100and300notifying the priority levels of generated signals. Hereinabove, methods for selecting delay tolerant signals have been described. In this way, in the present embodiment, a delay tolerant signal that has no effect on the processing time of the base station is mapped to a gap period that is arranged after an uplink transmission region in a time unit (a gap period that is arranged at the end of a time unit). It is thereby possible to reduce the overhead for gap periods while ensuring the processing times of the base stations100and300in gap periods. For example, even in a case where gap periods increase in length in consideration of the processing times of the base stations100and300, more assigned resources for delay tolerant signals can be ensured in proportion to the amount by which the gap periods have increased in length. Based on the above, according to the present embodiment, it is possible to suppress a decline in the utilization efficiency of radio resources caused by gap periods within time units. It should be noted that, in the present embodiment, in a case where the size of a resource used to transmit a delay tolerant signal is large, the base stations100and300may transmit a plurality of items of delay tolerant signal instruction information, and instruct the terminals200and400to transmit a plurality of delay tolerant signals. Furthermore, a similar effect can be obtained even if the definition of a time unit is different from the arrangement in the exemplary time unit configuration depicted inFIG.9, as long as the arrangement of the signals (a DL assignment, DL data, a gap, an ACK, a delay tolerant signal) within a time unit is the same. For example, the definition of a time unit may be a period from the reception of a delay tolerant signal in a base station to the reception of a response signal (an ACK), as depicted inFIG.16. In this case, delay tolerant signals are transmitted from the terminals200and400to the base stations100and300at the beginning (an uplink transmission region) of a time unit. Thus, an effect that is similar to that of embodiment 1 (the configuration ofFIG.9) can be obtained. Furthermore, a delay tolerant signal is not restricted to an uplink signal transmitted by the terminals200and400, and may be a downlink signal transmitted by the base stations100and300. For example, the base station100may transmit a delay tolerant signal at the beginning of a downlink transmission region, as depicted inFIG.17. Thus, an effect that is similar to that of embodiment 1 can be obtained. It should be noted that the details of a downlink signal that is transmitted as a delay tolerant signal will be described in embodiment 3. Embodiment 2 As described in embodiment 1, in a case where a self-contained operation is used, performance can be improved by transmitting a delay tolerant signal that has no effect on the processing time of a base station or a terminal, at the end of an uplink transmission region within a time unit. However, in embodiment 1, it is necessary for a frequency resource (assigned resource information) used to transmit the delay tolerant signal, to be notified from the base station to the terminal. Therefore, the amount of downlink control signals increases, and the overhead for control signals increases. 
Thus, in the present embodiment, a method will be described in which a delay tolerant signal is transmitted without the frequency resource used to transmit the delay tolerant signal being notified by means of a downlink control signal. It should be noted that the base station and the terminal according to the present embodiment have a basic configuration that is common to the base stations100and300and the terminals200and400according to embodiment 1, and will therefore be described with reference toFIG.4toFIG.7. In the present embodiment, the processing of the control signal generators103and303of the base stations100and300inFIG.4andFIG.6and the processing of the signal assignment units212and408of the terminals200and400inFIG.5andFIG.7are different from in embodiment 1. Specifically, the control signal generators103and303do not generate control information that indicates a frequency resource to which a delay tolerant signal is assigned. That is, the control signal generators103and303generate assigned resource information for downlink data, uplink data, or a response signal as control information relating to a frequency resource assigned to the terminals200and400. The signal assignment units212and408determine a frequency resource (assigned band) to which a delay tolerant signal is assigned, in accordance with a frequency band (assigned band) to which a downlink control signal, downlink data, uplink data, or a response signal, transmitted in the same time unit as the delay tolerant signal, has been assigned. Hereinafter, resource assignment methods for a delay tolerant signal in the aforementioned signal assignment units212and408of the terminals200and400will be described in detail. First, a resource assignment method that is common to a DL data self-contained operation and a UL data self-contained operation (common resource assignment method) will be described. <Common Resource Assignment Method> The base stations100and300and the terminals200and400determine a frequency assignment position for a delay tolerant signal on the basis of a CCE (control channel element) index to which a downlink control signal (for example, a PDCCH that includes a DL assignment or a UL assignment) has been assigned. FIG.18depicts an example of the assignment of a frequency resource for a delay tolerant signal (the delay tolerant signal ofFIG.18) that is based on a CCE according to the common resource assignment method. In the example depicted inFIG.18, during a DL data self-contained operation, an index of a CCE (downlink resource) to which a DL assignment is assigned and a frequency resource (uplink resource) to which a response signal is assigned are associated on a one-to-one basis. InFIG.18, in addition, the index of the CCE to which the DL assignment is assigned and a frequency resource (uplink resource) to which a delay tolerant signal is assigned are associated on a one-to-one basis. Here, the number of CCEs, for example, is a value obtained by dividing the number of REs (resource elements) forming a downlink control signal (PDCCH) by 36 (1 CCE=36 REs). Thus, for instance, as an example of the association between CCEs and frequency assignment positions, a usable bandwidth is divided by the number of CCEs, and a usable frequency band is associated with each CCE. 
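One possible realization of this association, given purely for illustration, is shown in the following Python sketch: the number of CCEs is obtained from the number of REs forming the PDCCH (1 CCE = 36 REs), the usable bandwidth is divided by that number, and the index of the CCE carrying the DL assignment or UL assignment selects the corresponding band. The concrete bandwidth and PDCCH size used in the example are assumptions; the terminal behavior described next uses a band obtained in this way.

# Illustrative sketch of the common resource assignment method (assumed example values).
RES_PER_CCE = 36  # 1 CCE = 36 REs

def num_cces(pdcch_num_res: int) -> int:
    """Number of CCEs forming the downlink control signal (PDCCH)."""
    return pdcch_num_res // RES_PER_CCE

def band_for_cce(cce_index: int, usable_bandwidth_hz: float, pdcch_num_res: int):
    """Return the (start, end) of the frequency band associated one-to-one
    with the CCE index carrying the DL assignment or UL assignment."""
    n = num_cces(pdcch_num_res)
    band_width = usable_bandwidth_hz / n
    start = cce_index * band_width
    return (start, start + band_width)

# Example: a 10 MHz usable bandwidth, a PDCCH of 288 REs (8 CCEs), DL assignment on CCE #3.
print(band_for_cce(3, usable_bandwidth_hz=10e6, pdcch_num_res=288))  # (3750000.0, 5000000.0)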
The terminal200then transmits the delay tolerant signal which is mapped to all or some of a frequency band that is a resource associated on a one-to-one basis in relation to the delay tolerant signal with the index of the CCE (CCE #X inFIG.18) used to transmit the DL assignment addressed thereto. It should be noted that, althoughFIG.18depicts a DL data self-contained operation, similarly also for a UL data self-contained operation, it is sufficient for the index of a CCE used to transmit a UL assignment and a resource used to transmit a delay tolerant signal to be associated on a one-to-one basis. In this way, a delay tolerant signal is mapped to a resource associated on a one-to-one basis with a resource (CCE index) used to transmit assignment information (a DL assignment or a UL assignment) indicating a resource assignment for data transmitted in the same time unit as the delay tolerant signal. By associating a CCE index and a resource for a delay tolerant signal, signaling for notifying a frequency resource used to transmit the delay tolerant signal is not necessary. Thus, the base stations100and300can control the frequency assignment position of a delay tolerant signal while reducing the amount of downlink control information. Furthermore, due to the base stations100and300controlling the assignment of CCEs, it becomes possible for a radio resource for a delay tolerant signal to be changed by the base stations100and300. Next, resource assignment methods during a DL data self-contained operation (DL data self-contained resource assignment methods) will be described. <DL Data Self-Contained Resource Assignment Method 1> The terminal200transmits a delay tolerant signal within a frequency band having assigned thereto a response signal, which is transmitted within the same time unit. FIG.19depicts an example of the assignment of a frequency resource for a response signal (ACK) and a delay tolerant signal according to DL data self-contained resource assignment method 1. InFIG.19, the terminal200specifies an assigned resource (ACK resource) for a response signal associated with a CCE (CCE #X inFIG.19) to which a DL assignment addressed thereto is associated. The terminal200then specifies a resource within the same frequency band as the ACK resource as the assigned resource for a delay tolerant signal. It should be noted that, althoughFIG.19depicts an example in which the assigned resource for a delay tolerant signal is the same as for a response signal, the assigned resource for the delay tolerant signal may not be the same as long as it is within the band to which the response signal is assigned. Furthermore, in a case where a response signal and a delay tolerant signal are mapped to a code region (OCC (orthogonal cover code) number or cyclic shift number) in a manner similar to a response signal in LTE, a configuration may be adopted in which a delay tolerant signal is transmitted in the same radio resource as an ACK code region. In this way, a delay tolerant signal is mapped to within the same frequency band as the frequency band having assigned thereto a response signal for downlink data transmitted in the same time unit as the delay tolerant signal. By associating the frequency assignment position of the delay tolerant signal with the response signal, the amount of downlink control information can be reduced. Furthermore, since the frequency assignment position of the delay tolerant signal is the same as for the response signal, scheduling in the base station100becomes easy. 
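The following Python sketch illustrates DL data self-contained resource assignment method 1; it is not part of the configuration described above. The ACK resource, derived from the CCE index of the DL assignment as in the common resource assignment method, is taken as an input, and the delay tolerant signal is placed within the same band as that resource, separated from the response signal by a different code resource (OCC number). The data structure, the field names, and the particular code-resource separation are assumptions.

# Illustrative sketch of DL data self-contained resource assignment method 1
# (structure and field names are assumptions, not part of the specification).
from dataclasses import dataclass

@dataclass
class UplinkResource:
    band: tuple          # (start_hz, end_hz)
    occ: int = 0         # orthogonal cover code number (code region)
    cyclic_shift: int = 0

def resources_within_ack_band(ack_band: tuple):
    """Place the delay tolerant signal within the band of the ACK resource.

    The ACK resource itself is derived from the CCE index of the DL assignment,
    as in the common resource assignment method; here it is taken as an input.
    """
    ack = UplinkResource(band=ack_band, occ=0)
    # The delay tolerant signal shares the frequency band of the response
    # signal; a different code resource (OCC number) keeps the two orthogonal.
    delay_tolerant = UplinkResource(band=ack_band, occ=1)
    return ack, delay_tolerant

# Example: ACK resource in the band 10.00 MHz to 10.18 MHz.
print(resources_within_ack_band((10.00e6, 10.18e6)))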
<DL Data Self-Contained Resource Assignment Method 2> The terminal200transmits a delay tolerant signal within a frequency band having assigned thereto downlink data, which is transmitted within the same time unit. FIG.20depicts an example of the assignment of a frequency resource for downlink data and a delay tolerant signal according to DL data self-contained resource assignment method 2. InFIG.20, the terminal200specifies an assigned resource for the downlink data (DL data) by means of a DL assignment addressed thereto. The terminal200then specifies a resource within the same frequency band as the frequency band assigned to the downlink data, as the assigned resource for a delay tolerant signal. It should be noted that, althoughFIG.20depicts an example in which the assigned resource for a delay tolerant signal is the same as for downlink data, the assigned resource for the delay tolerant signal may not be the same as long as it is within the band to which the downlink data is assigned. Furthermore, in a case where downlink data is mapped to non-contiguous bands, the terminal200may select one or more bands in descending order of bandwidth from among the non-contiguous bands. Furthermore, in a case where downlink data is transmitted by means of MU-MIMO, delay tolerant signals of a plurality of terminals200are assigned to the same band. In this case, a method is feasible in which delay tolerant signals are also transmitted by means of MU-MIMO in a manner similar to the downlink data. Furthermore, a method may be adopted in which the assigned band for downlink data is divided by the number of terminals multiplexed by means of MU-MIMO, and, for example, a port number for a reference signal (also referred to as a demodulation reference signal: DMRS) for demodulating downlink data and a divided frequency band are associated. In this way, a delay tolerant signal is mapped to within the same frequency band as the frequency band having assigned thereto downlink data transmitted in the same time unit as the delay tolerant signal. By associating the frequency assignment position of the delay tolerant signal with the downlink data, the amount of downlink control information can be reduced. Furthermore, as depicted inFIG.19, in a case where the assigned bandwidth for a response signal (ACK) is narrow when the assigned resource for a delay tolerant signal is associated with the assigned resource for the response signal, the bandwidth for the delay tolerant signal also becomes small. In contrast, as depicted inFIG.20, by associating the assigned resource for a delay tolerant signal with an assigned resource for downlink data, it is possible to prevent the assigned bandwidth for the delay tolerant signal becoming narrow. Furthermore, downlink data is scheduled and therefore there is a high possibility of downlink data being assigned to a frequency band having a high SINR. Thus, in the case of a TDD system, a scheduling gain can be obtained by a delay tolerant signal being transmitted in the same band as downlink data. <UL Data Self-Contained Resource Assignment Method> Next, a resource assignment method during a UL data self-contained operation (UL data self-contained resource assignment method) will be described. The terminal400transmits a delay tolerant signal within a frequency band having assigned thereto uplink data, which is transmitted within the same time unit. 
FIG.21depicts an example of the assignment of a frequency resource for uplink data and a delay tolerant signal according to a UL data self-contained resource assignment method. InFIG.21, the terminal400specifies an assigned resource for the uplink data (UL data) by means of a UL assignment addressed thereto. The terminal400then specifies a resource within the same frequency resource as the frequency resource to which the uplink data has been assigned, as the assigned resource for a delay tolerant signal. It should be noted that, althoughFIG.21depicts an example in which the assigned resource for a delay tolerant signal is the same as for uplink data, the assigned resource for the delay tolerant signal may not be the same as long as it is within the band to which the uplink data is assigned. Furthermore, in a case where uplink data is transmitted by means of MU-MIMO, delay tolerant signals of a plurality of terminals400are assigned to the same band. In this case, a method is feasible in which delay tolerant signals are also transmitted by means of MU-MIMO in a manner similar to uplink data. Furthermore, a method may be adopted in which the assigned band for uplink data is divided by the number of terminals multiplexed by means of MU-MIMO, and, for example, a port number for a reference signal (DMRS) for demodulating uplink data and a divided frequency band are associated. In this way, a delay tolerant signal is mapped to within the same frequency band as the frequency band having assigned thereto uplink data transmitted in the same time unit as the delay tolerant signal. By associating the frequency assignment position of the delay tolerant signal with the uplink data, the amount of downlink control information can be reduced. Furthermore, since the frequency assignment position of the uplink data is the same as for the delay tolerant signal, scheduling in the base station300becomes easy. Furthermore, since the uplink data is scheduled, there is a high possibility of a signal being assigned to a frequency band having a high SINR. Thus, a scheduling gain can be obtained by the delay tolerant signal being transmitted in the same band as the uplink data. Hereinabove, the details of resource assignment methods for a delay tolerant signal have been described. In this way, in the present embodiment, it is not necessary to notify a frequency resource (assigned resource information) used to transmit a delay tolerant signal, from the base stations100and300to the terminals200and400using a downlink control signal, and therefore it is possible to prevent an increase in the overhead for control signals. Embodiment 3 In embodiments 1 and 2, methods have been described in which performance is improved by, in a case where a self-contained operation is used, mapping a delay tolerant signal to the end of an uplink transmission region, that is, a gap period (gap #2) that takes into consideration the processing time of a base station. However, in a case where the processing time for receiving/decoding downlink data in a terminal is long, it is necessary to increase a gap period (gap #1) that is a switching point between a downlink transmission region and an uplink transmission region, and therefore the overhead for gap #1 becomes large. 
Thus, in the present embodiment, a method will be described in which the overhead for gap #1 is reduced by mapping a delay tolerant signal to within a gap period (gap #1) that is subsequent to a downlink transmission region, that is, a period that is provided with consideration being given to the processing time of a terminal. [Overview of Communication System] A communication system that carries out a DL data self-contained operation according to the present embodiment is provided with a base station500and a terminal600. Furthermore, a communication system that carries out a UL data self-contained operation according to each embodiment of the present disclosure is provided with a base station700and a terminal800. [Configuration of Base Station (During DL Data Self-Contained Operation)] FIG.22is a block diagram depicting a configuration of the base station500that carries out a DL data self-contained operation according to the present embodiment. InFIG.22, the base station500has a scheduler501, a delay tolerant signal controller502, a delay tolerant signal generator503, a delay tolerant signal encoder/modulator504, a control signal generator505, a control signal encoder/modulator506, a data encoder507, a retransmission controller508, a data modulator509, a signal assignment unit510, the transmitter109, the antenna110, the receiver111, a signal extraction unit511, a demodulator/decoder512, and a determination unit513. The base station500depicted inFIG.22transmits a downlink signal that includes a control signal (a DL assignment), downlink data (DL data), or a delay tolerant signal in a downlink transmission region, in a time unit (DL data self-contained time unit) that includes the “downlink transmission region”, an “uplink transmission region”, and a “gap period”. Furthermore, the base station500receives an uplink signal that includes a response signal (and may also include a UCI) that is transmitted from the terminal600in the uplink transmission region, in the time unit. In the base station500, the scheduler501determines scheduling information (for example, the ID of an assigned terminal, assigned resource information for the terminal600(a frequency, a time, and a coding resource), data demodulation reference signal information, a modulation/encoding scheme, assigned resource information for a response signal (a frequency, a time, and a coding resource), or the like) relating to a delay tolerant signal, a control signal (a DL assignment), and downlink data (DL data) in the time unit, with respect to the terminal600. The scheduler501outputs the determined scheduling information to the delay tolerant signal generator503, the control signal generator505, the data encoder507, and the signal assignment unit510. The delay tolerant signal controller502determines information regarding a signal (for example, the signal type) that is generated as a delay tolerant signal, which is a signal or a channel that is transmitted from the base station500at the end of the downlink transmission region within the time unit, and outputs information indicating the determined content to the delay tolerant signal generator503and the control signal generator505. It should be noted that the details of a delay tolerant signal determined in the delay tolerant signal controller502will be described hereinafter. 
The delay tolerant signal generator503generates a delay tolerant signal on the basis of information that is input from the delay tolerant signal controller502and scheduling information that is instructed from the scheduler501, and outputs the generated delay tolerant signal to the delay tolerant signal encoder/modulator504. The delay tolerant signal encoder/modulator504encodes and modulates the delay tolerant signal (a bit sequence) that is input from the delay tolerant signal generator503, and outputs a modulated delay tolerant signal (symbol string) to the signal assignment unit510. The control signal generator505generates a control signal (a DL assignment) for the terminal600on the basis of information that is input from each of the scheduler501and the delay tolerant signal controller502. Control signals include a signal of a cell-specific higher layer, a signal of a group or RAT-specific higher layer, a signal of a terminal-specific higher layer, assigned resource information for downlink data, assigned resource information for a delay tolerant signal, information instructing the type of delay tolerant signal (hereinafter, referred to as delay tolerant signal type information), assigned resource information for a response signal, and the like. The control signal generator505generates a control information bit string using such control information, and outputs the generated control information bit string to the control signal encoder/modulator506. It should be noted that assigned resource information for a delay tolerant signal or the delay tolerant signal type information may be notified in advance by means of a higher layer notification from the base station500to the terminal600. In this case, the assigned resource information for a delay tolerant signal or the delay tolerant signal type information is not included in a control signal (a DL assignment). FIG.23depicts an example of the delay tolerant signal type information. InFIG.23, delay tolerant signal type information (an index) and the types of delay tolerant signals transmitted from the base station500are associated. The control signal encoder/modulator506encodes and modulates the control signal (a bit string) received from the control signal generator505, and outputs a modulated control signal to the signal assignment unit510. The data encoder507carries out error correction encoding on transmission data (downlink data) in accordance with an encoding scheme received from the scheduler501, and outputs an encoded data signal to the retransmission controller508. The retransmission controller508, at the time of the first transmission, retains the encoded data signal received from the data encoder507and also outputs the encoded data signal to the data modulator509. Furthermore, the retransmission controller508, at the time of a retransmission, controls the retained data on the basis of a determination result (an ACK/NACK) from the determination unit513. Specifically, the retransmission controller508, upon receiving a NACK with respect to the data signal, outputs the corresponding retained data to the data modulator509. Furthermore, the retransmission controller508, upon receiving an ACK with respect to the data signal, discards the corresponding retained data and ends the transmission of downlink data. The data modulator509modulates a data signal received from the retransmission controller508, and outputs the modulated data signal (symbol string) to the signal assignment unit510. 
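The behavior of the retransmission controller described above can be summarized by the following Python sketch, which is given only for illustration: the encoded data is retained at the time of the first transmission, output again upon a NACK, and discarded upon an ACK. The class name, the method names, and the use of a process identifier are assumptions.

# Illustrative sketch of the retransmission control behavior described above
# (class, method names, and the process identifier are assumptions).
class RetransmissionController:
    def __init__(self):
        self._retained = {}  # process identifier -> encoded data signal

    def first_transmission(self, process_id, encoded_data):
        """Retain the encoded data and pass it on for modulation."""
        self._retained[process_id] = encoded_data
        return encoded_data

    def on_determination(self, process_id, is_ack: bool):
        """Handle the ACK/NACK determination result from the determination unit."""
        if is_ack:
            # Transmission succeeded: discard the retained data, nothing to resend.
            self._retained.pop(process_id, None)
            return None
        # NACK: output the retained data again for retransmission.
        return self._retained.get(process_id)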
The signal assignment unit510maps a delay tolerant signal received from the delay tolerant signal encoder/modulator504, a control signal received from the control signal encoder/modulator506, and a data signal received from the data modulator509to a radio resource instructed from the scheduler501. The signal assignment unit510outputs a downlink signal for which signal mapping has been carried out, to the transmitter109. The transmitter109, the antenna110, and the receiver111operate in a manner similar to the transmitter109, the antenna110, and the receiver111provided in the base station100. The signal extraction unit511extracts a radio resource portion in which an uplink response signal from the terminal600has been transmitted, from the reception signal, and outputs a reception response signal to the demodulator/decoder512. The demodulator/decoder512carries out equalization, demodulation, and decoding on the reception response signal that is received from the signal extraction unit511, and outputs a decoded bit sequence to the determination unit513. The determination unit513determines whether a response signal for downlink data, transmitted from the terminal600, indicates an ACK or NACK with respect to the downlink data, on the basis of the bit sequence that is input from the demodulator/decoder512. The determination unit513outputs a determination result (an ACK or NACK) to the retransmission controller508. [Configuration of Terminal (During DL Data Self-Contained Operation)] FIG.24is a block diagram depicting a configuration of the terminal600that carries out a DL data self-contained operation according to the present embodiment. InFIG.24, the terminal600has the antenna201, the receiver202, a signal extraction unit601, a control signal demodulator/decoder602, a delay tolerant signal demodulator/decoder603, a delay tolerant signal determination unit604, a data demodulator605, a data decoder606, an error detector607, a response signal generator608, an encoder/modulator609, a signal assignment unit610, and the transmitter213. The terminal600depicted inFIG.24receives a downlink signal that includes a delay tolerant signal, a control signal (a DL assignment), or downlink data (DL data) transmitted from the base station500in a downlink transmission region, in a time unit (DL data self-contained time unit) that includes the “downlink transmission region”, a “gap period”, and an “uplink transmission region”. Furthermore, the terminal600transmits an uplink signal that includes a response signal for downlink data (and may also include a UCI) in the uplink transmission region in the time unit. In the terminal600, the antenna201and the receiver202operate in a manner similar to the antenna201and the receiver202provided in the terminal200. The signal extraction unit601extracts a signal portion that includes a control signal from a baseband signal received from the receiver202, and outputs the signal portion to the control signal demodulator/decoder602. Furthermore, the signal extraction unit601extracts a signal portion that includes downlink data from the baseband signal, and outputs the signal portion to the data demodulator605. Furthermore, the signal extraction unit601extracts a signal portion that includes a delay tolerant signal from the baseband signal, and outputs the signal portion to the delay tolerant signal demodulator/decoder603. 
The control signal demodulator/decoder602carries out blind decoding on a control signal received from the signal extraction unit601, and attempts decoding for a control signal addressed thereto. The control signal demodulator/decoder602, when having determined as a result of the blind decoding that the control signal is a control signal addressed thereto, outputs, to the data demodulator605, assigned resource information for downlink data included in the control signal (the ID of an assigned terminal, assigned resource information (a frequency, a time, and a coding resource), data demodulation reference signal information, a modulation/encoding scheme, or the like), outputs assigned resource information (a frequency, a time, and a coding resource) for a response signal to the signal assignment unit610, and outputs assigned resource information for a delay tolerant signal and delay tolerant signal type information to the delay tolerant signal demodulator/decoder603. The delay tolerant signal demodulator/decoder603carries out equalization, demodulation, and error correction decoding for a delay tolerant signal that is input from the signal extraction unit601, on the basis of the assigned resource information for the delay tolerant signal and the delay tolerant signal type that are input from the control signal demodulator/decoder602, and outputs a decoded bit sequence to the delay tolerant signal determination unit604. The delay tolerant signal determination unit604determines whether or not the delay tolerant signal (a bit sequence) that is input from the delay tolerant signal demodulator/decoder603has been correctly received. The delay tolerant signal determination unit604, when having determined that the delay tolerant signal has been correctly received, outputs the delay tolerant signal. The data demodulator605demodulates downlink data received from the signal extraction unit601, on the basis of assigned resource information for downlink data, received from the control signal demodulator/decoder602, and outputs demodulated downlink data to the data decoder606. The data decoder606decodes the downlink data received from the data demodulator605, and outputs decoded downlink data to the error detector607. The error detector607carries out error detection by means of a CRC, for example, with respect to the downlink data received from the data decoder606, and outputs an error detection result (an ACK or NACK) to the response signal generator608. Furthermore, the error detector607outputs, as reception data, downlink data determined as having no errors as a result of the error detection. The response signal generator608, using the error detection result (an ACK or NACK) received from the error detector607, generates a response signal (a bit sequence) for the received downlink data, and outputs the response signal to the encoder/modulator609. The encoder/modulator609carries out error correction encoding on the response signal (a bit sequence) received from the response signal generator608, modulates an encoded bit sequence, and outputs a modulated symbol sequence to the signal assignment unit610. The signal assignment unit610maps a signal received from the encoder/modulator609to a resource (a time, a frequency, and a coding resource) within a time unit for a self-contained operation, instructed from the control signal demodulator/decoder602. The transmitter213operates in a manner similar to the transmitter213provided in the terminal200. 
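As an illustration of how the fields of a decoded DL assignment are distributed among the components of the terminal described above, the following Python sketch may be considered; the field names and the placeholder values in the example are assumptions and do not appear in the specification.

# Illustrative routing of the fields of a decoded DL assignment inside the
# terminal (field names and example values are assumptions).
def route_dl_assignment(dl_assignment: dict) -> dict:
    """Return which terminal component consumes each field of the DL assignment."""
    return {
        "data demodulator 605": dl_assignment["dl_data_resource"],          # assigned resource for downlink data
        "signal assignment unit 610": dl_assignment["response_resource"],   # assigned resource for the response signal
        "delay tolerant signal demodulator/decoder 603": (
            dl_assignment["delay_tolerant_resource"],                       # assigned resource for the delay tolerant signal
            dl_assignment["delay_tolerant_type"],                           # delay tolerant signal type information
        ),
    }

# Example with placeholder values.
print(route_dl_assignment({
    "dl_data_resource": "assumed PRB set A",
    "response_resource": "assumed uplink resource B",
    "delay_tolerant_resource": "assumed PRB set C",
    "delay_tolerant_type": 2,
}))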
[Configuration of Base Station (During UL Data Self-Contained Operation)] FIG.25is a block diagram depicting a configuration of the base station700that carries out a UL data self-contained operation according to the present embodiment. InFIG.25, the base station700has a scheduler701, a delay tolerant signal controller702, a delay tolerant signal generator703, a delay tolerant signal encoder/modulator704, a control signal generator705, a control signal encoder/modulator706, a signal assignment unit707, the transmitter109, the antenna110, the receiver111, a signal extraction unit708, a data demodulator709, a retransmission synthesis decoder710, and an error detector711. The base station700depicted inFIG.25transmits a downlink signal that includes a delay tolerant signal and a UL assignment in a downlink transmission region of a time unit (UL data self-contained time unit) that includes the “downlink transmission region”, a “gap period”, and an “uplink transmission region”. Furthermore, the base station700receives an uplink signal that includes uplink data (and may also include a UCI) that is transmitted from the terminal800in the uplink transmission region of the time unit. In the base station700, the scheduler701schedules the retransmission of uplink data in a case where an error detection result indicating that there is an error in the previous uplink data is input from the error detector711. Furthermore, the scheduler701schedules a new packet for the terminal800in a case where an error detection result indicating that there are no errors in the previous uplink data is input from the error detector711. For example, the scheduler701determines scheduling information (for example, the ID of an assigned terminal, assigned resource information for the terminal800(a frequency, a time, and a coding resource), data demodulation reference signal information, a modulation/encoding scheme for uplink data, or the like) relating to a delay tolerant signal, a control signal (a UL assignment), and uplink data (UL data) in a time unit, with respect to the terminal800. The scheduler701outputs the determined scheduling information to the delay tolerant signal generator703, the control signal generator705, and the signal assignment unit707. The delay tolerant signal controller702determines information (for example, the type of delay tolerant signal) relating to a signal that is generated as a delay tolerant signal, which is a signal or a channel that is transmitted from the base station700at the end of the downlink transmission region within the time unit, and outputs information indicating the determined content to the delay tolerant signal generator703and the control signal generator705. It should be noted that the details of the signal types determined in the delay tolerant signal controller702will be described hereinafter. The delay tolerant signal generator703generates a delay tolerant signal on the basis of information that is input from the delay tolerant signal controller702and scheduling information that is instructed from the scheduler701, and outputs the generated delay tolerant signal to the delay tolerant signal encoder/modulator704. The delay tolerant signal encoder/modulator704encodes and modulates the delay tolerant signal (a bit sequence) that is input from the delay tolerant signal generator703, and outputs a modulated delay tolerant signal (symbol string) to the signal assignment unit707. 
The control signal generator705generates a control signal (a UL assignment) for the terminal800on the basis of information that is input from each of the scheduler701and the delay tolerant signal controller702. Control signals include a signal of a cell-specific higher layer, a signal of a group or RAT-specific higher layer, a signal of a terminal-specific higher layer, assigned resource information for uplink data, information instructing a retransmission or a new transmission of uplink data, assigned resource information for a delay tolerant signal, information indicating the type of delay tolerant signal (delay tolerant signal type information), or the like. The control signal generator705generates a control information bit string using such control information, and outputs the generated control information bit string to the control signal encoder/modulator706. It should be noted that assigned resource information for a delay tolerant signal or the delay tolerant signal type information may be notified in advance by means of a higher layer notification from the base station700to the terminal800. In this case, the assigned resource information for a delay tolerant signal or the delay tolerant signal type information is not included in a control signal (a UL assignment). The control signal encoder/modulator706encodes and modulates a control signal received from the control signal generator705, and outputs a modulated control signal to the signal assignment unit707. The signal assignment unit707maps a delay tolerant signal received from the delay tolerant signal encoder/modulator704and a control signal received from the control signal encoder/modulator706to a radio resource (an assigned time/frequency/coding resource) instructed from the scheduler701. The signal assignment unit707outputs a downlink signal for which signal mapping has been carried out, to the transmitter109. The transmitter109, the antenna110, and the receiver111operate in a manner similar to the transmitter109, the antenna110, and the receiver111provided in the base station100. The signal extraction unit708extracts a radio resource portion in which uplink data from the terminal800has been transmitted, from a reception signal that is input from the receiver111, and outputs the radio resource portion to the data demodulator709. The data demodulator709carries out equalization and demodulation processing on uplink data received from the signal extraction unit708, and outputs demodulated uplink data (a bit sequence) to the retransmission synthesis decoder710. The retransmission synthesis decoder710, in a case where uplink data to be decoded of the terminal800is retained (a case where the uplink data is retransmission data), synthesizes the uplink data retained and uplink data that has been output from the data demodulator709, and carries out decoding processing on the synthesized uplink data. The retransmission synthesis decoder710, in a case where uplink data of the terminal800is not retained (a case where the uplink data is the first packet), carries out decoding processing without carrying out synthesis processing for uplink data. The retransmission synthesis decoder710then outputs decoded uplink data to the error detector711. Furthermore, the retransmission synthesis decoder710, in a case where a detection result from the error detector711indicates that there are no errors, deletes the retained uplink data of the terminal800.
The error detector711carries out error detection by means of a CRC, for example, with respect to uplink data received from the retransmission synthesis decoder710, and outputs an error detection result (an ACK or NACK) to the scheduler701and the retransmission synthesis decoder710. Furthermore, the error detector711outputs, as reception data, uplink data determined as having no errors as a result of the error detection. [Configuration of Terminal (During UL Data Self-Contained Operation)] FIG.26is a block diagram depicting a configuration of the terminal800that carries out a UL data self-contained operation according to the present embodiment. InFIG.26, the terminal800has the antenna201, the receiver202, a signal extraction unit801, a control signal demodulator/decoder802, a delay tolerant signal demodulator/decoder803, a delay tolerant signal determination unit804, a data encoder805, a retransmission controller806, a data modulator807, a signal assignment unit808, and the transmitter213. The terminal800depicted inFIG.26receives a downlink signal that includes a delay tolerant signal or a control signal (a UL assignment) transmitted from the base station700in a downlink transmission region of a time unit (UL data self-contained time unit) that includes the “downlink transmission region”, a “gap period”, and an “uplink transmission region”. Furthermore, the terminal800transmits an uplink signal that includes uplink data (and may also include a UCI) in the uplink transmission region of the time unit. In the terminal800, the antenna201and the receiver202operate in a manner similar to the antenna201and the receiver202provided in the terminal200. The signal extraction unit801extracts a control signal from a baseband signal received from the receiver202, and outputs the control signal to the control signal demodulator/decoder802. Furthermore, the signal extraction unit801extracts a signal portion that includes a delay tolerant signal from the baseband signal, and outputs the delay tolerant signal to the delay tolerant signal demodulator/decoder803. The control signal demodulator/decoder802carries out blind decoding on a control signal received from the signal extraction unit801, and attempts decoding for a control signal addressed thereto. The control signal demodulator/decoder802, when having determined as a result of the blind decoding that the control signal is a control signal addressed thereto, outputs, to the signal assignment unit808, assigned resource information for uplink data (the ID of an assigned terminal, assigned resource information (a frequency, a time, and a coding resource), data demodulation reference signal information, a modulation/encoding scheme, or the like), included in the control signal, outputs information instructing a retransmission or a new transmission of uplink data to the retransmission controller806, and outputs assigned resource information for a delay tolerant signal and delay tolerant signal type information to the delay tolerant signal demodulator/decoder803. The delay tolerant signal demodulator/decoder803carries out equalization, demodulation, and error correction decoding for a delay tolerant signal that is input from the signal extraction unit801, on the basis of the assigned resource information for the delay tolerant signal and the delay tolerant signal type information that are input from the control signal demodulator/decoder802, and outputs a decoded bit sequence to the delay tolerant signal determination unit804. 
The delay tolerant signal determination unit804determines whether or not the delay tolerant signal (a bit sequence) that is input from the delay tolerant signal demodulator/decoder803has been correctly received. The delay tolerant signal determination unit804, when having determined that the delay tolerant signal has been correctly received, outputs the delay tolerant signal. The data encoder805carries out error correction encoding on transmission data (uplink data), and outputs an encoded data signal to the retransmission controller806. The retransmission controller806determines whether or not the uplink data is the first packet or a retransmission packet on the basis of information received from the control signal demodulator/decoder802. In the case of the first packet, the retransmission controller806retains the encoded uplink data received from the data encoder805and also outputs the encoded uplink data to the data modulator807. Furthermore, in the case of the first packet, the retransmission controller806determines that the transmission and reception of the previous transmission packet has been successful and discards the retained data. However, in the case of a retransmission packet, the retransmission controller806outputs the corresponding retained data to the data modulator807. The data modulator807modulates the uplink data received from the retransmission controller806, and outputs the modulated uplink data to the signal assignment unit808. The signal assignment unit808maps the uplink data received from the data modulator807to a resource (a time, a frequency, and a coding resource) within a time unit for a self-contained operation, instructed from the control signal demodulator/decoder802. The signal assignment unit808outputs an uplink signal for which signal mapping has been carried out, to the transmitter213. The transmitter213operates in a manner similar to the transmitter213provided in the terminal200. [Operation of Base Stations500and700and Terminals600and800] A detailed description will be given regarding an operation in the base stations500and700and the terminals600and800having the above configurations. FIG.27depicts an example of a transmission sequence in each of a base station (eNB) and a terminal (UE) during the DL data self-contained operation ofFIG.1A. Furthermore,FIG.28depicts an example of a transmission sequence in each of the base station500and the terminal600during the DL data self-contained operation according to the present embodiment. InFIG.27, in each time unit, gap #1 that takes into consideration the propagation delay time and the processing time of the terminal is arranged between a downlink transmission region and an uplink transmission region (at the end of the downlink transmission region), and gap #2 that takes into consideration the processing time of the base station is arranged after the uplink transmission region (at the end of the uplink transmission region). For example, the terminal carries out reception processing for downlink data received in the downlink transmission region, in the period of gap #1 depicted inFIG.27, and transmits a response signal (ACK) for the downlink data in the uplink transmission region. 
However, in the present embodiment, as depicted inFIG.28, in a DL data self-contained operation, a delay tolerant signal, for which a delay can be tolerated more than for a control signal or downlink data (DL data) that is mapped to the downlink transmission region, is mapped to a period that takes into consideration the processing time of terminal600within gap #1 arranged between the downlink transmission region and the uplink transmission region depicted inFIG.27. That is, the base station500transmits a delay tolerant signal that has been mapped to a period corresponding to gap #1 between the downlink transmission region and the uplink transmission region, and the terminal600receives the delay tolerant signal that has been mapped to the period corresponding to gap #1. It should be noted that, as depicted inFIG.28, in the base station500, out of the period corresponding to gap #1, the length of the period in which the delay tolerant signal is arranged corresponds to the processing time of the terminal600, and the remaining period remains as a gap period that takes into consideration a propagation delay between the base station500and the terminal600. In this case also, the terminal600, upon receiving downlink data in the downlink transmission region, is able to carry out reception processing for downlink data in the transmission period for the delay tolerant signal (corresponding to gap #1), and transmit a response signal for the downlink data in the uplink transmission region. Furthermore, the terminal600, upon receiving a delay tolerant signal transmitted from the base station500at the end of the downlink transmission region, carries out predetermined processing (demodulation/decoding processing or the like) on the delay tolerant signal. As mentioned above, a delay tolerant signal is a signal for which it is not always necessary to carry out reception/decoding processing or the like by the time unit that is subsequent to the time unit in which the delay tolerant signal has been received by the terminal600. That is, since a delay is tolerated for a delay tolerant signal, the terminal600, for example, can carry out demodulation/decoding processing for a delay tolerant signal in a period corresponding to the next time unit. It should be noted that, althoughFIG.28relates to during a DL data self-contained operation, in a UL data self-contained operation it is also sufficient to similarly configure a time unit in which a delay tolerant signal is mapped to a period corresponding to the processing time of the terminal600within gap #1 depicted inFIG.1B. It is thereby possible to reduce the overhead for gaps while maintaining the average delay time from a transmission buffer for the base station500being generated to the base station500receiving a response signal to downlink data from the terminal600, and the average delay time from a transmission buffer for the terminal800being generated to the terminal800completing transmission of the first uplink data. It should be noted that it is not always necessary for the base stations500and700to transmit a delay tolerant signal in each time unit. In a case where the base stations500and700do not transmit a delay tolerant signal, the time resource for a delay tolerant signal (the end of a downlink transmission region) becomes a gap period as inFIG.1AandFIG.1B. It is thereby possible to reduce power consumption by not carrying out excessive transmissions. 
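A rough numerical illustration of the time unit of FIG. 28 is given by the following Python sketch; all symbol counts are assumed example values and are not taken from the specification. The period that would otherwise be entirely gap #1 is split into a delay tolerant signal region sized to the terminal processing time and a remaining gap covering the propagation delay.

# Illustrative layout of a DL data self-contained time unit per FIG. 28
# (all symbol counts are assumed example values).
def build_dl_self_contained_layout(total_symbols=14,
                                   dl_control_symbols=1,
                                   dl_data_symbols=8,
                                   terminal_processing_symbols=2,
                                   propagation_gap_symbols=1,
                                   ul_response_symbols=1,
                                   gap2_symbols=1):
    layout = (
        ["DL assignment"] * dl_control_symbols +
        ["DL data"] * dl_data_symbols +
        # The part of gap #1 corresponding to the terminal processing time
        # now carries a delay tolerant signal transmitted by the base station.
        ["delay tolerant signal"] * terminal_processing_symbols +
        ["gap (propagation delay)"] * propagation_gap_symbols +
        ["ACK"] * ul_response_symbols +
        ["gap #2 (base station processing)"] * gap2_symbols
    )
    assert len(layout) == total_symbols, "symbol budget of the time unit exceeded"
    return layout

print(build_dl_self_contained_layout())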
[Types of Delay Tolerant Signals] Next, the types of delay tolerant signals that are generated in the delay tolerant signal generators503and703of the base stations500and700will be described in detail. Hereinafter, the types of delay tolerant signals (common delay tolerant signal types) with which a performance improvement can be expected due to being transmitted in both a DL data self-contained operation and a UL data self-contained operation will be described. <Common Delay Tolerant Signal Type 1> A delay tolerant signal in common delay tolerant signal type 1 is system information (a MIB: master information block) of the base stations500and700constituting broadcast information. A system bandwidth, the number of transmission antennas, and the like are included in a MIB. A MIB has no effect on the retransmission control of downlink data or uplink data even if the terminals600and800do not complete reception/decoding processing by the next time unit. That is, a MIB is a signal for which a delay can be tolerated compared to a control signal (a DL assignment or a UL assignment) or downlink data transmitted in a downlink transmission region. In this way, due to the base stations500and700transmitting a MIB in a gap period (part of gap #1) at the end of a downlink transmission region, in addition to the aforementioned effects, it is possible to increase the opportunities for the terminals600and800to receive system information of the base stations500and700. Therefore, it is possible to shorten the time required for the terminals600and800to connect to the base stations500and700. <Common Delay Tolerant Signal Type 2> A delay tolerant signal in common delay tolerant signal type 2 is system information (a SIB: system information block) of the base stations500and700constituting broadcast information. A SIB includes parameters relating to access to the base stations500and700, settings for common/shared channels (configuration), and the like. It should be noted that, in an LTE system, there are SIB 1 to SIB 11 as SIBs, and the content and period for transmission by each SIB are determined. One or more from among SIB 1 to SIB 11 may be SIBs that are transmitted as delay tolerant signals. A SIB has no effect on the retransmission control of downlink data or uplink data even if the terminals600and800do not complete reception/decoding processing by the next time unit. That is, a SIB is a signal for which a delay can be tolerated compared to a control signal (a DL assignment or a UL assignment) or downlink data transmitted in a downlink transmission region. In this way, due to the base stations500and700transmitting a SIB in a gap period (part of gap #1) at the end of a downlink transmission region, in addition to the aforementioned effects, it is possible to increase the opportunities for the terminals600and800to receive system information of the base stations500and700. Therefore, it is possible to shorten the time required for the terminals600and800to connect to the base stations500and700. <Common Delay Tolerant Signal Type 3> A delay tolerant signal in common delay tolerant signal type 3 is MBMS data, which is broadcast distribution data that is multicast/broadcast. MBMS data has no effect on the retransmission control of downlink data or uplink data even if the terminals600and800do not complete reception/decoding processing by the next time unit.
That is, MBMS data is a signal for which a delay can be tolerated compared to a control signal (a DL assignment or a UL assignment) or downlink data transmitted in a downlink transmission region. In this way, due to the base stations500and700transmitting MBMS data in a gap period (part of gap #1) at the end of a downlink transmission region, in addition to the aforementioned effects, it is possible to increase the opportunities for the terminals600and800to receive broadcast distribution data that is multicast/broadcast. <Common Delay Tolerant Signal Type 4> A delay tolerant signal in common delay tolerant signal type 4 is information instructing a time unit or symbol configuration that can be transmitted in a downlink and an uplink within a certain time period (sometimes also referred to as a DL/UL usage configuration). A DL/UL usage configuration has no effect on the retransmission control of downlink data or uplink data even if the terminals600and800do not complete reception/decoding processing by the next time unit. That is, a DL/UL usage configuration is a signal for which a delay can be tolerated compared to a control signal (a DL assignment or a UL assignment) or downlink data transmitted in a downlink transmission region. In this way, due to the base stations500and700transmitting a DL/UL usage configuration in a gap period (part of gap #1) at the end of a downlink transmission region, in addition to the aforementioned effects, it is possible to increase the opportunities for the terminals600and800to switch the time unit or symbol configuration for a downlink and an uplink within a certain time period. Therefore, the configuration of a frame can be altered more dynamically in accordance with the amount of downlink traffic and the amount of uplink traffic, and system throughput can be improved. Hereinabove, common delay tolerant signal types 1 to 4 have been described. In this way, in the present embodiment, a delay tolerant signal having no effect on the processing time of the terminal is mapped to a gap period that is a switching point from a downlink transmission region to an uplink transmission region in a time unit (a gap period that is arranged at the end of the downlink transmission region). It is thereby possible to reduce the overhead for gap periods while ensuring the processing times of the terminals600and800in gap periods. For example, even in a case where gap periods increase in length in consideration of the processing times of the terminals600and800, more assigned resources for delay tolerant signals can be ensured in proportion to the amount by which the gap periods have increased in length. Based on the above, according to the present embodiment, it is possible to suppress a decline in the utilization efficiency of radio resources caused by gap periods within time units. Embodiment 4 As described in embodiment 3, in a case where a self-contained operation is used, performance can be improved by transmitting a delay tolerant signal that has no effect on the processing time of the base station or the terminal, at the end of a downlink transmission region within a time unit (that is, a period for the processing time of the terminal within gap #1 inFIG.1AandFIG.1B). However, in embodiment 3, it is necessary to notify a frequency resource (assigned resource information) used to transmit a delay tolerant signal, from the base station to the terminal. Therefore, the amount of downlink control signals increases, and the overhead for control signals increases. 
Thus, in the present embodiment, a method will be described in which a delay tolerant signal is transmitted without the frequency resource used to transmit the delay tolerant signal being notified by means of a downlink control signal. It should be noted that the base station and the terminal according to the present embodiment have a basic configuration that is common to the base stations500and700and the terminals600and800according to embodiment 3, and will therefore be described with reference toFIG.22andFIG.24toFIG.26. In the present embodiment, the processing of the control signal generators505and705and the processing of the signal assignment units510and707of the base stations500and700inFIG.22andFIG.25are different from in embodiment 3. Specifically, the control signal generators505and705do not generate control information that indicates a frequency resource to which a delay tolerant signal is assigned. That is, the control signal generators505and705generate assigned resource information for downlink data, uplink data, or a response signal as control information relating to a frequency resource assigned to the terminals600and800. The signal assignment units510and707determine a frequency resource (assigned band) to which a delay tolerant signal is assigned, in accordance with a frequency band (assigned band) to which a downlink control signal, downlink data, or uplink data, transmitted in the same time unit as the delay tolerant signal, has been assigned. Hereinafter, a resource assignment method for a delay tolerant signal in the aforementioned signal assignment units510and707of the base stations500and700will be described in detail. First, resource assignment methods that are common to a DL data self-contained operation and a UL data self-contained operation (common resource assignment method) will be described. <Common Resource Assignment Method 1> The terminals600and800receive a delay tolerant signal in a frequency band notified by means of a higher layer. In this way, by notifying a transmission band for a delay tolerant signal by means of a higher layer notification, the amount of downlink control information can be reduced. Furthermore, data transmitted to both the terminals600and800such as broadcast information can be received by both the terminals600and800due to being arranged in a radio resource instructed by means of a higher layer notification. <Common Resource Assignment Method 2> The terminals600and800specify a frequency assignment position for a delay tolerant signal on the basis of a CCE index to which a downlink control signal (for example, a PDCCH that includes a DL assignment or a UL assignment) has been assigned. FIG.29depicts an example of the assignment of a frequency resource for a delay tolerant signal (the delay tolerant signal ofFIG.29) that is based on a CCE according to common resource assignment method 2. In the example depicted inFIG.29, during a DL data self-contained operation, an index of a CCE (downlink resource) to which a DL assignment is assigned and a frequency resource (uplink resource) to which a response signal is assigned are associated on a one-to-one basis. InFIG.29, in addition, the CCE index to which the DL assignment is assigned and a frequency resource (uplink resource) to which a delay tolerant signal is assigned are associated on a one-to-one basis. Here, the number of CCEs, for example, is a value obtained by dividing the number of REs forming a downlink control signal (PDCCH) by 36 (1 CCE=36 REs). 
Thus, for instance, as an example of the association between CCEs and frequency assignment positions, a usable bandwidth is divided by the number of CCEs, and a usable frequency band is associated with each CCE. The base station500then transmits the delay tolerant signal which is mapped to all or some of a frequency band that is a resource associated on a one-to-one basis in relation to the delay tolerant signal with the index of the CCE (CCE #X inFIG.29) used to transmit the DL assignment for the corresponding terminal600. Furthermore, the terminal600specifies, as an assigned resource for the delay tolerant signal, all or some of a frequency band that is a resource associated on a one-to-one basis in relation to the delay tolerant signal with the index of the CCE to which the DL assignment addressed thereto has been assigned. It should be noted that, althoughFIG.29depicts a DL data self-contained operation, similarly also for a UL data self-contained operation, it is sufficient for the index of a CCE used to transmit a UL assignment and a resource used to transmit a delay tolerant signal to be associated on a one-to-one basis. In this way, a delay tolerant signal is mapped to a resource associated on a one-to-one basis with a resource (CCE index) used to transmit assignment information (a DL assignment or a UL assignment) indicating a resource assignment for data transmitted in the same time unit as the delay tolerant signal. By associating a CCE index and a resource for a delay tolerant signal, signaling for notifying a frequency resource used to transmit the delay tolerant signal is not necessary. Thus, the base stations500and700can control the frequency assignment position of a delay tolerant signal while reducing the amount of downlink control information. Furthermore, due to the base stations500and700controlling the assignment of CCEs, it becomes possible for a radio resource for a delay tolerant signal to be changed by the base stations500and700. <DL Data Self-Contained Resource Assignment Method> Next, a resource assignment method during a DL data self-contained operation (DL data self-contained resource assignment methods) will be described. The terminal600receives a delay tolerant signal within a frequency band having assigned thereto downlink data, which is transmitted within the same time unit. FIG.30depicts an example of the assignment of a frequency resource for downlink data and a delay tolerant signal according to a DL data self-contained resource assignment method. InFIG.30, the base station500transmits a delay tolerant signal which is mapped to a resource within the same frequency band as the frequency band used to transmit downlink data for the corresponding terminal600. The terminal600specifies an assigned resource for the downlink data (DL data) by means of the DL assignment addressed thereto. The terminal600then specifies a resource within the same frequency band as the frequency band assigned to the downlink data, as the assigned resource for the delay tolerant signal. It should be noted that, althoughFIG.30depicts an example in which the assigned resource for a delay tolerant signal is the same as for downlink data, the assigned resource for the delay tolerant signal may not be the same as long as it is within the band to which the downlink data is assigned. Furthermore, in a case where downlink data is transmitted by means of MU-MIMO, delay tolerant signals of a plurality of terminals600are assigned to the same band. 
In this case, a method is feasible in which delay tolerant signals are also transmitted by means of MU-MIMO in a manner similar to downlink data. Furthermore, a method may be adopted in which the assigned band for downlink data is divided by the number of terminals multiplexed by means of MU-MIMO, and, for example, a port number for a reference signal (DMRS) for demodulating downlink data and a divided frequency band are associated. Furthermore, in a case where a delay tolerant signal is a multicast/broadcast signal, a method may be adopted in which the base station500transmits a delay tolerant signal within a frequency band to which downlink data is assigned, irrespective of the downlink data transmission method. In this way, a delay tolerant signal is mapped to within the same frequency band as the frequency band having assigned thereto downlink data transmitted in the same time unit as the delay tolerant signal. By associating the frequency assignment position of the delay tolerant signal with the downlink data, the amount of downlink control information can be reduced. Furthermore, since the frequency assignment position of the downlink data is the same as for the delay tolerant signal, scheduling in the base station500becomes easy. Furthermore, since the downlink data is scheduled, there is a high possibility of a signal being assigned to a frequency band having a high SINR. Thus, a scheduling gain can be obtained by the delay tolerant signal being transmitted in the same band as the downlink data. <UL Data Self-Contained Resource Assignment Method> Next, a resource assignment method during a UL data self-contained operation (UL data self-contained resource assignment method) will be described. The terminal800receives a delay tolerant signal within a frequency band having assigned thereto uplink data, which is transmitted within the same time unit. FIG.31depicts an example of the assignment of a frequency resource for uplink data and a delay tolerant signal according to a UL data self-contained resource assignment method. InFIG.31, the base station700transmits a delay tolerant signal which is mapped to a resource within the same frequency band as the frequency band used to transmit uplink data (UL data) for the corresponding terminal800. The terminal800specifies an assigned resource for the uplink data by means of a UL assignment addressed thereto. The terminal800then specifies a resource within the same frequency band as the frequency band assigned to the uplink data, as the assigned resource for a delay tolerant signal. It should be noted that, althoughFIG.31depicts an example in which the assigned resource for a delay tolerant signal is the same as for uplink data, the assigned resource for the delay tolerant signal may not be the same as long as it is within the band to which the uplink data is assigned. Furthermore, in a case where uplink data is transmitted by means of MU-MIMO, delay tolerant signals of a plurality of terminals800are assigned to the same band. In this case, a method is feasible in which delay tolerant signals are also transmitted by means of MU-MIMO in a manner similar to uplink data. Furthermore, a method may be adopted in which the assigned band for uplink data is divided by the number of terminals multiplexed by means of MU-MIMO, and, for example, a port number for a reference signal (DMRS) for demodulating uplink data and a divided frequency band are associated. 
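The DMRS-port-based division mentioned above can be pictured with a short sketch. It is kept hedged because the embodiment only states that the assigned band is divided by the number of MU-MIMO-multiplexed terminals and associated with a DMRS port number; the resource-block granularity, the 0-based port indexing, and the helper name below are illustrative assumptions.

```python
# Hedged sketch of the DMRS-port-based band division described above.

def dmrs_port_subband(assigned_rbs: range, num_mu_mimo_terminals: int,
                      dmrs_port_index: int) -> range:
    """Divide the band assigned to the MU-MIMO data by the number of
    multiplexed terminals and return the sub-band associated with a
    DMRS port index (0-based)."""
    rbs = list(assigned_rbs)
    per_terminal = len(rbs) // num_mu_mimo_terminals
    start = dmrs_port_index * per_terminal
    return range(rbs[start], rbs[start] + per_terminal)


# Example: data assigned to RBs 20..43 shared by 4 MU-MIMO terminals; the
# terminal using DMRS port 2 looks for its delay tolerant signal in RBs 32..37.
print(list(dmrs_port_subband(range(20, 44), 4, 2)))   # [32, 33, 34, 35, 36, 37]
```

The same sketch applies to the downlink case described earlier, with the band assigned to downlink data in place of the uplink band.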
In this way, a delay tolerant signal is mapped to within the same frequency band as the frequency band having assigned thereto uplink data transmitted in the same time unit as the delay tolerant signal. By associating the frequency assignment position of the delay tolerant signal with the uplink data, the amount of downlink control information can be reduced. Furthermore, since the frequency assignment position of the uplink data is the same as for the delay tolerant signal, scheduling in the base station700becomes easy. Furthermore, since the uplink data is scheduled, there is a high possibility of a signal being assigned to a frequency band having a high SINR. Thus, in the case of a TDD system, a scheduling gain can be obtained by the delay tolerant signal being transmitted in the same band as the uplink data. Hereinabove, the details of resource assignment methods for a delay tolerant signal have been described. In this way, in the present embodiment, it is not necessary to notify a frequency resource (assigned resource information) used to transmit a delay tolerant signal, from the base stations500and700to the terminals600and800using a downlink control signal, and therefore it is possible to prevent an increase in the overhead for control signals. Hereinabove, embodiments of the present disclosure have been described. It should be noted that embodiment 1 and embodiment 2 may be combined and implemented. Furthermore, the aforementioned embodiments describe examples of cases where an aspect of the present disclosure is configured by means of hardware; however, it is also possible for the present disclosure to be realized by means of software in cooperation with hardware. Furthermore, each functional block used in the description of the aforementioned embodiments is typically realized as an LSI, which is an integrated circuit. The integrated circuits may control the functional blocks used in the descriptions of the aforementioned embodiments, and may be provided with input and output. These may be implemented separately as single chips or may be implemented as a single chip in such a way as to include some or all of the functional blocks. LSIs have been mentioned here; however, the functional blocks are sometimes also referred to as ICs, system LSIs, super LSIs, or ultra LSIs depending on differences in the degree of integration. Furthermore, the circuit integration technique is not limited to that of an LSI, and a functional block may be realized using a dedicated circuit or a general-purpose processor. After an LSI has been manufactured, an FPGA (field-programmable gate array) that can be programmed, or a reconfigurable processor with which the connections and settings of circuit cells within the LSI can be reconfigured, may be used. In addition, if circuit integration technology that replaces LSI appears as a result of another technology that is an advancement in semiconductor technology or is derived therefrom, naturally, the other technology may be used to carry out the integration of functional blocks. Biotechnology applications and the like are also a possibility. 
A base station of the present disclosure is provided with: a transmitter that transmits a downlink signal in a downlink transmission region, in a time unit that includes the downlink transmission region, an uplink transmission region, and a gap period that is a switching point between the downlink transmission region and the uplink transmission region; and a receiver that receives an uplink signal in the uplink transmission region, in the time unit, in which a delay tolerant signal for which a delay is tolerated more than for the downlink signal and the uplink signal is mapped to within the gap period. In the base station of the present disclosure, the transmitter transmits the delay tolerant signal mapped to the gap period arranged between the downlink transmission region and the uplink transmission region within the time unit. In the base station of the present disclosure, the delay tolerant signal is at least one downlink signal from among a MIB (master information block), a SIB (system information block), MBMS (multimedia broadcast and multicast service) data, information indicating a time unit configuration of a downlink and an uplink, or downlink data. In the base station of the present disclosure, the receiver receives the delay tolerant signal mapped to the gap period arranged after the uplink transmission region. In the base station of the present disclosure, the delay tolerant signal is at least one uplink signal from among an SRS (sounding reference signal), information indicating a transmission beam pattern, CSI (channel state information), an SR (scheduling request), a BSR (buffer status report), a TCP ACK/SYC, a response signal that is transmitted in the same time unit as the delay tolerant signal, or uplink data that is transmitted in the same time unit as the delay tolerant signal. In the base station of the present disclosure, the delay tolerant signal is mapped to a resource associated on a one-to-one basis with a resource used to transmit assignment information indicating a resource assignment for data transmitted in the same time unit as the delay tolerant signal. In the base station of the present disclosure, the delay tolerant signal is mapped to within the same frequency band as a frequency band having assigned thereto a response signal for downlink data transmitted in the same time unit as the delay tolerant signal. In the base station of the present disclosure, the delay tolerant signal is mapped to within the same frequency band as a frequency band having assigned thereto data transmitted in the same time unit as the delay tolerant signal. A terminal of the present disclosure is provided with: a receiver that receives a downlink signal in a downlink transmission region, in a time unit that includes the downlink transmission region, an uplink transmission region, and a gap period that is a switching point between the downlink transmission region and the uplink transmission region; and a transmitter that transmits an uplink signal in the uplink transmission region, in the time unit, in which a delay tolerant signal for which a delay is tolerated more than for the downlink signal and the uplink signal is mapped to within the gap period. 
A communication method of the present disclosure includes: transmitting a downlink signal in a downlink transmission region, in a time unit that includes the downlink transmission region, an uplink transmission region, and a gap period that is a switching point between the downlink transmission region and the uplink transmission region; and receiving an uplink signal in the uplink transmission region, in the time unit, in which a delay tolerant signal for which a delay is tolerated more than for the downlink signal and the uplink signal is mapped to within the gap period. A communication method of the present disclosure includes: receiving a downlink signal in a downlink transmission region, in a time unit that includes the downlink transmission region, an uplink transmission region, and a gap period that is a switching point between the downlink transmission region and the uplink transmission region; and transmitting an uplink signal in the uplink transmission region, in the time unit, in which a delay tolerant signal for which a delay is tolerated more than for the downlink signal and the uplink signal is mapped to within the gap period. INDUSTRIAL APPLICABILITY An aspect of the present disclosure is useful in a mobile communication system. REFERENCE SIGNS LIST
100, 300, 500, 700 Base station
101, 301, 501, 701 Scheduler
102, 302, 502, 702 Delay tolerant signal controller
103, 303, 505, 705 Control signal generator
104, 304, 506, 706 Control signal encoder/modulator
105, 403, 507, 805 Data encoder
106, 404, 508, 806 Retransmission controller
107, 405, 509, 807 Data modulator
108, 212, 305, 408, 510, 610, 707, 808 Signal assignment unit
109, 213 Transmitter
110, 201 Antenna
111, 202 Receiver
112, 203, 306, 401, 511, 601, 708, 801 Signal extraction unit
113, 307, 603, 803 Delay tolerant signal demodulator/decoder
114, 308, 604, 804 Delay tolerant signal determination unit
115, 512 Demodulator/decoder
116, 513 Determination unit
200, 400, 600, 800 Terminal
204, 402, 602, 802 Control signal demodulator/decoder
205, 309, 605, 709 Data demodulator
206, 606 Data decoder
207, 311, 607, 711 Error detector
208, 608 Response signal generator
209, 609 Encoder/modulator
210, 406, 503, 703 Delay tolerant signal generator
211, 407, 504, 704 Delay tolerant signal encoder/modulator
310, 710 Retransmission synthesis decoder
138,772
11863479
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one aspect may be beneficially utilized on other aspects without specific recitation. DETAILED DESCRIPTION Aspects of the present disclosure provide apparatus, methods, processing systems, and computer readable mediums for signaling quasi-colocation (QCL) information for demodulation reference signals (DM-RS) associated with multiple transmission-reception points (multi-TRP) or multiple antenna panels (multi-panel). Aspects of the present disclosure provide signaling, to a UE, QCL assumptions linked to multiple antenna port groups. For example, a UE may receive, from a BS, QCL information associated with multiple antenna groups, and the UE may apply the QCL assumptions to receive transmissions via the multiple antenna port groups such as multi-TRP/multi-panel transmissions. The following description provides examples, and is not limiting of the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as appropriate. For instance, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. Also, features described with respect to some examples may be combined in some other examples. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to, or other than, the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects. The techniques described herein may be used for various wireless communication technologies, such as LTE, CDMA, TDMA, FDMA, OFDMA, SC-FDMA and other networks. The terms “network” and “system” are often used interchangeably. A CDMA network may implement a radio technology such as Universal Terrestrial Radio Access (UTRA), cdma2000, etc. UTRA includes Wideband CDMA (WCDMA) and other variants of CDMA. cdma2000 covers IS-2000, IS-95 and IS-856 standards. A TDMA network may implement a radio technology such as Global System for Mobile Communications (GSM). An OFDMA network may implement a radio technology such as NR (e.g. 5G RA), Evolved UTRA (E-UTRA), Ultra Mobile Broadband (UMB), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDMA, etc. UTRA and E-UTRA are part of Universal Mobile Telecommunication System (UMTS). New Radio (NR) is an emerging wireless communications technology under development in conjunction with the 5G Technology Forum (5GTF). 3GPP Long Term Evolution (LTE) and LTE-Advanced (LTE-A) are releases of UMTS that use E-UTRA. UTRA, E-UTRA, UMTS, LTE, LTE-A and GSM are described in documents from an organization named “3rd Generation Partnership Project” (3GPP). 
cdma2000 and UMB are described in documents from an organization named “3rd Generation Partnership Project 2” (3GPP2). The techniques described herein may be used for the wireless networks and radio technologies mentioned above as well as other wireless networks and radio technologies. For clarity, while aspects may be described herein using terminology commonly associated with 3G and/or 4G wireless technologies, aspects of the present disclosure can be applied in other generation-based communication systems, such as 5G and later, including NR technologies. New radio (NR) access (e.g., 5G technology) may support various wireless communication services, such as enhanced mobile broadband (eMBB) targeting wide bandwidth (e.g., 80 MHz or beyond), millimeter wave (mmW) targeting high carrier frequency (e.g., 25 GHz or beyond), massive machine type communications MTC (mMTC) targeting non-backward compatible MTC techniques, and/or mission critical targeting ultra-reliable low-latency communications (URLLC). These services may include latency and reliability requirements. These services may also have different transmission time intervals (TTI) to meet respective quality of service (QoS) requirements. In addition, these services may co-exist in the same subframe. Example Wireless Communications System FIG.1illustrates an example wireless communication network100in which aspects of the present disclosure may be performed. The wireless communication network100may be a New Radio (NR) or 5G network that supports multiple DM-RS port groups for QCL assumptions. For example, UE120amay receive, from BS110a, QCL information corresponding to multiple antenna groups, such as demodulation reference signal (DM-RS) port groups. The QCL information may enable the UE120ato apply QCL assumptions to multi-TRP/multi-panel transmissions, such as transmissions from BS110aand BS110bor transmissions from multiple antenna panels of BS110a. As illustrated inFIG.1, the wireless network100may include a number of base stations (BSs)110and other network entities. A BS may be a station that communicates with user equipments (UEs). Each BS110may provide communication coverage for a particular geographic area. In 3GPP, the term “cell” can refer to a coverage area of a Node B (NB) and/or a Node B subsystem serving this coverage area, depending on the context in which the term is used. In NR systems, the term “cell” and next generation NodeB (gNB), new radio base station (NR BS), 5G NB, access point (AP), or transmission reception point (TRP) may be interchangeable. In some examples, a cell may not necessarily be stationary, and the geographic area of the cell may move according to the location of a mobile BS. In some examples, the base stations may be interconnected to one another and/or to one or more other base stations or network nodes (not shown) in wireless communication network100through various types of backhaul interfaces, such as a direct physical connection, a wireless connection, a virtual network, or the like using any suitable transport network. In general, any number of wireless networks may be deployed in a given geographic area. Each wireless network may support a particular radio access technology (RAT) and may operate on one or more frequencies. A RAT may also be referred to as a radio technology, an air interface, etc. A frequency may also be referred to as a carrier, a subcarrier, a frequency channel, a tone, a subband, etc. 
Each frequency may support a single RAT in a given geographic area in order to avoid interference between wireless networks of different RATs. In some cases, NR or 5G RAT networks may be deployed. A base station (BS) may provide communication coverage for a macro cell, a pico cell, a femto cell, and/or other types of cells. A macro cell may cover a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by UEs with service subscription. A pico cell may cover a relatively small geographic area and may allow unrestricted access by UEs with service subscription. A femto cell may cover a relatively small geographic area (e.g., a home) and may allow restricted access by UEs having an association with the femto cell (e.g., UEs in a Closed Subscriber Group (CSG), UEs for users in the home, etc.). A BS for a macro cell may be referred to as a macro BS. A BS for a pico cell may be referred to as a pico BS. A BS for a femto cell may be referred to as a femto BS or a home BS. In the example shown inFIG.1, the BSs110a,110band110cmay be macro BSs for the macro cells102a,102band102c, respectively. The BS110xmay be a pico BS for a pico cell102x. The BSs110yand110zmay be femto BSs for the femto cells102yand102z, respectively. A BS may support one or multiple (e.g., three) cells. Wireless communication network100may also include relay stations. A relay station is a station that receives a transmission of data and/or other information from an upstream station (e.g., a BS or a UE) and sends a transmission of the data and/or other information to a downstream station (e.g., a UE or a BS). A relay station may also be a UE that relays transmissions for other UEs. In the example shown inFIG.1, a relay station110rmay communicate with the BS110aand a UE120rin order to facilitate communication between the BS110aand the UE120r. A relay station may also be referred to as a relay BS, a relay, etc. Wireless network100may be a heterogeneous network that includes BSs of different types, e.g., macro BS, pico BS, femto BS, relays, etc. These different types of BSs may have different transmit power levels, different coverage areas, and different impact on interference in the wireless network100. For example, macro BS may have a high transmit power level (e.g., 20 Watts) whereas pico BS, femto BS, and relays may have a lower transmit power level (e.g., 1 Watt). Wireless communication network100may support synchronous or asynchronous operation. For synchronous operation, the BSs may have similar frame timing, and transmissions from different BSs may be approximately aligned in time. For asynchronous operation, the BSs may have different frame timing, and transmissions from different BSs may not be aligned in time. The techniques described herein may be used for both synchronous and asynchronous operation. A network controller130may couple to a set of BSs and provide coordination and control for these BSs. The network controller130may communicate with the BSs110via a backhaul. The BSs110may also communicate with one another (e.g., directly or indirectly) via wireless or wireline backhaul. The UEs120(e.g.,120x,120y, etc.) may be dispersed throughout the wireless network100, and each UE may be stationary or mobile. 
A UE may also be referred to as a mobile station, a terminal, an access terminal, a subscriber unit, a station, a Customer Premises Equipment (CPE), a cellular phone, a smart phone, a personal digital assistant (PDA), a wireless modem, a wireless communication device, a handheld device, a laptop computer, a cordless phone, a wireless local loop (WLL) station, a tablet computer, a camera, a gaming device, a netbook, a smartbook, an ultrabook, an appliance, a medical device or medical equipment, a biometric sensor/device, a wearable device such as a smart watch, smart clothing, smart glasses, a smart wrist band, smart jewelry (e.g., a smart ring, a smart bracelet, etc.), an entertainment device (e.g., a music device, a video device, a satellite radio, etc.), a vehicular component or sensor, a smart meter/sensor, industrial manufacturing equipment, a global positioning system device, or any other suitable device that is configured to communicate via a wireless or wired medium. Some UEs may be considered machine-type communication (MTC) devices or evolved MTC (eMTC) devices. MTC and eMTC UEs include, for example, robots, drones, remote devices, sensors, meters, monitors, location tags, etc., that may communicate with a BS, another device (e.g., remote device), or some other entity. A wireless node may provide, for example, connectivity for or to a network (e.g., a wide area network such as Internet or a cellular network) via a wired or wireless communication link. Some UEs may be considered Internet-of-Things (IoT) devices, which may be narrowband IoT (NB-IoT) devices. Certain wireless networks (e.g., LTE) utilize orthogonal frequency division multiplexing (OFDM) on the downlink and single-carrier frequency division multiplexing (SC-FDM) on the uplink. OFDM and SC-FDM partition the system bandwidth into multiple (K) orthogonal subcarriers, which are also commonly referred to as tones, bins, etc. Each subcarrier may be modulated with data. In general, modulation symbols are sent in the frequency domain with OFDM and in the time domain with SC-FDM. The spacing between adjacent subcarriers may be fixed, and the total number of subcarriers (K) may be dependent on the system bandwidth. For example, the spacing of the subcarriers may be 15 kHz and the minimum resource allocation (called a “resource block” (RB)) may be 12 subcarriers (or 180 kHz). Consequently, the nominal Fast Fourier Transform (FFT) size may be equal to 128, 256, 512, 1024 or 2048 for system bandwidth of 1.25, 2.5, 5, 10, or 20 megahertz (MHz), respectively. The system bandwidth may also be partitioned into subbands. For example, a subband may cover 1.08 MHz (i.e., 6 resource blocks), and there may be 1, 2, 4, 8, or 16 subbands for system bandwidth of 1.25, 2.5, 5, 10 or 20 MHz, respectively. While aspects of the examples described herein may be associated with LTE technologies, aspects of the present disclosure may be applicable with other wireless communications systems, such as NR. NR may utilize OFDM with a cyclic prefix (CP) on the uplink and downlink and include support for half-duplex operation using TDD. Beamforming may be supported and beam direction may be dynamically configured. MIMO transmissions with precoding may also be supported. MIMO configurations in the DL may support up to 8 transmit antennas with multi-layer DL transmissions up to 8 streams and up to 2 streams per UE. Aggregation of multiple cells may be supported with up to 8 serving cells. 
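The bandwidth-dependent values quoted above (the 15 kHz subcarrier spacing, the 12-subcarrier resource block, the nominal FFT sizes, and the subband counts) can be captured in a small lookup. The Python sketch below simply restates those numbers; the helper names and dictionary layout are illustrative assumptions.

```python
# Sketch of the LTE-style numerology quoted above; the tables restate the
# nominal FFT sizes and subband counts from the text.

SUBCARRIER_SPACING_HZ = 15_000
RB_SUBCARRIERS = 12                      # one RB = 12 subcarriers = 180 kHz
SUBBAND_HZ = 1_080_000                   # one subband = 6 RBs = 1.08 MHz

NOMINAL_FFT_SIZE = {1.25e6: 128, 2.5e6: 256, 5e6: 512, 10e6: 1024, 20e6: 2048}
NUM_SUBBANDS = {1.25e6: 1, 2.5e6: 2, 5e6: 4, 10e6: 8, 20e6: 16}


def rb_bandwidth_hz() -> int:
    """Bandwidth of one resource block (180 kHz)."""
    return RB_SUBCARRIERS * SUBCARRIER_SPACING_HZ


def numerology(system_bw_hz: float) -> dict:
    """Return the nominal FFT size, subband count and RB bandwidth for a
    given system bandwidth."""
    return {
        "fft_size": NOMINAL_FFT_SIZE[system_bw_hz],
        "num_subbands": NUM_SUBBANDS[system_bw_hz],
        "rb_bandwidth_hz": rb_bandwidth_hz(),
    }


print(numerology(10e6))
# {'fft_size': 1024, 'num_subbands': 8, 'rb_bandwidth_hz': 180000}
```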
In some examples, access to the air interface may be scheduled, wherein a scheduling entity (e.g., a base station) allocates resources for communication among some or all devices and equipment within its service area or cell. The scheduling entity may be responsible for scheduling, assigning, reconfiguring, and releasing resources for one or more subordinate entities. That is, for scheduled communication, subordinate entities utilize resources allocated by the scheduling entity. Base stations are not the only entities that may function as a scheduling entity. In some examples, a UE may function as a scheduling entity and may schedule resources for one or more subordinate entities (e.g., one or more other UEs), and the other UEs may utilize the resources scheduled by the UE for wireless communication. In some examples, a UE may function as a scheduling entity in a peer-to-peer (P2P) network, and/or in a mesh network. In a mesh network example, UEs may communicate directly with one another in addition to communicating with a scheduling entity. InFIG.1, a solid line with double arrows indicates desired transmissions between a UE and a serving BS, which is a BS designated to serve the UE on the downlink and/or uplink. A finely dashed line with double arrows indicates interfering transmissions between a UE and a BS. FIG.2illustrates an example logical architecture of a distributed Radio Access Network (RAN)200, which may be implemented in the wireless communication network100illustrated inFIG.1. A 5G access node206may include an access node controller (ANC)202. ANC202may be a central unit (CU) of the distributed RAN200. The backhaul interface to the Next Generation Core Network (NG-CN)204may terminate at ANC202. The backhaul interface to neighboring next generation access Nodes (NG-ANs)210may terminate at ANC202. ANC202may include one or more transmission reception points (TRPs)208(e.g., cells, BSs, gNBs, etc.). The TRPs208may be a distributed unit (DU). TRPs208may be connected to a single ANC (e.g., ANC202) or more than one ANC (not illustrated). For example, for RAN sharing, radio as a service (RaaS), and service specific AND deployments, TRPs208may be connected to more than one ANC. TRPs208may each include one or more antenna ports. TRPs208may be configured to individually (e.g., dynamic selection) or jointly (e.g., joint transmission) serve traffic to a UE. The logical architecture of distributed RAN200may support fronthauling solutions across different deployment types. For example, the logical architecture may be based on transmit network capabilities (e.g., bandwidth, latency, and/or jitter). The logical architecture of distributed RAN200may share features and/or components with LTE. For example, next generation access node (NG-AN)210may support dual connectivity with NR and may share a common fronthaul for LTE and NR. The logical architecture of distributed RAN200may enable cooperation between and among TRPs208, for example, within a TRP and/or across TRPs via ANC202. An inter-TRP interface may not be used. Logical functions may be dynamically distributed in the logical architecture of distributed RAN200. As will be described in more detail with reference toFIG.5, the Radio Resource Control (RRC) layer, Packet Data Convergence Protocol (PDCP) layer, Radio Link Control (RLC) layer, Medium Access Control (MAC) layer, and a Physical (PHY) layers may be adaptably placed at the DU (e.g., TRP208) or CU (e.g., ANC202). 
FIG.3illustrates an example physical architecture of a distributed Radio Access Network (RAN)300, according to aspects of the present disclosure. A centralized core network unit (C-CU)302may host core network functions. C-CU302may be centrally deployed. C-CU302functionality may be offloaded (e.g., to advanced wireless services (AWS)), in an effort to handle peak capacity. A centralized RAN unit (C-RU)304may host one or more ANC functions. Optionally, the C-RU304may host core network functions locally. The C-RU304may have distributed deployment. The C-RU304may be close to the network edge. A DU306may host one or more TRPs (Edge Node (EN), an Edge Unit (EU), a Radio Head (RH), a Smart Radio Head (SRH), or the like). The DU may be located at edges of the network with radio frequency (RF) functionality. FIG.4illustrates example components of BS110and UE120(as depicted inFIG.1), which may be used to implement aspects of the present disclosure. For example, antennas452, processors466,458,464, and/or controller/processor480of the UE120and/or antennas434, processors420,430,438, and/or controller/processor440of the BS110may be used to perform the various techniques and methods described herein (FIGS.9and10). For example, UE120may receive, from BS110, QCL information corresponding to multiple antenna groups, such as demodulation DM-RS port groups. The QCL information may enable the UE120to apply QCL assumptions to receive transmissions via the multiple antenna port groups, such as multi-TRP/multi-panel transmissions. At the BS110, a transmit processor420may receive data from a data source412and control information from a controller/processor440. The control information may be for the physical broadcast channel (PBCH), physical control format indicator channel (PCFICH), physical hybrid ARQ indicator channel (PHICH), physical downlink control channel (PDCCH), group common PDCCH (GC PDCCH), etc. The data may be for the physical downlink shared channel (PDSCH), etc. The processor420may process (e.g., encode and symbol map) the data and control information to obtain data symbols and control symbols, respectively. The processor420may also generate reference symbols, e.g., for the primary synchronization signal (PSS), secondary synchronization signal (SSS), and cell-specific reference signal (CRS). A transmit (TX) multiple-input multiple-output (MIMO) processor430may perform spatial processing (e.g., precoding) on the data symbols, the control symbols, and/or the reference symbols, if applicable, and may provide output symbol streams to the modulators (MODs)432athrough432t. Each modulator432may process a respective output symbol stream (e.g., for OFDM, etc.) to obtain an output sample stream. Each modulator may further process (e.g., convert to analog, amplify, filter, and upconvert) the output sample stream to obtain a downlink signal. Downlink signals from modulators432athrough432tmay be transmitted via the antennas434athrough434t, respectively. At the UE120, the antennas452athrough452rmay receive the downlink signals from the base station110and may provide received signals to the demodulators (DEMODs) in transceivers454athrough454r, respectively. Each demodulator454may condition (e.g., filter, amplify, downconvert, and digitize) a respective received signal to obtain input samples. Each demodulator may further process the input samples (e.g., for OFDM, etc.) to obtain received symbols. 
A MIMO detector456may obtain received symbols from all the demodulators454athrough454r, perform MIMO detection on the received symbols if applicable, and provide detected symbols. A receive processor458may process (e.g., demodulate, deinterleave, and decode) the detected symbols, provide decoded data for the UE120to a data sink460, and provide decoded control information to a controller/processor480. On the uplink, at UE120, a transmit processor464may receive and process data (e.g., for the physical uplink shared channel (PUSCH)) from a data source462and control information (e.g., for the physical uplink control channel (PUCCH) from the controller/processor480. The transmit processor464may also generate reference symbols for a reference signal (e.g., for the sounding reference signal (SRS)). The symbols from the transmit processor464may be precoded by a TX MIMO processor466if applicable, further processed by the demodulators in transceivers454athrough454r(e.g., for SC-FDM, etc.), and transmitted to the base station110. At the BS110, the uplink signals from the UE120may be received by the antennas434, processed by the modulators432, detected by a MIMO detector436if applicable, and further processed by a receive processor438to obtain decoded data and control information sent by the UE120. The receive processor438may provide the decoded data to a data sink439and the decoded control information to the controller/processor440. The controllers/processors440and480may direct the operation at the base station110and the UE120, respectively. The processor440and/or other processors and modules at the BS110may perform or direct the execution of processes for the techniques described herein. The memories442and482may store data and program codes for BS110and UE120, respectively. A scheduler444may schedule UEs for data transmission on the downlink and/or uplink. FIG.5illustrates a diagram500showing examples for implementing a communications protocol stack, according to aspects of the present disclosure. The illustrated communications protocol stacks may be implemented by devices operating in a wireless communication system, such as a 5G system (e.g., a system that supports uplink-based mobility). Diagram500illustrates a communications protocol stack including a Radio Resource Control (RRC) layer510, a Packet Data Convergence Protocol (PDCP) layer515, a Radio Link Control (RLC) layer520, a Medium Access Control (MAC) layer525, and a Physical (PHY) layer530. In various examples, the layers of a protocol stack may be implemented as separate modules of software, portions of a processor or ASIC, portions of non-collocated devices connected by a communications link, or various combinations thereof. Collocated and non-collocated implementations may be used, for example, in a protocol stack for a network access device (e.g., ANs, CUs, and/or DUs) or a UE. A first option505-ashows a split implementation of a protocol stack, in which implementation of the protocol stack is split between a centralized network access device (e.g., an ANC202inFIG.2) and distributed network access device (e.g., DU208inFIG.2). In the first option505-a, an RRC layer510and a PDCP layer515may be implemented by the central unit, and an RLC layer520, a MAC layer525, and a PHY layer530may be implemented by the DU. In various examples the CU and the DU may be collocated or non-collocated. The first option505-amay be useful in a macro cell, micro cell, or pico cell deployment. 
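The protocol-stack placements (the split option 505-a above, the unified option 505-b described next, and the full UE-side stack 505-c) can be summarized in a short sketch. The enum and dictionary names below are assumptions introduced only for illustration.

```python
# Hedged sketch of the protocol-stack placement options discussed here.

from enum import Enum


class Layer(Enum):
    RRC = "RRC"
    PDCP = "PDCP"
    RLC = "RLC"
    MAC = "MAC"
    PHY = "PHY"


# Option 505-a: RRC/PDCP at the central unit, RLC/MAC/PHY at the DU.
SPLIT_OPTION = {
    "CU": [Layer.RRC, Layer.PDCP],
    "DU": [Layer.RLC, Layer.MAC, Layer.PHY],
}

# Option 505-b (described next): the whole stack in a single access node.
UNIFIED_OPTION = {"AN": list(Layer)}

# 505-c: a UE implements the entire stack regardless of the network-side split.
UE_STACK = {"UE": list(Layer)}

print(SPLIT_OPTION["DU"])   # [Layer.RLC, Layer.MAC, Layer.PHY]
```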
A second option505-bshows a unified implementation of a protocol stack, in which the protocol stack is implemented in a single network access device. In the second option, RRC layer510, PDCP layer515, RLC layer520, MAC layer525, and PHY layer530may each be implemented by the AN. The second option505-bmay be useful in, for example, a femto cell deployment. Regardless of whether a network access device implements part or all of a protocol stack, a UE may implement an entire protocol stack as shown in505-c(e.g., the RRC layer510, the PDCP layer515, the RLC layer520, the MAC layer525, and the PHY layer530). In LTE, the basic transmission time interval (TTI) or packet duration is the 1 ms subframe. In NR, a subframe is still 1 ms, but the basic TTI is referred to as a slot. A subframe contains a variable number of slots (e.g., 1, 2, 4, 8, 16, . . . slots) depending on the subcarrier spacing. The NR RB is 12 consecutive frequency subcarriers. NR may support a base subcarrier spacing of 15 kHz and other subcarrier spacing may be defined with respect to the base subcarrier spacing, for example, 30 kHz, 60 kHz, 120 kHz, 240 kHz, etc. The symbol and slot lengths scale with the subcarrier spacing. The CP length also depends on the subcarrier spacing. FIG.6is a diagram showing an example of a frame format600for NR. The transmission timeline for each of the downlink and uplink may be partitioned into units of radio frames. Each radio frame may have a predetermined duration (e.g., 10 ms) and may be partitioned into 10 subframes, each of 1 ms, with indices of 0 through 9. Each subframe may include a variable number of slots depending on the subcarrier spacing. Each slot may include a variable number of symbol periods (e.g., 7, 12, or 14 symbols) depending on the subcarrier spacing. The symbol periods in each slot may be assigned indices. A mini-slot, which may be referred to as a sub-slot structure, refers to a transmit time interval having a duration less than a slot (e.g., 2, 3, or 4 symbols). Each symbol in a slot may indicate a link direction (e.g., DL, UL, or flexible) for data transmission and the link direction for each subframe may be dynamically switched. The link directions may be based on the slot format. Each slot may include DL/UL data as well as DL/UL control information. In NR, a synchronization signal (SS) block is transmitted. The SS block includes a PSS, a SSS, and a two symbol PBCH. The SS block can be transmitted in a fixed slot location, such as the symbols 0-3 as shown inFIG.6. The PSS and SSS may be used by UEs for cell search and acquisition. The PSS may provide half-frame timing, the SSS may provide the CP length and frame timing. The PSS and SSS may provide the cell identity. The PBCH carries some basic system information, such as downlink system bandwidth, timing information within radio frame, SS burst set periodicity, system frame number, etc. The SS blocks may be organized into SS bursts to support beam sweeping. Further system information such as remaining minimum system information (RMSI), system information blocks (SIBs), other system information (OSI) can be transmitted on a physical downlink shared channel (PDSCH) in certain subframes. In some circumstances, two or more subordinate entities (e.g., UEs) may communicate with each other using sidelink signals. 
Real-world applications of such sidelink communications may include public safety, proximity services, UE-to-network relaying, vehicle-to-vehicle (V2V) communications, Internet of Everything (IoE) communications, IoT communications, mission-critical mesh, and/or various other suitable applications. Generally, a sidelink signal may refer to a signal communicated from one subordinate entity (e.g., UE1) to another subordinate entity (e.g., UE2) without relaying that communication through the scheduling entity (e.g., UE or BS), even though the scheduling entity may be utilized for scheduling and/or control purposes. In some examples, the sidelink signals may be communicated using a licensed spectrum (unlike wireless local area networks, which typically use an unlicensed spectrum). A UE may operate in various radio resource configurations, including a configuration associated with transmitting pilots using a dedicated set of resources (e.g., a radio resource control (RRC) dedicated state, etc.) or a configuration associated with transmitting pilots using a common set of resources (e.g., an RRC common state, etc.). When operating in the RRC dedicated state, the UE may select a dedicated set of resources for transmitting a pilot signal to a network. When operating in the RRC common state, the UE may select a common set of resources for transmitting a pilot signal to the network. In either case, a pilot signal transmitted by the UE may be received by one or more network access devices, such as an AN, or a DU, or portions thereof. Each receiving network access device may be configured to receive and measure pilot signals transmitted on the common set of resources, and also receive and measure pilot signals transmitted on dedicated sets of resources allocated to the UEs for which the network access device is a member of a monitoring set of network access devices for the UE. One or more of the receiving network access devices, or a CU to which receiving network access device(s) transmit the measurements of the pilot signals, may use the measurements to identify serving cells for the UEs, or to initiate a change of serving cell for one or more of the UEs. Example Quasi-Colocation Indication for Demodulation Reference Signals Aspects of the present disclosure provide techniques for providing quasi-colocation (QCL) signaling for groups of demodulation reference signal (DM-RS) ports across scenarios involving multiple cells and/or multiple panels (multi-panel), such as coordinated multipoint (CoMP) scenarios in which a UE is connected to multiple transmit receive points (TRPs). QCL assumptions generally refer to assumptions that, for a set of signals or channels considered to be QCL related (or simply “QCL′d” for short), certain characteristics derived for (measured from) one of the signals or channels may be applied to the other. As an example, if PDSCH DMRS is QCL′d with other DL RS, a UE may process PDSCH and measure the associated DM-RS based on characteristics/measurements of the other DL RS. In some cases, QCL assumptions for receptions/transmissions of signals and channels may be signaled via a mechanism referred to as Transmission Configuration Indicator (TCI) states.FIG.7illustrates an example TCI state used to configure a DM-RS port group via control signaling, in accordance with certain aspects of the present disclosure. 
In this example, the TCI state includes a single QCL configuration having at least two types of QCL information, which may provide QCL assumptions for two different DL reference signals. In some cases, a UE may be configured with various TCI states via radio resource control (RRC) signaling, while one of the actual TCI states may be indicated by an N bit DCI field for PDSCH. In some other cases, a UE may be configured with a subset of various TCI states (e.g., up to 8 TCI states) via MAC control signaling (e.g., a MAC control element (MAC-CE)), and downlink control signaling (e.g., DCI) may be used to select a TCI state out of the subset (e.g., 3 bits may be used to identify which TCI state is enabled). FIG.8illustrates an example of QCL information that may be included in a QCL configuration, in accordance with certain aspects of the present disclosure. The QCL assumptions may be grouped into different types that correspond to the parameters that may be assumed QCL′d for a set of QCL′d signals. For example, for a set of QCL′d signals, Type A may indicate that Doppler shift, Doppler spread, average delay, delay spread can be assumed QCL′d, while Type B may indicate only Doppler shift and Doppler spread, Type C may indicate a still different set of parameters. In some cases, spatial QCL assumptions may be indicated, for example, by Type D. Spatial QCL may mean a (Tx or Rx) beam selected based on a certain signal measurement may be applied to the QCL related signal. As an example, the QCL assumptions may provide a QCL relationship between a DM-RS and at least one of a channel state information reference signal (CSI-RS) or a synchronization signal (SS). As used herein, a set of QCL′d signals refers to the QCL relationship between those signals (e.g., Doppler shift, Doppler spread, average delay, and/or delay spread). One limitation of the current QCL configuration is that only one TCI state consisting of a single QCL assumption is provided per DL transmission. That is, all the DM-RS ports have the same QCL assumptions. In some cases, multiple DM-RS port groups are configured for a DL transmission, but the current QCL configuration only supports signaling of a single QCL assumption. Aspects of the present disclosure, however, extend the QCL configuration to allow signaling of QCL assumptions linked to multiple antenna port groups. As such, the QCL signaling described herein may be applied in multi-TRP/multi-panel scenarios, such as CoMP deployments where multiple transmission reception points (TRPs) communicate with a UE. FIG.9is a flow diagram illustrating example operations900that may be performed, for example, by a base station (e.g., BS110), for configuring DM-RS transmissions with QCL information that supports multi-TRP transmissions, in accordance with certain aspects of the present disclosure. Operations900may begin, at902, where the BS generates quasi-colocation (QCL) information indicating a first QCL assumption for a first group of demodulation reference signal (DM-RS) ports and a second QCL assumption for a second group of DM-RS ports. At904, the BS transmits the QCL information to at least one user equipment (UE) for use in processing one or more transmission associated with at least one of the first group of DM-RS ports and the second group of DM-RS ports. 
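A minimal data-structure sketch of the extension discussed here, a single TCI state carrying two QCL configurations, one per DM-RS port group, as generated on the BS side in operations 900, is given below in Python. The class and field names are assumptions introduced for illustration, and the parameter sets listed for the QCL types follow only the qualitative description above (Type C is left unspecified, as in the text).

```python
# Illustrative sketch of an extended TCI state with two QCL configurations.

from dataclasses import dataclass
from typing import Optional

QCL_TYPE_PARAMS = {
    "A": {"doppler_shift", "doppler_spread", "average_delay", "delay_spread"},
    "B": {"doppler_shift", "doppler_spread"},
    "D": {"spatial_rx"},          # spatial QCL (beam) assumption
}


@dataclass
class QclConfig:
    reference_signal: str         # e.g., a CSI-RS or SS block resource id (assumed)
    qcl_type: str                 # "A", "B", "C" or "D"
    cell_id: Optional[int] = None
    bwp_id: Optional[int] = None


@dataclass
class TciState:
    tci_state_id: int
    qcl_config1: QclConfig                    # applied to DM-RS port group 1
    qcl_config2: Optional[QclConfig] = None   # applied to DM-RS port group 2


def generate_qcl_information(tci_state_id: int,
                             group1_cfg: QclConfig,
                             group2_cfg: QclConfig) -> TciState:
    """BS side (operations 900, block 902): build QCL information indicating a
    first QCL assumption for the first DM-RS port group and a second QCL
    assumption for the second group; the state would then be signaled to the
    UE (block 904)."""
    return TciState(tci_state_id, group1_cfg, group2_cfg)


tci = generate_qcl_information(
    5,
    QclConfig(reference_signal="csi-rs-trp1", qcl_type="A", cell_id=1),
    QclConfig(reference_signal="csi-rs-trp2", qcl_type="A", cell_id=2),
)
```

In practice, the resulting states would be configured via RRC, optionally down-selected via MAC-CE, and indicated via DCI, in line with the signaling described above.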
FIG.10is a flow diagram illustrating example operations1000that may be performed, for example, by a user equipment (e.g., UE120), for configuring DM-RS transmissions with QCL information that supports multi-TRP/multi-panel transmissions, in accordance with certain aspects of the present disclosure. Operations1000may begin, at1002, where the UE obtains quasi-colocation (QCL) information indicating a first QCL assumption for a first group of demodulation reference signal (DM-RS) ports and a second QCL assumption for a second group of DM-RS ports. At1004, the UE receives transmissions associated with the first group of DM-RS ports and the second group of DM-RS ports based on the QCL information. The QCL information may be transmitted to the UE via control signaling such as radio resource control (RRC) signaling (e.g., RRC element), medium access control (MAC) signaling (e.g., MAC control element (MAC-CE)), or downlink control signaling (e.g., downlink control information (DCI)). For example, the UE may be initially configured with various TCI states (e.g., up to 8 TCI states per DL transmission) via RRC signaling, and DCI signaling may be used to select one or more of the TCI states (e.g., 6 bits may be used to select the TCI states used for the DL transmissions). The UE may determine QCL assumptions associated with the DM-RS port groups based on the QCL information signaled to the UE. The UE may then monitor and receive transmissions associated with the DM-RS port groups based on the QCL assumptions. In certain aspects, the QCL information may be indicated via a TCI state having at least a first QCL configuration and a second QCL configuration.FIG.11illustrates an example TCI state used to configure DM-RS port groups via control signaling, in accordance with certain aspects of the present disclosure. As illustrated inFIG.11, the TCI state may provide the QCL assumptions for at least two DM-RS port groups. For example, the UE may assume that the first QCL configuration (qcl-Config1) provides the QCL assumptions for the first group of DM-RS ports, and that the second QCL configuration (qcl-Config2) provides the QCL assumptions for the second group of DM-RS ports. In situations where one of the QCL configuration provides no QCL information (i.e., the field is reserved), the first QCL configuration may be applied to the QCL assumptions for the first and second group of DM-RS ports, or vice versa. In other aspects, the first QCL configuration may be applied to the QCL assumptions for the first group of DM-RS ports, and a default QCL configuration may be applied to the QCL assumptions for the second group of DM-RS ports, or vice versa. If the UE is configured with only one DM-RS port group, all the ports are QCL′d with the same QCL information in the TCI state. As examples, if the UE obtains only one QCL configuration, then that QCL configuration is applied to the configured DM-RS port group. If the UE obtains two QCL configurations, then the UE may apply the first QCL configuration to the configured DM-RS port group. In other aspects, the UE may apply the QCL configuration based on the index of the DM-RS port group. For example, if the configured DM-RS port group has an index indicating that it is the first DM-RS port group, the UE may apply the first QCL configuration to the configured DM-RS port group, and if the configured DM-RS port group has an index indicating that it is the second DM-RS port group, then the UE may apply the second QCL configuration to the configured DM-RS port group. 
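One of the UE-side application rules described above can be sketched as follows. The sketch encodes only a single alternative (apply configuration i to port group i, and fall back to the other configuration when one is reserved or absent); the function name and the "default" placeholder are assumptions rather than behavior mandated by the text.

```python
# Hedged sketch of one UE-side rule for applying QCL configurations to
# DM-RS port groups.

from typing import List, Optional


def qcl_for_port_groups(qcl_config1: Optional[dict],
                        qcl_config2: Optional[dict],
                        num_configured_groups: int,
                        default_qcl: Optional[dict] = None) -> List[Optional[dict]]:
    """Return the QCL assumption the UE applies to each configured DM-RS
    port group."""
    if num_configured_groups == 1:
        # Only one group configured: all ports QCL'd with the same information;
        # here the first configuration in the TCI state is used.
        return [qcl_config1 if qcl_config1 is not None else default_qcl]

    # Two groups configured: config 1 -> group 1, config 2 -> group 2.
    # If one configuration is reserved (None), reuse the other one
    # (one of the fallbacks described above).
    first = qcl_config1 if qcl_config1 is not None else qcl_config2
    second = qcl_config2 if qcl_config2 is not None else qcl_config1
    return [first, second]


# Example: the second configuration is reserved, so both port groups reuse
# the first QCL assumption.
print(qcl_for_port_groups({"rs": "csi-rs-trp1", "type": "A"}, None, 2))
```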
For aspects, the QCL information may be indicated via a plurality of TCI states, and each of the TCI states comprises a QCL configuration. For example, the TCI state shown inFIG.7may be used as one of the plurality of TCI states. As an example, an indication having the plurality of TCI states, each of the TCI states having a QCL configuration associated with a DM-RS port group, may be signaled, by the BS, to the UE via a control message, such as a RRC message, MAC-CE message, or DCI message. That is, the plurality of TCI states supporting multi-TRP/multi-panel transmissions may be signaled via a single indicator included in a control message transmitted to the UE. The UE may receive the control message having the TCI states and determine the QCL assumptions for the DM-RS port groups based on the TCI states. For aspects, the QCL information may be indicated via a TCI state having a single QCL configuration. The UE may assume that the QCL configuration applies to the QCL assumptions for the first and second group of DM-RS ports. In other aspects, the UE may assume that the QCL configuration applies to the QCL assumptions for the first group of DM-RS ports, and that a default QCL configuration applies to the QCL assumptions for the second group of DM-RS ports. In certain aspects, the UE may determine QCL assumptions for the DM-RS port groups based on a cell identification (cell ID). For instance, the UE may be connected to a TRP having a certain cell ID as configured via RRC signaling. If the cell ID provided in a TCI state is the same as the cell ID provided in the RRC signaling, the QCL configuration provided in the TCI state is applied to the first DM-RS port group, and a default QCL configuration is applied to the second DM-RS port group. In other aspects, if the cell ID provided in the TCI state is different from the cell ID provided in the RRC signaling, the QCL configuration provided in the TCI state is applied to the second DM-RS port group, and a default QCL configuration is applied to the first DM-RS port group. In certain aspects, the UE may report its capability of supporting DM-RS port groups with different QCL assumptions to the BS. Based on this reporting, higher-layer signaling may provide a maximum number of supported DM-RS port groups to the UE. In aspects, the BS may provide the UE with an indication of the maximum number of supported DM-RS port groups. The UE may determine the payload size of downlink control signaling (e.g., DCI) based at least in part on the configured maximum number of supported DM-RS port groups. The UE may also determine how to apply the QCL assumptions included in the one or more TCI state(s) based on the maximum number of supported DM-RS port groups as further described herein. As examples, if the maximum number of supported DM-RS port groups is set to 1, and one QCL assumption is provided in the TCI state(s), then the UE applies the QCL configuration to the sole DM-RS port group configured. If the maximum number of supported DM-RS port groups is set to 1, and two QCL assumptions are provided in the TCI state(s), then the UE may assume the first QCL configuration applies to the sole DM-RS port group configured. If the maximum number of supported DM-RS port groups is set to 1, and two QCL assumptions are provided in the TCI state(s), then the UE may apply the QCL assumption with a cell ID and bandwidth part (BWP) ID that matches the cell ID and BWP ID of the DM-RS port group. 
As other examples, if the maximum number of supported DM-RS port groups is set to 2, only one DM-RS port group is configured, and one QCL assumption is provided in the TCI state(s), then the UE applies the QCL assumption to the configured DM-RS port group. In other aspects, if the maximum number of supported DM-RS port groups is set to 2, only one DM-RS port group is configured, and one QCL assumption is provided in the TCI state(s), then the UE applies the QCL assumption to the first DM-RS port group, if the QCL assumption is from the first QCL configuration, or applies the QCL assumption to the second DM-RS port group, if the QCL assumption is from the second QCL configuration. Otherwise, the UE applies a default QCL assumption to the DM-RS port group. In certain aspects, if the maximum number of supported DM-RS port groups is set to 2, only one DM-RS port group is configured, and two QCL assumptions are provided in the TCI state(s), then the UE may apply the QCL assumption to the first DM-RS port group, if the QCL assumption is from the first QCL configuration, or the UE may apply the QCL assumption to the second DM-RS port group, if the QCL assumption is from the second QCL configuration. In other aspects, if the maximum number of supported DM-RS port groups is set to 2, only one DM-RS port group is configured, and two QCL assumptions are provided in the TCI state(s), then the UE may apply the QCL assumption with a cell ID and BWP ID that matches the cell ID and BWP ID of the corresponding DM-RS port group. As further examples, if the maximum number of supported DM-RS port groups is set to 2, two DM-RS port groups are configured, and only one QCL assumption is provided in the TCI state(s), then the UE may apply the same QCL assumption to both groups. In other aspects, if the maximum number of supported DM-RS port groups is set to 2, two DM-RS port groups are configured, and only one QCL assumption is provided in the TCI state(s), then the UE may apply the QCL assumption to the corresponding DM-RS port group and apply a default QCL assumption to the other DM-RS port group. In aspects, the maximum number of supported DM-RS port groups may also be indicated by a maximum number of QCL configurations per DL transmissions. That is, the maximum number of QCL configurations per DL transmissions may be used as higher layer signaling and provided to the UE in determining how to apply the QCL assumptions as described herein. FIG.12illustrates a communications device1200(such as a BS110or a UE120) that may include various components (e.g., corresponding to means-plus-function components) configured to perform operations for the techniques disclosed herein, such as the operations illustrated inFIGS.9and10. The communications device1200includes a processing system1202coupled to a transceiver1208(e.g., a transmitter and/or receiver). The transceiver1208is configured to transmit and receive signals for the communications device1200via an antenna1210, such as the various signal described herein. The processing system1202may be configured to perform processing functions for the communications device1200, including processing signals received and/or to be transmitted by the communications device1200. The processing system1202includes a processor1204coupled to a computer-readable medium/memory1212via a bus1206. 
In certain aspects, the computer-readable medium/memory1212is configured to store instructions that when executed by processor1204, cause the processor1204to perform the operations illustrated inFIGS.9and10, or other operations for performing the various techniques discussed herein. In certain aspects, the processing system1202further includes a transmit/receive component1214for performing the operations illustrated inFIGS.9and10. Additionally, the processing system1202includes a generating component1216for performing the operations illustrated inFIGS.9and10. Additionally, the processing system1202includes an obtaining component1218for performing the operations illustrated inFIGS.9and10. The transmit/receive component1214, generating component1216, and obtaining component1218may be coupled to the processor1204via bus1206. In certain aspects, the transmit/receive component1214, generating component1216, and obtaining component1218may be hardware circuits. In certain aspects, the transmit/receive component1214, generating component1216, and obtaining component1218may be software components that are executed and run on processor1204. The methods disclosed herein comprise one or more steps or actions for achieving the methods. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c). As used herein, the term “determining” encompasses a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” may include resolving, selecting, choosing, establishing and the like. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but is to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. 
Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.” The various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to a circuit, an application specific integrated circuit (ASIC), or processor. Generally, where there are operations illustrated in figures, those operations may have corresponding counterpart means-plus-function components. The various illustrative logical blocks, modules and circuits described in connection with the present disclosure may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. If implemented in hardware, an example hardware configuration may comprise a processing system in a wireless node. The processing system may be implemented with a bus architecture. The bus may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints. The bus may link together various circuits including a processor, machine-readable media, and a bus interface. The bus interface may be used to connect a network adapter, among other things, to the processing system via the bus. The network adapter may be used to implement the signal processing functions of the PHY layer. In the case of a user equipment120(seeFIG.1), a user interface (e.g., keypad, display, mouse, joystick, etc.) may also be connected to the bus. The bus may also link various other circuits such as timing sources, peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further. The processor may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry that can execute software. Those skilled in the art will recognize how best to implement the described functionality for the processing system depending on the particular application and the overall design constraints imposed on the overall system. If implemented in software, the functions may be stored or transmitted over as one or more instructions or code on a computer readable medium. 
Software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Computer-readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. The processor may be responsible for managing the bus and general processing, including the execution of software modules stored on the machine-readable storage media. A computer-readable storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. By way of example, the machine-readable media may include a transmission line, a carrier wave modulated by data, and/or a computer readable storage medium with instructions stored thereon separate from the wireless node, all of which may be accessed by the processor through the bus interface. Alternatively, or in addition, the machine-readable media, or any portion thereof, may be integrated into the processor, such as the case may be with cache and/or general register files. Examples of machine-readable storage media may include, by way of example, RAM (Random Access Memory), flash memory, ROM (Read Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), registers, magnetic disks, optical disks, hard drives, or any other suitable storage medium, or any combination thereof. The machine-readable media may be embodied in a computer-program product. A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. The computer-readable media may comprise a number of software modules. The software modules include instructions that, when executed by an apparatus such as a processor, cause the processing system to perform various functions. The software modules may include a transmission module and a receiving module. Each software module may reside in a single storage device or be distributed across multiple storage devices. By way of example, a software module may be loaded into RAM from a hard drive when a triggering event occurs. During execution of the software module, the processor may load some of the instructions into cache to increase access speed. One or more cache lines may then be loaded into a general register file for execution by the processor. When referring to the functionality of a software module below, it will be understood that such functionality is implemented by the processor when executing instructions from that software module. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared (IR), radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. 
Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects computer-readable media may comprise non-transitory computer-readable media (e.g., tangible media). In addition, for other aspects computer-readable media may comprise transitory computer-readable media (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media. Thus, certain aspects may comprise a computer program product for performing the operations presented herein. For example, such a computer program product may comprise a computer-readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein and illustrated inFIGS.9and10. Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by a user terminal and/or base station as applicable. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized. It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes and variations may be made in the arrangement, operation and details of the methods and apparatus described above without departing from the scope of the claims.
57,002
11863480
DETAILED DESCRIPTION Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise represented. The implementations set forth in the following description of exemplary embodiments do not represent all implementations consistent with the invention. Instead, they are merely examples of apparatuses and methods consistent with aspects related to the invention as recited in the appended claims. In a related art, a vehicle terminal (or called a vehicle-mounted terminal) in an Internet of Vehicles may move at a high speed, thereby resulting in poor communication quality. After a reference signal is sent, reception of a vehicle terminal at an opposite end is likely to fail, which is not conducive to communication transmission. In order to solve the problem, a transmission density of a reference signal may be adjusted according to a driving speed of a vehicle terminal in embodiments, so as to improve possibility of a vehicle terminal at an opposite end successfully receiving the reference signal, facilitating subsequent communication and transmission. FIG.1is a flow chart showing a method for transmitting a reference signal, according to an exemplary embodiment. The method for transmitting the reference signal is applied to a vehicle terminal. As illustrated inFIG.1, the method includes the following steps101to103. In step101, a current driving speed (or called current traveling speed) is acquired. In step102, a corresponding reference signal transmission density is determined according to a speed level to which the current driving speed belongs. In step103, a reference signal is sent to a connected vehicle terminal at an opposite end according to the reference signal transmission density. In the embodiment, the vehicle terminal may periodically acquire the current driving speed. The time period may be 1 to 5 minutes, etc., and may be configured flexibly as needed. Within the period, the reference signal is transmitted according to the reference signal transmission density currently determined. Or, according to a communication need, if there is no need to send a reference signal for a period of time, such as when the vehicle terminal is in an idle state, there is no need to acquire the current driving speed. When a reference signal is to be sent, such as when the vehicle terminal is in an active connected state, the current driving speed is acquired. In the embodiment, a corresponding relationship between the driving speed and the speed level and a corresponding relationship between the speed level and the reference signal transmission density are configured for a vehicle terminal in advance. The higher the driving speed, the greater the corresponding speed level, and the greater the corresponding reference signal transmission density. In an embodiment, the highest speed level corresponds to the maximal reference signal transmission density. The maximal driving speed corresponds to the highest speed level. When being initially connected to the Internet of Vehicles, a low driving speed may correspond to a low speed level or a high speed level. An Internet of Vehicles system may be pre-configured, and adjusted subsequently according to a factor such as a driving speed and a network communication environment, and the like. 
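One possible, non-limiting reading of the speed-level mapping described above is sketched below in Python; the speed thresholds, level numbers, and density values are hypothetical placeholders that would in practice be pre-configured for the vehicle terminal.

```python
# Illustrative sketch: mapping a driving speed to a speed level and a
# reference signal transmission density. The thresholds and density values
# are hypothetical; in practice they would be pre-configured (e.g., by the
# Internet of Vehicles system) and possibly adjusted later.

SPEED_LEVELS = [
    # (upper speed bound in km/h, speed level, reference signals per unit time)
    (30.0, 1, 1),
    (60.0, 2, 2),
    (120.0, 3, 4),
    (float("inf"), 4, 8),   # highest level -> maximal transmission density
]


def speed_level_for(speed_kmh):
    """Return the speed level to which the current driving speed belongs."""
    for upper_bound, level, _density in SPEED_LEVELS:
        if speed_kmh <= upper_bound:
            return level
    raise ValueError("speed table must cover all speeds")


def density_for_level(level):
    """Return the reference signal transmission density for a speed level."""
    for _upper_bound, lvl, density in SPEED_LEVELS:
        if lvl == level:
            return density
    raise ValueError(f"unknown speed level: {level}")


# Example: 70 km/h falls into level 3, which maps to 4 reference signals
# per unit time (e.g., per slot) in this hypothetical table.
print(density_for_level(speed_level_for(70.0)))
```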
In an embodiment, the higher the driving speed, the greater the corresponding reference signal transmission density, thereby increasing the possibility of a vehicle terminal at an opposite end successfully receiving a reference signal. The lower the driving speed, the smaller the corresponding reference signal transmission density, thereby saving network resources occupied by the reference signal while still ensuring that the vehicle terminal at the opposite end can successfully receive the reference signal. The reference signal transmission density refers to a number or count of reference signals transmitted within a unit time. The unit time may be 1 ms or 1 time slot or 1 sub-frame, or the like. It may be configured flexibly as needed. After receiving the reference signal, the vehicle terminal at the opposite end may perform channel estimation, channel quality measurement and the like based on the reference signal. In the embodiment, the reference signal transmission density is adjusted flexibly according to the driving speed, which is more in line with a communication requirement. In an embodiment, the step103includes step A. In the step A, the reference signal may be sent to the connected vehicle terminal at the opposite end through a control channel in a unicast mode. The embodiment provides a mode of transmitting the reference signal, that is, the reference signal is sent through the control channel in the unicast mode. In an embodiment, the method further includes step B. In the step B, level identification information of the speed level may be sent to the connected vehicle terminal at the opposite end. In the embodiment, the level identification information of the speed level may be sent to the vehicle terminal at the opposite end, so that the vehicle terminal at the opposite end may learn the speed level, then determine the reference signal transmission density, and may receive the reference signal better according to the reference signal transmission density. The step B may be performed before the step103, that is, the vehicle terminal at the opposite end may learn the reference signal transmission density before receiving the reference signal, so as to better receive the reference signal. Alternatively, the step B may be performed synchronously with the step103, in which case the reference signal and the level identification information may be transmitted in the same information block, which reduces the number of transmissions, such that the vehicle terminal at the opposite end may better receive the reference signal. In an embodiment, the method further includes step C1. In the step C1, relevant driving information of the opposite end sent by a vehicle terminal at the opposite end may be received. The step102includes steps C2 to C3. In the step C2, the speed level may be determined according to the current driving speed and the relevant driving information of the opposite end. In the step C3, the corresponding reference signal transmission density may be determined according to the determined speed level. In the embodiment, the vehicle terminal at the opposite end may send the relevant driving information of the opposite end in advance. The vehicle terminal may determine the speed level more accurately according to the current driving speed of the vehicle terminal combined with the relevant driving information of the opposite end, and then determine the more appropriate reference signal transmission density. 
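As a hypothetical illustration of combining the current driving speed with the relevant driving information of the opposite end, the sketch below computes a relative driving speed from the two driving speeds and driving directions; the exact combination rule is implementation-specific, and the names used here are illustrative only.

```python
# Illustrative sketch: refining the speed level using the opposite end's
# driving information. Using the magnitude of the relative velocity is one
# possible interpretation, not the only one described in the disclosure.
import math


def relative_speed(own_speed, own_heading_deg, peer_speed, peer_heading_deg):
    """Magnitude of the relative velocity of the two terminals, given their
    driving speeds and driving directions (headings in degrees)."""
    own_vx = own_speed * math.cos(math.radians(own_heading_deg))
    own_vy = own_speed * math.sin(math.radians(own_heading_deg))
    peer_vx = peer_speed * math.cos(math.radians(peer_heading_deg))
    peer_vy = peer_speed * math.sin(math.radians(peer_heading_deg))
    return math.hypot(own_vx - peer_vx, own_vy - peer_vy)


# Two vehicles driving toward each other at 60 km/h each: the relative speed
# of about 120 km/h would select a higher speed level than 60 km/h alone.
print(relative_speed(60.0, 0.0, 60.0, 180.0))
```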
In an embodiment, the relevant driving information of the opposite end includes at least one of: a driving speed of the opposite end, a driving direction of the opposite end, or a relative driving speed. In the embodiment, there may be multiple types of relevant driving information of the opposite end related to the driving speed. Knowing the driving speed of the opposite end and the driving direction of the opposite end, the relative driving speed relative to the vehicle terminal at the opposite end may be determined. The speed level may be determined accurately according to the relative driving speed. To receive the relative driving speed sent by the vehicle terminal at the opposite end, driving direction information and the current driving speed have to be sent to the connected vehicle terminal at the opposite end in advance. The relative driving speed is calculated and fed back by the vehicle terminal at the opposite end. In an embodiment, the method further includes step D1 to step D2. In the step D1, feedback information sent by a vehicle terminal at the opposite end is received. In the step D2, when the feedback information meets a preset increase condition, the speed level to which the current driving speed belongs is increased. In the embodiment, the vehicle terminal at the opposite end may further send feedback information, and the feedback information is related to reception quality. The vehicle terminal may adjust the speed level to which the current driving speed belongs according to the reception quality of the vehicle terminal at the opposite end. For example, a corresponding speed level 1 is determined according to the current driving speed (such as 60 kilometers/hour). When the feedback information meets the preset increase condition, a speed level 2 corresponding to the current driving speed (such as 60 km/h) is determined. In an embodiment, the feedback information includes at least one of: link measurement information or a feedback signal for the reference signal. The increase condition includes at least one of: the feedback signal indicating reception failure; the feedback signal indicating reception failure, and a number of consecutive failures reaching a preset failure number threshold; or link quality indicated by the link measurement information being lower than a preset quality threshold. In the embodiment, the feedback signal indicates the success or failure of receiving the reference signal, namely ACK (acknowledgement) or NACK (non-acknowledgement). If the feedback signal is NACK, it means that the reception quality is not good. If there are multiple NACKs in a row, it means that the reception quality is fairly poor. The link measurement information may directly reflect channel quality, which is equivalent to the reception quality. The link quality indicated by the link measurement information being lower than the preset quality threshold indicates that the reception quality is poor. When the reception quality is poor, increasing the speed level, that is, increasing the reference signal transmission density, helps to increase the possibility of successfully receiving the reference signal by the vehicle terminal at the opposite end. 
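A minimal sketch of checking the preset increase condition from the feedback information is given below; the threshold values and function names are hypothetical and would in practice be pre-configured.

```python
# Illustrative sketch: checking the preset increase condition from the
# opposite end's feedback. The threshold values here are hypothetical.

CONSECUTIVE_NACK_THRESHOLD = 3     # preset failure number threshold
LINK_QUALITY_THRESHOLD = -10.0     # preset quality threshold (e.g., SINR in dB)


def should_increase_level(feedback_signals, link_quality=None):
    """Return True if the feedback meets the preset increase condition:
    the most recent feedback signals are NACKs and the number of consecutive
    NACKs reaches the failure number threshold, or the reported link quality
    is lower than the preset quality threshold."""
    consecutive_nacks = 0
    for signal in reversed(feedback_signals):      # newest feedback last
        if signal == "NACK":
            consecutive_nacks += 1
        else:
            break
    if consecutive_nacks >= CONSECUTIVE_NACK_THRESHOLD:
        return True
    if link_quality is not None and link_quality < LINK_QUALITY_THRESHOLD:
        return True
    return False


# Example: three NACKs in a row trigger an increase of the speed level
# (and hence of the reference signal transmission density).
print(should_increase_level(["ACK", "NACK", "NACK", "NACK"]))
```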
If the reception quality is good, the corresponding speed level may be lowered, just by configuring a lowering condition in advance, such as, the feedback signal indicating reception success, and a number of consecutive successful receptions reaching a preset success number threshold; or link quality indicated by the link measurement information being higher than a preset quality threshold. In the embodiments, various communications between vehicle terminals may be completed through the control channel in the unicast mode. The implementation process is elaborated below through embodiments. FIG.2is a flow chart showing a method for transmitting a reference signal, according to an exemplary embodiment. The method for transmitting the reference signal is applied to a vehicle terminal. As illustrated inFIG.2, the method includes the following steps201to205. In step201, a current driving speed is acquired. In step202, relevant driving information of an opposite end sent by a vehicle terminal at the opposite end is received. The execution order of step201and step202can be interchanged. In step203, a speed level is determined according to the current driving speed and the relevant driving information of the opposite end. In step204, a corresponding reference signal transmission density is determined according to the determined speed level. In step205, a reference signal and level identification information are sent to a connected vehicle terminal at the opposite end through a control channel in a unicast mode according to the reference signal transmission density. FIG.3is a flow chart showing a method for transmitting a reference signal, according to an exemplary embodiment. The method for transmitting the reference signal is applied to a vehicle terminal. As illustrated inFIG.3, the method includes the following steps301to307. In step301, a current driving speed is acquired. In step302, relevant driving information of an opposite end sent by a vehicle terminal at the opposite end is received. In step303, a speed level is determined according to the current driving speed and the relevant driving information of the opposite end. In step304, feedback information sent by the vehicle terminal at the opposite end is received. The step304may be performed before step305. In the step305, when the feedback information meets a preset increase condition, the speed level to which the current driving speed belongs is increased. In step306, a corresponding reference signal transmission density is determined according to the increased speed level. In step307, a reference signal and level identification information are sent to a connected vehicle terminal at the opposite end through a control channel in a unicast mode according to the reference signal transmission density. The embodiments may be combined flexibly as needed. The following are device embodiments of the present disclosure, which may be configured to implement the method embodiments of the present disclosure. FIG.4is a block diagram of a device for transmitting a reference signal, according to an exemplary embodiment. The device may be implemented as a part of electronic equipment or the entire electronic equipment through software, hardware or a combination of both. Referring toFIG.4, the device for transmitting the reference signal includes an acquiring module401, a density determining module402, and a first sending module403. The acquiring module401is configured to acquire a current driving speed. 
The density determining module402is configured to determine a corresponding reference signal transmission density according to a speed level to which the current driving speed belongs. The first sending module403is configured to send a reference signal to a connected vehicle terminal at an opposite end according to the reference signal transmission density. In an embodiment, as illustrated inFIG.5, the first sending module403includes a sending sub-module501. The sending sub-module501is configured to send the reference signal to the connected vehicle terminal at the opposite end through a control channel in a unicast mode. In an embodiment, as illustrated inFIG.6, the device further includes a second sending module601. The second sending module601is configured to send level identification information of the speed level to the connected vehicle terminal at the opposite end. In an embodiment, as illustrated inFIG.7, the device further includes a first receiving module701. The first receiving module701is configured to receive relevant driving information of the opposite end sent by a vehicle terminal at the opposite end. As illustrated inFIG.8, the density determining module402includes a level determining sub-module801and a density determining sub-module802. The level determining sub-module801is configured to determine the speed level according to the current driving speed and the relevant driving information of the opposite end. The density determining sub-module802is configured to determine the corresponding reference signal transmission density according to the determined speed level. In an embodiment, the relevant driving information of the opposite end includes at least one of: a driving speed of the opposite end, a driving direction of the opposite end, or a relative driving speed. In an embodiment, as illustrated inFIG.9, the device further includes a second receiving module901and an adjusting module902. The second receiving module901is configured to receive feedback information sent by a vehicle terminal at the opposite end. The adjusting module902is configured to, in response to the feedback information meeting a preset increase condition, increase the speed level to which the current driving speed belongs. In an embodiment, the feedback information includes at least one of: link measurement information or a feedback signal for the reference signal. The increase condition includes at least one of: the feedback signal indicating reception failure; the feedback signal indicating reception failure, and a number of consecutive failures reaching a preset failure number threshold; or link quality indicated by the link measurement information being lower than a preset quality threshold. In an embodiment, the highest speed level corresponds to the maximal reference signal transmission density. Each module in the device according to the above embodiments herein may perform an operation in a mode elaborated in the above embodiments of the method herein, which will not be repeated here. FIG.10is a block diagram of a device for transmitting a reference signal, according to an exemplary embodiment. For example, the device1000may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet, a medical device, exercise equipment, a personal digital assistant (PDA), and the like. 
The device1000may include one or more of the following components: a processing component1002, a memory1004, a power component1006, a multimedia component1008, an audio component1010, an input/output (I/O) interface1012, a sensor component1014, or a communication component1016. The processing component1002typically controls overall operations of the device1000, such as the operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component1002may include one or more processors1020to execute instructions to complete all or part of the steps in the above method. In addition, the processing component1002may include one or more modules which facilitate interaction between the processing component1002and other components. For example, the processing component1002may include a multimedia module to facilitate interaction between the multimedia component1008and the processing component1002. The memory1004is configured to store various types of data to support the operation of the device1000. Examples of such data include instructions for any applications or methods operated on the device1000, contact data, phonebook data, messages, pictures, video, etc. The memory1004may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, and a magnetic or optical disk. The power component1006provides power for various components of the device1000. The power component1006may include a power management system, one or more power supplies, and other components associated with generation, management and distribution of power for the device1000. The multimedia component1008includes a screen providing an output interface between the device1000and a user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes the TP, the screen may be implemented as a touch screen to receive an input signal from the user. The TP includes one or more touch sensors to sense touches, swipes and gestures on the TP. The touch sensors may not only sense a boundary of a touch or swipe action, but also detect a period of time and a pressure associated with the touch or swipe action. In some embodiments, the multimedia component1008includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the device1000is in an operation mode, such as a photographing mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zooming capabilities. The audio component1010is configured to output and/or input an audio signal. For example, the audio component1010includes a microphone (MIC), and the MIC is configured to receive an external audio signal when the device1000is in an operation mode, such as a call mode, a recording mode and a voice recognition mode. The received audio signal may further be stored in the memory1004or sent through the communication component1016. In some embodiments, the audio component1010further includes a speaker configured to output the audio signal. 
The I/O interface1012provides an interface between the processing component1002and peripheral interface modules, such as a keyboard, a click wheel, buttons, and the like. The buttons may include, but are not limited to: a home button, a volume button, a starting button and a locking button. The sensor component1014includes one or more sensors configured to provide status assessments in various aspects for the device1000. For example, the sensor component1014may detect an on/off status of the device1000and relative positioning of components, such as a display and small keyboard of the device1000, and the sensor component1014may further detect a change in a position of the device1000or a component of the device1000, presence or absence of contact between the user and the device1000, orientation or acceleration/deceleration of the device1000, and a change in temperature of the device1000. The sensor component1014may include a proximity sensor configured to detect presence of an object nearby without any physical contact. The sensor component1014may also include a light sensor, such as a complementary metal oxide semiconductor (CMOS) or charge coupled device (CCD) image sensor, configured for use in an imaging application. In some embodiments, the sensor component1014may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor. The communication component1016is configured to facilitate wired or wireless communication between the device1000and other devices. The device1000may access a communication-standard-based wireless network, such as a wireless fidelity (WiFi) network, a 2nd-generation (2G) or 3rd-generation (3G) network, or a combination thereof. In an exemplary embodiment, the communication component1016receives a broadcast signal or broadcast associated information from an external broadcast management system through a broadcast channel. In an exemplary embodiment, the communication component1016further includes a near field communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on a radio frequency identification (RFID) technology, an infrared data association (IrDA) technology, an ultra-wide band (UWB) technology, a Bluetooth (BT) technology, and other technologies. In an exemplary embodiment, the device1000may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components, and is configured to implement the method. In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as included in the memory1004, executable by the processor1020of the device1000to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disc, an optical data storage device, and the like. In an exemplary embodiment, a device for transmitting a reference signal includes: a processor and a memory configured to store instructions executable by the processor. 
The processor is configured to: acquire a current driving speed; determine a corresponding reference signal transmission density according to a speed level to which the current driving speed belongs; and send a reference signal to a connected vehicle terminal at an opposite end according to the reference signal transmission density. The processor may further be configured as follows. The operation of sending the reference signal to the connected vehicle terminal at the opposite end may include: the reference signal is sent to the connected vehicle terminal at the opposite end through a control channel in a unicast mode. The processor may further be configured as follows. The method may further include: level identification information of the speed level is sent to the connected vehicle terminal at the opposite end. The processor may further be configured as follows. The method may further include: relevant driving information of the opposite end sent by a vehicle terminal at the opposite end is received. The operation of determining the corresponding reference signal transmission density according to the speed level to which the current driving speed belongs may include: the speed level is determined according to the current driving speed and the relevant driving information of the opposite end; and the corresponding reference signal transmission density is determined according to the determined speed level. The processor may further be configured as follows. The relevant driving information of the opposite end may include at least one of: a driving speed of the opposite end, a driving direction of the opposite end, or a relative driving speed. The processor may further be configured as follows. The method may further include: feedback information sent by a vehicle terminal at the opposite end is received; and in response to the feedback information meeting a preset increase condition, the speed level to which the current driving speed belongs is increased. The processor may further be configured as follows. The feedback information may include at least one of: link measurement information or a feedback signal for the reference signal; and the increase condition may include at least one of: the feedback signal indicating reception failure; the feedback signal indicating reception failure, and a number of consecutive failures reaching a preset failure number threshold; or link quality indicated by the link measurement information being lower than a preset quality threshold. A computer-readable storage medium has stored therein computer instructions that, when executed by a processor of a device, causes the device to implement the above method for transmitting the reference signal. The method includes: acquiring a current driving speed; determining a corresponding reference signal transmission density according to a speed level to which the current driving speed belongs; and sending a reference signal to a connected vehicle terminal at an opposite end according to the reference signal transmission density. The instructions in the storage medium may further include as follows. The operation of sending the reference signal to the connected vehicle terminal at the opposite end may include: the reference signal is sent to the connected vehicle terminal at the opposite end through a control channel in a unicast mode. The instructions in the storage medium may further include as follows. 
The method may further include: level identification information of the speed level is sent to the connected vehicle terminal at the opposite end. The instructions in the storage medium may further include as follows. The method may further include: relevant driving information of the opposite end sent by a vehicle terminal at the opposite end is received. The operation of determining the corresponding reference signal transmission density according to the speed level to which the current driving speed belongs may include: the speed level is determined according to the current driving speed and the relevant driving information of the opposite end; and the corresponding reference signal transmission density is determined according to the determined speed level. The instructions in the storage medium may further include as follows. The relevant driving information of the opposite end may include at least one of: a driving speed of the opposite end, a driving direction of the opposite end, or a relative driving speed. The instructions in the storage medium may further include as follows. The method may further include: feedback information sent by a vehicle terminal at the opposite end is received; and in response to the feedback information meeting a preset increase condition, the speed level to which the current driving speed belongs is increased. The instructions in the storage medium may further include as follows. The feedback information may include at least one of: link measurement information or a feedback signal for the reference signal; and the increase condition may include at least one of: the feedback signal indicating reception failure; the feedback signal indicating reception failure, and a number of consecutive failures reaching a preset failure number threshold; or link quality indicated by the link measurement information being lower than a preset quality threshold. Other implementation solutions of the present disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the present disclosure. This application is intended to cover any variations, uses, or adaptations of the present disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the present disclosure being indicated by the following claims. It will be appreciated that the present disclosure is not limited to the exact construction that has been described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from the scope thereof. It is intended that the scope of the present disclosure only be limited by the appended claims.
30,327
11863481
DESCRIPTION OF THE EMBODIMENTS The technical scheme of the disclosure is described below in further detail in conjunction with the drawings. It should be noted that the embodiments in the disclosure and the characteristics of the embodiments may be mutually combined arbitrarily if no conflict is incurred. Embodiment 1 Embodiment 1 illustrates an example of a flowchart of a first radio signal, as shown inFIG.1. In Embodiment 1, the UE in the disclosure first receives a target radio signal, then transmits a first radio signal in a first time resource pool, and finally monitors a first signaling in a second time resource pool in a first frequency domain resource; a channel measurement for the target radio signal is used for triggering a transmitting of the first radio signal; the first signaling is transmitted employing a first antenna port group, the first radio signal is used for determining a second antenna port group, and the first antenna port group is different from the second antenna port group; the first signaling is a physical layer signaling, and the first signaling is used for determining whether the first radio signal is correctly received. In one subembodiment, the target radio signal is received by the UE in the first frequency domain resource. In one subembodiment, the phrase monitoring a first signaling refers to blind decoding the first signaling. In one subembodiment, the first radio signal includes a BRR. In one subembodiment, the first radio signal includes a Physical Random Access Channel (PRACH). In one subembodiment, the first signaling includes a first field, and the first field is used for determining whether the first radio signal is correctly received. In one affiliated embodiment of the above subembodiment, the first field includes 1 bit, and the 1 bit is used for determining whether the first radio signal is correctly received. In one subembodiment, the first signaling is one Downlink Control Information (DCI). In one subembodiment, the first signaling is one downlink grant, or the first signaling is one uplink grant. In one subembodiment, the first radio signal includes a Physical Uplink Control Channel (PUCCH). In one subembodiment, the first frequency domain resource is deployed on unlicensed spectrum. In one subembodiment, the first frequency domain resource is one carrier. In one subembodiment, the first frequency domain resource is one Bandwidth Part (BWP). In one subembodiment, the first radio signal is transmitted in the first frequency domain resource. In one subembodiment, the first antenna port group includes P1 antenna port(s), the P1 is equal to 1 or the P1 is a positive integer greater than 1. In one subembodiment, the second antenna port group includes P2 antenna port(s), the P2 is equal to 1 or the P2 is a positive integer greater than 1. In one subembodiment, the phrase that the first antenna port group is different from the second antenna port group refers that: among antenna ports included in the first antenna port group, at least one antenna port does not belong to antenna ports included in the second antenna port group. In one subembodiment, the phrase that the first antenna port group is different from the second antenna port group refers that: among antenna ports included in the second antenna port group, at least one antenna port does not belong to antenna ports included in the first antenna port group. 
In one subembodiment, the phrase that the first antenna port group is different from the second antenna port group refers that: the first antenna port group corresponds to a first Reference Signal (RS) resource configuration index, the second antenna port group corresponds to a second RS resource configuration index, and the first RS resource configuration index is not equal to the second RS resource configuration index. In one affiliated embodiment of the above subembodiment, the RS is a Channel State Information Reference Signal (CSI-RS). In one subembodiment, the phrase that the first antenna port group is different from the second antenna port group refers that: the first antenna port group and the second antenna port group correspond to different transmitting beamforming vectors respectively. In one subembodiment, the phrase that the first antenna port group is different from the second antenna port group refers that: the first antenna port group and the second antenna port group are not spatially Quasi-Co-Located (QCLed). In one subembodiment, the first time resource pool includes a positive integer number of first time resource subpools, and the first time resource subpool includes a positive integer number of consecutive multicarrier symbols in time domain. In one subembodiment, the second time resource pool includes a positive integer number of second time resource subpools, and the second time resource subpool includes a positive integer number of consecutive multicarrier symbols in time domain. In one affiliated embodiment of the above subembodiment, the UE monitors the first signaling in the second time resource subpool included in a first time window. In one example of the above affiliated embodiment, the UE transmits the first radio signal in a first time unit, and the first time window is located behind the first time unit in time domain. In one example of the above affiliated embodiment, the UE transmits the first radio signal in a first time unit, the first time window and the first time unit have an interval not less than Ti ms, the Ti is fixed, or the Ti is configured through a higher layer signaling, and the Ti is a real number greater than 0. In one example of the above affiliated embodiment, the first time window has a fixed duration, or the duration of the first time window is configured through a higher layer signaling. In one subembodiment, the first time resource pool and the second time resource pool are correlated. In one affiliated embodiment of the above subembodiment, the phrase that the first time resource pool and the second time resource pool are correlated refers that: the first time resource pool includes a positive integer number of first time resource subpools, the second time resource pool includes a positive integer number of second time resource subpools, and any one of the first time resource subpools can find one second time resource subpool corresponding to it. In one affiliated embodiment of the above subembodiment, the second time resource pool is used for the base station in the disclosure to transmit a given Synchronization Signal Block (SSB), and the first time resource pool is used for receiving a random access request for the given SSB. In one example of the above affiliated embodiment, for the UE, multiantenna related receiving of the given SSB is used for determining multiantenna related transmitting of the first radio signal. 
In one exception of the above example, the phrase that multiantenna related receiving of the given SSB is used for determining multiantenna related transmitting of the first radio signal refers that: when transmitting the first radio signal, the UE employs the same spatial domain transmission filter as receiving the given SSB. In one subembodiment, the UE monitors the first signaling in a first frequency domain resource pool in the second time resource pool. In one affiliated embodiment of the above subembodiment, the first frequency domain resource pool corresponds to one Control Resource Set (CORESET). In one subembodiment, the phrase that an antenna port group #1 and an antenna port group #2 are not spatially QCLed refers that: any one antenna port in the antenna port group #1 and any one antenna port in the antenna port group #2 are not spatially QCLed. In one subembodiment, the phrase that an antenna port group #1 and an antenna port group #2 are not spatially QCLed refers that: full or partial large-scale properties of radio signals transmitted on the antenna port group #2 cannot be deduced from full or partial large-scale properties of radio signals transmitted on the antenna port group #1. In one subembodiment, the phrase that an antenna port group #1 and an antenna port group #2 are not spatially QCLed refers that: multiantenna related receiving of radio signals transmitted on the antenna port group #2 cannot be deduced from multiantenna related receiving of radio signals transmitted on the antenna port group #1. In one subembodiment, the phrase that an antenna port group #1 and an antenna port group #2 are not spatially QCLed refers that: multiantenna related transmitting of radio signals transmitted on the antenna port group #2 cannot be deduced from multiantenna related transmitting of radio signals transmitted on the antenna port group #1. In one subembodiment, the phrase that an antenna port group #1 and an antenna port group #2 are not spatially QCLed refers that: at least one QCL parameter of the antenna port group #2 cannot be deduced from at least one QCL parameter of the antenna port group #1. In one subembodiment, the given antenna port group corresponds to the first antenna port group in the disclosure, and the target antenna port group corresponds to the second antenna port group in the disclosure. In one subembodiment, the QCL parameter in the disclosure includes one or more of angle of arrival, angle of departure, spatial correlation, multiantenna related transmitting or multiantenna related receiving. In one subembodiment, the large-scale property in the disclosure includes one or more of delay spread, Doppler spread, Doppler shift, path loss or average gain. In one subembodiment, the UE in the disclosure needs to perform a process of channel detection before performing any transmission illustrated in the disclosure. In one affiliated embodiment of the above subembodiment, the channel detection is an energy detection. In one affiliated embodiment of the above subembodiment, the channel detection is an LBT, or the channel detection is a CCA. In one affiliated embodiment of the above subembodiment, the channel detection is a channel detection of the first frequency domain resource. 
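Purely as an illustrative sketch of the transmit-then-monitor behavior and the channel detection described above, the following Python fragment performs an energy-based channel detection before the first radio signal is transmitted and then derives a first time window, starting no earlier than Ti ms after the first time unit, in which the first signaling is monitored; all names, units, and threshold values are hypothetical and are not part of the disclosure.

```python
# Illustrative sketch: channel detection (e.g., energy-based LBT/CCA) on the
# first frequency domain resource before transmitting the first radio signal,
# followed by derivation of the first time window used to monitor the first
# signaling. Thresholds and names are hypothetical.

ENERGY_THRESHOLD = -72.0  # dBm, hypothetical clear-channel threshold


def channel_is_clear(measured_energy_dbm):
    """Energy detection on the first frequency domain resource."""
    return measured_energy_dbm < ENERGY_THRESHOLD


def transmit_and_monitor(measured_energy_dbm, t_first_tx_ms, ti_ms, window_ms):
    """If the channel is clear, transmit the first radio signal and return the
    first time window (start, end) in which the first signaling is monitored:
    the window starts at least Ti ms after the first time unit."""
    if not channel_is_clear(measured_energy_dbm):
        return None                       # channel busy: defer transmission
    window_start = t_first_tx_ms + ti_ms  # interval not less than Ti ms
    return (window_start, window_start + window_ms)


# Example: a channel measured at -80 dBm is clear; with Ti = 4 ms and a 10 ms
# window, the UE would monitor the first signaling between 104 ms and 114 ms.
print(transmit_and_monitor(-80.0, t_first_tx_ms=100.0, ti_ms=4.0, window_ms=10.0))
```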
In one subembodiment, the multicarrier symbol in the disclosure is one of an Orthogonal Frequency Division Multiplexing (OFDM) symbol, a Single-Carrier Frequency Division Multiple Access (SC-FDMA) symbol, a Filter Bank Multi Carrier (FBMC) symbol, an OFDM symbol including a CP or a Discrete Fourier Transform Spreading Orthogonal Frequency Division Multiplexing (DFT-s-OFDM) symbol including a CP. Embodiment 2 Embodiment 2 illustrates an example of a diagram of a network architecture according to the disclosure, as shown inFIG.2.FIG.2is a diagram illustrating a network architecture200of NR 5G, Long-Term Evolution (LTE) and Long-Term Evolution Advanced (LTE-A) systems. The NR 5G or LTE network architecture200may be called an Evolved Packet System (EPS)200or some other appropriate terms. The EPS200may include one or more UEs201, a Next Generation-Radio Access Network (NG-RAN)202, an Evolved Packet Core/5G-Core Network (EPC/5G-CN)210, a Home Subscriber Server (HSS)220and an Internet service230. The EPS may be interconnected with other access networks. For simplicity of description, these entities/interfaces are not shown. As shown inFIG.2, the EPS provides packet switching services. Those skilled in the art will easily understand that various concepts presented throughout the disclosure can be extended to networks providing circuit switching services or other cellular networks. The NG-RAN includes a NR node (gNB)203and other gNBs204. The gNB203provides UE201oriented user plane and control plane protocol terminations. The gNB203may be connected to other gNBs204via an Xn interface (for example, backhaul). The gNB203may be called a base station, a base transceiver station, a radio base station, a radio transceiver, a transceiver function, a Basic Service Set (BSS), an Extended Service Set (ESS), a TRP or some other appropriate terms. The gNB203provides an access point of the EPC/5G-CN210for the UE201. Examples of UE201include cellular phones, smart phones, Session Initiation Protocol (SIP) phones, laptop computers, Personal Digital Assistants (PDAs), satellite radios, non-terrestrial network base station communications, satellite mobile communications, Global Positioning Systems (GPSs), multimedia devices, video devices, digital audio players (for example, MP3 players), cameras, games consoles, unmanned aerial vehicles, air vehicles, narrow-band physical network equipment, machine-type communication equipment, land vehicles, automobiles, wearable equipment, or any other devices having similar functions. Those skilled in the art may also call the UE201a mobile station, a subscriber station, a mobile unit, a subscriber unit, a wireless unit, a remote unit, a mobile device, a wireless device, a radio communication device, a remote device, a mobile subscriber station, an access terminal, a mobile terminal, a wireless terminal, a remote terminal, a handset, a user proxy, a mobile client, a client, or some other appropriate terms. The gNB203is connected to the EPC/5G-CN210via an S1/NG interface. The EPC/5G-CN210includes a Mobility Management Entity/Authentication Management Field/User Plane Function (MME/AMF/UPF)211, other MMEs/AMFs/UPFs214, a Service Gateway (S-GW)212and a Packet Data Network Gateway (P-GW)213. The MME/AMF/UPF211is a control node for processing a signaling between the UE201and the EPC/5G-CN210. Generally, the MME/AMF/UPF211provides bearer and connection management. 
All user Internet Protocol (IP) packets are transmitted through the S-GW212. The S-GW212is connected to the P-GW213. The P-GW213provides UE IP address allocation and other functions. The P-GW213is connected to the Internet service230. The Internet service230includes IP services corresponding to operators, specifically including internet, intranet, IP Multimedia Subsystems (IP IMSs) and PS Streaming Services (PSSs). In one subembodiment, the UE201corresponds to the UE in the disclosure. In one subembodiment, the gNB203corresponds to the base station in the disclosure. In one subembodiment, the UE201supports wireless communications of data transmission on unlicensed spectrum. In one subembodiment, the gNB203supports wireless communications of data transmission on unlicensed spectrum. In one subembodiment, the UE201supports massive MIMO wireless communications. In one subembodiment, the gNB203supports massive MIMO wireless communications. Embodiment 3 Embodiment 3 illustrates a diagram of an embodiment of a radio protocol architecture of a user plane and a control plane according to the disclosure, as shown inFIG.3. FIG.3is a diagram illustrating an embodiment of a radio protocol architecture of a user plane and a control plane. InFIG.3, the radio protocol architecture of a UE and a base station (gNB or eNB) is represented by three layers, which are a Layer 1, a Layer 2 and a Layer 3 respectively. The Layer 1 (L1 layer) is the lowest layer and implements various PHY (physical layer) signal processing functions. The L1 layer will be referred to herein as the PHY301. The Layer 2 (L2 layer)305is above the PHY301, and is responsible for the link between the UE and the gNB over the PHY301. In the user plane, the L2 layer305includes a Medium Access Control (MAC) sublayer302, a Radio Link Control (RLC) sublayer303, and a Packet Data Convergence Protocol (PDCP) sublayer304, which are terminated at the gNB on the network side. Although not shown, the UE may include several higher layers above the L2 layer305, including a network layer (i.e. IP layer) terminated at the P-GW on the network side and an application layer terminated at the other end (i.e. a peer UE, a server, etc.) of the connection. The PDCP sublayer304provides multiplexing between different radio bearers and logical channels. The PDCP sublayer304also provides header compression for higher-layer packets so as to reduce radio transmission overheads. The PDCP sublayer304provides security by encrypting packets and provides support for UE handover between gNBs. The RLC sublayer303provides segmentation and reassembling of higher-layer packets, retransmission of lost packets, and reordering of packets so as to compensate for out-of-order reception due to HARQ. The MAC sublayer302provides multiplexing between logical channels and transport channels. The MAC sublayer302is also responsible for allocating various radio resources (i.e., resource blocks) in one cell among UEs. The MAC sublayer302is also in charge of HARQ operations. In the control plane, the radio protocol architecture of the UE and the gNB is almost the same as the radio protocol architecture in the user plane on the PHY301and the L2 layer305, with the exception that there is no header compression function for the control plane. The control plane also includes a Radio Resource Control (RRC) sublayer306in the layer 3 (L3). The RRC sublayer306is responsible for acquiring radio resources (i.e. 
radio bearers) and configuring lower layers using an RRC signaling between the gNB and the UE. In one subembodiment, the radio protocol architecture shown inFIG.3is applicable to the UE in the disclosure. In one subembodiment, the radio protocol architecture shown inFIG.3is applicable to the base station in the disclosure. In one subembodiment, the first signaling in the disclosure is generated on the PHY301. In one subembodiment, the second signaling in the disclosure is generated on the MAC sublayer302, or generated on the RRC sublayer306. In one subembodiment, the first radio signal in the disclosure is generated on the PHY301, or the first radio signal in the disclosure is generated on the MAC sublayer302. In one subembodiment, the third radio signal in the disclosure is generated on the PHY301. In one subembodiment, the fourth radio signal in the disclosure is generated on the PHY301, or the fourth radio signal in the disclosure is generated on the MAC sublayer302. In one subembodiment, the fifth radio signal in the disclosure is generated on the PHY301, or the fifth radio signal in the disclosure is generated on the MAC sublayer302. Embodiment 4 Embodiment 4 illustrates a diagram of a base station and a UE according to the disclosure, as shown inFIG.4.FIG.4is a block diagram of a gNB410in communication with a UE450in an access network. The base station410includes a controller/processor440, a memory430, a receiving processor412, a transmitting processor415, a transmitter/receiver416and an antenna420. The UE450includes a controller/processor490, a memory480, a data source467, a transmitting processor455, a receiving processor452, a transmitter/receiver456and an antenna460. In Uplink (UL) transmission, processes relevant to the base station410include the following. The receiver416receives a radio-frequency signal via the corresponding antenna420, converts the received radio-frequency signal into a baseband signal and provides the baseband signal to the receiving processor412. The receiving processor412performs various signal receiving processing functions of an L1 layer (that is, PHY), including decoding, de-interleaving, descrambling, demodulation, extraction of physical layer control signaling, etc. The controller/processor440performs functions of the L2 layer, and is connected to the memory430that stores program codes and data. The controller/processor440provides de-multiplexing between a logical channel and a transport channel, packet reassembling, decryption, header decompression and control signaling processing to recover a higher-layer packet from the UE450. The higher-layer packet from the UE450may be provided to the core network. The beam manager441determines to receive a first radio signal in a first time resource pool and to transmit a first signaling in a second time resource pool in a first frequency domain resource, and transmits the result to the controller/processor440. In UL transmission, processes relevant to the UE450include the following. The data source467provides a higher-layer packet to the controller/processor490. The data source467represents all protocol layers above the L2 layer. The transmitter456converts a baseband signal into a radio-frequency signal, provides the radio-frequency signal to the corresponding antenna460and transmits the radio-frequency signal via the antenna460. 
The transmitting processor455performs various signal transmitting processing functions of an L1 layer (that is, PHY), including encoding, interleaving, scrambling, modulation, etc. The controller/processor490provides header compression, encryption, packet segmentation and reordering, multiplexing between a logical channel and a transport channel based on radio resource allocation of the gNB410, to implement the L2 functions used for the user plane and the control plane. The controller/processor490is also in charge of HARQ operation, retransmission of lost packets, and signalings to the gNB410. The beam manager471determines to transmit a first radio signal in a first time resource pool and to monitor a first signaling in a second time resource pool in a first frequency domain resource, and transmits the result to the controller/processor490. In Downlink (DL) transmission, processes relevant to the base station410include the following. A higher-layer packet is provided to the controller/processor440. The controller/processor440provides header compression, encryption, packet segmentation and reordering, multiplexing and de-multiplexing between a logical channel and a transport channel, to implement L2 protocols used for the user plane and the control plane. The higher-layer packet may include data or control information, for example, Downlink Shared Channel (DL-SCH). The controller/processor440is connected to the memory430that stores program codes and data. The memory430may be a computer readable medium. The controller/processor440includes a scheduling unit for transmission requirements, and the scheduling unit is configured to schedule air interface resources corresponding to transmission requirements. The beam manager441determines to receive a first radio signal in a first time resource pool and to transmit a first signaling in a second time resource pool in a first frequency domain resource, and transmits the result to the controller/processor440. The transmitting processor415receives a bit stream output from the controller/processor440, and performs various signal transmitting processing functions of L1 layer (that is, PHY), including encoding, interleaving, scrambling, modulation, power control/allocation, generation of physical layer control signalings (including PBCH, PDCCH, PHICH, PCFICH, reference signal), etc. The transmitter416is configured to convert the baseband signal provided by the transmitting processor415into a radio-frequency signal and transmit the radio-frequency signal via the antenna420. Each transmitter416performs sampling processing on respective input symbol streams to obtain respective sampled signal streams. Each transmitter416performs further processing (for example, digital-to-analogue conversion, amplification, filtering, up conversion, etc.) on respective sampled streams to obtain a downlink signal. In Downlink (DL) transmission, processes relevant to the UE450include the following. The receiver456is configured to convert a radio-frequency signal received via the antenna460into a baseband signal and provide the baseband signal to the receiving processor452. The receiving processor452performs various signal receiving processing functions of an L1 layer (that is, PHY), including decoding, de-interleaving, descrambling, demodulation, extraction of physical layer control signaling, etc. 
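A minimal sketch of two of the L1 functions named above, scrambling/descrambling and modulation/demodulation, is given below. It is illustrative only: the Gold-sequence scrambler, channel coding and interleaving of a real PHY are omitted, and the simple seeded pseudo-random scrambling sequence is an assumption made for the example.

```python
# Illustrative L1 sketch: scrambling + QPSK modulation on the transmit side,
# and the matching demodulation + descrambling on the receive side.
import numpy as np

def scramble(bits: np.ndarray, seed: int) -> np.ndarray:
    rng = np.random.default_rng(seed)           # stand-in for a scrambling-sequence generator
    sequence = rng.integers(0, 2, size=bits.size)
    return bits ^ sequence                      # XOR with the scrambling sequence

def qpsk_modulate(bits: np.ndarray) -> np.ndarray:
    pairs = bits.reshape(-1, 2)                 # two bits per QPSK symbol
    return ((1 - 2 * pairs[:, 0]) + 1j * (1 - 2 * pairs[:, 1])) / np.sqrt(2)

def qpsk_demodulate(symbols: np.ndarray) -> np.ndarray:
    bits = np.empty((symbols.size, 2), dtype=np.int64)
    bits[:, 0] = (symbols.real < 0).astype(np.int64)
    bits[:, 1] = (symbols.imag < 0).astype(np.int64)
    return bits.reshape(-1)

if __name__ == "__main__":
    payload = np.random.default_rng(0).integers(0, 2, size=32)
    tx_symbols = qpsk_modulate(scramble(payload, seed=7))      # transmitting-processor side
    rx_bits = scramble(qpsk_demodulate(tx_symbols), seed=7)    # receiving-processor side (descramble = re-scramble)
    assert np.array_equal(rx_bits, payload)
    print("recovered", rx_bits.size, "bits without error")
```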
The controller/processor490receives a bit stream output from the receiving processor452, and provides header decompression, decryption, packet segmentation and reordering, multiplexing and de-multiplexing between a logical channel and a transport channel, to implement L2 protocols used for the user plane and the control plane. The beam manager471determines to transmit a first radio signal in a first time resource pool and to monitor a first signaling in a second time resource pool in a first frequency domain resource, and transmits the result to the controller/processor490. The controller/processor490is connected to the memory480that stores program codes and data. The memory480may be a computer readable medium. In one subembodiment, the UE450includes at least one processor and at least one memory. The at least one memory includes computer program codes. The at least one memory and the computer program codes are configured to be used in collaboration with the at least one processor. The UE450at least receives a target radio signal, transmits a first radio signal in a first time resource pool, and monitors a first signaling in a second time resource pool in a first frequency domain resource; a channel measurement for the target radio signal is used for triggering a transmitting of the first radio signal; the first signaling is transmitted employing a first antenna port group, the first radio signal is used for determining a second antenna port group, and the first antenna port group is different from the second antenna port group; the first signaling is a physical layer signaling, and the first signaling is used for determining whether the first radio signal is correctly received. In one subembodiment, the UE450includes a memory that stores a computer readable instruction program. The computer readable instruction program generates an action when executed by at least one processor. The action includes: receiving a target radio signal, transmitting a first radio signal in a first time resource pool, and monitoring a first signaling in a second time resource pool in a first frequency domain resource; a channel measurement for the target radio signal is used for triggering a transmitting of the first radio signal; the first signaling is transmitted employing a first antenna port group, the first radio signal is used for determining a second antenna port group, and the first antenna port group is different from the second antenna port group; the first signaling is a physical layer signaling, and the first signaling is used for determining whether the first radio signal is correctly received. In one subembodiment, the gNB410includes at least one processor and at least one memory. The at least one memory includes computer program codes. The at least one memory and the computer program codes are configured to be used in collaboration with the at least one processor. 
The gNB410at least transmits a target radio signal, receives a first radio signal in a first time resource pool, and transmits a first signaling in a second time resource pool in a first frequency domain resource; a channel measurement for the target radio signal is used for triggering a transmitting of the first radio signal; the first signaling is transmitted employing a first antenna port group, the first radio signal is used for determining a second antenna port group, and the first antenna port group is different from the second antenna port group; the first signaling is a physical layer signaling, and the first signaling is used for determining whether the first radio signal is correctly received. In one embodiment, the gNB410includes a memory that stores a computer readable instruction program. The computer readable instruction program generates an action when executed by at least one processor. The action includes transmitting a target radio signal, receiving a first radio signal in a first time resource pool, and transmitting a first signaling in a second time resource pool in a first frequency domain resource; a channel measurement for the target radio signal is used for triggering a transmitting of the first radio signal; the first signaling is transmitted employing a first antenna port group, the first radio signal is used for determining a second antenna port group, and the first antenna port group is different from the second antenna port group; the first signaling is a physical layer signaling, and the first signaling is used for determining whether the first radio signal is correctly received. In one subembodiment, the UE450corresponds to the UE in the disclosure. In one subembodiment, the gNB410corresponds to the base station in the disclosure. In one subembodiment, the beam manager471is used for determining to transmit a first radio signal in a first time resource pool and monitor a first signaling in a second time resource pool in a first frequency domain resource. In one subembodiment, the beam manager471is used for determining to receive a third radio signal in a third time resource pool in a first frequency domain resource. In one subembodiment, the beam manager471is used for determining to transmit a fourth radio signal in a first time resource pool. In one subembodiment, the beam manager471is used for determining to transmit a fifth radio signal in a first time resource pool. In one subembodiment, at least the former two of the receiver456, the receiving processor452and the controller/processor490are used for receiving a target radio signal. In one subembodiment, at least the former two of the transmitter456, the transmitting processor455and the controller/processor490are used for transmitting a first radio signal in a first time resource pool. In one subembodiment, at least the former two of the receiver456, the receiving processor452and the controller/processor490are used for monitoring a first signaling in a second time resource pool in a first frequency domain resource. In one subembodiment, at least the former two of the receiver456, the receiving processor452and the controller/processor490are used for monitoring a second signaling in a third time resource pool in a first frequency domain resource. In one subembodiment, at least the former two of the receiver456, the receiving processor452and the controller/processor490are used for receiving a third radio signal in a third time resource pool in a first frequency domain resource. 
In one subembodiment, at least the former two of the transmitter456, the transmitting processor455and the controller/processor490are used for transmitting a fourth radio signal in a first time resource pool. In one subembodiment, at least the former two of the transmitter456, the transmitting processor455and the controller/processor490are used for transmitting a fifth radio signal in a first time resource pool. In one subembodiment, at least the former two of the receiver456, the receiving processor452and the controller/processor490are used for receiving a candidate radio signal in a first frequency domain resource. In one subembodiment, at least the former two of the receiver456, the receiving processor452and the controller/processor490are used for receiving first information and second information respectively. In one subembodiment, the beam manager441is used for determining to receive a first radio signal in a first time resource pool and transmit a first signaling in a second time resource pool in a first frequency domain resource. In one subembodiment, the beam manager441is used for determining to transmit a third radio signal in a third time resource pool in a first frequency domain resource. In one subembodiment, the beam manager441is used for determining to receive a fourth radio signal in a first time resource pool. In one subembodiment, the beam manager441is used for determining to receive a fifth radio signal in a first time resource pool. In one subembodiment, at least the former two of the transmitter416, the transmitting processor415and the controller/processor440are used for transmitting a target radio signal. In one subembodiment, at least the former two of the receiver416, the receiving processor412and the controller/processor440are used for receiving a first radio signal in a first time resource pool. In one subembodiment, at least the former two of the transmitter416, the transmitting processor415and the controller/processor440are used for transmitting a first signaling in a second time resource pool in a first frequency domain resource. In one subembodiment, at least the former two of the transmitter416, the transmitting processor415and the controller/processor440are used for transmitting a second signaling in a third time resource pool in a first frequency domain resource. In one subembodiment, at least the former two of the transmitter416, the transmitting processor415and the controller/processor440are used for transmitting a third radio signal in a third time resource pool in a first frequency domain resource. In one subembodiment, at least the former two of the receiver416, the receiving processor412and the controller/processor440are used for receiving a fourth radio signal in a first time resource pool. In one subembodiment, at least the former two of the receiver416, the receiving processor412and the controller/processor440are used for receiving a fifth radio signal in a first time resource pool. In one subembodiment, at least the former two of the transmitter416, the transmitting processor415and the controller/processor440are used for transmitting a candidate radio signal in a first frequency domain resource. In one subembodiment, at least the former two of the transmitter416, the transmitting processor415and the controller/processor440are used for transmitting first information and second information respectively. Embodiment 5 Embodiment 5 illustrates an example of a flowchart of a first signaling, as shown inFIG.5. 
InFIG.5, a base station N1is a maintenance base station for a serving cell of a UE U2. Steps in box F0and box F1are optional. The base station N1transmits first information and second information respectively in S10, transmits a candidate radio signal in a first frequency domain resource in S11, transmits a target radio signal in S12, receives a first radio signal in a first time resource pool in S13, transmits a first signaling in a second time resource pool in the first frequency domain resource in S14, transmits a second signaling in a third time resource pool in the first frequency domain resource in S15, and transmits a third radio signal in the third time resource pool in the first frequency domain resource in S16. The UE U2receives first information and second information respectively in S20, receives a candidate radio signal in a first frequency domain resource in S21, receives a target radio signal in S22, transmits a first radio signal in a first time resource pool in S23, monitors a first signaling in a second time resource pool in the first frequency domain resource in S24, monitors a second signaling in a third time resource pool in the first frequency domain resource in S25, and receives a third radio signal in the third time resource pool in the first frequency domain resource in S26. In Embodiment 5, a channel measurement for the target radio signal is used for triggering a transmitting of the first radio signal; the first signaling is transmitted employing a first antenna port group, the first radio signal is used for determining a second antenna port group, and the first antenna port group is different from the second antenna port group; the first signaling is a physical layer signaling, and the first signaling is used for determining whether the first radio signal is correctly received; the second signaling is transmitted employing the second antenna port group, and the second signaling is used for determining that the second antenna port group is acknowledged by the base station N1; the first signaling determines that the first radio signal is correctly received and the UE U2detects the second signaling in the third time resource pool in the first frequency domain resource, and the third radio signal is transmitted employing the second antenna port group. In one subembodiment, S11and S12of the base station N1in Embodiment 5 may be interchanged in the sequence. In one subembodiment, S21and S22of the UE U2in Embodiment 5 may be interchanged in the sequence. In one subembodiment, the first radio signal is one PRACH, and the second signaling is one DCI scrambled with a Random Access Radio Network Temporary Identifier (RA-RNTI). In one subembodiment, the second signaling is one Medium Access Control (MAC) signaling. In one subembodiment, the second signaling is one BRR response. In one subembodiment, the UE U2monitors the second signaling in a second frequency domain resource pool in the third time resource pool. In one affiliated embodiment of the above subembodiment, the second frequency domain resource pool corresponds to one CORESET. In one subembodiment, the third time resource pool is related to the second antenna port group. In one subembodiment, the second antenna port group is used for determining the third time resource pool. 
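The UE-side ordering of steps S20 to S26 and the branch conditions described above can be summarized as a minimal, runnable control-flow sketch. The Radio class, its canned return values and the threshold are purely hypothetical stand-ins for UE baseband behaviour; only the step ordering and the trigger/branch conditions come from Embodiment 5.

```python
# A minimal control-flow sketch of the UE U2 side of Embodiment 5 (S20-S26).
from dataclasses import dataclass

@dataclass
class FirstSignaling:
    ack: bool   # whether the first radio signal was reported as correctly received

class Radio:
    """Stubbed UE radio; every method is an illustrative placeholder."""
    trigger_threshold = -100.0  # dBm, assumed value

    def receive_first_and_second_information(self): pass                  # S20 (box F0, optional)
    def receive_candidate_radio_signal(self, freq): pass                  # S21 (box F1, optional)
    def receive_target_radio_signal(self): return object()                # S22
    def measure_rsrp(self, signal): return -110.0                         # assumed poor measurement
    def transmit_first_radio_signal(self, pool): pass                     # S23
    def monitor_first_signaling(self, pool): return FirstSignaling(True)  # S24 (first antenna port group)
    def monitor_second_signaling(self, pool): return object()             # S25 (second antenna port group)
    def receive_third_radio_signal(self, pool): return "third radio signal"  # S26

def ue_embodiment5_flow(radio: Radio):
    radio.receive_first_and_second_information()
    radio.receive_candidate_radio_signal("first frequency domain resource")
    target = radio.receive_target_radio_signal()
    # A channel measurement for the target radio signal triggers the first radio signal.
    if radio.measure_rsrp(target) < radio.trigger_threshold:
        radio.transmit_first_radio_signal("first time resource pool")
        first = radio.monitor_first_signaling("second time resource pool")
        if first is not None and first.ack:
            second = radio.monitor_second_signaling("third time resource pool")
            if second is not None:
                # Second antenna port group acknowledged: receive the third radio signal with it.
                return radio.receive_third_radio_signal("third time resource pool")
    return None

if __name__ == "__main__":
    print(ue_embodiment5_flow(Radio()))
```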
In one subembodiment, the phrase that the second signaling is used for determining that the second antenna port group is acknowledged by the base station N1refers that: the base station N1informs the UE U2through the second signaling that subsequent schedulings for the UE U2will be transmitted using the second antenna port group. In one subembodiment, the phrase that the second signaling is used for determining that the second antenna port group is acknowledged by the base station N1refers that: the base station N1informs, through the second signaling, the UE U2to employ the second antenna port group to receive subsequent schedulings for the UE U2. In one subembodiment, the third radio signal is one Physical Downlink Shared Channel (PDSCH). In one subembodiment, the third radio signal is one DCI. In one subembodiment, the third radio signal is one downlink grant, or the third radio signal is one uplink grant. In one subembodiment, the third time resource pool includes a positive integer number of third time resource subpools, the UE monitors the second signaling in a second frequency domain resource pool in one third time resource subpool, and receives the third radio signal in the second frequency domain resource pool in another third time resource subpool. In one subembodiment, before transmitting the first signaling in a second time resource pool in a first frequency domain resource, the base station N1performs a channel detection of the first frequency domain resource employing the first antenna port group, and determines that the first frequency domain resource is idle. In one affiliated embodiment of the above subembodiment, the above channel detection is an energy detection. In one affiliated embodiment of the above subembodiment, the above channel detection is a process of LBT. In one affiliated embodiment of the above subembodiment, the above channel detection is a process of Clear Channel Assessment (CCA). In one affiliated embodiment of the above subembodiment, the phrase that performing a channel detection of the first frequency domain resource employing the first antenna port group refers to: performing a channel detection of the first frequency domain resource employing a beamforming vector corresponding to the first antenna port group. In one affiliated embodiment of the above subembodiment, the phrase that performing a channel detection of the first frequency domain resource employing the first antenna port group refers to: performing a channel detection of the first frequency domain resource employing at least one of antenna ports included in the first antenna port group. In one affiliated embodiment of the above subembodiment, the phrase that performing a channel detection of the first frequency domain resource employing the first antenna port group refers that: judging whether the first frequency domain resource is idle employing a given receiving energy, wherein the given receiving energy refers to a receiving energy of at least one of antenna ports included in the first antenna port group in a frequency domain resource corresponding to the first frequency domain resource. In one subembodiment, before transmitting a second signaling in a third time resource pool in a first frequency domain resource, the base station N1performs a channel detection of the first frequency domain resource employing the second antenna port group, and determines that the first frequency domain resource is idle. 
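The affiliated embodiments that follow refine this channel detection as an energy detection, an LBT procedure or a CCA. A minimal energy-detection sketch is given here; the sample values, the threshold and the worst-case-port rule are assumptions, and only "measure the receiving energy on the antenna port group and declare the first frequency domain resource idle if it is low enough" comes from the description.

```python
# Illustrative energy-detection sketch for the channel detection described above.
import numpy as np

def channel_is_idle(port_group_samples: np.ndarray, threshold_dbm: float = -72.0) -> bool:
    """port_group_samples: complex baseband samples per antenna port, shape (ports, samples)."""
    power_per_port = np.mean(np.abs(port_group_samples) ** 2, axis=1)   # linear power per antenna port
    power_dbm = 10.0 * np.log10(np.max(power_per_port)) + 30.0          # worst-case port, watts -> dBm
    return power_dbm < threshold_dbm

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    quiet = (rng.standard_normal((2, 1024)) + 1j * rng.standard_normal((2, 1024))) * 1e-8
    print("first frequency domain resource idle:", channel_is_idle(quiet))
```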
In one affiliated embodiment of the above subembodiment, the above channel detection is an energy detection. In one affiliated embodiment of the above subembodiment, the above channel detection is a process of LBT. In one affiliated embodiment of the above subembodiment, the above channel detection is a process of CCA. In one affiliated embodiment of the above subembodiment, the phrase that performing a channel detection of the first frequency domain resource employing the second antenna port group refers to: performing a channel detection of the first frequency domain resource employing a beamforming vector corresponding to the second antenna port group. In one affiliated embodiment of the above subembodiment, the phrase that performing a channel detection of the first frequency domain resource employing the second antenna port group refers to: performing a channel detection of the first frequency domain resource employing at least one of antenna ports included in the second antenna port group. In one affiliated embodiment of the above subembodiment, the phrase that performing a channel detection of the first frequency domain resource employing the second antenna port group refers to: judging whether the first frequency domain resource is idle employing a given receiving energy, wherein the given receiving energy refers to a receiving energy of at least one of antenna ports included in the second antenna port group in a frequency domain resource corresponding to the first frequency domain resource. In one subembodiment, the first information is used for determining multiantenna related transmitting of the target radio signal. In one subembodiment, the first information is used for determining frequency domain resources occupied by the target radio signal. In one subembodiment, the first information is used for determining time domain resources occupied by the target radio signal. In one subembodiment, the first information is configured through an RRC signaling. In one subembodiment, the second information is used for determining multiantenna related transmitting of the candidate radio signal. In one subembodiment, the second information is used for determining frequency domain resources occupied by the candidate radio signal. In one subembodiment, the second information is used for determining time domain resources occupied by the candidate radio signal. In one subembodiment, the second information is configured through an RRC signaling. In one subembodiment, the air interface is wireless. In one subembodiment, the air interface includes a wireless channel. In one subembodiment, the air interface is an interface between a base station and the UE. In one subembodiment, the air interface is an Uu interface. In one subembodiment, the air interface corresponds to the wireless channel between UE201and NR node B203shown inFIG.2. Embodiment 6 Embodiment 6 illustrates an example of a flowchart of a fourth radio signal, as shown inFIG.6. InFIG.6, a base station N3is a maintenance base station for a serving cell of a UE U4. The base station N3receives a fourth radio signal in a first time resource pool. The UE U4transmits a fourth radio signal in a first time resource pool. In Embodiment 6, the first signaling in the disclosure determines that the first radio signal in the disclosure is not correctly received, and the fourth radio signal is used for determining the second antenna port group in the disclosure. 
In one subembodiment, the fourth radio signal is a retransmission of the first radio signal in the disclosure. In one subembodiment, the first signaling determines that the first radio signal is not correctly received and the UE does not detect the second signaling in the third time resource pool in the first frequency domain resource. In one affiliated embodiment of the above subembodiment, the base station N3cannot determine the second antenna port group through the first radio signal, thus cannot transmit the second signaling through the second antenna port group, then the base station N3drops transmitting of the second signaling. In one affiliated embodiment of the above subembodiment, before transmitting the second signaling, the base station N3performs a channel detection of the first frequency domain resource employing the second antenna port group, the first frequency domain resource is busy, and the base station N3drops transmitting of the second signaling. Embodiment 7 Embodiment 7 illustrates an example of a flowchart of a fifth radio signal, as shown inFIG.7. InFIG.7, a base station N5is a maintenance base station for a serving cell of a UE U6. The base station N5performs a channel detection of the first frequency domain resource employing the second antenna port group, determines that the first frequency domain resource is busy and drops transmitting of the second signaling in S50, and receives a fifth radio signal in the first time resource pool in S51. The UE U6monitors a second signaling in a third time resource pool in a first frequency domain resource and does not detect the second signaling in S60, and transmits a fifth radio signal in the first time resource pool in S61. In Embodiment 7, the first signaling in the disclosure determines that the first radio signal in the disclosure is correctly received and the UE U6does not detect the second signaling in the third time resource pool in the first frequency domain resource, the fifth radio signal is used for determining a third antenna port group, and the third antenna port group is different from the second antenna port group. In one subembodiment, the phrase that the third antenna port group is different from the second antenna port group refers that: among antenna ports included in the third antenna port group, at least one antenna port does not belong to antenna ports included in the second antenna port group. In one subembodiment, the phrase that the third antenna port group is different from the second antenna port group refers that: among antenna ports included in the second antenna port group, at least one antenna port does not belong to antenna ports included in the third antenna port group. In one subembodiment, the phrase that the third antenna port group is different from the second antenna port group refers that: the third antenna port group corresponds to a third RS resource configuration index, the second antenna port group corresponds to a second RS resource configuration index, and the third RS resource configuration index is not equal to the second RS resource configuration index. In one subembodiment, the phrase that the third antenna port group is different from the second antenna port group refers that: the third antenna port group and the second antenna port group correspond to different transmitting beamforming vectors respectively. 
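Taken together, Embodiments 5 through 7 define which signal the UE transmits next depending on whether the first signaling reports the first radio signal as correctly received and whether the second signaling is detected. The sketch below captures only that decision logic; the function name and the returned strings are illustrative placeholders, not terms defined by the disclosure.

```python
# A minimal decision sketch combining the outcomes of Embodiments 5-7.
def next_ue_action(first_signaling_ack: bool, second_signaling_detected: bool) -> str:
    if not first_signaling_ack:
        # Embodiment 6: the first radio signal was not correctly received ->
        # the fourth radio signal (a retransmission) again indicates the second antenna port group.
        return "transmit fourth radio signal in the first time resource pool"
    if not second_signaling_detected:
        # Embodiment 7: correctly received, but the second signaling was dropped
        # (e.g. the channel detection on the second antenna port group found the
        # first frequency domain resource busy) -> fall back to a third antenna port group.
        return "transmit fifth radio signal indicating a third antenna port group"
    # Embodiment 5: the second antenna port group is acknowledged -> use it for the third radio signal.
    return "receive third radio signal employing the second antenna port group"

if __name__ == "__main__":
    for ack in (False, True):
        for detected in (False, True):
            print(ack, detected, "->", next_ue_action(ack, detected))
```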
In one subembodiment, the phrase that the third antenna port group is different from the second antenna port group refers that: the third antenna port group and the second antenna port group are not spatially QCLed. In one subembodiment, the channel detection is an energy detection. In one subembodiment, the channel detection is a process of LBT. In one subembodiment, the channel detection is a CCA. In one subembodiment, the phrase that performing a channel detection of the first frequency domain resource employing the second antenna port group refers to: performing a channel detection of the first frequency domain resource employing a beamforming vector corresponding to the second antenna port group. In one subembodiment, the phrase that performing a channel detection of the first frequency domain resource employing the second antenna port group refers to: performing a channel detection of the first frequency domain resource employing at least one of antenna ports included in the second antenna port group. In one subembodiment, the phrase that performing a channel detection of the first frequency domain resource employing the second antenna port group refers to: judging whether the first frequency domain resource is idle employing a given receiving energy, wherein the given receiving energy refers to a receiving energy of at least one of antenna ports included in the second antenna port group in a frequency domain resource corresponding to the first frequency domain resource. Embodiment 8 Embodiment 8 illustrates an example of a diagram of a first time resource pool and a second time resource pool, as shown inFIG.8. In Embodiment 8, the first time resource pool includes M1 first time resource subpools, and the second time resource pool includes M2 second time resource subpools; any one of the M1 first time resource subpools includes a positive integer number of consecutive multicarrier symbols in time domain, and any one of the M2 second time resource subpools includes a positive integer number of consecutive multicarrier symbols in time domain. In one subembodiment, for the UE in the disclosure, the M2 second time resource subpools are used for receiving a given SSB. In one affiliated embodiment of the above subembodiment, for the UE in the disclosure, the M1 first time resource subpools are used for transmitting a given random access request for the SSB. In one affiliated embodiment of the above subembodiment, multiantenna related receiving of the given SSB is used for determining multiantenna related transmitting of the given random access request. In one affiliated embodiment of the above subembodiment, an index corresponding to the given SSB is used for determining positions of the M2 second time resource subpools in time domain. In one subembodiment, positions of the M1 first time resource subpools in time domain are used for determining positions of the M2 second time resource subpools in time domain. In one subembodiment, the M1 is equal to the M2, and the M1 first time resource subpools are one-to-one corresponding to the M2 second time resource subpools. In one subembodiment, the M2 is equal to a product of P and M1, the P is a positive integer greater than 1, P consecutive second time resource subpools among M2 second time resource subpools correspond to one first time resource subpool. In one subembodiment, the M1 first time resource subpools are discrete in time domain. In one subembodiment, the M1 first time resource subpools are periodically distributed in time domain. 
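For the Embodiment 8 variant in which M2 equals the product of P and M1, the following sketch generates periodically distributed subpools and the P-to-one correspondence between second and first time resource subpools. The symbol counts, periods and offsets are assumed numbers; only the counting relationship comes from the description.

```python
# A minimal sketch of Embodiment 8 with M2 = P * M1: P consecutive second time
# resource subpools correspond to one first time resource subpool.
def build_subpools(m1: int, p: int, first_period: int, second_period: int,
                   first_len: int, second_len: int, second_offset: int):
    """Each subpool is a (start_symbol, length_in_symbols) pair in time domain."""
    first = [(i * first_period, first_len) for i in range(m1)]           # M1 first subpools, periodic
    second = [(second_offset + j * second_period, second_len)            # M2 = P*M1 second subpools, periodic
              for j in range(p * m1)]
    mapping = {j: j // p for j in range(p * m1)}                         # second subpool -> first subpool
    return first, second, mapping

if __name__ == "__main__":
    first, second, mapping = build_subpools(m1=2, p=3, first_period=100, second_period=15,
                                            first_len=4, second_len=2, second_offset=10)
    print("first subpools:", first)
    print("second subpools:", second)
    print("second->first mapping:", mapping)
```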
In one subembodiment, the M2 second time resource subpools are discrete in time domain. In one subembodiment, the M2 second time resource subpools are periodically distributed in time domain. In one subembodiment, any one of the M1 first time resource subpools is orthogonal to any one of the M2 second time resource subpools. In one subembodiment, time domain positions occupied by the M2 second time resource subpools are configured through an RRC signaling. In one subembodiment, there is(are) unoccupied multicarrier symbol(s) between any one first time resource subpool and any one second time resource subpool. Embodiment 9 Embodiment 9 illustrates an example of a diagram of a second time resource pool and a third time resource pool, as shown inFIG.9. In Embodiment 9, the second time resource pool includes M2 second time resource subpools, and the third time resource pool includes M3 third time resource subpools; any one of the M2 second time resource subpools includes a positive integer number of consecutive multicarrier symbols in time domain, and any one of the M3 third time resource subpools includes a positive integer number of consecutive multicarrier symbols in time domain. In one subembodiment, for the base station in the disclosure, the M2 second time resource subpools are used for transmitting a given SSB of a given index. In one subembodiment, for the base station in the disclosure, the M2 second time resource subpools are used for transmitting a PBCH of a given index. In one subembodiment, for the base station in the disclosure, the M2 second time resource subpools are used for transmitting a System Information Block (SIB) of a given index. In one subembodiment, the M2 second time resource subpools are configured as a Common Search Space (CSS). In one subembodiment, radio signals in the M3 third time resource subpools are all transmitted employing the second antenna port group. In one subembodiment, the M3 third time resource subpools are configured as a UE-Specific Search Space (USS). In one subembodiment, the M3 third time resource subpools are discrete in time domain. In one subembodiment, the M3 third time resource subpools are periodically distributed in time domain. In one subembodiment, time domain positions occupied by the M3 third time resource subpools are configured through an RRC signaling. In one subembodiment, there is(are) unoccupied multicarrier symbol(s) between any one second time resource subpool and any one third time resource subpool. Embodiment 10 Embodiment 10 illustrates an example of a diagram of a target radio signal, as shown inFIG.10. InFIG.10, the target radio signal includes K1 target radio sub-signals, the K1 target radio sub-signals are transmitted employing K1 target antenna port groups respectively, and the K1 is a positive integer. In one subembodiment, the second antenna port group in the disclosure is an antenna port group other than the K1 target antenna port groups. In one subembodiment, the K1 target radio sub-signals are all Physical Downlink Control Channels (PDCCHs), and the K1 target radio sub-signals are detected on K1 CORESETs respectively. In one subembodiment, the phrase that a channel measurement for the target radio signal is used for triggering a transmitting of the first radio signal in the disclosure refers that: the K1 target radio sub-signals are all PDCCHs, Block Error Rate(s) for the K1 target radio sub-signals are all less than a first threshold within a given time window, and the first radio signal is transmitted. 
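The triggering rule just described (X1 detection results inside a given time window that are all below a threshold trigger the transmitting of the first radio signal) can be sketched as a simple sliding-window check. X1, the threshold and the sample values below are assumed; in the disclosure they may be fixed or configured through an RRC signaling.

```python
# A minimal sketch of the "all below a threshold within a given time window" trigger.
from collections import deque

class TriggerMonitor:
    def __init__(self, x1: int, first_threshold: float):
        self.x1 = x1
        self.first_threshold = first_threshold
        self.window = deque(maxlen=x1)      # the last X1 detection results in the time window

    def add_detection(self, result: float) -> bool:
        """Returns True when the first radio signal should be transmitted."""
        self.window.append(result)
        return (len(self.window) == self.x1
                and all(r < self.first_threshold for r in self.window))

if __name__ == "__main__":
    monitor = TriggerMonitor(x1=4, first_threshold=0.1)
    for result in (0.30, 0.05, 0.04, 0.02, 0.01):   # assumed per-detection measurement results
        if monitor.add_detection(result):
            print("trigger: transmit the first radio signal in the first time resource pool")
```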
In one affiliated embodiment of the above subembodiment, the phrase that all less than a first threshold within a given time window refers that: X1 detections are performed within the given time window, and results of the X1 detections are all less than a first threshold; the X1 is fixed, or the X1 is configured through an RRC signaling; and the X1 is a positive integer. In one affiliated embodiment of the above subembodiment, the first threshold is fixed, or the first threshold is configured through an RRC signaling. In one subembodiment, the phrase that a channel measurement for the target radio signal is used for triggering a transmitting of the first radio signal refers that: the K1 target radio sub-signals are all CSI-RSs, values of RSRP for the K1 target radio sub-signals are all less than a second threshold within a given time window, and the first radio signal is transmitted. In one affiliated embodiment of the above subembodiment, the phrase that all less than a second threshold within a given time window refers that: X2 detections are performed within the given time window, and results of the X2 detections are all less than a second threshold; the X2 is fixed, or the X2 is configured through an RRC signaling; and the X2 is a positive integer. In one affiliated embodiment of the above subembodiment, the second threshold is fixed, or the second threshold is configured through an RRC signaling. In one subembodiment, the K1 target antenna port groups correspond to K1 CSI-RS resource configuration indexes respectively. In one subembodiment, the K1 target antenna port groups correspond to K1 serving beams respectively. In one subembodiment, for the K1 target antenna port groups, the base station performs only one time of LBT to judge whether the first frequency domain resource is idle on the K1 target antenna port groups. In one subembodiment, the one time of LBT is used for a given antenna port group. In one affiliated embodiment of the above subembodiment, the given antenna port group corresponds to a first-type spatial transmitting parameter group, and the K1 target antenna port groups correspond to K1 target spatial transmitting parameter groups. In one example of the above affiliated embodiment, beams corresponding to the K1 target spatial transmitting parameter groups are all narrower than a beam corresponding to the first-type spatial transmitting parameter group. In one example of the above affiliated embodiment, the first-type spatial transmitting parameter group is generated with fewer antennas than a given target spatial transmitting parameter group, and the given target spatial transmitting parameter group is any one of the K1 target spatial transmitting parameter groups. In one affiliated embodiment of the above subembodiment, the first-type spatial transmitting parameter group corresponds to one transmitting beamforming vector. In one affiliated embodiment of the above subembodiment, the K1 target spatial transmitting parameter groups correspond to K1 transmitting beamforming vectors respectively. Embodiment 11 Embodiment 11 illustrates an example of a diagram of a candidate radio signal, as shown inFIG.11. InFIG.11, the candidate radio signal includes K2 candidate radio sub-signals, the K2 candidate radio sub-signals are transmitted employing K2 candidate antenna port groups respectively, and the K2 is a positive integer. In one subembodiment, the second antenna port group in the disclosure is one of the K2 candidate antenna port groups. 
In one subembodiment, the third antenna port group in the disclosure is one of the K2 candidate antenna port groups. In one affiliated embodiment of the above subembodiment, an RSRP of a candidate radio sub-signal transmitted on the second antenna port group is optimal in RSRPs of the K2 candidate radio sub-signals. In one affiliated embodiment of the above subembodiment, a hypothetical PDCCH BLER corresponding to a candidate radio sub-signal transmitted on the second antenna port group is optimal in hypothetical PDCCH BLERs corresponding to the K2 candidate radio sub-signals. In one affiliated embodiment of the above subembodiment, a hypothetical PDCCH BLER corresponding to a candidate radio sub-signal transmitted on the second antenna port group is greater than a third threshold, the third threshold is fixed, or the third threshold is configured through an RRC signaling. In one affiliated embodiment of the above subembodiment, an RSRP of a candidate radio sub-signal transmitted on the third antenna port group is just less than an RSRP of a candidate radio sub-signal transmitted on the second antenna port group, in RSRPs of the K2 candidate radio sub-signals. In one affiliated embodiment of the above subembodiment, a hypothetical PDCCH BLER corresponding to a candidate radio sub-signal transmitted on the third antenna port group is just less than a hypothetical PDCCH BLER corresponding to a candidate radio sub-signal transmitted on the second antenna port group, in hypothetical PDCCH BLERs corresponding to the K2 candidate radio sub-signals. In one affiliated embodiment of the above subembodiment, a hypothetical PDCCH BLER corresponding to a candidate radio sub-signal transmitted on the third antenna port group is greater than a third threshold, the third threshold is fixed, or the third threshold is configured through an RRC signaling. In one subembodiment, the second antenna port group is different from the third antenna port group. In one subembodiment, a channel measurement for the candidate radio signal is used for triggering a transmitting of the first radio signal. In one affiliated embodiment of the above subembodiment, the K2 candidate radio sub-signals are all CSI-RSs, there is a given candidate radio sub-signal in the K2 candidate radio sub-signals, the given candidate radio sub-signal is transmitted on the second antenna port group, values of RSRP of the given candidate radio sub-signal are all greater than a fourth threshold within a given time window, and the first radio signal is transmitted. In one affiliated embodiment of the above subembodiment, the K2 candidate radio sub-signals are all PDCCHs, there is a given candidate radio sub-signal in the K2 candidate radio sub-signals, the given candidate radio sub-signal is transmitted on the second antenna port group, hypothetical PDCCH BLERs corresponding to the given candidate radio sub-signal are all greater than a third threshold within a given time window, and the first radio signal is transmitted. In one subembodiment, the K2 candidate antenna port groups correspond to K2 CSI-RS resource configuration indexes respectively. In one subembodiment, the K2 candidate antenna port groups correspond to K2 serving beams respectively. In one subembodiment, for the K2 candidate antenna port groups, the base station performs only one time of LBT to judge whether the first frequency domain resource is idle on the K2 candidate antenna port groups. In one subembodiment, the one time of LBT is used for a given antenna port group. 
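Drawing on the RSRP-based affiliated embodiments above, a minimal selection sketch is given below: the candidate antenna port group whose candidate radio sub-signal has the optimal RSRP is taken as the second antenna port group, and the next-best one as the third antenna port group. The RSRP values are assumed; the BLER-based variants described above would simply replace the sort key.

```python
# A minimal candidate-selection sketch for Embodiment 11.
from typing import Dict, Optional, Tuple

def select_antenna_port_groups(rsrp_per_candidate: Dict[str, float]) -> Tuple[str, Optional[str]]:
    """rsrp_per_candidate maps a candidate antenna port group name to a measured RSRP in dBm."""
    ranked = sorted(rsrp_per_candidate, key=rsrp_per_candidate.get, reverse=True)
    second_group = ranked[0]                                   # optimal RSRP among the K2 candidates
    third_group = ranked[1] if len(ranked) > 1 else None       # "just less than" the optimal one
    return second_group, third_group

if __name__ == "__main__":
    rsrp = {"candidate #0": -97.0, "candidate #1": -88.5, "candidate #2": -91.2}
    print(select_antenna_port_groups(rsrp))   # ('candidate #1', 'candidate #2')
```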
In one affiliated embodiment of the above subembodiment, the given antenna port group corresponds to a second-type spatial transmitting parameter group, and the K2 candidate antenna port groups correspond to K2 candidate spatial transmitting parameter groups. In one example of the above affiliated embodiment, beams corresponding to the K2 candidate spatial transmitting parameter groups are all narrower than a beam corresponding to the second-type spatial transmitting parameter group. In one example of the above affiliated embodiment, the second-type spatial transmitting parameter group is generated with fewer antennas than a given candidate spatial transmitting parameter group, and the given candidate spatial transmitting parameter group is any one of the K2 candidate spatial transmitting parameter groups. In one affiliated embodiment of the above subembodiment, the second-type spatial transmitting parameter group corresponds to one transmitting beamforming vector. In one affiliated embodiment of the above subembodiment, the K2 candidate spatial transmitting parameter groups correspond to K2 transmitting beamforming vectors respectively. Embodiment 12 Embodiment 12 illustrates an example of a diagram of an antenna port and an antenna port group, as shown inFIG.12. In Embodiment 12, one antenna port group includes a positive integer number of antenna ports; one antenna port is formed by antennas in a positive integer number of antenna groups through antenna virtualization superposition; one antenna group includes a positive integer number of antennas. One antenna group is connected to a baseband processor through one Radio Frequency (RF) chain, and different antenna groups correspond to different RF chains. Mapping coefficients from all antennas in a positive integer number of antenna groups included in a given antenna port to the given antenna port constitute a beamforming vector corresponding to the given antenna port. Mapping coefficients from multiple antennas included in any one given antenna group among a positive integer number of antenna groups included in the given antenna port to the given antenna port constitute an analog beamforming vector of the given antenna group. Analog beamforming vectors corresponding to the positive integer number of antenna groups are diagonally arranged to form an analog beamforming matrix corresponding to the given antenna port. Mapping coefficients from the positive integer number of antenna groups to the given antenna port constitute a digital beamforming vector corresponding to the given antenna port. The beamforming vector corresponding to the given antenna port is obtained by a product of the analog beamforming matrix and the digital beamforming vector corresponding to the given antenna port. Different antenna ports in one antenna port group are formed by the same antenna group(s), and different antenna ports in one antenna port group correspond to different beamforming vectors. FIG.12illustrates two antenna port groups, that is, an antenna port group #0 and an antenna port group #1, wherein the antenna port group #0 is formed by an antenna group #0, the antenna port group #1 is formed by an antenna group #1 and an antenna group #2. Mapping coefficients from multiple antennas in the antenna group #0 to the antenna port group #0 constitute an analog beamforming vector #0, and a mapping coefficient from the antenna group #0 to the antenna port group #0 constitutes a digital beamforming vector #0. 
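Before the FIG.12 walkthrough continues below, here is a minimal numerical sketch of the construction just described: the per-antenna-group analog beamforming vectors are arranged diagonally into an analog beamforming matrix, which is multiplied by a digital beamforming vector to give the beamforming vector of an antenna port. The antenna counts and coefficient values are assumed for illustration only.

```python
# Illustrative Embodiment 12 construction: beamforming vector =
# (block-diagonal analog beamforming matrix) @ (digital beamforming vector).
import numpy as np

def antenna_port_beamforming_vector(analog_vectors, digital_vector):
    """analog_vectors: one analog beamforming vector per antenna group.
    digital_vector: one mapping coefficient per antenna group."""
    blocks = []
    for g, a in enumerate(analog_vectors):
        col = np.zeros((len(a), len(analog_vectors)), dtype=complex)
        col[:, g] = a                       # diagonal arrangement of the analog vectors
        blocks.append(col)
    analog_matrix = np.vstack(blocks)       # (total antennas) x (number of antenna groups)
    return analog_matrix @ np.asarray(digital_vector, dtype=complex)

if __name__ == "__main__":
    analog_1 = np.exp(1j * np.pi * 0.3 * np.arange(4))   # antenna group #1, 4 antennas (assumed)
    analog_2 = np.exp(1j * np.pi * 0.5 * np.arange(4))   # antenna group #2, 4 antennas (assumed)
    digital = [0.8, 0.6]                                  # digital beamforming vector #1 (assumed)
    w = antenna_port_beamforming_vector([analog_1, analog_2], digital)
    print(w.shape)   # (8,) -> one weight per physical antenna of the antenna port
```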
Mapping coefficients from multiple antennas in the antenna group #1 and multiple antennas in the antenna group #2 to the antenna port group #1 constitute an analog beamforming vector #1 and an analog beamforming vector #2 respectively. Mapping coefficients from the antenna group #1 and the antenna group #2 to the antenna port group #1 constitute a digital beamforming vector #1. A beamforming vector corresponding to any one antenna port in the antenna port group #0 is obtained by a product of the analog beamforming vector #0 and the digital beamforming vector #0. A beamforming vector corresponding to any one antenna port in the antenna port group #1 is obtained by a product of an analog beamforming matrix, which is formed by diagonal arrangement of the analog beamforming vector #1 and the analog beamforming vector #2, and the digital beamforming vector #1. In one embodiment, one antenna port group includes one antenna port. For example, the antenna port group #0 illustrated inFIG.12includes one antenna port. In one subembodiment, an analog beamforming matrix corresponding to the one antenna port is dimensionally reduced to an analog beamforming vector, a digital beamforming vector corresponding to the one antenna port is dimensionally reduced to one scalar, and a beamforming vector corresponding to the one antenna port is equal to the analog beamforming vector of the one antenna port. For example, the digital beamforming vector #0 inFIG.12is dimensionally reduced to one scalar, and the beamforming vector corresponding to the antenna port in the antenna port group #0 is the analog beamforming vector #0. In one embodiment, one antenna port group includes multiple antenna ports. For example, the antenna port group #1 inFIG.12includes multiple antenna ports. In one subembodiment, the multiple antenna ports correspond to a same analog beamforming matrix and different digital beamforming vectors. In one embodiment, antenna ports in different antenna port groups correspond to different analog beamforming matrixes. In one embodiment, any two antenna ports in one antenna port group are QCLed. In one embodiment, any two antenna ports in one antenna port group are spatially QCLed. Embodiment 13 Embodiment 13 illustrates a structure block diagram of a processing device in a UE, as shown inFIG.13. InFIG.13, the processing device1300in the UE mainly includes a first receiver1301, a first transmitter1302and a first transceiver1303. The receiver1301receives a target radio signal. The first transmitter1302transmits a first radio signal in a first time resource pool. The first transceiver1303monitors a first signaling in a second time resource pool in a first frequency domain resource. In Embodiment 13, a channel measurement for the target radio signal is used for triggering a transmitting of the first radio signal; the first signaling is transmitted employing a first antenna port group, the first radio signal is used for determining a second antenna port group, and the first antenna port group is different from the second antenna port group; the first signaling is a physical layer signaling, and the first signaling is used for determining whether the first radio signal is correctly received. 
In one subembodiment, the first transceiver1303further monitors a second signaling in a third time resource pool in the first frequency domain resource; the second signaling is transmitted employing the second antenna port group, and the second signaling is used for determining that the second antenna port group is acknowledged by a transmitter of the second signaling. In one subembodiment, the first transceiver1303further receives a third radio signal in the third time resource pool in the first frequency domain resource; the first signaling determines that the first radio signal is correctly received and the UE detects the second signaling in the third time resource pool in the first frequency domain resource, and the third radio signal is transmitted employing the second antenna port group. In one subembodiment, the first transceiver1303further transmits a fourth radio signal in the first time resource pool; the first signaling determines that the first radio signal is not correctly received, and the fourth radio signal is used for determining the second antenna port group. In one subembodiment, the first transceiver1303further transmits a fifth radio signal in the first time resource pool; the first signaling determines that the first radio signal is correctly received and the UE does not detect the second signaling in the third time resource pool in the first frequency domain resource, the fifth radio signal is used for determining a third antenna port group, and the third antenna port group is different from the second antenna port group. In one subembodiment, the first receiver1301further receives a candidate radio signal in the first frequency domain resource; a channel measurement for the candidate radio signal is used for determining the second antenna port group. In one subembodiment, the first receiver1301further receives first information and second information respectively; the first information is used for determining at least one of multiantenna related transmitting of the target radio signal, frequency domain resources occupied by the target radio signal or time domain resources occupied by the target radio signal; the second information is used for determining at least one of multiantenna related transmitting of the candidate radio signal, frequency domain resources occupied by the candidate radio signal or time domain resources occupied by the candidate radio signal; the first information and the second information are transmitted through an air interface. In one subembodiment, the first receiver1301includes at least the former two of the receiver456, the receiving processor452, the beam manager471or the controller/processor490illustrated in Embodiment 4. In one subembodiment, the first transmitter1302includes at least the former two of the transmitter456, the transmitting processor455, the beam manager471or the controller/processor490illustrated in Embodiment 4. In one subembodiment, the first transceiver1303includes at least the former three of the transmitter/receiver456, the receiving processor452, the transmitting processor455, the beam manager471or the controller/processor490illustrated in Embodiment 4. Embodiment 14 Embodiment 14 illustrates a structure block diagram of a processing device in a base station, as shown inFIG.14. InFIG.14, the processing device1400in the base station mainly includes a second transmitter1401, a second receiver1402and a second transceiver1403. The second transmitter1401transmits a target radio signal. 
The second receiver1402receives a first radio signal in a first time resource pool. The second transceiver1403transmits a first signaling in a second time resource pool in a first frequency domain resource. In Embodiment 14, a channel measurement for the target radio signal is used for triggering a transmitting of the first radio signal; the first signaling is transmitted employing a first antenna port group, the first radio signal is used for determining a second antenna port group, and the first antenna port group is different from the second antenna port group; the first signaling is a physical layer signaling, and the first signaling is used for determining whether the first radio signal is correctly received. In one subembodiment, the second transceiver1403transmits a second signaling in a third time resource pool in the first frequency domain resource; the second signaling is transmitted employing the second antenna port group, and the second signaling is used for determining that the second antenna port group is acknowledged by the base station. In one subembodiment, the second transceiver1403transmits a third radio signal in the third time resource pool in the first frequency domain resource; the first signaling determines that the first radio signal is correctly received and a transmitter of the first radio signal detects the second signaling in the third time resource pool in the first frequency domain resource, and the third radio signal is transmitted employing the second antenna port group. In one subembodiment, the second transceiver1403receives a fourth radio signal in the first time resource pool; the first signaling determines that the first radio signal is not correctly received, and the fourth radio signal is used for determining the second antenna port group. In one subembodiment, the second transceiver1403receives a fifth radio signal in the first time resource pool; the first signaling determines that the first radio signal is correctly received and a transmitter of the first radio signal does not detect the second signaling in the third time resource pool in the first frequency domain resource, the fifth radio signal is used for determining a third antenna port group, and the third antenna port group is different from the second antenna port group. In one subembodiment, the second transmitter1401transmits a candidate radio signal in the first frequency domain resource; a channel measurement for the candidate radio signal is used for determining the second antenna port group. In one subembodiment, the second transmitter1401transmits first information and second information respectively; the first information is used for determining at least one of multiantenna related transmitting of the target radio signal, frequency domain resources occupied by the target radio signal or time domain resources occupied by the target radio signal; the second information is used for determining at least one of multiantenna related transmitting of the candidate radio signal, frequency domain resources occupied by the candidate radio signal or time domain resources occupied by the candidate radio signal; the first information and the second information are transmitted through an air interface. In one subembodiment, the second transmitter1401includes at least the former two of the transmitter416, the transmitting processor415, the beam manager441or the controller/processor440illustrated in Embodiment 4. 
In one subembodiment, the second receiver1402includes at least the former two of the receiver416, the receiving processor412, the beam manager471or the controller/processor440illustrated in Embodiment 4. In one subembodiment, the second transceiver1403includes at least the former three of the transmitter/receiver416, the receiving processor412, the transmitting processor415, the beam manager471or the controller/processor440illustrated in Embodiment 4. Those of ordinary skill in the art may understand that all or part of the steps in the above method may be implemented by instructing related hardware through a program. The program may be stored in a computer readable storage medium, for example Read-Only Memory (ROM), hard disk or compact disc, etc. Optionally, all or part of the steps in the above embodiments may also be implemented by one or more integrated circuits. Correspondingly, each module unit in the above embodiment may be realized in the form of hardware, or in the form of software function modules. The disclosure is not limited to any combination of hardware and software in specific forms. The UE and terminal in the disclosure include but are not limited to unmanned aerial vehicles, communication modules on unmanned aerial vehicles, remotely controlled aircraft, aircraft, small airplanes, mobile phones, tablet computers, notebooks, vehicle-mounted communication equipment, wireless sensors, network cards, terminals for Internet of Things, RFID terminals, NB-IoT terminals, Machine Type Communication (MTC) terminals, enhanced MTC (eMTC) terminals, data cards, low-cost mobile phones, low-cost tablet computers, etc. The base station in the disclosure includes but is not limited to macro-cellular base stations, micro-cellular base stations, home base stations, relay base stations, gNBs (NR Node Bs), Transmitter Receiver Points (TRPs) and radio communication equipment. The above are merely the preferred embodiments of the disclosure and are not intended to limit the scope of protection of the disclosure. Any modification, equivalent substitute and improvement made within the spirit and principle of the disclosure are intended to be included within the scope of protection of the disclosure.
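The branching among the third, fourth, and fifth radio signals described in the subembodiments above can be summarized informally. The following Python sketch is illustrative only and assumes simplified names (ack_received, second_signaling_detected, and the returned action strings are hypothetical); it is not the claimed implementation.

# Illustrative sketch of the UE-side behavior described in the subembodiments above.
# All names are hypothetical; this is not the claimed implementation.

def ue_behavior_after_first_signaling(ack_received: bool, second_signaling_detected: bool) -> str:
    """Select the UE action based on the first signaling (whether the first radio
    signal was correctly received) and on whether the second signaling was detected
    in the third time resource pool of the first frequency domain resource."""
    if not ack_received:
        # First radio signal not correctly received: transmit again a signal that is
        # used for determining the second antenna port group.
        return "transmit fourth radio signal in first time resource pool"
    if second_signaling_detected:
        # Second antenna port group acknowledged: receive the third radio signal,
        # which is transmitted employing the second antenna port group.
        return "receive third radio signal in third time resource pool"
    # First radio signal acknowledged but no second signaling detected: indicate a
    # third antenna port group different from the second antenna port group.
    return "transmit fifth radio signal in first time resource pool"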
77,904
11863482
DETAILED DESCRIPTION Embodiments can provide for a User Equipment (UE) that can receive Downlink Control Information (DCI) from a base station. The DCI can contain a Channel State Information (CSI) request on a physical control channel in a subframe of a first serving cell. The CSI request can direct the UE to perform CSI measurements for at least one aperiodic Channel State Information Reference Signal (CSI-RS) of a second serving cell. The UE can ascertain CSI-RS resources in a subframe of the second serving cell based on at least the DCI contents. The UE can determine CSI based on the ascertained CSI-RS resources. The UE can then send the determined CSI to the base station. Embodiments can further provide for a UE that can receive Zero Power Channel State Information Reference Signal (ZP-CSI-RS) configuration information for an aperiodic ZP-CSI-RS of a serving cell. The UE can receive DCI on a physical control channel in a subframe of the serving cell. The DCI can indicate whether a Physical Downlink Shared Channel (PDSCH) of the UE in the subframe of the serving cell is rate-matched around resource elements indicated by the ZP-CSI-RS configuration information. The UE can decode the PDSCH in the subframe of the serving cell based on rate-matching around the resource elements indicated by the ZP-CSI-RS configuration when the DCI indicates the PDSCH of the UE is rate-matched around the resource elements indicated by the ZP-CSI-RS configuration. Embodiments can further provide for signaling channel state information reference signals for Long Term Evolution (LTE) operation in unlicensed spectrum. For example, embodiments can provide mechanisms in Release 13 LTE to deliver downlink CSI-RS and associated control information from a multi-antenna base station (eNB) to a User Equipment (UE) to assist the eNB in its multi-antenna precoding operations, as applied to LTE License-Assisted Access (LTE-LAA) deployment scenarios in which the CSI-RS's are transmitted to the UE on a secondary carrier operating in unlicensed spectrum. Access to the unlicensed spectrum at any given time and location can depend on whether the unlicensed spectrum is not being used by others, so the eNB may not rely on transmitting the CSI-RS on a duty cycle as is done in previous LTE releases. The UE-specific configuration of the CSI-RS resources can be done on higher layers, and one problem solved can be how to notify the UE of which received subframe contains the CSI-RS transmissions. According to a possible embodiment, when the UE receives a DL grant on the unlicensed carrier in a subframe, the grant contains a CSI-RS request, so the UE knows that an aperiodic CSI-RS transmission is in the subframe. The UE measures the CSI-RS and reports the CSI to the eNB. The CSI report can include one or more of CQI (Channel Quality Information), RI (Rank Information), PMI (Precoding Matrix Indication), PTI (Precoder Type Indication) and/or other information. FIG.1is an example block diagram of a system100according to a possible embodiment. The system100can include a first UE110and a base station120, such as an Enhanced Node-B (eNB). The first UE110and the base station120can communicate on different cells130and140. The cell130can be a first cell, such as a primary cell and the UE110can be connected to the primary cell. The cell140can be a second cell, such as a secondary cell. Furthermore, the second cell140can be a cell that operates on unlicensed spectrum. 
The cells130and140can also be cells associated with other base stations, can be macro cells, can be micro cells, can be femto cells, and/or can be any other cells useful for operation with an LTE network. The system100can also include a second UE112that can communicate with the base station120on cells132and142in a similar manner to the first UE110. The UEs110and112can be any devices that can access a wireless wide area network. For example, the user devices110and112can be wireless terminals, portable wireless communication devices, smartphones, cellular telephones, flip phones, personal digital assistants, personal computers having cellular network access cards, selective call receivers, tablet computers, or any other device that is capable of operating on a wireless wide area network. FIG.2is an example signal flow diagram200according to a possible embodiment. The signal flow diagram200shows signals and operations of the UE110and the base station120. From the base station perspective, at210, the base station120may or may not perform a Listen-Before-Talk (LBT) procedure to determine if a carrier is clear for a carrier frequency corresponding to a second serving cell, such as the second cell140, prior to sending the aperiodic CSI-RS on the second serving cell. The LBT procedure can be performed to determine if the carrier is clear for transmissions. At220, the base station120can transmit DCI in a subframe to the UE110. The DCI can contain a CSI request on a physical control channel in a subframe of a first serving cell, such as the first cell130. The CSI request can direct the UE110to perform CSI measurements for at least one aperiodic CSI-RS of the second serving cell. The DCI can include an indication of resources for the aperiodic CSI-RS in a subframe of the second serving cell. The second serving cell may or may not operate in an unlicensed spectrum. Contents of the DCI for determining the CSI-RS resources can be in a CSI request field and/or another field of a control channel. The DCI can also indicate granted resources on which the CSI is sent by the UE110to the base station120. For example, the base station120can indicate to the UE110which resources it is granted for sending the CSI to the base station120. The CSI-RS resources in the subframe can be based on a higher layer signaled CSI-RS configuration for aperiodic CSI-RS transmission, where the higher layer can be a layer higher than a physical layer. At240, the base station120can send the CSI-RS based on the indicated resources for the aperiodic CSI-RS in the subframe of the second serving cell. At260, the base station120can receive, in another subframe, a message containing the requested CSI from the UE110. From the UE perspective, at220, the UE110can receive DCI from the base station120. The DCI can contain a CSI request on a physical control channel in a subframe of a first serving cell. The CSI request can direct the UE to perform CSI measurements for at least one aperiodic CSI-RS of a second serving cell. Receiving DCI can include receiving CSI-RS configuration information for the at least one aperiodic CSI-RS. DCI contents for determining the CSI-RS resources can be in at least one of a CSI request field and another field of a control channel. The DCI can also indicate granted resources on which the CSI is sent by the UE110to the base station120. The DCI, such as DCI format 0 or other DCI format (e.g. 
DCI format 1A), may contain fields such as “aperiodic CSI request,” “resources for transmissions,” “modulation and coding scheme,” and other fields. The DCI can be transmitted by the base station120and received by the UE110on control channels, such as a Physical Downlink Control Channel (PDCCH), an Enhanced PDCCH (EPDCCH), and/or other types of control channels, where the DCI can be content of the control channel. The DCI can also contain Channel State Information Interference Measurement (CSI-IM) configuration information for a CSI-IM of the serving cell and a set of resource elements used for determining CSI can be indicated by the CSI-IM configuration information. The first serving cell can be a primary serving cell, a licensed secondary serving cell, an unlicensed secondary serving cell, and/or any other cell. The second serving cell can be a licensed secondary serving cell, an unlicensed secondary serving cell, and/or any other cell. The first and second serving cells can be the same serving cell or different serving cells. Aperiodic subframes of cells can be aperiodic in that they do not follow a specific duty cycle. At230, the UE110can ascertain CSI-RS resources in a subframe of the second serving cell based on at least the DCI contents. CSI-RS resources can include at least a set of resource elements, antenna ports, a scrambling identifier (scrambling ID), and/or other resources for a UE to measure CSI. The subframe of an ascertained CSI-RS resource may or may not be same subframe as the subframe in which the DCI is received. The ascertained CSI-RS resources in the subframe can also be based on a higher layer signaled CSI-RS configuration for aperiodic CSI-RS transmission, where the higher layer can be a layer higher than a physical layer. Higher layers can include a Radio Resource Control (RRC) layer, a Media Access Control (MAC) layer, and other layers higher than a physical layer. At250, the UE110can determine CSI based on the ascertained CSI-RS resources. At260, the UE110can send the determined CSI to the base station120. FIG.3is an example signal flow diagram300according to a possible embodiment. The signal flow diagram300shows signals and operations of the first UE110, the base station120, and the second UE112. Signals from the signal flow diagram300can be performed in parallel with, sequential with, or in various orders with signals from the signal flow diagram200. From the first UE110perspective, at310the first UE110can receive ZP-CSI-RS configuration information for an aperiodic ZP-CSI-RS of a serving cell. The aperiodic ZP-CSI-RS can be at least one aperiodic ZP-CSI-RS of multiple aperiodic ZP-CSI-RS's. The ZP-CSI-RS's can be aperiodic in that they do not follow a specific duty cycle. The ZP-CSI-RS configuration information can include a bitmap, where each bit of the bitmap can correspond to a set of RE's. Each bit of the bitmap can also indicate whether a given RE corresponds to a ZP-CSI-RS. This configuration can be signaled to the UE by the eNB via higher layer signaling, such as on a Radio Resource Control (RRC) layer. For example, higher layers can include a Radio Resource Control (RRC) layer, a Media Access Control (MAC) layer, and other layers higher than a physical layer. The ZP-CSI-RS configuration information can be received in a DCI. At320, the first UE110can receive DCI on a physical control channel in a subframe of the serving cell. 
The DCI can indicate whether a PDSCH of the UE in the subframe of the serving cell is rate-matched around resource elements indicated by the ZP-CSI-RS configuration information. The DCI can include a field that indicates the ZP-CSI-RS configuration based on which PDSCH is rate-matched in the subframe. At330, the first UE110can decode the PDSCH in the subframe of the serving cell based on rate-matching around the resource elements indicated by the ZP-CSI-RS configuration when the DCI indicates the PDSCH of the first UE110is rate-matched around the resource elements indicated by the ZP-CSI-RS configuration. The subframe of the PDSCH in330can be the same subframe as the one in which the UE received the DCI in320. Decoding can include receiving data on PDSCH after rate-matching around zero power RE's indicated by the ZP-CSI-RS configuration information. Rate-matching can include skipping resource elements that include a ZP-CSI-RS. For example, rate-matching can include determining which RE's do not carry PDSCH contents, i.e., the RE's in which PDSCH contents are not placed or to which they are not mapped. At340, the first UE110can receive a CSI-RS. At350, the first UE110can determine CSI based on a set of resource elements that is a subset of resource elements indicated by the ZP-CSI-RS configuration information, such as based on the CSI-RS received in the subset of resource elements. The subframe in320can be a first subframe and at360, the first UE110can send the determined CSI in a second subframe. The first subframe and the second subframe can be sequential or can be separated by at least one intervening subframe or time period. Separating the first and second subframe can give a UE some processing time to measure the RS and report it back to an eNB on the uplink. The CSI can be computed using two components: a channel part which can be based on a Non-Zero Power CSI-RS (NZP-CSI-RS) configuration, and an interference part which can be based on a CSI-IM configuration, which can be similar to the NZP-CSI-RS configuration in structure, such as in signaling format. From the base station120perspective, at310, the base station120can signal a ZP-CSI-RS configuration for an aperiodic ZP-CSI-RS for the first UE110via at least one higher layer, where the higher layer can be higher than a physical layer. At320, the base station120can indicate via a control channel to the first UE110in a subframe whether the first UE110rate-matches PDSCH around resource elements indicated by a ZP-CSI-RS configuration for an aperiodic ZP-CSI-RS for the first UE110via at least one higher layer. The base station120can indicate the rate-matching via a control channel of a first serving cell. The first serving cell can be a primary cell and the first UE110can be connected to the primary cell. At340, the base station120can transmit a CSI-RS in resource elements that are a subset of resource elements indicated by the ZP-CSI-RS configuration. At370, the base station120can transmit an aperiodic CSI request to a UE. The UE that the base station120transmits the aperiodic CSI request to can be the first UE110and/or the second UE112. The aperiodic CSI request can direct the first UE110and/or the second UE112to measure CSI based on resource elements in the subframe that are a subset of the ZP-CSI-RS configuration. The aperiodic CSI request can be a request for CSI for resource elements that are aperiodic, such as for aperiodic CSI-RS's. 
The aperiodic CSI request can direct the first UE110and/or the second UE112to measure CSI based on resource elements in a subframe of a second serving cell where the resource elements are a subset of the ZP-CSI-RS configuration. At380, the base station120can receive, in another subframe, a message containing the requested CSI from the first UE110and/or the second UE112. Embodiments can provide for CSI enhancements for LTE operation in unlicensed spectrum, such as for Licensed Assisted Access for LTE (LAA-LTE) so that LAA-LTE can coexist with other unlicensed spectrum deployments and can provide physical layer options and enhancements to LTE to meet design targets and requirements. Listen-Before-Talk (LBT) and discontinuous transmission requirements can be used for operation in unlicensed spectrum, such as 5 GHz (e.g., used for Wi-Fi), in some countries/regions. Modifications to physical layer signaling and assumptions from a base station to a user equipment (eNB to UE), as compared to the LTE Release 12 (Rel 12) standard, can be implemented to operate LAA-LTE UEs in unlicensed spectrum with such requirements. The modifications can also help in improving LAA-LTE and Wi-Fi coexistence on the same unlicensed carrier. In LTE operation on a licensed carrier, signals, such as synchronization signals, Cell specific Reference Signals (CRS), CSI-RS, and CSI-IM, are typically transmitted periodically, since the medium, i.e. frequency spectrum, is always assumed to be available given the operator has exclusive use of the spectrum. However, the medium on an unlicensed carrier is not always available for an LTE operator. For example, the frequency spectrum of an unlicensed carrier, such as the spectrum used for Wi-Fi, can be shared with other users. The physical layer design of LTE can be adapted so that it can work on a medium that may be available for discontinuous time-periods and/or a medium that may not be available with the same periodicity as that for LTE operation on a licensed carrier. For instance, for supplemental downlink using LTE in unlicensed spectrum, such as 5 GHz spectrum, where medium access is based on LBT, in some situations an eNB should perform a clear-channel assessment, such as energy detection, according to some requirements, such as regulatory, Clear Channel Assessment (CCA) per IEEE 802.11 standard, and/or other requirements, to detect if the medium is free. If the eNB detects that the medium is free, then the eNB can start transmitting signals according to an LTE format, such as using an LTE subframe structure, for some amount of time, such as a few milliseconds, before it has to give up the medium and/or perform another CCA for accessing the medium. Multiple techniques can be employed to use LTE in unlicensed spectrum. According to one technique, for LTE Rel10-12 Carrier Aggregation (CA) or dual connectivity, the eNB can configure a Secondary serving cell (Scell) to the UE to provide additional frequency resources, such as a secondary carrier or a secondary Component Carrier (CC), for communication in addition to a Primary serving cell (Pcell). For a UE, the unlicensed carrier can be utilized as a Scell in the carrier aggregation context, where the Pcell can be operating on a licensed carrier, where both cross-carrier and same-carrier scheduling could be supported for the Scell. For convenience, a Scell operating on an unlicensed carrier can be denoted as Scell-u. 
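As a rough illustration of the access pattern just described (clear-channel assessment, a bounded transmission burst, then releasing the medium or re-running CCA), the following Python sketch can be considered. It is a minimal sketch under assumed names and parameters (max_occupancy_ms, and the is_medium_free and transmit_subframe helpers are hypothetical), not an implementation of any regulatory procedure.

# Minimal sketch of the CCA-then-transmit cycle on an unlicensed carrier.
# Helper names and the occupancy limit are hypothetical illustrations.

def laa_transmission_cycle(is_medium_free, transmit_subframe, max_occupancy_ms=4, subframe_ms=1):
    """Perform one access attempt: if CCA finds the medium free, transmit LTE-format
    subframes for at most max_occupancy_ms, then give up the medium (another CCA
    would be needed before the next burst)."""
    if not is_medium_free():          # clear-channel assessment, e.g., energy detection
        return 0                      # medium busy: no transmission this attempt
    transmitted_ms = 0
    while transmitted_ms + subframe_ms <= max_occupancy_ms:
        transmit_subframe()           # LTE subframe structure on the Scell-u
        transmitted_ms += subframe_ms
    return transmitted_ms             # medium is released after the burst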
According to another technique, the Scell-u can be aligned substantially in the time domain with another cell, such as a Scell or a Pcell, at radio frame and subframe level. According to another technique, the eNB can perform CCA to determine when the medium is free. If it determines the medium is free, then it can transmit LTE signals on the Scell-u. According to another technique, the UE may be using discovery signals for Radio Resource Management (RRM) measurements and reporting the measurements on the unlicensed carrier, such as the carrier corresponding to the Scell-u. Since discovery signals can occur with sparse periodicity, it may be assumed that the discovery signals are always transmitted on an unlicensed carrier. For example, the discovery signals may not be subject to the LBT limitation. Alternatively, if the medium is not available at a particular discovery signal occasion, the corresponding discovery signals may be moved in time to a suitable time interval where the medium is available. According to another technique, the eNB may need to create guard intervals, such as of <1 Orthogonal Frequency Division Multiplex (OFDM) symbol (which is typically around 70 microseconds), in its downlink transmissions to the UEs. Since CCA duration can be ~20 us, a guard period of 1 OFDM symbol of duration 70 us per current LTE frame structure may be useful to support CCA. This can be achieved by creating shortened downlink subframes for UEs. For example, for Time Division Duplex (TDD) operation on a Scell-u, shortened uplink subframes may be required, which can be achieved using a fake Sounding Reference Signal (SRS) symbol, such as by configuring the last symbol of the subframe as an SRS symbol instead of a CCA. Single symbol guard intervals occurring at the beginning of a subframe can be possible using cross-carrier scheduling, and using pdschStartSymbol=1 signaling for the unlicensed carrier, assuming a subframe starts with symbol 0 numbering for the first symbol in the subframe. Single symbol guard intervals occurring at the beginning of a subframe can be possible using self-carrier scheduling, Enhanced Physical Downlink Control Channel (EPDCCH), and using epdcchStartSymbol>=1 signaling for the unlicensed carrier. For single symbol guard intervals occurring at the end of the subframe, the downlink subframe can be shortened and special subframe formats, such as defined for TDD, can be used to create guard intervals in the downlink subframes. According to another technique, it may be useful to have the eNB create guard intervals of length 1 ms or higher in its downlink transmissions to the UEs. The guard interval can serve several purposes, such as to allow the eNB to release the medium for a few subframes to meet the channel occupancy requirements per the LBT specification, etc., and such as to allow eNB energy savings, interference reduction, etc. The UE can be oblivious to the exact purpose for which the guard period is used. This technique can use subframe-level ON/OFF for a UE that is in activated state on the Scell. According to another technique, for subframe-level ON/OFF, in the activated state, multiple aspects can be considered. According to one aspect, for Primary Synchronization Channel (PSS)/Secondary Synchronization Channel (SSS), given that the overhead is small, a UE may assume that the REs corresponding to PSS/SSS are always occupied irrespective of whether the subframe is ON or OFF. 
Another option is to always assume that PSS/SSS, except for those PSS/SSS that occur in discovery signals, is not transmitted for Scells. Another option is for the UE in an activated state to assume PSS/SSS is transmitted in predetermined synchronization subframes, such as subframe 0, 5 for Frequency Division Duplex (FDD), subframe 1, 6 for TDD, etc., only if a DCI format (PDCCH/EPDCCH) is detected or a PDSCH is scheduled in that subframe. According to another aspect, for Cell-specific Reference Signal (CRS), if a UE detects and finds a PDCCH/EPDCCH/PDSCH for the Scell in a subframe, the UE can assume the subframe is not OFF, such as assume a CRS is present in the subframe. Otherwise, the UE can assume that CRS is not present in the subframe. According to another aspect, a discovery signal may always be transmitted irrespective of the ON/OFF status of a subframe. Alternately, if the medium is busy, the discovery signal occasion can be moved to the nearest time duration when the medium is available. Aperiodic discovery signal scheduling can be possible. A discovery signal occasion can include a burst of PSS/SSS+CRS+CSI-RS every M milliseconds, where M can be 40, 80 or 160. According to another aspect, for (E)PDCCH, a UE can blindly detect, or monitor a set of (E)PDCCH candidates, every subframe on the Scell to detect DCI. If the UE detects DCI in a subframe, the UE can follow the DCI, and assume that the subframe is not OFF. According to another aspect, for PDSCH, if the UE detects DCI scheduling PDSCH in the Scell in a subframe, the UE can follow the DCI, and assume that the corresponding Scell subframe is not OFF. The DCI may come from self-scheduling or cross-carrier scheduling. An issue can still remain on how to handle Channel State Information Reference Signal (CSI-RS) transmissions and CSI reporting based on CSI-RS. One option to handle CSI-RS transmission for activated Scell ON/OFF operation on a Scell-u can use a current periodic CSI-RS structure for CSI transmission and the eNB can occasionally drop CSI-RS in a subframe if the medium is not available. The UE can have multiple choices to report CSI. One way the UE can report CSI is to measure and report CSI assuming that CSI-RS is always present periodically on the unlicensed carrier. In this case, the UE may be measuring and reporting just "noise" if the Scell is off. In this case, averaging measurements across multiple CSI-RS occasions may not be used. Also, an ON/OFF Indicator or ON/OFF detection may not be required. FIG.4is an example illustration of subframes400of another way a UE can report CSI according to a possible embodiment. The subframes400can include subframes410transmitted in a downlink by an eNB Pcell, subframes420transmitted in a downlink by an eNB Scell, and subframes430transmitted in an uplink by a UE. The subframes420can be transmitted in a downlink by an unlicensed eNB Scell. In this embodiment, the UE can measure and report CSI assuming CSI-RS is present only in ON subframes, such as subframe422, and use the CSI-RS of a most recent ON subframe to report CSI in a given uplink subframe432. In this embodiment, an ON/OFF Indicator or ON/OFF detection may be required. The UE can report CSI assuming CSI-RS is present only in ON subframes. If the most recent subframe configured for CSI-RS is an OFF subframe424, the UE can report Out-Of-Range (OOR) in a given uplink subframe434. In this case, an ON/OFF Indicator or ON/OFF detection may be required. 
This case may result in extra transmissions from the UE on a PUCCH from OOR transmissions. The UE can report an out-of-range indicator with the smallest rank value to reduce uplink payload. To improve the 'ON/OFF indication' approach, a maximum frame duration can be configured and known to the UE. The maximum frame duration can be the longest duration that the eNB can hold the medium. The eNB can send an 'ON' indication via the Pcell when it acquires the channel. The UE can then start to measure and report CSI per configuration. After the maximum frame duration, the UE can stop measuring and reporting CSI. The UE can then wait until it sees another ON indication. If another ON indication is seen while the UE is measuring and reporting, a maximum frame duration timer can be reset, such as to the maximum frame duration. If the eNB or Scell goes to OFF state early, the UE can continue measuring and reporting CSI, which would not pose a problem. This approach can make it unnecessary to send ON/OFF indications every subframe. Only ON indications may be needed and only when the eNB acquires/reacquires the medium. FIG.5is an example illustration of subframes500of another way a UE can report CSI according to a possible embodiment. The subframes500can include subframes510transmitted in a downlink by an eNB Pcell, subframes520transmitted in a downlink by an eNB Scell, and subframes530transmitted in an uplink by a UE. The subframes520can be transmitted in a downlink by an unlicensed eNB Scell. This option for handling CSI-RS transmission for activated Scell ON/OFF operation on a Scell-u can use an aperiodic CSI-RS structure for CSI where the CSI-RS can be dynamically scheduled, such as via a Pcell and using cross-carrier scheduling. This option can relate to the signal flow diagram200. A corresponding CSI reporting grant can be a broadcast grant so that multiple UEs can measure CSI-RS and report the CSI according to their respective periodic/aperiodic CSI reporting schedule. The subframe522in which the CSI-RS is received can be considered to be the "reference" subframe for CSI measurement, and the CSI can be reported in a subframe532on a PUCCH or a Physical Uplink Shared Channel (PUSCH). An aperiodic CSI request may also be sent by an eNB to request the CSI aperiodically. According to a possible implementation, an aperiodic CSI request including a request of CSI for an Scell can be an indicator that CSI-RS is present on the Scell subframe including the control channel in which the CSI request is sent. FIG.6is an example illustration of subframes600of another way a UE can report CSI according to a possible embodiment similar to the previous embodiment. The subframes600can include subframes610transmitted in a downlink by an eNB Pcell, subframes620transmitted in a downlink by an eNB Scell, and subframes630transmitted in an uplink by a UE. The subframes620can be transmitted in a downlink by an unlicensed eNB Scell. This option for handling CSI-RS transmission for activated Scell ON/OFF operation on a Scell-u can use an aperiodic CSI-RS structure for CSI, where the CSI-RS can be dynamically scheduled, such as via a Pcell and using cross-carrier scheduling with some validity/expiry timer. The grant can be a broadcast grant so that multiple UEs can measure CSI-RS and report according to their respective periodic CSI reporting schedule. This can allow an eNB to align the CSI reports with medium access boundaries. 
For instance, the eNB may request the UE to measure CSI in the last subframe of the current medium access so that the eNB has some knowledge when it regains medium access in the next attempt. The reference subframes for CSI measurements can be the subframes622and624in which the CSI-RS is received, and the CSI may be reported on the PUCCH or PUSCH632. An aperiodic CSI request may also be sent by the eNB to request the CSI aperiodically. For unlicensed carriers, aperiodic CSI-RS can be scheduled for the UE in the first subframe that begins after the medium is available. Several options can be used to provide aperiodic CSI-RS transmission and reporting aperiodic CSI based on aperiodic CSI-RS transmission. According to a possible option, higher layers indicate number of CSI-RS antenna ports, resources (REs) and/or a power setting, such as the absence of subframe-offset and periodicity, for one or more CSI-RS configuration, such as one or more CSI-RS resource configuration and one or more CSI-IM resource configuration corresponding to one or more CSI processes and/or CSI subframe sets. This CSI-RS configuration can differ from the Rel-11/12 CSI-RS configuration for CSI reporting in that this CSI-RS configuration can indicate an aperiodic CSI-RS transmission, such as due to absence of subframe-offset and/or periodicity. According to another possible option, a separate aperiodic CSI reporting grant with its own Cyclic Redundancy Check (CRC), such as a CSI-Radio Network Temporary Identifier (CSI-RNTI), can be used for CSI-RS transmissions. The grant can explicitly indicate the CSI-RS resource and the uplink resources, such as Resource Blocks (RBs), on which the CSI feedback is transmitted. The grant may be transmitted on the Pcell or Scell-u. According to another possible option, an aperiodic CSI-RS transmission message, such as sent on physical control channel, can indicate whether a subframe contains the CSI-RS. According to another possible option, an aperiodic CSI-RS transmission message, such as sent on physical control channel, can indicate the subframes in which CSI-RS is transmitted. The message may be valid only for a certain number of subframes, such as 10 subframes, and may indicate a subframe subset, such as subframe 0, 4, 8, that can have CSI-RS within a limited time period. This message can refer to CSI-RS in multiple subframes. According to another possible option, the CSI-RS configuration can also include separate channel measurement RS and interference measurement resources. According to another possible option, when sending a CQI based on measuring a desired signal and interfering signal, the RE's on which the desired signal is retrieved can be different than the RE's on which the interference is measured. For the timing between the transmission of aperiodic CSI request and corresponding CSI-RS transmission/measurement subframe, an aperiodic CSI request DCI for an Scell-u received in subframe n can imply various types of information to a UE. For example, it can imply that subframe n contains CSI-RS for Scell-u, where the CSI-RS configuration can be a new Rel-13 CSI-RS configuration indicated by higher layers that includes RE's, antenna ports, and/or a power setting, such as an absence of subframe-offset and periodicity. 
An aperiodic CSI request DCI for an Scell-u received in subframe n can also imply that subframe n contains CSI-RS according to a CSI-RS configuration X, where the CSI-RS configuration X can be selected from a set of new Rel-13 CSI-RS configurations configured by higher layers that include RE's, antenna ports, and/or a power setting, such as the absence of subframe-offset and periodicity. The configuration X can be indicated in the DCI carrying the aperiodic CSI request. For example, two bits in DCI can be used to signal one of four CSI-RS configurations including no aperiodic CSI trigger, such as shown in Table 1.

TABLE 1
CSI-RS Indicator field for aperiodic CSI-RS configuration
Value of CSI-RS Indicator field | Description
'00' | No aperiodic CSI report is triggered
'01' | Aperiodic CSI report is triggered for a 1st CSI-RS configuration for serving cell u
'10' | Aperiodic CSI report is triggered for a 2nd set of CSI-RS configuration for serving cell u
'11' | Aperiodic CSI report is triggered for a 3rd set of CSI-RS configuration for serving cell u

The CSI-RS configuration can also include separate channel measurement CSI-RS and interference measurement resources. Besides the aperiodic CSI request in uplink DCI formats and Random Access Response (RAR) grants that are supported in LTE, the aperiodic CSI request can be triggered from DCI formats used for Downlink (DL) data scheduling on Scell-u in a subframe. The aperiodic CSI report can be triggered for a set of CSI process(es) and/or {CSI process, CSI subframe set}-pair(s) configured by higher layers for serving cell u and can be reported on a higher-layer configured uplink resource according to a higher-layer configured CSI-RS configuration in the subframe. Two bits in DCI may be used to signal whether aperiodic CSI is triggered and to select one of three uplink resources configured by higher layers, such as according to the mapping in Table 2.

TABLE 2
CSI request field and Physical Uplink Shared Channel (PUSCH) resource for Aperiodic CSI (can be used in addition to Table 1).
Value of CSI request field | Description
'00' | No aperiodic CSI report is triggered
'01' | Aperiodic CSI report is triggered for a set of CSI process(es) and/or {CSI process, CSI subframe set}-pair(s) configured by higher layers for serving cell c and on a first PUSCH resource configured by the higher layers
'10' | Aperiodic CSI report is triggered for a set of CSI process(es) and/or {CSI process, CSI subframe set}-pair(s) configured by higher layers for serving cell c and on a second PUSCH resource configured by the higher layers
'11' | Aperiodic CSI report is triggered for a set of CSI process(es) and/or {CSI process, CSI subframe set}-pair(s) configured by higher layers for serving cell c and on a third PUSCH resource configured by the higher layers

The dynamic resource configuration of CSI-RS described in Table 1 can be combined with the dynamic uplink resource configuration in Table 2, for example, as a separate bit-field or a jointly-encoded bit-field. A possible embodiment can provide a Zero Power-CSI-RS (ZP-CSI-RS) indicator in DCI formats for downlink transmission modes 1 to 10, TM1-10. A UE may be configured by higher layers in one out of the various transmission modes. Each transmission mode has an associated set of DCI formats and PDSCH transmission scheme/schemes, such as single antenna port transmission based on CRS, transmit diversity based on CRS, open loop MIMO based on CRS, closed loop MIMO based on CRS, closed loop MIMO based on UE-specific demodulation reference signals, etc. 
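A compact way to picture the two 2-bit fields of Tables 1 and 2 above is as lookups into higher-layer configured lists. The Python sketch below is illustrative only; the function and variable names (csi_rs_configs, pusch_resources) are assumptions and do not come from the specification text.

# Illustrative decoding of the 2-bit fields in Tables 1 and 2 above.
# csi_rs_configs and pusch_resources stand for higher-layer configured lists
# (three entries each); the names are hypothetical.

def decode_aperiodic_csi_trigger(csi_rs_indicator: str, csi_request: str,
                                 csi_rs_configs: list, pusch_resources: list):
    """Return (CSI-RS configuration, PUSCH resource) selected by the DCI bits,
    or (None, None) components when a field is '00' (nothing triggered)."""
    selected_config = None
    selected_pusch = None
    if csi_rs_indicator != '00':
        # '01', '10', '11' select the 1st, 2nd or 3rd CSI-RS configuration (Table 1)
        selected_config = csi_rs_configs[int(csi_rs_indicator, 2) - 1]
    if csi_request != '00':
        # '01', '10', '11' select the 1st, 2nd or 3rd PUSCH resource (Table 2)
        selected_pusch = pusch_resources[int(csi_request, 2) - 1]
    return selected_config, selected_pusch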
This embodiment can relate to the signal flow diagram200. For example, whenever CSI-RS is scheduled aperiodically in a subframe, it can also be useful to ensure that UEs can receive data in that subframe irrespective of whether an aperiodic CSI request is triggered for a given UE. This can imply that the Physical Downlink Shared Channel (PDSCH) should be rate-matched around the aperiodic CSI-RS. This can be done by creating a new dynamic ZP-CSI-RS indicator field that is sent via DCI formats used for DL data scheduling in all transmission modes. For TM10, which corresponds to PDSCH based on UE-specific DMRS, this can be achieved by adding another ZP-CSI-RS indicator field in the DCI formats used for DL data scheduling or by adding a Rel-13 ZP-CSI-RS indicator to the PDSCH Rate Matching and QuasiCoLocation Indicator (PQI) fields in DCI format 2D. Example values of the indicator field and corresponding descriptions are shown in Table 3. TM10 is defined typically for supporting coordinated multipoint transmission, using PDSCH transmission schemes based on UE-specific DMRS.

TABLE 3
Dynamic ZP-CSI-RS Rate-matching parameter.
Value of ZP-CSI-RS indicator field | Description
'00' | No additional ZP-CSI-RS REs indicated by the DCI
'01' | PDSCH is rate-matched around first set of dynamic ZP-CSI-RS REs in the subframe
'10' | PDSCH is rate-matched around second set of dynamic ZP-CSI-RS REs in the subframe
'11' | PDSCH is rate-matched around third set of dynamic ZP-CSI-RS REs in the subframe

Rate matching can mean skipping the ZP-CSI-RS RE's for data, but measuring the CSI-RS if instructed to. For PDSCH demodulation in TM10, the antenna port(s) on which a Demodulation Reference Signal (DMRS) is transmitted can be assumed to be quasi co-located with CSI-RS antenna ports, such as the antenna port on which CSI-RS is transmitted, corresponding to an indicated CSI-RS resource configuration. An antenna port can be defined such that the channel over which a symbol on the antenna port is conveyed can be inferred from the channel over which another symbol on the same antenna port is conveyed. Two antenna ports can be said to be quasi co-located if the large-scale properties of the channel over which a symbol on one antenna port is conveyed can be inferred from the channel over which a symbol on the other antenna port is conveyed. The large-scale properties can include one or more of delay spread, Doppler spread, Doppler shift, average gain, and average delay. If CSI-RS is transmitted aperiodically, then other than CSI feedback, CSI-RS signals can also be used for quasi-colocation purposes, such as for determining the large scale properties of the channel. In this case, it can be useful to use an explicit indication of CSI-RS transmissions. The CSI-RS resource configuration can be indicated for quasi-colocation with Demodulation Reference Signal (DM-RS). The UE can use measurements on one or more CSI-RS corresponding to previous aperiodic CSI request triggers associated with the indicated CSI-RS configuration for quasi co-location purposes. The UE can be configured to assume that one or more of the antenna port(s) corresponding to the CSI-RS resource configuration(s) transmitted as part of the discovery signals, such as within the discovery signal occasion for the cell, for which the UE can assume non-zero transmission power for the CSI-RS, and the DM-RS antenna ports associated with the PDSCH are quasi co-located. 
Alternatively, or in addition, the UE can be configured to assume that the CRS antenna port transmission as part of the discovery signals within the discovery signal occasion for the cell and the DM-RS antenna ports associated with the PDSCH are quasi co-located. The following Quasi-Co-Location (QCL) indicator relationship can be used for PDSCH based on TM10 in Rel-11 where ↔ can indicate that the corresponding antenna ports are quasi co-located:DMRS↔periodic CSI-RS resource↔periodic CRS(cell-ID, Number of antenna ports, MBSFN pattern, etc). If both CSI-RS and CRS are sparse, such as transmitted aperiodically, then the QCL relationship can be:DMRS↔aperiodic CSI-RS resource↔Discovery signal (or a subset CRS/CSI-RS of the discovery signal). FIG.7is an example block diagram of an apparatus700, such as the UE110or the UE112, according to a possible embodiment. The apparatus700can include a housing710, a controller720within the housing710, audio input and output circuitry730coupled to the controller720, a display740coupled to the controller720, a transceiver750coupled to the controller720, an antenna755coupled to the transceiver750, a user interface760coupled to the controller720, a memory770coupled to the controller720, and a network interface780coupled to the controller720. The apparatus700can perform the methods described in all the embodiments. The display740can be a viewfinder, a liquid crystal display (LCD), a light emitting diode (LED) display, a plasma display, a projection display, a touch screen, or any other device that displays information. The transceiver750can include a transmitter and/or a receiver. The audio input and output circuitry730can include a microphone, a speaker, a transducer, or any other audio input and output circuitry. The user interface760can include a keypad, a keyboard, buttons, a touch pad, a joystick, a touch screen display, another additional display, or any other device useful for providing an interface between a user and an electronic device. The network interface780can be a universal serial bus port, an Ethernet port, an infrared transmitter/receiver, a USB port, an IEEE 1397 port, a WLAN transceiver, or any other interface that can connect an apparatus to a network or computer and that can transmit and receive data communication signals. The memory770can include a random access memory, a read only memory, an optical memory, a flash memory, a removable memory, a hard drive, a cache, or any other memory that can be coupled to a wireless communication device. The apparatus700or the controller720may implement any operating system, such as Microsoft Windows®, UNIX®, or LINUX®, Android™, or any other operating system. Apparatus operation software may be written in any programming language, such as C, C++, Java or Visual Basic, for example. Apparatus software may also run on an application framework, such as, for example, a Java® framework, a .NET® framework, or any other application framework. The software and/or the operating system may be stored in the memory770or elsewhere on the apparatus700. The apparatus700or the controller720may also use hardware to implement disclosed operations. For example, the controller720may be any programmable processor. 
Disclosed embodiments may also be implemented on a general-purpose or a special purpose computer, a programmed microprocessor or microcontroller, peripheral integrated circuit elements, an application-specific integrated circuit or other integrated circuits, hardware/electronic logic circuits, such as a discrete element circuit, a programmable logic device, such as a programmable logic array, field programmable gate-array, or the like. In general, the controller720may be any controller or processor device or devices capable of operating an electronic device and implementing the disclosed embodiments. In operation, the transceiver750can receive DCI containing a CSI request on a physical control channel in a subframe of a first serving cell. The CSI request can direct the apparatus700to perform CSI measurements for at least one aperiodic CSI-RS of a second serving cell. The DCI can also indicate granted resources on which the CSI is sent by the apparatus700to the base station. The controller720can ascertain CSI-RS resources in a subframe of the second serving cell based on at least the DCI contents and can be configured to determine CSI based on the ascertained CSI-RS resources. The transceiver750can send the determined CSI to a base station. According to another possible embodiment, the transceiver750can receive ZP-CSI-RS configuration information for an aperiodic ZP-CSI-RS of a serving cell. The transceiver750can also receive DCI on a physical control channel in a subframe of the serving cell. The DCI can indicate whether a PDSCH of the UE in the subframe of the serving cell is rate-matched around resource elements indicated by the ZP-CSI-RS configuration information. The controller720can decode the PDSCH in the subframe of the serving cell based on rate-matching around the resource elements indicated by the ZP-CSI-RS configuration when the DCI indicates the PDSCH of the UE is rate-matched around the resource elements indicated by the ZP-CSI-RS configuration. FIG.8is an example block diagram of a base station800, such as the eNB120, according to a possible embodiment. The base station800may include a controller810, a memory820, a database interface830, a transceiver840, an Input/Output (I/O) device interface850, a network interface860, and a bus870. The base station800can implement any operating system, such as Microsoft Windows®, UNIX, or LINUX, for example. Base station operation software may be written in any programming language, such as C, C++, Java or Visual Basic, for example. The base station software can run on an application framework, such as, for example, a Java® server, a .NET® framework, or any other application framework. The transceiver840can create a data connection with the UE110. The controller810can be any programmable processor. Disclosed embodiments can also be implemented on a general-purpose or a special purpose computer, a programmed microprocessor or microcontroller, peripheral integrated circuit elements, an application-specific integrated circuit or other integrated circuits, hardware/electronic logic circuits, such as a discrete element circuit, a programmable logic device, such as a programmable logic array, field programmable gate-array, or the like. In general, the controller810can be any controller or processor device or devices capable of operating a base station and implementing the disclosed embodiments. 
The memory820can include volatile and nonvolatile data storage, including one or more electrical, magnetic, or optical memories, such as a Random Access Memory (RAM), cache, hard drive, or other memory device. The memory820can have a cache to speed access to specific data. The memory820can also be connected to a Compact Disc-Read Only Memory (CD-ROM), Digital Video Disc-Read Only memory (DVD-ROM), DVD read write input, tape drive, thumb drive, or other removable memory device that allows media content to be directly uploaded into a system. Data can be stored in the memory820or in a separate database. For example, the database interface830can be used by the controller810to access the database. The database can contain any formatting data to connect the terminal110to the network130. The I/O device interface850can be connected to one or more input and output devices that may include a keyboard, a mouse, a touch screen, a monitor, a microphone, a voice-recognition device, a speaker, a printer, a disk drive, or any other device or combination of devices that accept input and/or provide output. The I/O device interface850can receive a data task or connection criteria from a network administrator. The network connection interface860can be connected to a communication device, modem, network interface card, a transceiver, or any other device capable of transmitting and receiving signals to and from the network130. The components of the base station800can be connected via the bus870, may be linked wirelessly, or may be otherwise connected. Although not required, embodiments can be implemented using computer-executable instructions, such as program modules, being executed by an electronic device, such as a general purpose computer. Generally, program modules can include routine programs, objects, components, data structures, and other program modules that perform particular tasks or implement particular abstract data types. The program modules may be software-based and/or may be hardware-based. For example, the program modules may be stored on computer readable storage media, such as hardware discs, flash drives, optical drives, solid state drives, CD-ROM media, thumb drives, and other computer readable storage media that provide non-transitory storage aside from a transitory propagating signal. Moreover, embodiments may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network personal computers, minicomputers, mainframe computers, and other computing environments. In operation, the controller810can control operations of the apparatus800. The transceiver840can transmit, to a UE, DCI containing a CSI request on a physical control channel in a subframe of a first serving cell. The CSI request can direct the UE to perform CSI measurements for at least one aperiodic CSI-RS of a second serving cell. The DCI can include an indication of resources for the aperiodic CSI-RS in a subframe of the second serving cell. The transceiver840can receive, from the UE, in another subframe, a message containing the requested CSI. The controller810can also perform a LBT procedure to determine if a carrier is clear for a carrier frequency corresponding to the second serving cell prior to sending the aperiodic CSI-RS on the second serving cell. 
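The base-station-side sequence just outlined (an optional LBT check, DCI carrying the CSI request and the CSI-RS resource indication, the aperiodic CSI-RS itself, and the later CSI report) can be sketched as follows. This is a minimal, hypothetical sketch; the helper names (perform_lbt, send_dci, send_csi_rs, wait_for_csi_report) are assumptions rather than any actual base station API.

# Hypothetical sketch of the eNB-side steps described above for one aperiodic
# CSI-RS request on a second serving cell (e.g., an unlicensed Scell).

def request_aperiodic_csi(perform_lbt, send_dci, send_csi_rs, wait_for_csi_report,
                          csi_rs_resources, granted_ul_resources):
    """Run one request/report exchange; return the CSI report, or None if the
    carrier was found busy by the (optional) LBT procedure."""
    if perform_lbt is not None and not perform_lbt():
        return None                                    # carrier not clear on the Scell
    # DCI on the first serving cell: CSI request plus the CSI-RS resource indication
    # and the uplink resources granted for the report.
    send_dci(csi_request=True, csi_rs_resources=csi_rs_resources,
             ul_grant=granted_ul_resources)
    send_csi_rs(csi_rs_resources)                      # aperiodic CSI-RS in the indicated subframe
    return wait_for_csi_report(granted_ul_resources)   # CSI arrives in a later subframe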
In operation according to another possible embodiment, the transceiver840can signal a ZP-CSI-RS configuration for an aperiodic ZP-CSI-RS for a first UE via at least one higher layer, where the higher layer can be higher than a physical layer. The transceiver840can indicate via a control channel to the first UE in a subframe as to whether the first UE rate-matches PDSCH around resource elements indicated by a ZP-CSI-RS configuration for an aperiodic ZP-CSI-RS for the first UE via at least one higher layer. The transceiver840can transmit a CSI-RS in resource elements that are a subset of resource elements indicated by the ZP-CSI-RS configuration. The transceiver840can transmit an aperiodic CSI request to a UE, the aperiodic CSI request directing the UE to measure CSI based on resource elements in the subframe that are a subset of the ZP-CSI-RS configuration, where the UE in this step can be the same UE as in earlier steps or a different UE. FIG.9is an example illustration of a subframe900according to a possible embodiment. The subframe900can include at least one resource block910including a plurality of resource elements920. The subframe900can include resource elements930of a first CSI-RS configuration. The subframe900can include resource elements940of a second CSI-RS configuration. The subframe900can also include resource elements950indicated by a ZP-CSI-RS configuration. It should be noted that the subframe900only shows resource elements for CSI-RS configurations and ZP-CSI-RS configurations for conceptual purposes and understanding of concepts of the disclosed embodiments and does not necessarily represent an actual subframe, resource elements, CSI-RS configurations, and ZP-CSI-RS configurations. Embodiments can provide for a base station that dynamically schedules CSI-RS in a subframe for a UE. According to some embodiments, a UE can receive, from a base station, an aperiodic CSI request on a control channel in a subframe requesting CSI feedback for a serving cell. The UE can determine the CSI-RS resources in the subframe based on the control channel contents. The UE can determine CSI information based on the determined CSI-RS resources. The UE can then send the CSI information to the base station. According to some embodiments, a base station can configure an aperiodic ZP-CSI-RS configuration and subframe offset for a UE via layers higher than a physical layer. The base station can indicate to the UE in a subframe via a control channel whether the UE rate-matches PDSCH around the resource elements indicated by the ZP-CSI-RS configuration in the subframe. The base station can transmit a CSI-RS in the resource elements that are a subset of REs indicated by the ZP-CSI-RS configuration in the subframe. The base station can transmit an aperiodic CSI request to a second UE. The aperiodic CSI request can indicate, such as direct or request, the second UE to measure CSI based on resource elements in the subframe that are a subset of the ZP-CSI-RS configuration. According to some embodiments, a UE can receive an aperiodic CSI request on a control channel in a subframe, where the CSI request requests CSI feedback for a serving cell. The UE can determine the CSI-RS resources for the serving cell are present in the subframe based on the received aperiodic CSI request. The UE can determine CSI information based on CSI-RS resources for the serving cell in the subframe. The UE can then send the CSI information to the base station in a second subframe. 
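To make the rate-matching relationship above concrete, the sketch below shows one way the RE bookkeeping could look: PDSCH mapping skips every RE flagged by the ZP-CSI-RS configuration, while CSI is measured only on the subset of those REs that actually carries the CSI-RS. The bitmap-to-RE mapping (re_sets_per_bit) and all names are assumptions for illustration, not the signaled encoding.

# Illustrative RE bookkeeping for dynamic ZP-CSI-RS rate-matching.
# re_sets_per_bit maps each bitmap bit to a collection of resource-element
# indices; this mapping and the names are hypothetical.

def zp_csi_rs_res(zp_bitmap: str, re_sets_per_bit: list) -> set:
    """Collect the REs indicated by the higher-layer ZP-CSI-RS bitmap."""
    zp_res = set()
    for bit, re_set in zip(zp_bitmap, re_sets_per_bit):
        if bit == '1':
            zp_res |= set(re_set)
    return zp_res

def map_pdsch(all_res: set, zp_res: set) -> set:
    """PDSCH is rate-matched around the ZP-CSI-RS REs, i.e., mapped only to the rest."""
    return all_res - zp_res

def csi_measurement_res(zp_res: set, csi_rs_res: set) -> set:
    """CSI is measured on the subset of the ZP-CSI-RS REs that carries the CSI-RS."""
    return zp_res & csi_rs_res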
According to some embodiments, a UE can be configured with an aperiodic ZP-CSI-RS configuration and subframe offset via higher layers. The UE can also be configured with an aperiodic CSI-RS configuration and subframe offset via higher layers, where the RE's of CSI-RS can be a subset of RE's of ZP-CSI-RS. In a subframe, via a control channel, a base station can indicate to the UE whether its PDSCH is rate-matched around the resource elements indicated by the ZP-CSI-RS configuration. In the same subframe, the UE can receive a CSI-RS in RE's that are a subset of RE's indicated by the ZP-CSI-RS configuration. The UE can receive an aperiodic CSI request. The aperiodic CSI request can indicate, such as direct or request, the UE to measure CSI based on RE's indicated by the CSI-RS configuration in the subframe in which the CSI request was received. The UE can determine CSI information based on CSI-RS resources for the serving cell in the subframe, and send the CSI information to the base station in a second subframe. The method of this disclosure can be implemented on a programmed processor. However, the controllers, flowcharts, and modules may also be implemented on a general purpose or special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit elements, an integrated circuit, a hardware electronic or logic circuit such as a discrete element circuit, a programmable logic device, or the like. In general, any device on which resides a finite state machine capable of implementing the flowcharts shown in the figures may be used to implement the processor functions of this disclosure. While this disclosure has been described with specific embodiments thereof, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the art. For example, various components of the embodiments may be interchanged, added, or substituted in the other embodiments. Also, all of the elements of each figure are not necessary for operation of the disclosed embodiments. For example, one of ordinary skill in the art of the disclosed embodiments would be enabled to make and use the teachings of the disclosure by simply employing the elements of the independent claims. Accordingly, embodiments of the disclosure as set forth herein are intended to be illustrative, not limiting. Various changes may be made without departing from the spirit and scope of the disclosure. In this document, relational terms such as "first," "second," and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The phrase "at least one of" followed by a list is defined to mean one, some, or all, but not necessarily all of, the elements in the list. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by "a," "an," or the like does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element. Also, the term "another" is defined as at least a second or more. 
The terms “including,” “having,” and the like, as used herein, are defined as “comprising.” Furthermore, the background section is written as the inventor's own understanding of the context of some embodiments at the time of filing and includes the inventor's own recognition of any problems with existing technologies and/or problems experienced in the inventor's own work.
55,356
11863483
DETAILED DESCRIPTION The embodiments set forth below represent information to enable those skilled in the art to practice the embodiments and illustrate the best mode of practicing the embodiments. Upon reading the following description in light of the accompanying drawing figures, those skilled in the art will understand the concepts of the disclosure and will recognize applications of these concepts not particularly addressed herein. It should be understood that these concepts and applications fall within the scope of the disclosure. Radio Node: As used herein, a “radio node” is either a radio access node or a wireless device. Radio Access Node: As used herein, a “radio access node” or “radio network node” is any node in a radio access network of a cellular communications network that operates to wirelessly transmit and/or receive signals. Some examples of a radio access node include, but are not limited to, a base station (e.g., a next generation or New Radio (NR) base station (gNB) in a Third Generation Partnership Project (3GPP) Fifth Generation (5G) NR network or an enhanced or evolved Node B (eNB) in a 3GPP Long Term Evolution (LTE) network), a high-power or macro base station, a low-power base station (e.g., a micro base station, a pico base station, a home eNB, or the like), and a relay node. Core Network Node: As used herein, a “core network node” is any type of node in a core network. Some examples of a core network node include, e.g., a Mobility Management Entity (MME), a Packet Data Network Gateway (P-GW), a Service Capability Exposure Function (SCEF), or the like. Wireless Device: As used herein, a “wireless device” is any type of device that has access to (i.e., is served by) a cellular communications network by wirelessly transmitting and/or receiving signals to a radio access node(s). Some examples of a wireless device include, but are not limited to, a User Equipment (UE) in a 3GPP network and a Machine Type Communication (MTC) device. Network Node: As used herein, a “network node” is any node that is either part of the radio access network or the core network of a cellular communications network/system. Note that the description given herein focuses on a 3GPP cellular communications system and, as such, 3GPP terminology or terminology similar to 3GPP terminology is oftentimes used. However, the concepts disclosed herein are not limited to a 3GPP system. Note that, in the description herein, reference may be made to the term “cell;” however, particularly with respect to 5G NR concepts, beams may be used instead of cells and, as such, it is important to note that the concepts described herein are equally applicable to both cells and beams. As noted above, exactly how SRS transmissions are configured and triggered for NR is still under discussion. A text proposal to Third Generation Partnership Project (3GPP) Technical Specification (TS) 38.331 defining the SRS related parameters is given below.2.1.1.1 SRS-ConfigThe SRS-Config IE is used to configure sounding reference signal transmissions. The configuration defines a list of SRS-Resources and a list of SRS-ResourceSets. Each resource set defines a set of SRS-Resources. The network triggers the transmission of the set of SRS-Resources using a configured aperiodicSRS-ResourceTrigger (that is carried in physical layer downlink control information, ‘L1 DCI’). 
SRS-Config Information Element

-- ASN1START
-- SRS configuration allowing to add and remove sets of SRS resources
SRS-Config ::= SEQUENCE {
    srs-ResourceSetToReleaseList    SEQUENCE (SIZE(0..maxNrofSRS-ResourceSets)) OF SRS-ResourceSetId    OPTIONAL,  -- Need ON
    srs-ResourceSetToAddModList     SEQUENCE (SIZE(0..maxNrofSRS-ResourceSets)) OF SRS-ResourceSet      OPTIONAL,  -- Need ON
    srs-ResourceToReleaseList       SEQUENCE (SIZE(1..maxNrofSRS-Resources)) OF SRS-ResourceId          OPTIONAL,  -- Need ON
    srs-ResourceToAddModList        SEQUENCE (SIZE(1..maxNrofSRS-Resources)) OF SRS-Resource            OPTIONAL,  -- Need ON
    -- Configuration of simultaneous SRS and PUCCH (see 38.214, section 6.2.1)
    pucch-SRS-SimultaneousTransmission    BOOLEAN
}

-- A set of SRS resources
SRS-ResourceSet ::= SEQUENCE {
    srs-ResourceSetId               SRS-ResourceSetId,
    srs-ResourcesIds                SEQUENCE (SIZE(1..maxNrofSRS-ResourcesPerSet)) OF SRS-ResourceId,
    -- The DCI "code point" upon which the UE shall transmit SRS according to this
    -- SRS resource set configuration (see 38.214, section x.x.x.x)
    aperiodicSRS-ResourceTrigger    TYPE_FFS!
}

SRS-ResourceSetId ::= INTEGER (0..maxNrofSRS-ResourceSets-1)

SRS-Resource ::= SEQUENCE {
    srs-ResourceId                  SRS-ResourceId,
    nrofSRS-Ports                   ENUMERATED {1port, 2ports, 4ports},
    -- Comb value (2 or 4) and comb offset (see 38.214, section 6.2.1)
    transmissionComb                ENUMERATED {n2, n4},
    -- OFDM symbol location of the SRS resource within a slot including number of
    -- OFDM symbols (1, 2, or 4 per SRS resource) (see 38.214, section 6.2.1)
    resourceMapping                 TYPE_FFS!,
    -- Includes parameters capturing SRS frequency hopping (see 38.214, section 6.2.1)
    freqHopping                     TYPE_FFS!,
    -- Time domain behavior of SRS resource configuration (see 38.214, section 6.2.1)
    resourceType                    TYPE_FFS!,
    -- Periodicity and slot offset for periodic/semi-persistent SRS (see 38.214, section 6.2.1)
    slotConfiguration               TYPE_FFS!,
    -- Wideband and partial band SRS (see 38.214, section 6.2.1)
    freqBand                        TYPE_FFS!,
    -- ADD DESCRIPTION (see 38.214, section 6.2.1)
    sequenceId                      TYPE_FFS!
}

SRS-ResourceId ::= INTEGER (0..maxNrofSRS-Resources-1)

Thus, the RRC configuration of "SRS transmission settings" is done with the Information Element (IE) SRS-Config, which contains a list of SRS-Resources (the list constitutes a "pool" of resources), wherein each SRS resource contains information on the physical mapping of the reference signal on the time-frequency grid, time-domain information, sequence Identifiers (IDs), etc. The SRS-Config also contains a list of SRS resource sets, each of which contains a list of SRS resources and an associated DCI trigger state. Thus, when a certain DCI state is triggered, it indicates that the SRS resources in the associated set shall be transmitted by the UE. In NR, the following three types of SRS transmissions are supported:
Periodic SRS (P SRS): SRS is transmitted periodically in certain slots. This SRS transmission is semi-statically configured by the RRC using parameters such as SRS resource, periodicity, and slot offset.
Aperiodic SRS (AP SRS): This is a one-shot SRS transmission that can happen in any slot. Here, one-shot means that SRS transmission only happens once per trigger. The SRS resources (i.e., the resource element locations, which consist of subcarrier locations and Orthogonal Frequency Division Multiplexing (OFDM) symbol locations) for AP SRS are semi-statically configured. The transmission of AP SRS is triggered by dynamic signaling through the Physical Downlink Control Channel (PDCCH).
Multiple AP SRS resources can be grouped into an SRS resource set, and the triggering is done on a set level.
Semi-Persistent SRS (SP SRS): Similar to P SRS, resources for SP SRS transmissions are semi-statically configured with parameters such as periodicity and slot offset. However, unlike P SRS, dynamic signaling is needed to activate and possibly deactivate the SRS transmission. In the case of SP SRS, the gNB first RRC configures the UE with the SP SRS resources. The SP SRS resource set is then activated via a Medium Access Control (MAC) Control Element (CE).
NR supports spatial relation indication for SRS resources, where the spatial relation can be either to a downlink Reference Signal (RS) (SSB or CSI-RS) or to an SRS previously transmitted by the UE. The spatial relation is primarily used to indicate what uplink transmission beam the UE may use for precoding the SRS, i.e., it is a form of uplink beam indication. If a UE is capable of beam correspondence, the uplink beam may be derived from the downlink beam management procedure and a spatial relation to a downlink RS can be indicated, whereupon the UE may transmit the SRS in the reciprocal direction to how it set its receive beam when receiving the downlink RS. Alternatively, an uplink beam management procedure can be used, where the UE transmits an SRS beam sweep and the gNB refers back to one of the swept beams in a previously transmitted SRS resource to indicate the spatial relation to the SRS resource. The below table summarizes how the spatial relation to a target SRS resource is indicated for the different time domain behaviors.

Spatial parameter | Reference RS | Target RS | Signalling mode
Spatial | SSB/CSI-RS (at least P-CSI-RS and SP-CSI-RS), P-SRS; FFS: AP-CSI-RS, SP-SRS | P-SRS | RRC
Spatial | SSB/CSI-RS (at least P-CSI-RS and SP-CSI-RS), P-SRS/SP-SRS; FFS: AP-SRS, AP-CSI-RS | SP-SRS | RRC + MAC-CE
Spatial | SSB/CSI-RS (at least P-CSI-RS and SP-CSI-RS), P-SRS, SP-SRS, AP-SRS; working assumption: AP-CSI-RS | AP-SRS | RRC, or RRC + MAC CE for configuration, indication with DCI

MAC CE activation of CSI-RS is provided in Long Term Evolution (LTE). The Release 13 Full Dimension MIMO (FD-MIMO) specification in LTE supports an enhanced CSI-RS reporting called Class B for beamformed CSI-RS. Therein, an LTE RRC_CONNECTED UE can be configured with K beams (where 1<K≤8), where each beam can consist of 1, 2, 4, or 8 CSI-RS ports. For CSI feedback purposes (Precoder Matrix Indicator (PMI), Rank Indicator (RI), and Channel Quality Information (CQI)), there is a CSI-RS Resource Indicator per CSI-RS. As part of the CSI, the UE reports the CSI-RS Index (CRI) to indicate the preferred beam, where the CRI is wideband. Other CSI components such as RI/CQI/PMI are based on the legacy codebook (i.e., Release 12), and the CRI reporting periodicity is an integer multiple of the RI reporting periodicity. An illustration of beamformed CSI-RS is given inFIG.2. InFIG.2, the UE reports CRI=2, which corresponds to RI/CQI/PMI being computed using Beamformed CSI-RS 2′. For Release 14 enhanced FD-MIMO (eFD-MIMO), non-periodic beamformed CSI-RS with two different sub-flavors was introduced. The two sub-flavors are aperiodic CSI-RS and semi-persistent CSI-RS. In both these flavors, the CSI-RS resources are configured for the UE as in Release 13 with K CSI-RS resources, and MAC CE activation of N out of K CSI-RS resources (N≤K) is specified. Alternatively stated, after the K CSI-RS resources are configured to be aperiodic CSI-RS or semi-persistent CSI-RS, the UE waits for MAC CE activation of N out of K CSI-RS resources.
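To make the N-out-of-K activation concrete, the following minimal Python sketch encodes and decodes a per-resource activation bitmap (one Ri-style bit per configured CSI-RS resource). It is a simplified illustration only: the exact octet layout of the Activation/Deactivation MAC control element is the one defined in TS 36.321, and the function names here are hypothetical.

def encode_activation_bitmap(k_configured, activated_indices):
    # One bit per configured CSI-RS resource: 1 activates resource i, 0 deactivates it.
    activated = set(activated_indices)
    if any(i < 0 or i >= k_configured for i in activated):
        raise ValueError("activated index outside the K configured resources")
    return [1 if i in activated else 0 for i in range(k_configured)]

def decode_activation_bitmap(bits):
    # Recover the indices of the activated resources (the N out of K).
    return [i for i, bit in enumerate(bits) if bit == 1]

# Example: K = 8 configured CSI-RS resources, of which the network activates N = 3.
bits = encode_activation_bitmap(8, activated_indices=[0, 3, 5])
print(bits)                              # [1, 0, 0, 1, 0, 1, 0, 0]
print(decode_activation_bitmap(bits))    # [0, 3, 5]

The benefit noted in the next paragraph follows directly from this structure: changing which N resources are active only requires sending a new bitmap, not an RRC reconfiguration of the K resources.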
In the case of aperiodic CSI-RS, in addition to MAC CE activation, a DCI trigger is sent to the UE so that one of the activated CSI-RS resources is selected by the UE for CSI computation and subsequent reporting. In the case of semi-persistent CSI-RS, once the CSI-RS resources are activated by the MAC CE, the UE can use the activated CSI-RS resources for CSI computation and reporting. The MAC CE activation/deactivation command is specified in Section 5.19 of TS 36.321, where the specification text is reproduced below:
The network may activate and deactivate the configured CSI-RS resources of a serving cell by sending the Activation/Deactivation of CSI-RS resources MAC control element described in subclause 6.1.3.14. The configured CSI-RS resources are initially deactivated upon configuration and after a handover.
The abovementioned Section 6.1.3.14 of TS 36.321 is reproduced below:
The Activation/Deactivation of CSI-RS resources MAC control element is identified by a MAC PDU subheader with LCID as specified in table 6.2.1-1. It has a variable size as the number of configured CSI processes (N) and is defined in FIG. 6.1.3.14-1 [See FIG. 16]. Activation/Deactivation CSI-RS command is defined in FIG. 6.1.3.14-2 [See FIG. 17] and activates or deactivates CSI-RS resources for a CSI process. Activation/Deactivation of CSI-RS resources MAC control element applies to the serving cell on which the UE receives the Activation/Deactivation of CSI-RS resources MAC control element.
The Activation/Deactivation of CSI-RS resources MAC control element is defined as follows:
Ri: this field indicates the activation/deactivation status of the CSI-RS resources associated with CSI-RS-ConfigNZPId i for the CSI-RS process. The Ri field is set to "1" to indicate that the CSI-RS resource associated with CSI-RS-ConfigNZPId i for the CSI-RS process shall be activated. The Ri field is set to "0" to indicate that the CSI-RS-ConfigNZPId i shall be deactivated.
The MAC CE activation was introduced in LTE to allow the network to configure the UE with more CSI-RS resources than the maximum number of CSI-RS resources the UE is able to support for CSI feedback. The MAC CE would then selectively activate up to the maximum number of CSI-RS resources supported by the UE for CSI feedback. The benefit of MAC CE activation for CSI-RS is that the network may, without the need to reconfigure by RRC, activate another set of N CSI-RS resources among the K resources configured for the UE. There currently exist certain challenge(s). In particular, Medium Access Control (MAC) Control Element (CE) Sounding Reference Signal (SRS) set activation has not been specified in NR, but the requirement is that spatial relation information to both downlink and uplink Reference Signals (RSs) needs to be conveyed. Certain aspects of the present disclosure and their embodiments may provide solutions to these or other challenges. Systems and methods are disclosed herein for efficiently indicating spatial relations for a Semi-Persistent SRS (SP SRS) resource(s) in a MAC CE, e.g., using a 1-2 bit format field together with a resource identifier (ID) that has a varying size so as to fill a MAC CE octet. In some embodiments, the format field ranges from 1 to 2 bits, instead of the common 2 bits that would otherwise be needed since there are three types of identifiers. This allows the format field and the identifier to fit in one octet. Certain embodiments may provide one or more of the following technical advantage(s).
A MAC CE for SRS resource set activation is provided that gives Quasi Co-Location (QCL) information per resource in the resource set in an efficient and flexible manner, due to the disclosed format indicator presented herein. Two example embodiments are described below. The difference between these embodiments is in how the size of the format (F) field is captured. The mechanism in the receiver of the MAC CE would be the same. In the first embodiment, the size of the F field is described as 1 bit. In the second embodiment, the size of the F field is 2 bits. Note that these example embodiments are only examples. Other variations may be used, as will be apparent to one of skill in the art upon reading the present disclosure. In a first embodiment, SP SRS activation or deactivation (denoted herein as activation/deactivation) is provided via a MAC CE as described below. As described, the MAC CE also provides an indication of a spatial relation for the activated/deactivated SP SRS resource. While the term SP SRS "resource" is sometimes used herein, it is to be understood that the SP SRS resource can be, at least in some embodiments, an SP SRS "resource set". The design of the MAC CE in accordance with the first embodiment is shown inFIG.3. This MAC CE is of fixed size and has the following fields:
A: Indicates whether the MAC CE is for Activation (set to "1") or Deactivation (set to "0"). The size of the field is 1 bit. The A field is also referred to herein as an "activation" field or an "activation/deactivation" field.
C: Indicates whether the MAC CE is for the normal uplink carrier (set to "1") or the supplementary uplink carrier (set to "0"). The size of the field is 1 bit. The C field is also referred to herein as a "carrier" field.
F: Indicates which ID is present in the ID field. If this field is set to "1," then the ID field contains a 7-bit CSI-RS resource ID. If this field is set to "0," then if the first bit of the ID field is "1," the remaining 6 bits of the ID field contain a 6-bit Synchronization Signal Block (SSB) ID. If this field is set to "0," then if the first bit of the ID field is "0," the remaining 6 bits of the ID field contain one reserved bit and a 5-bit SRS resource ID. The size of this field is 1 bit. The F field is also referred to herein as the "format" field.
ID: This field carries the ID as indicated by the F field. The MAC entity shall ignore this field if the A field is set to "0." The size of the field is 7 bits.
In alternatives of the first embodiment, the meaning of the bits is switched such that if the F field is set to "0," then the ID field contains a 7-bit CSI-RS resource ID, while if the F field is set to "1," then if the first bit of the ID field is "0," the remaining 6 bits of the ID field contain a 6-bit SSB ID, and so forth. In a second embodiment, SP SRS activation/deactivation is provided via a MAC CE as described below. As described, the MAC CE also provides an indication of a spatial relation for the activated/deactivated SP SRS resource. The design of the MAC CE for the second embodiment is shown inFIG.4. This MAC CE is of fixed size and has the following fields:
A: Indicates whether the MAC CE is for Activation (set to "1") or Deactivation (set to "0"). The size of the field is 1 bit. The A field is also referred to herein as an "activation" field or an "activation/deactivation" field.
C: Indicates whether the MAC CE is for the normal uplink carrier (set to "1") or the supplementary uplink carrier (set to "0").
The size of the field is 1 bit. The C field is also referred to herein as the "carrier" field.
F: Indicates which ID is present in the ID field. If the first bit of this field is set to "1," then the ID field contains six of the seven bits of a CSI-RS resource ID. Together with the second bit of this field, the full 7-bit CSI-RS resource ID can be constructed. If this field is set to "01," then the ID field contains an SSB ID. If this field is set to "00," then the ID field contains one R (reserved) bit and a 5-bit SRS resource ID. The size of this field is 2 bits. The F field is also referred to herein as the "format" field.
ID: This field carries the ID as indicated by the F field. The MAC entity shall ignore this field if the A field is set to "0." The size of the field is 7 bits.
Common Part for Both Alternatives
Both the first embodiment and the second embodiment include the following common aspects. For example, the format field fits in 8 bits together with the resource ID. This is constructed as follows. The MAC CE octet has 8 bits, and one of the following is transmitted:
SSB ID (the size of the ID is ≤6 bits)
SRS resource ID (the size of the ID is ≤5 bits)
Channel State Information RS (CSI-RS) resource ID (the size of the ID is ≤7 bits)
The common solution is to have a 2-bit format field with four codepoints to indicate which type the following field has, i.e., which one of the above is signaled. But that becomes 2+7=9 bits. Embodiments of the present disclosure enable both the format indicator and the resource ID to fit into the 8-bit octet of the MAC CE. For example, for the whole octet (F+ID):
If the first bit (F field) is set to 1: the remaining 7 bits are the CSI-RS resource ID.
Else if the first bit (F field) is set to 0:
  If the second bit (first bit of the ID field) is set to 1: the remaining 6 bits are the SSB ID.
  If the second bit (first bit of the ID field) is set to 0: there is one reserved bit, and the remaining 5 bits are the SRS resource ID.
A short illustrative sketch of this packing follows the wireless network overview below. Although the subject matter described herein may be implemented in any appropriate type of system using any suitable components, the embodiments disclosed herein are described in relation to a wireless network, such as the example wireless network illustrated inFIG.5. For simplicity, the wireless network ofFIG.5only depicts a network506, network nodes560and560B, and Wireless Devices (WDs)510,510B, and510C. In practice, a wireless network may further include any additional elements suitable to support communication between wireless devices or between a wireless device and another communication device, such as a landline telephone, a service provider, or any other network node or end device. Of the illustrated components, the network node560and the WD510are depicted with additional detail. The wireless network may provide communication and other types of services to one or more wireless devices to facilitate the wireless devices' access to and/or use of the services provided by, or via, the wireless network. The wireless network may comprise and/or interface with any type of communication, telecommunication, data, cellular, and/or radio network or other similar type of system. In some embodiments, the wireless network may be configured to operate according to specific standards or other types of predefined rules or procedures.
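Returning to the one-octet packing described above under "Common Part for Both Alternatives", the following Python sketch shows one way the format indication and the identifier could be packed into, and recovered from, a single octet, following the first-embodiment style with a leading format bit. The constant and function names are hypothetical illustrations, not part of the disclosed MAC CE definition.

CSI_RS, SSB, SRS = "csi-rs resource id", "ssb id", "srs resource id"

def pack_octet(id_type, id_value):
    # Pack the format indication and the ID into one 8-bit value.
    if id_type == CSI_RS:        # first bit = 1, then a 7-bit CSI-RS resource ID
        assert 0 <= id_value < 2**7
        return (1 << 7) | id_value
    if id_type == SSB:           # first bit = 0, second bit = 1, then a 6-bit SSB ID
        assert 0 <= id_value < 2**6
        return (1 << 6) | id_value
    if id_type == SRS:           # first two bits = 0, one reserved bit, then a 5-bit SRS resource ID
        assert 0 <= id_value < 2**5
        return id_value
    raise ValueError("unknown identifier type")

def unpack_octet(octet):
    # Recover (identifier type, identifier value) from the 8-bit value.
    if octet & 0x80:
        return CSI_RS, octet & 0x7F
    if octet & 0x40:
        return SSB, octet & 0x3F
    return SRS, octet & 0x1F

# Round-trip check for the three identifier types.
for id_type, id_value in [(CSI_RS, 97), (SSB, 35), (SRS, 17)]:
    assert unpack_octet(pack_octet(id_type, id_value)) == (id_type, id_value)

The reason all three identifiers fit is visible in unpack_octet: the longer the identifier, the fewer leading bits are spent indicating its type, so the format indication and the ID together never exceed the eight bits of the MAC CE octet.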
Thus, particular embodiments of the wireless network may implement communication standards, such as Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), LTE, and/or other suitable Second, Third, Fourth, or Fifth Generation (2G, 3G, 4G, or 5G) standards; Wireless Local Area Network (WLAN) standards, such as the IEEE 802.11 standards; and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave, and/or ZigBee standards. The network506may comprise one or more backhaul networks, core networks, Internet Protocol (IP) networks, Public Switched Telephone Networks (PSTNs), packet data networks, optical networks, Wide Area Networks (WANs), Local Area Networks (LANs), WLANs, wired networks, wireless networks, metropolitan area networks, and other networks to enable communication between devices. The network node560and the WD510comprise various components described in more detail below. These components work together in order to provide network node and/or wireless device functionality, such as providing wireless connections in a wireless network. In different embodiments, the wireless network may comprise any number of wired or wireless networks, network nodes, base stations, controllers, wireless devices, relay stations, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections. As used herein, network node refers to equipment capable, configured, arranged, and/or operable to communicate directly or indirectly with a wireless device and/or with other network nodes or equipment in the wireless network to enable and/or provide wireless access to the wireless device and/or to perform other functions (e.g., administration) in the wireless network. Examples of network nodes include, but are not limited to, Access Points (APs) (e.g., radio APs), Base Stations (BSs) (e.g., radio base stations, Node Bs, eNBs, and gNBs). Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and may then also be referred to as femto base stations, pico base stations, micro base stations, or macro base stations. A base station may be a relay node or a relay donor node controlling a relay. A network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or Remote Radio Units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such RRUs may or may not be integrated with an antenna as an antenna integrated radio. Parts of a distributed radio base station may also be referred to as nodes in a Distributed Antenna System (DAS). Yet further examples of network nodes include Multi-Standard Radio (MSR) equipment such as MSR BSs, network controllers such as Radio Network Controllers (RNCs) or BS Controllers (BSCs), Base Transceiver Stations (BTSs), transmission points, transmission nodes, Multi-Cell/Multicast Coordination Entities (MCEs), core network nodes (e.g., Mobile Switching Centers (MSCs), MMEs), Operation and Maintenance (O&M) nodes, Operations Support System (OSS) nodes, Self-Organizing Network (SON) nodes, positioning nodes (e.g., Evolved Serving Mobile Location Center (E-SMLCs)), and/or Minimization of Drive Tests (MDTs). As another example, a network node may be a virtual network node as described in more detail below. 
More generally, however, network nodes may represent any suitable device (or group of devices) capable, configured, arranged, and/or operable to enable and/or provide a wireless device with access to the wireless network or to provide some service to a wireless device that has accessed the wireless network. InFIG.5, the network node560includes processing circuitry570, a device readable medium580, an interface590, auxiliary equipment584, a power source586, power circuitry587, and an antenna562. Although the network node560illustrated in the example wireless network ofFIG.5may represent a device that includes the illustrated combination of hardware components, other embodiments may comprise network nodes with different combinations of components. It is to be understood that a network node comprises any suitable combination of hardware and/or software needed to perform the tasks, features, functions, and methods disclosed herein. Moreover, while the components of the network node560are depicted as single boxes located within a larger box, or nested within multiple boxes, in practice, a network node may comprise multiple different physical components that make up a single illustrated component (e.g., the device readable medium580may comprise multiple separate hard drives as well as multiple Random Access Memory (RAM) modules). Similarly, the network node560may be composed of multiple physically separate components (e.g., a Node B component and a RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components. In certain scenarios in which the network node560comprises multiple separate components (e.g., BTS and BSC components), one or more of the separate components may be shared among several network nodes. For example, a single RNC may control multiple Node Bs. In such a scenario, each unique Node B and RNC pair may in some instances be considered a single separate network node. In some embodiments, the network node560may be configured to support multiple Radio Access Technologies (RATs). In such embodiments, some components may be duplicated (e.g., a separate device readable medium580for the different RATs) and some components may be reused (e.g., the same antenna562may be shared by the RATs). The network node560may also include multiple sets of the various illustrated components for different wireless technologies integrated into the network node560, such as, for example, GSM, Wideband Code Division Multiple Access (WCDMA), LTE, NR, WiFi, or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or a different chip or set of chips and other components within the network node560. The processing circuitry570is configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being provided by a network node. These operations performed by the processing circuitry570may include processing information obtained by the processing circuitry570by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination. 
The processing circuitry570may comprise a combination of one or more of a microprocessor, a controller, a microcontroller, a Central Processing Unit (CPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or any other suitable computing device, resource, or combination of hardware, software, and/or encoded logic operable to provide, either alone or in conjunction with other network node560components, such as the device readable medium580, network node560functionality. For example, the processing circuitry570may execute instructions stored in the device readable medium580or in memory within the processing circuitry570. Such functionality may include providing any of the various wireless features, functions, or benefits discussed herein. In some embodiments, the processing circuitry570may include a System on a Chip (SOC). In some embodiments, the processing circuitry570may include one or more of Radio Frequency (RF) transceiver circuitry572and baseband processing circuitry574. In some embodiments, the RF transceiver circuitry572and the baseband processing circuitry574may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of the RF transceiver circuitry572and the baseband processing circuitry574may be on the same chip or set of chips, boards, or units. In certain embodiments, some or all of the functionality described herein as being provided by a network node, base station, eNB, or other such network device may be performed by the processing circuitry570executing instructions stored on the device readable medium580or memory within the processing circuitry570. In alternative embodiments, some or all of the functionality may be provided by the processing circuitry570without executing instructions stored on a separate or discrete device readable medium, such as in a hard-wired manner. In any of those embodiments, whether executing instructions stored on a device readable storage medium or not, the processing circuitry570can be configured to perform the described functionality. The benefits provided by such functionality are not limited to the processing circuitry570alone or to other components of the network node560, but are enjoyed by the network node560as a whole, and/or by end users and the wireless network generally. The device readable medium580may comprise any form of volatile or non-volatile computer readable memory including, without limitation, persistent storage, solid state memory, remotely mounted memory, magnetic media, optical media, RAM, Read Only Memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by the processing circuitry570. The device readable medium580may store any suitable instructions; data or information, including a computer program; software; an application including one or more of logic, rules, code, tables, etc.; and/or other instructions capable of being executed by the processing circuitry570and utilized by the network node560. The device readable medium580may be used to store any calculations made by the processing circuitry570and/or any data received via the interface590. 
In some embodiments, the processing circuitry570and the device readable medium580may be considered to be integrated. The interface590is used in the wired or wireless communication of signaling and/or data between the network node560, a network506, and/or WDs510. As illustrated, the interface590comprises port(s)/terminal(s)594to send and receive data, for example to and from the network506over a wired connection. The interface590also includes radio front end circuitry592that may be coupled to, or in certain embodiments a part of, the antenna562. The radio front end circuitry592comprises filters598and amplifiers596. The radio front end circuitry592may be connected to the antenna562and the processing circuitry570. The radio front end circuitry592may be configured to condition signals communicated between the antenna562and the processing circuitry570. The radio front end circuitry592may receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. The radio front end circuitry592may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of the filters598and/or the amplifiers596. The radio signal may then be transmitted via the antenna562. Similarly, when receiving data, the antenna562may collect radio signals which are then converted into digital data by the radio front end circuitry592. The digital data may be passed to the processing circuitry570. In other embodiments, the interface590may comprise different components and/or different combinations of components. In certain alternative embodiments, the network node560may not include separate radio front end circuitry592; instead, the processing circuitry570may comprise radio front end circuitry and may be connected to the antenna562without separate radio front end circuitry592. Similarly, in some embodiments, all or some of the RF transceiver circuitry572may be considered a part of the interface590. In still other embodiments, the interface590may include the one or more ports or terminals594, the radio front end circuitry592, and the RF transceiver circuitry572as part of a radio unit (not shown), and the interface590may communicate with the baseband processing circuitry574, which is part of a digital unit (not shown). The antenna562may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals. The antenna562may be coupled to the radio front end circuitry592and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly. In some embodiments, the antenna562may comprise one or more omni-directional, sector, or panel antennas operable to transmit/receive radio signals between, for example, 2 gigahertz (GHz) and 66 GHz. An omni-directional antenna may be used to transmit/receive radio signals in any direction, a sector antenna may be used to transmit/receive radio signals from devices within a particular area, and a panel antenna may be a line of sight antenna used to transmit/receive radio signals in a relatively straight line. In some instances, the use of more than one antenna may be referred to as Multiple Input Multiple Output (MIMO). In certain embodiments, the antenna562may be separate from the network node560and may be connectable to the network node560through an interface or port. 
The antenna562, the interface590, and/or the processing circuitry570may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by a network node. Any information, data, and/or signals may be received from a WD, another network node, and/or any other network equipment. Similarly, the antenna562, the interface590, and/or the processing circuitry570may be configured to perform any transmitting operations described herein as being performed by a network node. Any information, data, and/or signals may be transmitted to a WD, another network node, and/or any other network equipment. The power circuitry587may comprise, or be coupled to, power management circuitry and is configured to supply the components of the network node560with power for performing the functionality described herein. The power circuitry587may receive power from the power source586. The power source586and/or the power circuitry587may be configured to provide power to the various components of the network node560in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component). The power source586may either be included in, or be external to, the power circuitry587and/or the network node560. For example, the network node560may be connectable to an external power source (e.g., an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to the power circuitry587. As a further example, the power source586may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, the power circuitry587. The battery may provide backup power should the external power source fail. Other types of power sources, such as photovoltaic devices, may also be used. Alternative embodiments of the network node560may include additional components beyond those shown inFIG.5that may be responsible for providing certain aspects of the network node's functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein. For example, the network node560may include user interface equipment to allow input of information into the network node560and to allow output of information from the network node560. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for the network node560. As used herein, WD refers to a device capable, configured, arranged, and/or operable to communicate wirelessly with network nodes and/or other WDs. Unless otherwise noted, the term WD may be used interchangeably herein with UE. Communicating wirelessly may involve transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information through air. In some embodiments, a WD may be configured to transmit and/or receive information without direct human interaction. For instance, a WD may be designed to transmit information to a network on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the network. 
Examples of a WD include, but are not limited to, a smart phone, a mobile phone, a cell phone, a Voice over IP (VoIP) phone, a wireless local loop phone, a desktop computer, a Personal Digital Assistant (PDA), a wireless camera, a gaming console or device, a music storage device, a playback appliance, a wearable terminal device, a wireless endpoint, a mobile station, a tablet, a laptop, Laptop Embedded Equipment (LEE), Laptop Mounted Equipment (LME), a smart device, a wireless Customer Premise Equipment (CPE), a vehicle mounted wireless terminal device, etc. A WD may support Device-to-Device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, Vehicle-to-Vehicle (V2V), Vehicle-to-Infrastructure (V2I), Vehicle-to-Everything (V2X), and may in this case be referred to as a D2D communication device. As yet another specific example, in an Internet of Things (IoT) scenario, a WD may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another WD and/or a network node. The WD may in this case be a Machine-to-Machine (M2M) device, which may in a 3GPP context be referred to as a MTC device. As one particular example, the WD may be a UE implementing the 3GPP Narrowband IoT (NB-IoT) standard. Particular examples of such machines or devices are sensors, metering devices such as power meters, industrial machinery, home or personal appliances (e.g., refrigerators, televisions, etc.), or personal wearables (e.g., watches, fitness trackers, etc.). In other scenarios, a WD may represent a vehicle or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation. A WD as described above may represent the endpoint of a wireless connection, in which case the device may be referred to as a wireless terminal. Furthermore, a WD as described above may be mobile, in which case it may also be referred to as a mobile device or a mobile terminal. As illustrated inFIG.5, a WD510includes an antenna511, an interface514, processing circuitry520, a device readable medium530, user interface equipment532, auxiliary equipment534, a power source536, and power circuitry537. The WD510may include multiple sets of one or more of the illustrated components for different wireless technologies supported by the WD510, such as, for example, GSM, WCDMA, LTE, NR, WiFi, WiMAX, or Bluetooth wireless technologies, just to mention a few. These wireless technologies may be integrated into the same or different chips or set of chips as other components within the WD510. The antenna511may include one or more antennas or antenna arrays configured to send and/or receive wireless signals and is connected to the interface514. In certain alternative embodiments, the antenna511may be separate from the WD510and be connectable to the WD510through an interface or port. The antenna511, the interface514, and/or the processing circuitry520may be configured to perform any receiving or transmitting operations described herein as being performed by a WD. Any information, data, and/or signals may be received from a network node and/or another WD. In some embodiments, radio front end circuitry and/or the antenna511may be considered an interface. As illustrated, the interface514comprises radio front end circuitry512and the antenna511. The radio front end circuitry512comprises one or more filters518and amplifiers516. 
The radio front end circuitry512is connected to the antenna511and the processing circuitry520and is configured to condition signals communicated between the antenna511and the processing circuitry520. The radio front end circuitry512may be coupled to or be a part of the antenna511. In some embodiments, the WD510may not include separate radio front end circuitry512; rather, the processing circuitry520may comprise radio front end circuitry and may be connected to the antenna511. Similarly, in some embodiments, some or all of RF transceiver circuitry522may be considered a part of the interface514. The radio front end circuitry512may receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. The radio front end circuitry512may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of the filters518and/or the amplifiers516. The radio signal may then be transmitted via the antenna511. Similarly, when receiving data, the antenna511may collect radio signals which are then converted into digital data by the radio front end circuitry512. The digital data may be passed to the processing circuitry520. In other embodiments, the interface514may comprise different components and/or different combinations of components. The processing circuitry520may comprise a combination of one or more of a microprocessor, a controller, a microcontroller, a CPU, a DSP, an ASIC, a FPGA, or any other suitable computing device, resource, or combination of hardware, software, and/or encoded logic operable to provide, either alone or in conjunction with other WD510components, such as the device readable medium530, WD510functionality. Such functionality may include providing any of the various wireless features or benefits discussed herein. For example, the processing circuitry520may execute instructions stored in the device readable medium530or in memory within the processing circuitry520to provide the functionality disclosed herein. As illustrated, the processing circuitry520includes one or more of the RF transceiver circuitry522, baseband processing circuitry524, and application processing circuitry526. In other embodiments, the processing circuitry520may comprise different components and/or different combinations of components. In certain embodiments, the processing circuitry520of the WD510may comprise a SOC. In some embodiments, the RF transceiver circuitry522, the baseband processing circuitry524, and the application processing circuitry526may be on separate chips or sets of chips. In alternative embodiments, part or all of the baseband processing circuitry524and the application processing circuitry526may be combined into one chip or set of chips, and the RF transceiver circuitry522may be on a separate chip or set of chips. In still alternative embodiments, part or all of the RF transceiver circuitry522and the baseband processing circuitry524may be on the same chip or set of chips, and the application processing circuitry526may be on a separate chip or set of chips. In yet other alternative embodiments, part or all of the RF transceiver circuitry522, the baseband processing circuitry524, and the application processing circuitry526may be combined in the same chip or set of chips. In some embodiments, the RF transceiver circuitry522may be a part of the interface514. The RF transceiver circuitry522may condition RF signals for the processing circuitry520. 
In certain embodiments, some or all of the functionality described herein as being performed by a WD may be provided by the processing circuitry520executing instructions stored on the device readable medium530, which in certain embodiments may be a computer-readable storage medium. In alternative embodiments, some or all of the functionality may be provided by the processing circuitry520without executing instructions stored on a separate or discrete device readable storage medium, such as in a hard-wired manner. In any of those particular embodiments, whether executing instructions stored on a device readable storage medium or not, the processing circuitry520can be configured to perform the described functionality. The benefits provided by such functionality are not limited to the processing circuitry520alone or to other components of the WD510, but are enjoyed by the WD510as a whole, and/or by end users and the wireless network generally. The processing circuitry520may be configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being performed by a WD. These operations, as performed by the processing circuitry520, may include processing information obtained by the processing circuitry520by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored by the WD510, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination. The device readable medium530may be operable to store a computer program; software; an application including one or more of logic, rules, code, tables, etc.; and/or other instructions capable of being executed by the processing circuitry520. The device readable medium530may include computer memory (e.g., RAM or ROM), mass storage media (e.g., a hard disk), removable storage media (e.g., a CD or a DVD), and/or any other volatile or non-volatile, non-transitory device readable and/or computer executable memory devices that store information, data, and/or instructions that may be used by the processing circuitry520. In some embodiments, the processing circuitry520and the device readable medium530may be considered to be integrated. The user interface equipment532may provide components that allow for a human user to interact with the WD510. Such interaction may be of many forms, such as visual, audial, tactile, etc. The user interface equipment532may be operable to produce output to the user and to allow the user to provide input to the WD510. The type of interaction may vary depending on the type of user interface equipment532installed in the WD510. For example, if the WD510is a smart phone, the interaction may be via a touch screen; if the WD510is a smart meter, the interaction may be through a screen that provides usage (e.g., the number of gallons used) or a speaker that provides an audible alert (e.g., if smoke is detected). The user interface equipment532may include input interfaces, devices and circuits, and output interfaces, devices and circuits. The user interface equipment532is configured to allow input of information into the WD510, and is connected to the processing circuitry520to allow the processing circuitry520to process the input information. 
The user interface equipment532may include, for example, a microphone, a proximity or other sensor, keys/buttons, a touch display, one or more cameras, a Universal Serial Bus (USB) port, or other input circuitry. The user interface equipment532is also configured to allow output of information from the WD510and to allow the processing circuitry520to output information from the WD510. The user interface equipment532may include, for example, a speaker, a display, vibrating circuitry, a USB port, a headphone interface, or other output circuitry. Using one or more input and output interfaces, devices, and circuits of the user interface equipment532, the WD510may communicate with end users and/or the wireless network, and allow them to benefit from the functionality described herein. The auxiliary equipment534is operable to provide more specific functionality which may not be generally performed by WDs. This may comprise specialized sensors for doing measurements for various purposes, interfaces for additional types of communication such as wired communications, etc. The inclusion and type of components of the auxiliary equipment534may vary depending on the embodiment and/or scenario. The power source536may, in some embodiments, be in the form of a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic devices, or power cells may also be used. The WD510may further comprise the power circuitry537for delivering power from the power source536to the various parts of the WD510which need power from the power source536to carry out any functionality described or indicated herein. The power circuitry537may in certain embodiments comprise power management circuitry. The power circuitry537may additionally or alternatively be operable to receive power from an external power source, in which case the WD510may be connectable to the external power source (such as an electricity outlet) via input circuitry or an interface such as an electrical power cable. The power circuitry537may also in certain embodiments be operable to deliver power from an external power source to the power source536. This may be, for example, for the charging of the power source536. The power circuitry537may perform any formatting, converting, or other modification to the power from the power source536to make the power suitable for the respective components of the WD510to which power is supplied. FIG.6illustrates one embodiment of a UE in accordance with various aspects described herein. As used herein, a user equipment or UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device. Instead, a UE may represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller). Alternatively, a UE may represent a device that is not intended for sale to, or operation by, an end user but which may be associated with or operated for the benefit of a user (e.g., a smart power meter). A UE600may be any UE identified by 3GPP, including an NB-IoT UE, an MTC UE, and/or an enhanced MTC (eMTC) UE. The UE600, as illustrated inFIG.6, is one example of a WD configured for communication in accordance with one or more communication standards promulgated by 3GPP, such as 3GPP's GSM, UMTS, LTE, and/or 5G standards. As mentioned previously, the terms WD and UE may be used interchangeably.
Accordingly, althoughFIG.6is a UE, the components discussed herein are equally applicable to a WD, and vice-versa. InFIG.6, the UE600includes processing circuitry601that is operatively coupled to an input/output interface605, an RF interface609, a network connection interface611, memory615including RAM617, ROM619, and a storage medium621or the like, a communication subsystem631, a power source613, and/or any other component, or any combination thereof. The storage medium621includes an operating system623, an application program625, and data627. In other embodiments, the storage medium621may include other similar types of information. Certain UEs may utilize all of the components shown inFIG.6, or only a subset of the components. The level of integration between the components may vary from one UE to another UE. Further, certain UEs may contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc. InFIG.6, the processing circuitry601may be configured to process computer instructions and data. The processing circuitry601may be configured to implement any sequential state machine operative to execute machine instructions stored as machine-readable computer programs in the memory, such as one or more hardware-implemented state machines (e.g., in discrete logic, FPGA, ASIC, etc.); programmable logic together with appropriate firmware; one or more stored programs, general purpose processors, such as a microprocessor or DSP, together with appropriate software; or any combination of the above. For example, the processing circuitry601may include two CPUs. Data may be information in a form suitable for use by a computer. In the depicted embodiment, the input/output interface605may be configured to provide a communication interface to an input device, output device, or input and output device. The UE600may be configured to use an output device via the input/output interface605. An output device may use the same type of interface port as an input device. For example, a USB port may be used to provide input to and output from the UE600. The output device may be a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof. The UE600may be configured to use an input device via the input/output interface605to allow a user to capture information into the UE600. The input device may include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like. The presence-sensitive display may include a capacitive or resistive touch sensor to sense input from a user. A sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, another like sensor, or any combination thereof. For example, the input device may be an accelerometer, a magnetometer, a digital camera, a microphone, and an optical sensor. InFIG.6, the RF interface609may be configured to provide a communication interface to RF components such as a transmitter, a receiver, and an antenna. The network connection interface611may be configured to provide a communication interface to a network643A. 
The network643A may encompass wired and/or wireless networks such as a LAN, a WAN, a computer network, a wireless network, a telecommunications network, another like network or any combination thereof. For example, the network643A may comprise a WiFi network. The network connection interface611may be configured to include a receiver and a transmitter interface used to communicate with one or more other devices over a communication network according to one or more communication protocols, such as Ethernet, Transmission Control Protocol (TCP)/IP, Synchronous Optical Networking (SONET), Asynchronous Transfer Mode (ATM), or the like. The network connection interface611may implement receiver and transmitter functionality appropriate to the communication network links (e.g., optical, electrical, and the like). The transmitter and receiver functions may share circuit components, software, or firmware, or alternatively may be implemented separately. The RAM617may be configured to interface via a bus602to the processing circuitry601to provide storage or caching of data or computer instructions during the execution of software programs such as the operating system, application programs, and device drivers. The ROM619may be configured to provide computer instructions or data to the processing circuitry601. For example, the ROM619may be configured to store invariant low-level system code or data for basic system functions such as basic Input and Output (I/O), startup, or reception of keystrokes from a keyboard that are stored in a non-volatile memory. The storage medium621may be configured to include memory such as RAM, ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically EPROM (EEPROM), magnetic disks, optical disks, floppy disks, hard disks, removable cartridges, or flash drives. In one example, the storage medium621may be configured to include the operating system623, the application program625such as a web browser application, a widget or gadget engine, or another application, and the data file627. The storage medium621may store, for use by the UE600, any of a variety of various operating systems or combinations of operating systems. The storage medium621may be configured to include a number of physical drive units, such as a Redundant Array of Independent Disks (RAID), a floppy disk drive, flash memory, a USB flash drive, an external hard disk drive, a thumb drive, a pen drive, a key drive, a High-Density Digital Versatile Disc (HD-DVD) optical disc drive, an internal hard disk drive, a Blu-Ray optical disc drive, a Holographic Digital Data Storage (HDDS) optical disc drive, an external mini-Dual In-Line Memory Module (DIMM), Synchronous Dynamic RAM (SDRAM), external micro-DIMM SDRAM, smartcard memory such as a Subscriber Identity Module (SIM) or a Removable User Identity (RUIM) module, other memory, or any combination thereof. The storage medium621may allow the UE600to access computer-executable instructions, application programs, or the like, stored on transitory or non-transitory memory media, to off-load data or to upload data. An article of manufacture, such as one utilizing a communication system, may be tangibly embodied in the storage medium621, which may comprise a device readable medium. InFIG.6, the processing circuitry601may be configured to communicate with a network643B using the communication subsystem631. The network643A and the network643B may be the same network or networks or different network or networks. 
The communication subsystem631may be configured to include one or more transceivers used to communicate with the network643B. For example, the communication subsystem631may be configured to include one or more transceivers used to communicate with one or more remote transceivers of another device capable of wireless communication such as another WD, UE, or base station of a Radio Access Network (RAN) according to one or more communication protocols, such as IEEE 802.6, Code Division Multiple Access (CDMA), WCDMA, GSM, LTE, Universal Terrestrial RAN (UTRAN), WiMax, or the like. Each transceiver may include a transmitter633and/or a receiver635to implement transmitter or receiver functionality, respectively, appropriate to the RAN links (e.g., frequency allocations and the like). Further, the transmitter633and the receiver635of each transceiver may share circuit components, software, or firmware, or alternatively may be implemented separately. In the illustrated embodiment, the communication functions of the communication subsystem631may include data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the Global Positioning System (GPS) to determine a location, another like communication function, or any combination thereof. For example, the communication subsystem631may include cellular communication, WiFi communication, Bluetooth communication, and GPS communication. The network643B may encompass wired and/or wireless networks such as a LAN, a WAN, a computer network, a wireless network, a telecommunications network, another like network, or any combination thereof. For example, the network643B may be a cellular network, a WiFi network, and/or a near-field network. A power source613may be configured to provide Alternating Current (AC) or Direct Current (DC) power to components of the UE600. The features, benefits, and/or functions described herein may be implemented in one of the components of the UE600or partitioned across multiple components of the UE600. Further, the features, benefits, and/or functions described herein may be implemented in any combination of hardware, software, or firmware. In one example, the communication subsystem631may be configured to include any of the components described herein. Further, the processing circuitry601may be configured to communicate with any of such components over the bus602. In another example, any of such components may be represented by program instructions stored in memory that, when executed by the processing circuitry601, perform the corresponding functions described herein. In another example, the functionality of any of such components may be partitioned between the processing circuitry601and the communication subsystem631. In another example, the non-computationally intensive functions of any of such components may be implemented in software or firmware and the computationally intensive functions may be implemented in hardware. FIG.7is a schematic block diagram illustrating a virtualization environment700in which functions implemented by some embodiments may be virtualized. In the present context, virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices, and networking resources. 
As used herein, virtualization can be applied to a node (e.g., a virtualized base station or a virtualized radio access node) or to a device (e.g., a UE, a WD, or any other type of communication device) or components thereof and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components (e.g., via one or more applications, components, functions, virtual machines, or containers executing on one or more physical processing nodes in one or more networks). In some embodiments, some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines implemented in one or more virtual environments700hosted by one or more of hardware nodes730. Further, in embodiments in which the virtual node is not a radio access node or does not require radio connectivity (e.g., a core network node), then the network node may be entirely virtualized. The functions may be implemented by one or more applications720(which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) operative to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein. The applications720are run in the virtualization environment700which provides hardware730comprising processing circuitry760and memory790. The memory790contains instructions795executable by the processing circuitry760whereby the application720is operative to provide one or more of the features, benefits, and/or functions disclosed herein. The virtualization environment700comprises general-purpose or special-purpose network hardware devices730comprising a set of one or more processors or processing circuitry760, which may be Commercial Off-the-Shelf (COTS) processors, dedicated ASICs, or any other type of processing circuitry including digital or analog hardware components or special purpose processors. Each hardware device730may comprise memory790-1which may be non-persistent memory for temporarily storing instructions795or software executed by the processing circuitry760. Each hardware device730may comprise one or more Network Interface Controllers (NICs)770, also known as network interface cards, which include a physical network interface780. Each hardware device730may also include non-transitory, persistent, machine-readable storage media790-2having stored therein software795and/or instructions executable by the processing circuitry760. The software795may include any type of software including software for instantiating one or more virtualization layers750(also referred to as hypervisors), software to execute virtual machines740, as well as software allowing it to execute functions, features, and/or benefits described in relation with some embodiments described herein. The virtual machines740, comprise virtual processing, virtual memory, virtual networking or interface, and virtual storage, and may be run by a corresponding virtualization layer750or hypervisor. Different embodiments of the instance of virtual appliance720may be implemented on one or more of the virtual machines740, and the implementations may be made in different ways. During operation, the processing circuitry760executes the software795to instantiate the hypervisor or virtualization layer750, which may sometimes be referred to as a Virtual Machine Monitor (VMM). 
The virtualization layer750may present a virtual operating platform that appears like networking hardware to the virtual machine740. As shown inFIG.7, the hardware730may be a standalone network node with generic or specific components. The hardware730may comprise an antenna7225and may implement some functions via virtualization. Alternatively, the hardware730may be part of a larger cluster of hardware (e.g., such as in a data center or CPE) where many hardware nodes work together and are managed via a Management and Orchestration (MANO)7100, which, among others, oversees lifecycle management of the applications720. Virtualization of the hardware is in some contexts referred to as Network Function Virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers and CPE. In the context of NFV, the virtual machine740may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine. Each of the virtual machines740, and that part of the hardware730that executes that virtual machine740, be it hardware dedicated to that virtual machine740and/or hardware shared by that virtual machine740with others of the virtual machines740, forms a separate Virtual Network Element (VNE). Still in the context of NFV, Virtual Network Function (VNF) is responsible for handling specific network functions that run in one or more virtual machines740on top of the hardware networking infrastructure730and corresponds to the application720inFIG.7. In some embodiments, one or more radio units7200that each include one or more transmitters7220and one or more receivers7210may be coupled to the one or more antennas7225. The radio units7200may communicate directly with the hardware nodes730via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station. In some embodiments, some signaling can be effected with the use of a control system7230, which may alternatively be used for communication between the hardware nodes730and the radio unit7200. With reference toFIG.8, in accordance with an embodiment, a communication system includes a telecommunication network810, such as a 3GPP-type cellular network, which comprises an access network811, such as a RAN, and a core network814. The access network811comprises a plurality of base stations812A,812B,812C, such as Node Bs, eNBs, gNBs, or other types of wireless APs, each defining a corresponding coverage area813A,813B,813C. Each base station812A,812B,812C is connectable to the core network814over a wired or wireless connection815. A first UE891located in coverage area813C is configured to wirelessly connect to, or be paged by, the corresponding base station812C. A second UE892in coverage area813A is wirelessly connectable to the corresponding base station812A. While a plurality of UEs891,892are illustrated in this example, the disclosed embodiments are equally applicable to a situation where a sole UE is in the coverage area or where a sole UE is connecting to the corresponding base station812. 
The telecommunication network810is itself connected to a host computer830, which may be embodied in the hardware and/or software of a standalone server, a cloud-implemented server, a distributed server, or as processing resources in a server farm. The host computer830may be under the ownership or control of a service provider, or may be operated by the service provider or on behalf of the service provider. Connections821and822between telecommunication network810and the host computer830may extend directly from the core network814to the host computer830or may go via an optional intermediate network820. The intermediate network820may be one of, or a combination of more than one of, a public, private, or hosted network; the intermediate network820, if any, may be a backbone network or the Internet; in particular, the intermediate network820may comprise two or more sub-networks (not shown). The communication system ofFIG.8as a whole enables connectivity between the connected UEs891,892and the host computer830. The connectivity may be described as an Over-the-Top (OTT) connection850. The host computer830and the connected UEs891,892are configured to communicate data and/or signaling via the OTT connection850, using the access network811, the core network814, any intermediate network820, and possible further infrastructure (not shown) as intermediaries. The OTT connection850may be transparent in the sense that the participating communication devices through which the OTT connection850passes are unaware of routing of uplink and downlink communications. For example, the base station812may not or need not be informed about the past routing of an incoming downlink communication with data originating from the host computer830to be forwarded (e.g., handed over) to a connected UE891. Similarly, the base station812need not be aware of the future routing of an outgoing uplink communication originating from the UE891towards the host computer830. Example implementations, in accordance with an embodiment, of the UE, base station, and host computer discussed in the preceding paragraphs will now be described with reference toFIG.9. In a communication system900, a host computer910comprises hardware915including a communication interface916configured to set up and maintain a wired or wireless connection with an interface of a different communication device of the communication system900. The host computer910further comprises processing circuitry918, which may have storage and/or processing capabilities. In particular, the processing circuitry918may comprise one or more programmable processors, ASICs, FPGAs, or combinations of these (not shown) adapted to execute instructions. The host computer910further comprises software911, which is stored in or accessible by the host computer910and executable by the processing circuitry918. The software911includes a host application912. The host application912may be operable to provide a service to a remote user, such as a UE930connecting via an OTT connection950terminating at the UE930and the host computer910. In providing the service to the remote user, the host application912may provide user data which is transmitted using the OTT connection950. The communication system900further includes a base station920provided in a telecommunication system and comprising hardware925enabling it to communicate with the host computer910and with the UE930. 
The hardware925may include a communication interface926for setting up and maintaining a wired or wireless connection with an interface of a different communication device of the communication system900, as well as a radio interface927for setting up and maintaining at least a wireless connection970with the UE930located in a coverage area (not shown inFIG.9) served by the base station920. The communication interface926may be configured to facilitate a connection960to the host computer910. The connection960may be direct or it may pass through a core network (not shown inFIG.9) of the telecommunication system and/or through one or more intermediate networks outside the telecommunication system. In the embodiment shown, the hardware925of the base station920further includes processing circuitry928, which may comprise one or more programmable processors, ASICs, FPGAs, or combinations of these (not shown) adapted to execute instructions. The base station920further has software921stored internally or accessible via an external connection. The communication system900further includes the UE930already referred to. The UE930's hardware935may include a radio interface937configured to set up and maintain a wireless connection970with a base station serving a coverage area in which the UE930is currently located. The hardware935of the UE930further includes processing circuitry938, which may comprise one or more programmable processors, ASICs, FPGAs, or combinations of these (not shown) adapted to execute instructions. The UE930further comprises software931, which is stored in or accessible by the UE930and executable by the processing circuitry938. The software931includes a client application932. The client application932may be operable to provide a service to a human or non-human user via the UE930, with the support of the host computer910. In the host computer910, the executing host application912may communicate with the executing client application932via the OTT connection950terminating at the UE930and the host computer910. In providing the service to the user, the client application932may receive request data from the host application912and provide user data in response to the request data. The OTT connection950may transfer both the request data and the user data. The client application932may interact with the user to generate the user data that it provides. It is noted that the host computer910, the base station920, and the UE930illustrated inFIG.9may be similar or identical to the host computer830, one of the base stations812A,812B,812C, and one of the UEs891,892ofFIG.8, respectively. That is to say, the inner workings of these entities may be as shown inFIG.9and, independently, the surrounding network topology may be that ofFIG.8. InFIG.9, the OTT connection950has been drawn abstractly to illustrate the communication between the host computer910and the UE930via the base station920without explicit reference to any intermediary devices and the precise routing of messages via these devices. The network infrastructure may determine the routing, which it may be configured to hide from the UE930or from the service provider operating the host computer910, or both. While the OTT connection950is active, the network infrastructure may further take decisions by which it dynamically changes the routing (e.g., on the basis of load balancing considerations or reconfiguration of the network). 
The wireless connection970between the UE930and the base station920is in accordance with the teachings of the embodiments described throughout this disclosure. One or more of the various embodiments improve the performance of OTT services provided to the UE930using the OTT connection950, in which the wireless connection970forms the last segment. More precisely, the teachings of these embodiments may improve, e.g., data rate, latency, and/or power consumption and thereby provide benefits such as, e.g., reduced user waiting time, relaxed restriction on file size, better responsiveness, and/or extended battery lifetime. A measurement procedure may be provided for the purpose of monitoring data rate, latency, and other factors on which the one or more embodiments improve. There may further be an optional network functionality for reconfiguring the OTT connection950between the host computer910and the UE930, in response to variations in the measurement results. The measurement procedure and/or the network functionality for reconfiguring the OTT connection950may be implemented in the software911and the hardware915of the host computer910or in the software931and the hardware935of the UE930, or both. In some embodiments, sensors (not shown) may be deployed in or in association with communication devices through which the OTT connection950passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or supplying values of other physical quantities from which the software911,931may compute or estimate the monitored quantities. The reconfiguring of the OTT connection950may include message format, retransmission settings, preferred routing, etc.; the reconfiguring need not affect the base station920, and it may be unknown or imperceptible to the base station920. Such procedures and functionalities may be known and practiced in the art. In certain embodiments, measurements may involve proprietary UE signaling facilitating the host computer910's measurements of throughput, propagation times, latency, and the like. The measurements may be implemented in that the software911and931causes messages to be transmitted, in particular empty or ‘dummy’ messages, using the OTT connection950while it monitors propagation times, errors, etc. FIG.10is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station, and a UE which may be those described with reference toFIGS.8and9. For simplicity of the present disclosure, only drawing references toFIG.10will be included in this section. In step1010, the host computer provides user data. In sub-step1011(which may be optional) of step1010, the host computer provides the user data by executing a host application. In step1020, the host computer initiates a transmission carrying the user data to the UE. In step1030(which may be optional), the base station transmits to the UE the user data which was carried in the transmission that the host computer initiated, in accordance with the teachings of the embodiments described throughout this disclosure. In step1040(which may also be optional), the UE executes a client application associated with the host application executed by the host computer. FIG.11is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. 
The communication system includes a host computer, a base station, and a UE which may be those described with reference toFIGS.8and9. For simplicity of the present disclosure, only drawing references toFIG.11will be included in this section. In step1110of the method, the host computer provides user data. In an optional sub-step (not shown) the host computer provides the user data by executing a host application. In step1120, the host computer initiates a transmission carrying the user data to the UE. The transmission may pass via the base station, in accordance with the teachings of the embodiments described throughout this disclosure. In step1130(which may be optional), the UE receives the user data carried in the transmission. FIG.12is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station, and a UE which may be those described with reference toFIGS.8and9. For simplicity of the present disclosure, only drawing references toFIG.12will be included in this section. In step1210(which may be optional), the UE receives input data provided by the host computer. Additionally or alternatively, in step1220, the UE provides user data. In sub-step1221(which may be optional) of step1220, the UE provides the user data by executing a client application. In sub-step1211(which may be optional) of step1210, the UE executes a client application which provides the user data in reaction to the received input data provided by the host computer. In providing the user data, the executed client application may further consider user input received from the user. Regardless of the specific manner in which the user data was provided, the UE initiates, in sub-step1230(which may be optional), transmission of the user data to the host computer. In step1240of the method, the host computer receives the user data transmitted from the UE, in accordance with the teachings of the embodiments described throughout this disclosure. FIG.13is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station, and a UE which may be those described with reference toFIGS.8and9. For simplicity of the present disclosure, only drawing references toFIG.13will be included in this section. In step1310(which may be optional), in accordance with the teachings of the embodiments described throughout this disclosure, the base station receives user data from the UE. In step1320(which may be optional), the base station initiates transmission of the received user data to the host computer. In step1330(which may be optional), the host computer receives the user data carried in the transmission initiated by the base station. FIG.14depicts a method in accordance with particular embodiments. The method begins at step1400where the network node560(e.g., base station) transmits a MAC CE that includes an indication of an SP SRS resource to be activated or deactivated (activated/deactivated) and information that indicates a spatial relationship for the SP SRS resource (step1400). 
Again, as noted above, while the term SP SRS “resource” is sometimes used herein, it is to be understood that the SP SRS resource can be, at least in some embodiments, an SP SRS “resource set.” The MAC CE can be that of any of the embodiments described herein (e.g., any one of the first embodiment and the second embodiment described above with respect to, e.g.,FIGS.3and4). The WD510receives the MAC CE (step1402) and, optionally, transmits SRS in accordance with the information received in the MAC CE (step1404). For example, if an SP SRS resource is activated, the WD510transmits SRS on the activated SP SRS resource using, e.g., the uplink beam indicated by the spatial relationship indicated in the MAC CE. FIG.15illustrates a schematic block diagram of an apparatus1500in a wireless network (for example, the wireless network shown inFIG.5). The apparatus may be implemented in a wireless device or network node (e.g., the WD510or the network node560shown inFIG.5). The apparatus1500is operable to carry out the example method described with reference toFIG.14and possibly any other processes or methods disclosed herein. It is also to be understood that the method ofFIG.14is not necessarily carried out solely by the apparatus1500. At least some operations of the method can be performed by one or more other entities. The virtual apparatus1500may comprise processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include DSPs, special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as ROM, RAM, cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein, in several embodiments. In some implementations, the processing circuitry may be used to cause one or more units1502, and any other suitable units of the apparatus1500, to perform corresponding functions according to one or more embodiments of the present disclosure. The term unit may have conventional meaning in the field of electronics, electrical devices, and/or electronic devices and may include, for example, electrical and/or electronic circuitry, devices, modules, processors, memories, logic, solid-state and/or discrete devices, computer programs or instructions for carrying out respective tasks, procedures, computations, outputs, and/or displaying functions, and so on, such as those described herein. Some example embodiments are as follows. Group A Embodiments Embodiment 1: A method of operation of a wireless device for activating a semi-persistent sounding reference signal resource for a wireless device in a cellular communications network, comprising receiving, from a network node, a Medium Access Control, MAC, Control Element, CE, comprising: an indication of a semi-persistent sounding reference signal resource to be activated/deactivated; and information that indicates a spatial relation for the semi-persistent sounding reference signal resource to be activated/deactivated. 
Embodiment 2: The method of embodiment 1 wherein the information that indicates the spatial relation comprises: an indication of a type of reference signal for which the spatial relation is provided; and an identifier of a reference signal resource for the type of reference signal for which the spatial relation is provided. Embodiment 3: The method of embodiment 2 wherein the indication of the type of reference signal indicates that the type of reference signal is a Channel State Information Reference Signal, CSI-RS, a Synchronization Signal Block, SSB, or a Sounding Reference Signal, SRS. Embodiment 4: The method of embodiment 2 wherein the indication of the type of reference signal comprises two bits that indicate the type of reference signal, wherein: a first state of the two bits indicates that the type of reference signal is a first type of reference signal; a second state of the two bits indicates that the type of reference signal is a second type of reference signal; and a third state of the two bits indicates that the type of reference signal is a third type of reference signal. Embodiment 5: The method of embodiment 4 wherein the first type of reference signal is a Channel State Information Reference Signal, CSI-RS, the second type of reference signal is a Synchronization Signal Block, SSB, and the third type of reference signal is a Sounding Reference Signal, SRS. Embodiment 6: The method of embodiment 2 wherein the MAC CE comprises: a first octet that comprises the indication of the semi-persistent sounding reference signal resource to be activated/deactivated; and a second octet that comprises the indication of the type of reference signal for which the spatial relation is provided and the identifier of the reference signal resource for the type of reference signal for which the spatial relation is provided. Embodiment 7: The method of embodiment 6 wherein:if a first bit in the second octet is set to a first state:the first bit serves as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Channel State Information Reference Signal, CSI-RS; andremaining bits in the second octet serve as the identifier of the reference signal resource for the CSI-RS;if the first bit in the second octet is set to a second state:if a second bit in the second octet is set to a first state:the first bit and the second bit serve as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Synchronization Signal Block, SSB; andremaining bits in the second octet serve as the identifier of the reference signal resource for the SSB; andif the second bit in the second octet is set to a second state:the first bit and the second bit serve as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Sounding Reference Signal, SRS; andall but one of the remaining bits in the second octet serve as the identifier of the reference signal resource for the SRS. 
Embodiment 8: The method of embodiment 6 wherein a first bit in the second octet is set to a first state such that the first bit serves as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Channel State Information Reference Signal, CSI-RS; and remaining bits in the second octet serve as the identifier of the reference signal resource for the CSI-RS. Embodiment 9: The method of embodiment 6 wherein: a first bit in the second octet is set to a second state; a second bit in the second octet is set to a first state such that the first bit and the second bit serve as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Synchronization Signal Block, SSB; and remaining bits in the second octet serve as the identifier of the reference signal resource for the SSB. Embodiment 10: The method of embodiment 6 wherein: a first bit in the second octet is set to a second state; a second bit in the second octet is set to a second state such that the first bit and the second bit serve as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Sounding Reference Signal, SRS; and all but one of the remaining bits in the second octet serve as the identifier of the reference signal resource for the SRS. Embodiment 11: The method of embodiment 1 wherein: if a first bit of an octet of the MAC CE is set to a first state, the remaining bits in the octet comprise a first set of fields; if the first bit of the octet is set to a second state and a second bit of the octet is set to a first state, the remaining bits in the octet comprise a second set of fields; and if the first bit of the octet is set to a second state and the second bit of the octet is set to a second state, the remaining bits in the octet comprising a third set of fields. Embodiment 12: The method of embodiment 11 wherein the first set of fields comprises a field comprising bits providing an identifier of a Channel State Information Reference Signal, CSI-RS, resource for which a spatial relation is indicated. Embodiment 13: The method of embodiment 11 or 12 wherein the second set of fields comprises a field comprising bits providing an identifier of a Synchronization Signal Block, SSB, resource for which a spatial relation is indicated. Embodiment 14: The method of any one of embodiments 11 to 13 wherein the third set of fields comprises a field comprising bits providing an identifier of a Sounding Reference Signal, SRS, resource for which a spatial relation is indicated. Embodiment 15: The method of any of the previous embodiments, further comprising: providing user data; and forwarding the user data to a host computer via the transmission to the base station. 
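To make the octet layout described in Embodiments 6 to 14 above (and mirrored on the network-node side in Embodiments 21 to 29 below) easier to follow, the short Python sketch below encodes and decodes the second octet of the MAC CE. It is only an illustration under stated assumptions: the embodiments leave open which bit value corresponds to the "first state" and the "second state", the bit ordering within the octet, and any field or function names, so the MSB-first layout, the choice of 1 for the "first state", and the names used here (SpatialRelationInfo, encode_second_octet, decode_second_octet) are hypothetical rather than part of the disclosed embodiments.

from dataclasses import dataclass

CSI_RS, SSB, SRS = "CSI-RS", "SSB", "SRS"

@dataclass
class SpatialRelationInfo:
    rs_type: str      # CSI-RS, SSB, or SRS
    resource_id: int  # identifier of the reference signal resource

def encode_second_octet(info: SpatialRelationInfo) -> int:
    # Build the second octet carrying the spatial relation (network-node side).
    if info.rs_type == CSI_RS:
        # Assumed: first bit in its "first state" (here 1) flags CSI-RS;
        # the remaining 7 bits carry the CSI-RS resource identifier.
        return 0b1000_0000 | (info.resource_id & 0x7F)
    if info.rs_type == SSB:
        # Assumed: first bit in its "second state" (0), second bit in its
        # "first state" (1); the remaining 6 bits carry the SSB identifier.
        return 0b0100_0000 | (info.resource_id & 0x3F)
    if info.rs_type == SRS:
        # Assumed: first and second bits both in their "second state" (0);
        # all but one of the remaining bits (here 5) carry the SRS identifier.
        return info.resource_id & 0x1F
    raise ValueError("unknown reference signal type")

def decode_second_octet(octet: int) -> SpatialRelationInfo:
    # Parse the second octet back into a reference signal type and resource
    # identifier (wireless-device side), mirroring encode_second_octet.
    if octet & 0b1000_0000:
        return SpatialRelationInfo(CSI_RS, octet & 0x7F)
    if octet & 0b0100_0000:
        return SpatialRelationInfo(SSB, octet & 0x3F)
    return SpatialRelationInfo(SRS, octet & 0x1F)

# Round trip for an SSB-based spatial relation.
octet = encode_second_octet(SpatialRelationInfo(SSB, resource_id=17))
assert decode_second_octet(octet) == SpatialRelationInfo(SSB, 17)

On the wireless-device side, decode_second_octet would be applied to the received octet to recover the reference signal indicated for the uplink beam, as in the round-trip check at the end of the sketch.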
Group B Embodiments Embodiment 16: A method of operation of a network node (e.g., a base station) for activating a semi-persistent sounding reference signal resource for a wireless device in a cellular communications network, comprising transmitting, to a wireless device, a Medium Access Control, MAC, Control Element, CE, comprising: an indication of a semi-persistent sounding reference signal resource to be activated/deactivated; and information that indicates a spatial relation for the semi-persistent sounding reference signal resource to be activated/deactivated. Embodiment 17: The method of embodiment 16 wherein the information that indicates the spatial relation comprises: an indication of a type of reference signal for which the spatial relation is provided; and an identifier of a reference signal resource for the type of reference signal for which the spatial relation is provided. Embodiment 18: The method of embodiment 17 wherein the indication of the type of reference signal indicates that the type of reference signal is a Channel State Information Reference Signal, CSI-RS, a Synchronization Signal Block, SSB, or a Sounding Reference Signal, SRS. Embodiment 19: The method of embodiment 17 wherein the indication of the type of reference signal comprises two bits that indicate the type of reference signal, wherein: a first state of the two bits indicates that the type of reference signal is a first type of reference signal; a second state of the two bits indicates that the type of reference signal is a second type of reference signal; and a third state of the two bits indicates that the type of reference signal is a third type of reference signal. Embodiment 20: The method of embodiment 19 wherein the first type of reference signal is a Channel State Information Reference Signal, CSI-RS, the second type of reference signal is a Synchronization Signal Block, SSB, and the third type of reference signal is a Sounding Reference Signal, SRS. Embodiment 21: The method of embodiment 17 wherein the MAC CE comprises: a first octet that comprises the indication of the semi-persistent sounding reference signal resource to be activated/deactivated; and a second octet that comprises the indication of the type of reference signal for which the spatial relation is provided and the identifier of the reference signal resource for the type of reference signal for which the spatial relation is provided. 
Embodiment 22: The method of embodiment 21 wherein:if a first bit in the second octet is set to a first state:the first bit serves as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Channel State Information Reference Signal, CSI-RS; andremaining bits in the second octet serve as the identifier of the reference signal resource for the CSI-RS;if the first bit in the second octet is set to a second state:if a second bit in the second octet is set to a first state:the first bit and the second bit serve as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Synchronization Signal Block, SSB; andremaining bits in the second octet serve as the identifier of the reference signal resource for the SSB; andif the second bit in the second octet is set to a second state:the first bit and the second bit serve as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Sounding Reference Signal, SRS; andall but one of the remaining bits in the second octet serve as the identifier of the reference signal resource for the SRS. Embodiment 23: The method of embodiment 21 wherein: a first bit in the second octet is set to a first state such that the first bit serves as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Channel State Information Reference Signal, CSI-RS; and remaining bits in the second octet serve as the identifier of the reference signal resource for the CSI-RS. Embodiment 24: The method of embodiment 21 wherein: a first bit in the second octet is set to a second state; a second bit in the second octet is set to a first state such that the first bit and the second bit serve as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Synchronization Signal Block, SSB; and remaining bits in the second octet serve as the identifier of the reference signal resource for the SSB. Embodiment 25: The method of embodiment 21 wherein: a first bit in the second octet is set to a second state; a second bit in the second octet is set to a second state such that the first bit and the second bit serve as the indication of the type of reference signal for which the spatial relation is provided and the type of reference signal for which the spatial relation is provided is a Sounding Reference Signal, SRS; and all but one of the remaining bits in the second octet serve as the identifier of the reference signal resource for the SRS. Embodiment 26: The method of embodiment 16 wherein: if a first bit of an octet of the MAC CE is set to a first state, the remaining bits in the octet comprise a first set of fields; if the first bit of the octet is set to a second state and a second bit of the octet is set to a first state, the remaining bits in the octet comprise a second set of fields; and if the first bit of the octet is set to a second state and the second bit of the octet is set to a second state, the remaining bits in the octet comprising a third set of fields. 
Embodiment 27: The method of embodiment 26 wherein the first set of fields comprises a field comprising bits providing an identifier of a Channel State Information Reference Signal, CSI-RS, resource for which a spatial relation is indicated. Embodiment 28: The method of embodiment 26 or 27 wherein the second set of fields comprises a field comprising bits providing an identifier of a Synchronization Signal Block, SSB, resource for which a spatial relation is indicated. Embodiment 29: The method of any one of embodiments 26 to 28 wherein the third set of fields comprises a field comprising bits providing an identifier of a Sounding Reference Signal, SRS, resource for which a spatial relation is indicated. Embodiment 30: The method of any of the previous embodiments, further comprising: obtaining user data; and forwarding the user data to a host computer or a wireless device. Group C Embodiments Embodiment 31: A wireless device for activating a semi-persistent sounding reference signal resource for a wireless device in a cellular communications network, the wireless device comprising: processing circuitry configured to perform any of the steps of any of the Group A embodiments; and power supply circuitry configured to supply power to the wireless device. Embodiment 32: A base station for activating a semi-persistent sounding reference signal resource for a wireless device in a cellular communications network, the base station comprising: processing circuitry configured to perform any of the steps of any of the Group B embodiments; power supply circuitry configured to supply power to the base station. Embodiment 33: A User Equipment, UE, for activating a semi-persistent sounding reference signal resource for a wireless device in a cellular communications network, the UE comprising: antennas configured to send and receive wireless signals; radio front-end circuitry connected to the antennas and to processing circuitry, and configured to condition signals communicated between the antenna and the processing circuitry; the processing circuitry being configured to perform any of the steps of any of the Group A embodiments; an input interface connected to the processing circuitry and configured to allow input of information into the UE to be processed by the processing circuitry; an output interface connected to the processing circuitry and configured to output information from the UE that has been processed by the processing circuitry; and a battery connected to the processing circuitry and configured to supply power to the UE. Embodiment 34: A communication system including a host computer comprising: processing circuitry configured to provide user data; and a communication interface configured to forward the user data to a cellular network for transmission to a User Equipment, UE, wherein the cellular network comprises a base station having a radio interface and processing circuitry, the base station's processing circuitry configured to perform any of the steps of any of the Group B embodiments. Embodiment 35: The communication system of the previous embodiment further including the base station. Embodiment 36: The communication system of the previous 2 embodiments, further including the UE, wherein the UE is configured to communicate with the base station. 
Embodiment 37: The communication system of the previous 3 embodiments, wherein: the processing circuitry of the host computer is configured to execute a host application, thereby providing the user data; and the UE comprises processing circuitry configured to execute a client application associated with the host application. Embodiment 38: A method implemented in a communication system including a host computer, a base station and a User Equipment, UE, the method comprising: at the host computer, providing user data; and at the host computer, initiating a transmission carrying the user data to the UE via a cellular network comprising the base station, wherein the base station performs any of the steps of any of the Group B embodiments. Embodiment 39: The method of the previous embodiment, further comprising, at the base station, transmitting the user data. Embodiment 40: The method of the previous 2 embodiments, wherein the user data is provided at the host computer by executing a host application, the method further comprising, at the UE, executing a client application associated with the host application. Embodiment 41: A User Equipment, UE, configured to communicate with a base station, the UE comprising a radio interface and processing circuitry configured to perform the method of the previous 3 embodiments. Embodiment 42: A communication system including a host computer comprising: processing circuitry configured to provide user data; and a communication interface configured to forward user data to a cellular network for transmission to a User Equipment, UE, wherein the UE comprises a radio interface and processing circuitry, the UE's components configured to perform any of the steps of any of the Group A embodiments. Embodiment 43: The communication system of the previous embodiment, wherein the cellular network further includes a base station configured to communicate with the UE. Embodiment 44: The communication system of the previous 2 embodiments, wherein: the processing circuitry of the host computer is configured to execute a host application, thereby providing the user data; and the UE's processing circuitry is configured to execute a client application associated with the host application. Embodiment 45: A method implemented in a communication system including a host computer, a base station and a User Equipment, UE, the method comprising: at the host computer, providing user data; and at the host computer, initiating a transmission carrying the user data to the UE via a cellular network comprising the base station, wherein the UE performs any of the steps of any of the Group A embodiments. Embodiment 46: The method of the previous embodiment, further comprising at the UE, receiving the user data from the base station. Embodiment 47: A communication system including a host computer comprising: communication interface configured to receive user data originating from a transmission from a User Equipment, UE, to a base station, wherein the UE comprises a radio interface and processing circuitry, the UE's processing circuitry configured to perform any of the steps of any of the Group A embodiments. Embodiment 48: The communication system of the previous embodiment, further including the UE. 
Embodiment 49: The communication system of the previous 2 embodiments, further including the base station, wherein the base station comprises a radio interface configured to communicate with the UE and a communication interface configured to forward to the host computer the user data carried by a transmission from the UE to the base station. Embodiment 50: The communication system of the previous 3 embodiments, wherein: the processing circuitry of the host computer is configured to execute a host application; and the UE's processing circuitry is configured to execute a client application associated with the host application, thereby providing the user data. Embodiment 51: The communication system of the previous 4 embodiments, wherein: the processing circuitry of the host computer is configured to execute a host application, thereby providing request data; and the UE's processing circuitry is configured to execute a client application associated with the host application, thereby providing the user data in response to the request data. Embodiment 52: A method implemented in a communication system including a host computer, a base station and a User Equipment, UE, the method comprising: at the host computer, receiving user data transmitted to the base station from the UE, wherein the UE performs any of the steps of any of the Group A embodiments. Embodiment 53: The method of the previous embodiment, further comprising, at the UE, providing the user data to the base station. Embodiment 54: The method of the previous 2 embodiments, further comprising: at the UE, executing a client application, thereby providing the user data to be transmitted; and at the host computer, executing a host application associated with the client application. Embodiment 55: The method of the previous 3 embodiments, further comprising: at the UE, executing a client application; and at the UE, receiving input data to the client application, the input data being provided at the host computer by executing a host application associated with the client application, wherein the user data to be transmitted is provided by the client application in response to the input data. Embodiment 56: A communication system including a host computer comprising a communication interface configured to receive user data originating from a transmission from a User Equipment, UE, to a base station, wherein the base station comprises a radio interface and processing circuitry, the base station's processing circuitry configured to perform any of the steps of any of the Group B embodiments. Embodiment 57: The communication system of the previous embodiment further including the base station. Embodiment 58: The communication system of the previous 2 embodiments, further including the UE, wherein the UE is configured to communicate with the base station. Embodiment 59: The communication system of the previous 3 embodiments, wherein: the processing circuitry of the host computer is configured to execute a host application; the UE is configured to execute a client application associated with the host application, thereby providing the user data to be received by the host computer. 
Embodiment 60: A method implemented in a communication system including a host computer, a base station and a User Equipment, UE, the method comprising: at the host computer, receiving, from the base station, user data originating from a transmission which the base station has received from the UE, wherein the UE performs any of the steps of any of the Group A embodiments. Embodiment 61: The method of the previous embodiment, further comprising at the base station, receiving the user data from the UE. Embodiment 62: The method of the previous 2 embodiments, further comprising at the base station, initiating a transmission of the received user data to the host computer. At least some of the following abbreviations may be used in this disclosure. If there is an inconsistency between abbreviations, preference should be given to how it is used above. If listed multiple times below, the first listing should be preferred over any subsequent listing(s).
2G Second Generation
3G Third Generation
3GPP Third Generation Partnership Project
4G Fourth Generation
5G Fifth Generation
AC Alternating Current
AP Access Point
AP SRS Aperiodic Sounding Reference Signal
ASIC Application Specific Integrated Circuit
ATM Asynchronous Transfer Mode
BS Base Station
BSC Base Station Controller
BTS Base Transceiver Station
CD Compact Disk
CDMA Code Division Multiple Access
CE Control Element
COTS Commercial Off-the-Shelf
CP-OFDM Cyclic Prefix Orthogonal Frequency Division Multiplexing
CPE Customer Premise Equipment
CPU Central Processing Unit
CQI Channel Quality Information
CRI Channel State Information Reference Signal Index
CSI-RS Channel State Information Reference Signal
D2D Device-to-Device
DAS Distributed Antenna System
DC Direct Current
DCI Downlink Control Information
DIMM Dual In-Line Memory Module
DSP Digital Signal Processor
DVD Digital Video Disk
EEPROM Electrically Erasable Programmable Read Only Memory
eFD-MIMO Enhanced Full Dimension Multiple Input Multiple Output
eMTC Enhanced Machine-Type Communication
eNB Enhanced or Evolved Node B
EPROM Erasable Programmable Read Only Memory
E-SMLC Evolved Serving Mobile Location Center
FDD Frequency Division Duplexing
FD-MIMO Full Dimension Multiple Input Multiple Output
FPGA Field Programmable Gate Array
GHz Gigahertz
gNB Next Generation or New Radio Base Station
GPS Global Positioning System
GSM Global System for Mobile Communications
HDDS Holographic Digital Data Storage
HD-DVD High-Density Digital Versatile Disc
ID Identifier
IE Information Element
I/O Input and Output
IoT Internet of Things
IP Internet Protocol
LAN Local Area Network
LEE Laptop Embedded Equipment
LME Laptop Mounted Equipment
LTE Long Term Evolution
M2M Machine-to-Machine
MAC Medium Access Control
MANO Management and Orchestration
MCE Multi-Cell/Multicast Coordination Entity
MCS Modulation and Coding State
MDT Minimization of Drive Tests
MIMO Multiple Input Multiple Output
MME Mobility Management Entity
MSC Mobile Switching Center
MSR Multi-Standard Radio
MTC Machine Type Communication
NB-IoT Narrowband Internet of Things
NFV Network Function Virtualization
NIC Network Interface Controller
NR New Radio
O&M Operation and Maintenance
OFDM Orthogonal Frequency Division Multiplexing
OSS Operations Support System
OTT Over-the-Top
PDA Personal Digital Assistant
PDCCH Physical Downlink Control Channel
P-GW Packet Data Network Gateway
PMI Precoder Matrix Indicator
PROM Programmable Read Only Memory
P SRS Periodic Sounding Reference Signal
PSTN Public Switched Telephone Networks
PUSCH Physical Uplink Shared Channel
QCL Quasi Co-Location
RAID Redundant Array of Independent Disks
RAM Random Access Memory
RAN Radio Access Network
RAT Radio Access Technology
RF Radio Frequency
RI Rank Indicator
RNC Radio Network Controller
ROM Read Only Memory
RRC Radio Resource Control
RRH Remote Radio Head
RRU Remote Radio Unit
RS Reference Signal
RUIM Removable User Identity
SCEF Service Capability Exposure Function
SDRAM Synchronous Dynamic Random Access Memory
SIM Subscriber Identity Module
SOC System on a Chip
SON Self-Organizing Network
SONET Synchronous Optical Networking
SP SRS Semi-Persistent Sounding Reference Signal
SRI Sounding Reference Signal Resource Indicator
SRS Sounding Reference Signal
SSB Synchronization Signal Block
TCP Transmission Control Protocol
TDD Time Division Duplexing
TFRE Time/Frequency Resource Element
TPMI Transmit Precoder Matrix Indicator
TRI Transmission Rank Indicator
TRP Transmit-Receive Point
TS Technical Specification
UE User Equipment
UMTS Universal Mobile Telecommunications System
USB Universal Serial Bus
UTRAN Universal Terrestrial Radio Access Network
V2I Vehicle-to-Infrastructure
V2V Vehicle-to-Vehicle
V2X Vehicle-to-Everything
VMM Virtual Machine Monitor
VNE Virtual Network Element
VNF Virtual Network Function
VoIP Voice over Internet Protocol
WAN Wide Area Network
WCDMA Wideband Code Division Multiple Access
WD Wireless Device
WiMax Worldwide Interoperability for Microwave Access
WLAN Wireless Local Area Network
Those skilled in the art will recognize improvements and modifications to the embodiments of the present disclosure. All such improvements and modifications are considered within the scope of the concepts disclosed herein.
112,683
11863484
DETAILED DESCRIPTION The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of various embodiments of the present disclosure as defined by the claims and their equivalents. It includes various specific details to assist in that understanding, but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the various embodiments described herein can be made without departing from the scope and spirit of the present disclosure. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness. The terms used in the following description for indicating access nodes, network entities, messages, interfaces between network entities, and diverse identity information are provided for convenience of explanation. Accordingly, the terms used in the following description are not limited to specific meanings but may be replaced by other terms equivalent in technical meaning. In the following descriptions, the terms and definitions given in the 3GPP standards are used for convenience of explanation. However, the present disclosure is not limited by use of these terms and definitions, and other arbitrary terms and definitions may be employed instead. Table 1 lists the acronyms used throughout the present disclosure.
TABLE 1
Acronym: Full name
5GC: 5G Core Network
ACK: Acknowledgement
AM: Acknowledged Mode
AMF: Access and Mobility Management Function
ARQ: Automatic Repeat Request
AS: Access Stratum
ASN.1: Abstract Syntax Notation One
BSR: Buffer Status Report
BWP: Bandwidth Part
CA: Carrier Aggregation
CAG: Closed Access Group
CG: Cell Group
C-RNTI: Cell RNTI
CSI: Channel State Information
DCI: Downlink Control Information
DRB: (user) Data Radio Bearer
DRX: Discontinuous Reception
HARQ: Hybrid Automatic Repeat Request
IE: Information element
LCG: Logical Channel Group
MAC: Medium Access Control
MIB: Master Information Block
NAS: Non-Access Stratum
NG-RAN: NG Radio Access Network
NR: NR Radio Access
PBR: Prioritised Bit Rate
PCell: Primary Cell
PCI: Physical Cell Identifier
PDCCH: Physical Downlink Control Channel
PDCP: Packet Data Convergence Protocol
PDSCH: Physical Downlink Shared Channel
PDU: Protocol Data Unit
PHR: Power Headroom Report
PLMN: Public Land Mobile Network
PRACH: Physical Random Access Channel
PRB: Physical Resource Block
PSS: Primary Synchronisation Signal
PUCCH: Physical Uplink Control Channel
PUSCH: Physical Uplink Shared Channel
RACH: Random Access Channel
RAN: Radio Access Network
RA-RNTI: Random Access RNTI
RAT: Radio Access Technology
RB: Radio Bearer
RLC: Radio Link Control
RNA: RAN-based Notification Area
RNAU: RAN-based Notification Area Update
RNTI: Radio Network Temporary Identifier
RRC: Radio Resource Control
RRM: Radio Resource Management
RSRP: Reference Signal Received Power
RSRQ: Reference Signal Received Quality
RSSI: Received Signal Strength Indicator
SCell: Secondary Cell
SCS: Subcarrier Spacing
SDAP: Service Data Adaptation Protocol
SDU: Service Data Unit
SFN: System Frame Number
S-GW: Serving Gateway
SI: System Information
SIB: System Information Block
SpCell: Special Cell
SRB: Signalling Radio Bearer
SRS: Sounding Reference Signal
SSB: SS/PBCH block
SSS: Secondary Synchronisation Signal
SUL: Supplementary Uplink
TM: Transparent Mode
UCI: Uplink Control Information
UE: User Equipment
UM: Unacknowledged Mode
CRP: Cell Reselection Priority
LPP: LTE positioning protocol
posSIB: positioning SIB
posSI: positioning System Information
TRP: Transmission-Reception Point
DL-TDOA: Downlink Time Difference Of Arrival
Table 2 lists the terminologies and their definitions used throughout the present disclosure.
disclosure. TABLE 2TerminologyDefinitionallowedCG-ListList of configured grants for the corresponding logical channel.This restriction applies only when the UL grant is a configuredgrant. If present, UL MAC SDUs from this logical channel canonly be mapped to the indicated configured grant configuration.If the size of the sequence is zero, then UL MAC SDUs from thislogical channel cannot be mapped to any configured grantconfigurations. If the field is not present, UL MAC SDUs fromthis logical channel can be mapped to any configured grantconfigurations.allowedSCS-ListList of allowed sub-carrier spacings for the corresponding logicalchannel. If present, UL MAC SDUs from this logical channel canonly be mapped to the indicated numerology. Otherwise, ULMAC SDUs from this logical channel can be mapped to anyconfigured numerology.allowedServingCellsList of allowed serving cells for the corresponding logicalchannel. If present, UL MAC SDUs from this logical channel canonly be mapped to the serving cells indicated in this list.Otherwise, UL MAC SDUs from this logical channel can bemapped to any configured serving cell of this cell group.Carrier frequencycenter frequency of the cell.Cellcombination of downlink and optionally uplink resources. Thelinking between the carrier frequency of the downlink resourcesand the carrier frequency of the uplink resources is indicated inthe system information transmitted on the downlink resources.Cell Groupin dual connectivity, a group of serving cells associated witheither the MeNB or the SeNB.Cell reselectionA process to find a better suitable cell than the current servingcell based on the system information received in the currentserving cellCell selectionA process to find a suitable cell either blindly or based on thestored informationDedicated signallingSignalling sent on DCCH logical channel between the networkand a single UE.discardTimerTimer to control the discard of a PDCP SDU. Starting when theSDU arrives. Upon expiry, the SDU is discarded.FThe Format field in MAC subheader indicates the size of theLength field.FieldThe individual contents of an information element are referred toas fields.Frequency layerset of cells with the same carrier frequency.Global cell identityAn identity to uniquely identifying an NR cell. It is consisted ofcellIdentity and plmn-Identity of the first PLMN-Identity inplmn-IdentityList in SIB1.gNBnode providing NR user plane and control plane protocolterminations towards the UE, and connected via the NG interfaceto the 5GC.Handoverprocedure that changes the serving cell of a UE inRRC_CONNECTED.Information elementA structural element containing single or multiple fields isreferred as information element.LThe Length field in MAC subheader indicates the length of thecorresponding MAC SDU or of the corresponding MAC CELCID6 bit logical channel identity in MAC subheader to denote whichlogical channel traffic or which MAC CE is included in the MACsubPDUMAC-IMessage Authentication Code - Integrity. 16 bit or 32 bit bitstring calculated by NR Integrity Algorithm based on the securitykey and various fresh inputsLogical channela logical path between a RLC entity and a MAC entity. There aremultiple logical channel types depending on what type ofinformation is transferred e.g. CCCH (Common ControlChannel), DCCH (Dedicate Control Channel), DTCH (DedicateTraffic Channel), PCCH (Paging Control Channel)LogicalChannelConfigThe IE LogicalChannelConfig is used to configure the logicalchannel parameters. 
It includes priority, prioritisedBitRate,allowedServingCells, allowedSCS-List, maxPUSCH-Duration,logicalChannelGroup, allowedCG-List etclogicalChannelGroupID of the logical channel group, as specified in TS 38.321, whichthe logical channel belongs toMAC CEControl Element generated by a MAC entity. Multiple types ofMAC CEs are defined, each of which is indicated bycorresponding LCID. A MAC CE and a corresponding MACsub-header comprises MAC subPDUMaster Cell Groupin MR-DC, a group of serving cells associated with the MasterNode, comprising of the SpCell (PCell) and optionally one ormore SCells.maxPUSCH-Restriction on PUSCH-duration for the corresponding logicalDurationchannel. If present, UL MAC SDUs from this logical channel canonly be transmitted using uplink grants that result in a PUSCHduration shorter than or equal to the duration indicated by thisfield. Otherwise, UL MAC SDUs from this logical channel canbe transmitted using an uplink grant resulting in any PUSCHduration.NRNR radio accessPCellSpCell of a master cell group.PDCP entityThe process triggered upon upper layer request. It includes thereestablishmentinitialization of state variables, reset of header compression andmanipulating of stored PDCP SDUs and PDCP PDUs. Thedetails can be found in 5.1.2 of 38.323PDCP suspendThe process triggered upon upper layer request. When triggered,transmitting PDCP entity set TX_NEXT to the initial value anddiscard all stored PDCP PDUs. The receiving entity stop andreset t-Reordering, deliver all stored PDCP SDUs to the upperlayer and set RX_NEXT and RX_DELIV to the initial valuePDCP-configThe IE PDCP-Config is used to set the configurable PDCPparameters for signalling and data radio bearers. For a data radiobearer, discardTimer, pdcp-SN-Size, header compressionparameters, t-Reordering and whether integrity protection isenabled are configured. For a signaling radio bearer, t-Reorderingcan be configuredPLMN ID Checkthe process that checks whether a PLMN ID is the RPLMNidentity or an EPLMN identity of the UE.Primary CellThe MCG cell, operating on the primary frequency, in which theUE either performs the initial connection establishment procedureor initiates the connection re-establishment procedure.Primary SCG CellFor dual connectivity operation, the SCG cell in which the UEperforms random access when performing the Reconfigurationwith Sync procedure.priorityLogical channel priority, as specified in TS 38.321. an integerbetween 0 and 7. 0 means the highest priority and 7 means thelowest priorityPUCCH SCella Secondary Cell configured with PUCCH.Radio BearerLogical path between a PDCP entity and upper layer (i.e. SDAPentity or RRC)RLC bearerRLC and MAC logical channel configuration of a radio bearer inone cell group.RLC bearerThe lower layer part of the radio bearer configuration comprisingconfigurationthe RLC and logical channel configurations.RX_DELIVThis state variable indicates the COUNT value of the first PDCPSDU not delivered to the upper layers, but still waited for.RX_NEXTThis state variable indicates the COUNT value of the next PDCPSDU expected to be received.RX_REORDThis state variable indicates the COUNT value following theCOUNT value associated with the PDCP Data PDU whichtriggered t-Reordering.Serving CellFor a UE in RRC_CONNECTED not configured with CA/DCthere is only one serving cell comprising of the primary cell. 
Fora UE in RRC_CONNECTED configured with CA/DC the term′serving cells′ is used to denote the set of cells comprising of theSpecial Cell(s) and all secondary cells.SpCellprimary cell of a master or secondary cell group.Special CellFor Dual Connectivity operation the term Special Cell refers tothe PCell of the MCG or the PSCell of the SCG, otherwise theterm Special Cell refers to the PCell.SRBSignalling Radio Bearers″ (SRBs) are defined as Radio Bearers(RBs) that are used only for the transmission of RRC and NASmessages.SRB0SRB0 is for RRC messages using the CCCH logical channelSRB1SRB1 is for RRC messages (which may include a piggybackedNAS message) as well as for NAS messages prior to theestablishment of SRB2, all using DCCH logical channel;SRB2SRB2 is for NAS messages and for RRC messages which includelogged measurement information, all using DCCH logicalchannel. SRB2 has a lower priority than SRB1 and may beconfigured by the network after AS security activation;SRB3SRB3 is for specific RRC messages when UE is in (NG)EN-DCor NR-DC, all using DCCH logical channelSRB4SRB4 is for RRC messages which include application layermeasurement reporting information, all using DCCH logicalchannel.Suitable cellA cell on which a UE may camp. Following criteria applyThe cell is part of either the selected PLMN or the registeredPLMN or PLMN of the Equivalent PLMN listThe cell is not barredThe cell is part of at least one TA that is not part of the list of″Forbidden Tracking Areas for Roaming″ (TS 22.011 [18]),which belongs to a PLMN that fulfils the first bullet above.The cell selection criterion S is fulfilled (i.e. RSRP and RSRQare better than specific values In the present invention, “trigger” or “triggered” and “initiate” or “initiated” may be used in the same meaning. In the present invention, “radio bearers allowed for the second resume procedure”, “radio bearers for which the second resume procedure is set”, and “radio bearers for which the second resume procedure is enabled” may all have the same meaning. FIG.1Ais a diagram illustrating the architecture of an 5G system and a NG-RAN to which the disclosure may be applied. 5G system consists of NG-RAN1a-01and 5GC1a-02. An NG-RAN node is either:A gNB, providing NR user plane and control plane protocol terminations towards the UE; orAn ng-eNB, providing E-UTRA user plane and control plane protocol terminations towards the UE. The gNBs1a-05or1a-06and ng-eNBs1a-03or1a-04are interconnected with each other by means of the Xn interface. The gNBs and ng-eNBs are also connected by means of the NG interfaces to the 5GC, more specifically to the AMF (Access and Mobility Management Function) and to the UPF (User Plane Function). AMF1a-07and UPF1a-08may be realized as a physical node or as separate physical nodes. 
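As a rough illustration only, the connectivity described above for FIG.1A can be modeled as follows; the node labels and dictionary layout are assumptions made for readability and carry no meaning beyond this sketch.

# Simplified model of the NG-RAN/5GC connectivity of FIG.1A: gNBs and ng-eNBs are
# interconnected via the Xn interface and connected to the AMF and UPF via the NG interfaces.
ng_ran_nodes = {
    "gNB-1a-05":    {"rat": "NR",     "xn_peers": ["gNB-1a-06", "ng-eNB-1a-03", "ng-eNB-1a-04"]},
    "gNB-1a-06":    {"rat": "NR",     "xn_peers": ["gNB-1a-05", "ng-eNB-1a-03", "ng-eNB-1a-04"]},
    "ng-eNB-1a-03": {"rat": "E-UTRA", "xn_peers": ["gNB-1a-05", "gNB-1a-06", "ng-eNB-1a-04"]},
    "ng-eNB-1a-04": {"rat": "E-UTRA", "xn_peers": ["gNB-1a-05", "gNB-1a-06", "ng-eNB-1a-03"]},
}
five_gc = {
    "AMF-1a-07": {"role": "access and mobility management", "ng_connected": list(ng_ran_nodes)},
    "UPF-1a-08": {"role": "user plane function",            "ng_connected": list(ng_ran_nodes)},
}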
A gNB1a-05or1a-06or an ng-eNBs1a-03or1a-04hosts the functions listed below.Functions for Radio Resource Management such as Radio Bearer Control, Radio Admission Control, Connection Mobility Control, Dynamic allocation of resources to UEs in uplink, downlink and sidelink (scheduling); andIP and Ethernet header compression, uplink data decompression and encryption of user data stream; andSelection of an AMF at UE attachment when no routing to an MME can be determined from the information provided by the UE; andRouting of User Plane data towards UPF; andScheduling and transmission of paging messages; andScheduling and transmission of broadcast information (originated from the AMF or O&M); andMeasurement and measurement reporting configuration for mobility and scheduling; andSession Management; andQoS Flow management and mapping to data radio bearers; andSupport of UEs in RRC_INACTIVE state; andRadio access network sharing; andTight interworking between NR and E-UTRA; andSupport of Network Slicing. The AMF1a-07hosts the functions such as NAS signaling, NAS signaling security, AS security control, SMF selection, Authentication, Mobility management and positioning management. The UPF1a-08hosts the functions such as packet routing and forwarding, transport level packet marking in the uplink, QoS handling and the downlink, mobility anchoring for mobility etc. FIG.1Bis a diagram illustrating a wireless protocol architecture in an 5G system to which the disclosure may be applied. User plane protocol stack consists of SDAP1b-01or1b-02, PDCP1b-03or1b-04, RLC1b-05or1b-06, MAC1b-07or1b-08and PHY1b-09or1b-10. Control plane protocol stack consists of NAS1b-11or1b-11b-, RRC1b-13or1b-14, PDCP, RLC, MAC and PHY. Each protocol sublayer performs functions related to the operations listed in the Table 3. TABLE 3SublayerFunctionsNASauthentication, mobility management, security control etcRRCSystem Information, Paging, Establishment, maintenance andrelease of an RRC connection, Security functions,Establishment, configuration, maintenance and release ofSignalling Radio Bearers (SRBs) and Data Radio Bearers(DRBs), Mobility, QoS management, Detection of andrecovery from radio link failure, NAS message transferetc.SDAPMapping between a QoS flow and a data radio bearer,Marking Qos flow ID (QFI) in both DL and UL packets.PDCPTransfer of data, Header compression and decompression,Ciphering and deciphering, Integrity protection and integrityverification, Duplication, Reordering and in-order delivery,Out-of-order delivery etc.RLCTransfer of upper layer PDUs, Error Correction throughARQ, Segmentation and re-segmentation of RLC SDUs,Reassembly of SDU, RLC re-establishment etc.MACMapping between logical channels and transport channels,Multiplexing/demultiplexing of MAC SDUs belonging to oneor different logical channels into/from transport blocks (TB)delivered to/from the physical layer on transport channels,Scheduling information reporting, Priority handling betweenUEs, Priority handling between logical channels of one UEetc.PHYChannel coding, Physical-layer hybrid-ARQ processing, Ratematching, Scrambling, Modulation, Layer mapping,Downlink Control Information, Uplink Control Informationetc. FIG.1Cis a diagram illustrating a structure of a positioning system according to an embodiment of the present disclosure. The terminal1c-03is connected to the LMF1c-33through the gNB1c-13and the AMF1c-23. Hereinafter, gNB is also referred to as a base station, AMF as an access mobility function, and LMF as a location management function. 
The base station provides the TRP function. AMF stores the capability of the terminal related to location confirmation and relays the signaling between the location management function and the terminal. AMF may be connected to several base stations. One AMF can be connected to several LMFs. The AMF may initially select the LMF for any terminal. The AMF may select another LMF when the terminal moves to a new cell. The LMF manages the support of different location services for target UEs, including positioning of UEs and delivery of assistance data to UEs. The LMF may interact with a target UE in order to deliver assistance data if requested for a particular location service, or to obtain a location estimate if that was requested. For positioning of a target UE, the LMF decides on the position methods to be used The positioning methods may yield a location estimate for UE-based position methods and/or positioning measurements for UE-assisted and network-based position methods. The LMF may combine all the received results and determine a single location estimate for the target UE (hybrid positioning). Additional information like accuracy of the location estimate and velocity may also be determined. FIG.1Dis a diagram illustrating a protocol hierarchical structure for signaling between a location management function and a terminal according to an embodiment of the present disclosure. The terminal and LMF exchange signaling through LPP1d-03. LPP defines various control messages related to positioning. The LPP control message is included in the NAS1d-13message and delivered to the AMF, and the AMF delivers the LPP control message included in the NAS message to the LMF. LPP is a protocol applied to both LTE and NR. Hereinafter, LPP is also called positioning protocol. FIG.2Ashows the types of positioning method. The positioning methods are GNSS positioning2a-01, OTDOA positioning2a-05, Barometric pressure sensor positioning2a-03, DL-AoD positioning2a-07, DL-TDOA positioning2a-09, UL-TDOA positioning2a-11, etc. GNSS positioning and barometric pressure sensor positioning are positioning methods independent of radio access technology, OTDOA positioning is a positioning method using an LTE downlink signal, and DL-AoD positioning and DL-TDOA positioning are positioning methods using a specific NR downlink signal. The specific NR downlink signal is a positioning reference signal (PRS). UL-TDOA positioning is a positioning method using a specific NR uplink signal. The specific NR uplink signal is a sounding reference signal (SRS). FIG.2Bis a diagram illustrating positioning assistance data. Assistance data may be transmitted to the positioning device so that each positioning can be performed more quickly and accurately. The assistance data may be provided through system information or transmitted through an LPP message. The positioning device may be a terminal or a base station. Assistance data is transmitted while being included in assistanceDataElement (assitanceDataElement). One assitanceDataElement contains specific information related to a specific positioning method. For example, GNSS-ReferenceTime assitanceDataElement includes reference time information of GNSS and is transmitted through the positioning SIB called posSibType1-1 or delivered to the terminal through the LPP control message called ProvideAssistanceData. When provided through the positioning SIB, assitanceDataElement is mapped to a specific positioning SIB type. 
GNSS-related assistanceDataElements2b-01to2b-03are mapped to positioning SIB type 1 and positioning SIB type 2. The OTDOA-related assistanceDataElement2b-05is mapped to positioning SIB type 3, the barometric pressure sensor positioning-related assistanceDataElement2b-07is mapped to positioning SIB type 4, and the DL-AoD and DL-TDOA-related assistanceDataElement2b-11is mapped to positioning SIB type 6. Most of the assistanceDataElements are immediately applicable upon receipt. However, specific information transmitted through the SIB, such as PRS-related assistance data, can be divided into assistance data that is immediately applicable and assistance data that is applicable only when a predetermined condition is met. For example, NR-DL-PRS-AssistanceData2b-13includes assistance data that is applied immediately, and NR-DL-PRS-ConditionalAssistanceData2b-15includes assistance data that is applied when a predetermined condition is satisfied or is selectively applied. Assistance data that is immediately applicable is called type 1 assistance data, and assistance data that is applicable when predetermined conditions are met is called type 2 assistance data. FIG.2Cis a diagram illustrating the structure of NR-DL-PRS-AssistanceData. Definitions of the IEs used inFIG.2Cfollow specification 37.355, unless otherwise defined. NR-DL-PRS-AssistanceData provides information on PRS as assistance data for DL-TDOA or DL-AoD. NR-DL-PRS-AssistanceData is provided to the terminal through positioning SIB type 6-1 or through ProvideAssistanceData. One NR-DL-PRS-AssistanceData2c-01is composed of one nr-DL-PRS-ReferenceInfo2c-03and one nr-DL-PRS-AssistanceDataList2c-05. The nr-DL-PRS-ReferenceInfo2c-03provides information on the identifier and frequency of the TRP that provides a reference for nr-DL-PRS-SFN0-Offset, dl-PRS-ResourceSlotOffset, etc. The nr-DL-PRS-AssistanceDataList2c-05is composed of a plurality of NR-DL-PRS-AssistanceDataPerFreq2c-07. One NR-DL-PRS-AssistanceDataPerFreq2c-07provides information on PRS provided at a specific frequency and is composed of nr-DL-PRS-PositioningFrequencyLayer2c-09and nr-DL-PRS-AssistanceDataPerFreq2c-11. NR-DL-PRS-AssistanceDataPerFreq2c-07and nr-DL-PRS-AssistanceDataPerFreq2c-11are different IEs. The nr-DL-PRS-AssistanceDataPerFreq2c-11is composed of a plurality of NR-DL-PRS-AssistanceDataPerTRP2c-13. The nr-DL-PRS-PositioningFrequencyLayer2c-09is common information applied to the plurality of NR-DL-PRS-AssistanceDataPerTRP2c-13. It is composed of information such as the subcarrier spacing, the bandwidth of the PRS resource, and the PRB from which the PRS resource starts. One NR-DL-PRS-AssistanceDataPerTRP2c-13provides information on PRS provided by a specific TRP. The TRP may be a cell. NR-DL-PRS-AssistanceDataPerTRP2c-13consists of information commonly applied to multiple nr-DL-PRS-ResourceSets2c-17and the multiple nr-DL-PRS-ResourceSets2c-17themselves. The information commonly applied to the plurality of nr-DL-PRS-ResourceSets2c-17includes dl-PRS-ID, a cell identifier corresponding to the TRP, and the time offset of SFN #0 slot #0 for the given TRP with respect to SFN #0 slot #0 of the assistance data reference. One nr-DL-PRS-ResourceSet2c-17consists of one dl-PRS-ResourceList2c-19, and the dl-PRS-ResourceList2c-19consists of a plurality of dl-PRS-Resources.
One dl-PRS-Resource has an identifier, code sequence information applied to the corresponding PRS, and the starting slot of the DL-PRS Resource with respect to the corresponding DL-PRS-Resource Set Slot Offset and QCL information (beam information) of the corresponding PRS. The PRS-ResourceSet is composed of a plurality of PRSs using the same frequency resource and is a set of PRS resources grouped for beam sweeping. Consequently, one nr-DL-PRS-AssistanceDataList2c-05includes assistance data for a plurality of frequencies. The assistance data for each frequency includes assistance data for a plurality of TRPs. The assistance data for each TRP may provide information on a plurality of DL-PRS-ResourceSets. One DL-PRS-ResourceSet is composed of a plurality of DL-PRS-Resources. The terminal may perform positioning measurement by measuring the plurality of DL-PRS-Resources indicated in the nr-DL-PRS-AssistanceDataList2c-05. NR-DL-PRS-AssistanceData is assistance data that is applied immediately. DL-PRS included in NR-DL-PRS-AssistanceData are continuously transmitted from the time point when the terminal receives NR-DL-PRS-AssistanceData until the terminal stops measuring positioning using DL-PRS, and the terminal immediately use the immediately applied assistance data when positioning measurement using the assistance data is necessary. FIG.2Dis a diagram illustrating the structure of PRS-ConditionalAssistanceData. The PRS-ConditionalAssistanceDataSet (hereinafter, conditional assistance data set)2d-01is composed of a PRS-ConditionalAssistanceDataList2d-03including a plurality of PRS-ConditioanlAssistanceData2d-05(hereinafter, conditional assistance data). Each conditional assistance data2d-05includes PRS-AssistanceData2d-13(hereinafter, assistance data) that is currently being transmitted or that can be started when a terminal request it. The conditional assistance data set includes type 2 assistance data and is provided to the terminal through positioning SIB type 6-4 or through ProvideAssistanceData. Positioning SIB type 6-1 includes only one type 1 assistance data2c-01, and positioning SIB type 6-4 includes one or more type 2 assistance data2d-13. Conditional assistance data2d-05is composed of PRS-ConditionalAssistanceDataId2d-07(hereinafter assistance data id), PRS-ConditionalAssistanceDataStatus2d-09(hereinafter assistance data status), PRS-ConditionalAssistanceDataValidity2d-11(assistance data validity), ReportConfig (hereinafter, Report Configuration), and PRS-AssistanceData2d-13(hereinafter, assistance data). The assistance data id2d-07is an identifier of the related conditional assistance data2d-05or the related assistance data2d-13and is an integer between 0 and 15. The assistance data status2d-09is 1-bit information indicating whether the related assistance data2d-13is being transmitted (or provided). The fact that the assistance data2d-13is being transmitted means that the PRSs specified in the assistance data2d-13are currently being transmitted. If the assistance data status related to the assistance data exists (or the assistance data status is set to the first value), the terminal determines that the PRSs specified in the assistance data are currently being transmitted and performs the necessary operation. If the assistance data status related to the assistance data does not exist (or if the assistance data status is set to the second value), the terminal determines that the PRSs specified in the assistance data are not currently being transmitted. 
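Before turning to how the terminal reacts to the assistance data status, the nested structure described above with reference to FIG.2C and FIG.2D can be summarized in a minimal sketch. The class and field names below are simplified illustrations of the containment relationships only; they are assumptions for readability and are not the ASN.1 definitions of specification 37.355.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DlPrsResource:                 # one dl-PRS-Resource in dl-PRS-ResourceList (2c-19)
    resource_id: int
    sequence_info: int               # code sequence information applied to the PRS
    start_slot_offset: int           # starting slot w.r.t. the DL-PRS-Resource Set slot offset
    qcl_info: Optional[str] = None   # beam (QCL) information of the PRS

@dataclass
class DlPrsResourceSet:              # nr-DL-PRS-ResourceSet (2c-17): PRSs grouped for beam sweeping
    resource_set_id: int
    resources: List[DlPrsResource] = field(default_factory=list)

@dataclass
class AssistanceDataPerTrp:          # NR-DL-PRS-AssistanceDataPerTRP (2c-13)
    dl_prs_id: int
    cell_id: Optional[int]           # cell identifier corresponding to the TRP
    sfn0_slot0_time_offset: int      # offset w.r.t. SFN #0 slot #0 of the assistance data reference
    resource_sets: List[DlPrsResourceSet] = field(default_factory=list)

@dataclass
class AssistanceDataPerFreq:         # NR-DL-PRS-AssistanceDataPerFreq (2c-07)
    scs_khz: int                     # frequency layer information (2c-09): subcarrier spacing,
    prs_bandwidth_prb: int           # PRS resource bandwidth and
    start_prb: int                   # starting PRB, common to all TRPs of this frequency
    per_trp: List[AssistanceDataPerTrp] = field(default_factory=list)

@dataclass
class NrDlPrsAssistanceData:         # NR-DL-PRS-AssistanceData (2c-01), type 1 assistance data
    reference_trp_info: int          # nr-DL-PRS-ReferenceInfo (2c-03)
    per_frequency: List[AssistanceDataPerFreq] = field(default_factory=list)

@dataclass
class PrsConditionalAssistanceData:  # PRS-ConditionalAssistanceData (2d-05), type 2 assistance data
    assistance_data_id: int                  # integer between 0 and 15 (2d-07)
    status_transmitted: Optional[bool]       # assistance data status (2d-09), positioning SIB case
    validity: Optional[dict]                 # assistance data validity (2d-11), LPP message case
    report_config: Optional[dict] = None     # ReportConfig (2d-12)
    assistance_data: Optional[NrDlPrsAssistanceData] = None  # same structure as 2c-01 (2d-13)

One NR-DL-PRS-AssistanceData thus fans out per frequency, per TRP, per resource set, and per resource, which is the hierarchy the terminal walks when it measures the indicated DL-PRS-Resources.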
If the assistance data status indicates that the PRSs specified in the assistance data are not currently being transmitted, the terminal, if necessary, requests the LMF to start transmission of the PRSs. The assistance data validity2d-11indicates under what conditions the relevant conditional assistance data2d-05or the relevant assistance data2d-13is valid. Alternatively, the assistance data validity indicates which conditions are to be fulfilled for the UE to initiate measurement on the relevant PRS and to report measurement results. The assistance data validity2d-11may include an NR CGI (Cell Global Identifier) List or time interval information. The time interval information is composed of the first time point and the second time point. If the NR CGI of the current cell belongs to the NR CGI List, and the current time expressed in UTC (Coordinated Universal Time) belongs to the time interval expressed by the first time point and the second time point, the terminal considers the related conditional assistance data2d-05or the related assistance data2d-13valid. If the assistance data status2d-09of the conditional assistance data2d-05determined to be valid is set to 'available', 'transmit' or 'broadcast', the terminal performs positioning measurement for the related PRS and reports measurement results to the LMF. If the assistance data status2d-09of the conditional assistance data2d-05determined to be valid is set to 'unavailable', 'not transmitted', or 'non-broadcast', the terminal requests the LMF to activate the conditional assistance data2d-05. Activation of the conditional assistance data means that the PRSs specified in the conditional assistance data are transmitted. The conditional assistance data set2d-01may be provided through a positioning SIB or may be provided through an LPP control message. The assistance data status2d-09is included only in the conditional assistance data set2d-01provided through the positioning SIB, and the assistance data validity is included only in the conditional assistance data set provided through the LPP control message. Alternatively, the assistance data status is used only for type 2 assistance data provided through a positioning SIB, and the assistance data validity is used only for type 2 assistance data provided through ProvideAssistanceData. ReportConfig2d-12(hereinafter Report Configuration) is a set of parameters related to positioning measurement result reporting and consists of maxDL-PRS-RSTD-MeasurementsPerTRPPair and timingReportingGranularityFactor. maxDL-PRS-RSTD-MeasurementsPerTRPPair indicates the maximum number of DL-PRS RSTD (Reference Signal Time Difference) measurements per TRP pair. timingReportingGranularityFactor indicates the recommended reporting granularity for the DL RSTD measurements. The terminal reports the measurement result according to the above ReportConfig when the validity condition of the conditional assistance data is met. The assistance data2d-13of the conditional assistance data2d-05is an IE having the same structure as the PRS-AssistanceData2c-01. The conditional assistance data is classified into conditional assistance data1 received through the positioning SIB and conditional assistance data2 received through the LPP control message. The assistance data status IE is necessarily present in conditional assistance data1, but does not exist in conditional assistance data2. Conversely, the assistance data validity exists in conditional assistance data2 but does not exist in conditional assistance data1.
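A minimal sketch of the validity check and the resulting terminal behaviour described above is given below. For illustration it treats the assistance data status and the assistance data validity as fields of one entry, although, as noted, the status appears only in the positioning-SIB case and the validity only in the LPP case; the helper names measure_and_report and request_activation are placeholders, not messages defined in the specification.

def conditional_assistance_data_is_valid(validity, current_cgi, now_utc):
    # Valid when the current cell's NR CGI is in the NR CGI List (if present) and
    # the current UTC time lies between the first and the second time point.
    cgi_list = validity.get("nr_cgi_list")
    if cgi_list is not None and current_cgi not in cgi_list:
        return False
    interval = validity.get("time_interval")
    if interval is not None:
        first_time_point, second_time_point = interval
        if not (first_time_point <= now_utc <= second_time_point):
            return False
    return True

def handle_conditional_assistance_data(cad, current_cgi, now_utc,
                                       measure_and_report, request_activation):
    # cad: a dict standing in for one PRS-ConditionalAssistanceData entry (2d-05).
    validity = cad.get("validity")
    if validity is not None and not conditional_assistance_data_is_valid(
            validity, current_cgi, now_utc):
        return  # validity condition not met: neither measure nor request activation
    if cad.get("status_transmitted"):
        # The PRSs specified in the assistance data are currently being transmitted:
        # measure them and report according to the Report Configuration.
        measure_and_report(cad["assistance_data"], cad.get("report_config"))
    else:
        # The PRSs are not being transmitted: ask the LMF to activate this
        # conditional assistance data, identified by its assistance data id.
        request_activation(cad["assistance_data_id"])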
The purpose of conditional assistance data1 is to inform the terminal of PRSs in which transmission can be activated in the corresponding cell. The terminal may determine the PRSs required for its own positioning measurement among the PRSs indicated in conditional assistance data1 and may request the LMF to activate the corresponding conditional assistance data. The purpose of conditional assistance data2 is to inform the terminal of PRSs to be measured when a predetermined condition is met. The terminal may measure the PRSs that satisfy the condition among the PRSs specified in conditional assistance data2 and report the results to the LMF. FIG.2Eis a diagram illustrating a system information acquisition process. System Information Block (hereinafter referred to as SIB) includes general SIB and positioning SIB. Types of general SIB include SIB1, SIB2, SIB3, SIB4, SIB5, SIB6, SIB7, SIB8, and SIB9. SIB1 includes information related to scheduling of other system information and radio resource configuration information commonly applied to all terminals. SIB2 includes cell reselection information. SIB3 includes information about neighboring cells for intra-frequency cell resection. SIB4 includes information for inter-frequency cell resection. SIB5 includes E-UTRA frequency information and the like for inter-RAT cell reselection. SIB6 includes ETWS (Earthquake Tsunami Warning System) main notification. SIB7 includes the ETWS sub-notification. SIB8 contains CMAS notifications. SIB9 includes information related to GPS time and Coordinated Universal Time (UTC). The assistance data mapped with the type of positioning SIB is as shown inFIG.2B. One or a plurality of SIBs having the same transmission period are included in one system information (System Information, SI) and transmitted. scheduling information of SI related to general SIB is indicated in SI scheduling Information. The scheduling information of the SI related to the positioning SIB is indicated in the positioning SI scheduling Information. SI scheduling Information and positioning SI scheduling Information are included in SIB1. The SI scheduling Information includes one or more scheduling information and one SI window length. The scheduling information consists of SI broadcast status, SI periodicity, and SIB mapping information. SI broadcast status indicates whether the corresponding SI message is being broadcast. SI periodicity is the period of the corresponding SI message. The SI window length is the length of the SI scheduling window. The SIB mapping information consists of one or a plurality of SIB type information. The SIB type information includes type information indicating one of sibType2, sibType3, sibType4, sibType5, sibType6, sibType7, sibType8, sibType9, sibType10, sibType11, sibType12, sibType13, and sibType14, and a value tag indicating one of integers between 0 and 31. The positioning SI scheduling Information is composed of one or more positioning scheduling information and the like. The positioning scheduling information consists of positioning SI broadcast status, positioning SI periodicity, and positioning SIB mapping information. The positioning SI broadcast status indicates whether the corresponding positioning SI message is being broadcast. The positioning SI periodicity is the period of the positioning SI message. The positioning SIB mapping information consists of one or a plurality of positioning SIB type information. 
The positioning SIB type information consists of type information indicating one of posSibType1-1, posSibType1-2, posSibType1-3, posSibType1-4, posSibType1-5, posSibType1-6, posSibType1-7, posSibType1-8, posSibType2-1, posSibType2-2, posSibType2-3, posSibType2-4, posSibType2-5, posSibType2-6, posSibType2-7, posSibType2-8, posSibType2-9, posSibType2-10, posSibType2-11, posSibType2-12, posSibType2-13, posSibType2-14, posSibType2-15, posSibType2-16, posSibType2-17, posSibType2-18, posSibType2-19, posSibType2-20, posSibType2-21, posSibType2-22, posSibType2-23, posSibType3-1, posSibType4-1, posSibType5-1, posSibType6-1, posSibType6-2, posSibType6-3 and posSibType6-4. In step2e-11, the terminal2e-01receives SIB1 from the base station (2e-03). The SI scheduling Information of SIB1 is set as in2e-13. The positioning SI scheduling Information of SIB1 is set as in2e-15. SI whose SI broadcast status is set to being broadcast and positioning SI whose positioning SI broadcast status is set to being broadcast are transmitted according to the order in which they are included in the SI scheduling Information and the positioning SI scheduling Information. For example, they are transmitted in the order of the first SI, the second SI, and the first positioning SI. SI and positioning SI are transmitted within the SI scheduling window and the positioning SI scheduling window, respectively. The length of the SI scheduling window and the length of the positioning SI scheduling window are determined by the SI window length of the SI scheduling Information. In step2e-17, the terminal receives the first SI in the SI scheduling window for the first SI. The first SI contains only SIB2 as shown in2e-13. As shown in2e-19, the first SI includes one IE called sib-TypeAndInfo, and the sib-TypeAndInfo includes SIB2. In step2e-21, the terminal receives the second SI in the SI scheduling window for the second SI. The second SI contains SIB3 and SIB4 as shown in2e-13. As shown in2e-23, the second SI includes two sib-TypeAndInfo IEs; the first sib-TypeAndInfo includes SIB3, and the second sib-TypeAndInfo includes SIB4. In step2e-25, the terminal receives the first positioning SI in the positioning SI scheduling window for the first positioning SI. The first positioning SI includes positioning SIB 6-1 and positioning SIB 6-2 as shown in2e-15. As shown in2e-27, the first positioning SI includes two posSIB-TypeAndInfo IEs; the first posSIB-TypeAndInfo includes positioning SIB6-1, and the second posSIB-TypeAndInfo includes positioning SIB6-2. As shown in2e-29, one positioning SIB is composed of value tag2, an expiration time, and an assistanceDataElement. value tag2 indicates one of the integers between 0 and 63 and indicates whether the broadcast assistance data has been changed. value tag2 is set by the LMF. The expiration time indicates the time point, expressed in UTC, at which the contents of the broadcast assistance data expire. The assistanceDataElement is a field containing the actual assistance data. For a general SIB, a change is indicated by the value tag set by the base station, which indicates one of the integers between 0 and 31. For a positioning SIB, a change is indicated by value tag2 set by the LMF, which indicates one of the integers between 0 and 63. The value tag is included in SIB1 and broadcast, and value tag2 is included in the positioning SI and broadcast. As shown in2e-15, the second positioning SI is not broadcast. The terminal performs a system information request procedure to receive the non-broadcast positioning SI.
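The scheduling structure of FIG.2E described above can be illustrated with a short sketch. The dictionaries below are simplified stand-ins for the SI scheduling Information and the positioning SI scheduling Information of SIB1, with illustrative field names rather than the ASN.1 field names.

# Simplified model of the scheduling information carried in SIB1 (2e-13, 2e-15).
si_scheduling_info = {
    "si_window_length": 20,
    "entries": [  # one entry per SI message, in order
        {"broadcast": True, "periodicity": 16, "sibs": ["sibType2"]},
        {"broadcast": True, "periodicity": 32, "sibs": ["sibType3", "sibType4"]},
    ],
}
pos_si_scheduling_info = {
    "entries": [  # one entry per positioning SI message, in order
        {"broadcast": True,  "periodicity": 32, "pos_sibs": ["posSibType6-1", "posSibType6-2"]},
        {"broadcast": False, "periodicity": 64, "pos_sibs": ["posSibType6-4"]},
    ],
}

def positioning_si_index_for(pos_sib_type, scheduling=pos_si_scheduling_info):
    # Return the (0-based) index of the positioning SI that carries the wanted
    # posSIB, and whether that positioning SI is currently being broadcast.
    for index, entry in enumerate(scheduling["entries"]):
        if pos_sib_type in entry["pos_sibs"]:
            return index, entry["broadcast"]
    return None, False

index, on_air = positioning_si_index_for("posSibType6-4")
# index == 1 and on_air is False here, so the terminal would fall back to the
# system information request procedure of FIG.2F for this positioning SI.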
The terminal should always store valid system information. The terminal maintains the validity of the system information by reacquiring the system information when a predetermined event occurs. When the short message included in the DCI addressed to the P-RNTI indicates systemInfoModification, the terminal receives SIB1, determines the first type SIBs in which the value tag is changed, and receive the first type SIBs in which the value tag is changed and store it. The terminal receives and stores positioning SIs including the second type SIB again without considering the value tag. First type SIB is a general SIB, and second type SIB is a positioning SIB. When 3 hours have elapsed since the terminal successfully received the first type SIB, the terminal discards the first type SIB and initiates a procedure for acquiring the SI including the first type SIB. When the terminal successfully receives the second type SIB, it stores the second type SIB. Then, in a systemInfoModification period starting just before the expiration time of the second type SIB, terminal starts a procedure for acquiring the SI including the second type. The systemInfoModification period is a time interval that occurs sequentially. During one systemInfoModification period, system information cannot be changed. When it is necessary to change the system information, the base station transmits new system information from the time point at which the next systemInfoModification period starts. FIG.2Fis a diagram illustrating a system information request procedure. The terminal can request system information that is not broadcast by using the RRC control message. The RRC_IDLE terminal or RRC_INACTIVE terminal transmits positioning system information request1, and the terminal in RRC_CONNECTED state transmits positioning system information request2. In step2f-11, the RRC_IDLE terminal or RRC_INACTIVE terminal transmits positioning system information request1, which is an RRC control message for requesting positioning system information, to the base station. The positioning system information request1 includes the requested positioning SI list. The requested positioning SI list is a list of SI messages requested by the terminal to be provided to the base station. The requested SI list is a 32-bit bitmap. Each bit of the requested positioning SI list corresponds to each entry according to the order of the entries included in the positioning SI scheduling Information. For example, the first bit corresponds to the first positioning SI of the positioning SI scheduling information. In step2f-13, the RRC_CONNECTED terminal transmits positioning system information request2, which is an RRC control message for requesting positioning system information, to the base station. The positioning system information request2 includes the requested positioning SIB list. The requested positioning SIB list is a list of positioning SIBs requested by the terminal to be provided to the base station, and includes a plurality of positioning SIB type information. The positioning SIB type information indicates the type of positioning SIB requested by the terminal. In step2f-15, the terminal that has transmitted the positioning system information request1 or positioning system information request2 receives SIB1 from the base station. The terminal checks whether the requested positioning SI or SI including the positioning SIB is broadcast. 
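The requested positioning SI list of step2f-11 described above is a 32-bit bitmap whose first bit corresponds to the first positioning SI of the positioning SI scheduling Information. A minimal sketch of constructing it is shown below; the helper name and the mapping of the first positioning SI to the leftmost bit position are assumptions for illustration.

def build_requested_pos_si_bitmap(wanted_si_indices, width=32):
    # wanted_si_indices: 0-based positions of the wanted positioning SIs in the
    # positioning SI scheduling Information (e.g. the non-broadcast ones).
    # The first positioning SI is mapped to the leftmost ("first") bit.
    bits = ["0"] * width
    for index in wanted_si_indices:
        if 0 <= index < width:
            bits[index] = "1"
    return "".join(bits)

# Requesting only the second positioning SI of the scheduling information:
bitmap = build_requested_pos_si_bitmap([1])
# bitmap == "01" + "0" * 30, i.e. only the bit of the second positioning SI is set.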
In step2f-17, the terminal receives the positioning SI requested by the terminal or the positioning SI including the positioning SIB requested by the terminal. Positioning system information request1 is transmitted via SRB0 and CCCH. The positioning system information request2 is transmitted via SRB1 and DCCH. Since the size of the control message transmitted through the CCCH is limited, positioning system information request1 reduces the size of transmitted information by indicating the requested SI type information in a bitmap format instead of directly indicating it. On the other hand, since a relatively large message can be transmitted through the DCCH, the positioning system information request2 directly indicates the requested positioning SIB. FIG.2Gis a diagram illustrating the structure of an uplink MAC PDU including an inactive positioning measurement result. The uplink MAC PDU including the inactive positioning measurement result consists of three MAC subPDUs. The MAC SDU (the first SDU)2g-15including the ResumeRequest message belonging to SRB0 is located at the front of the MAC PDU (2g-11), and the MAC SDU (the second SDU) including the LPP segment message belonging to SRB2 (the second SDU)2g-19is located next. The first BSR2g-27is located at the rearmost part. That is, the first MAC subPDU including SRB0 data, the second MAC subPDU including SRB2 data, and the third MAC subPDU including the first BSR are included in the order. The MAC sub-header of the first MAC subPDU and the third MAC subPDU consists of two reserved bits and an LCID field. The MAC sub-header of the second MAC subPDU consists of one reserved bit, an F field, an LCID field, and an L field. This is so that the base station receiving the MAC PDU processes the ResumeRequest first, so that the MAC PDU is recognized as a MAC PDU related to the small data transfer procedure as quickly as possible. The remaining part2g-15excluding the MAC sub-header in the first MAC subPDU and the remaining part2g-27excluding the MAC sub-header in the third MAC subPDU are plain text that is not ciphered. In the second MAC subPDU, the remaining part2g-19except for the MAC sub-header includes data ciphered with a predetermined security key. The MAC sub-header is not ciphered. The reason for locating the MAC subPDUs as described above is that the first MAC subPDU and the second MAC subPDU include data processed by RRC, and the third MAC subPDU includes data processed by MAC, so it is to facilitate the processing operation of the terminal by locating the unciphered data first and locating the ciphered data later. FIG.2His a diagram illustrating the structure of a buffer status report MAC CE. The first BSR MAC CE consists of one logical channel group identifier field2h-01and one first buffer size field2h-03. The logical channel group identifier field2h-01has a 3-bit size and indicates one of the logical channel group identifiers between 0 and 7. The first buffer size field2h-03has a size of 5 bits and indicates one of the first buffer size indexes from 0 to 31. The first buffer size index 0 means that there is no data available for transmission in logical channels belonging to the corresponding logical channel group. The first buffer size index 31 means that the amount of data for transmission of the logical channels belonging to the corresponding logical channel group is greater than the 30th first buffer size. 
The first buffer size index 1 means that the amount of data for transmission of logical channels belonging to the corresponding logical channel group is greater than 0 and less than or equal to the first buffer size. The first buffer size index n (2<=n<=30) indicates that the amount of data for transmission of the logical channels belonging to the corresponding logical channel group is greater than the n−1st buffer size and less than or equal to the nth first buffer size. The30first buffer sizes are defined in the standard. The second BSR MAC CE consists of 8 LCGi bits2h-11and a plurality of the second buffer size fields2h-13. The LCGi bit indicates whether the second buffer size field for logical channel group i exists. For example, it indicates whether the second buffer size field for LCG1 logical channel group 1 exists. If this field is 1, the second buffer size field for the corresponding LCG exists. The second buffer size field has an 8-bit size and indicates one of the second buffer size indexes between 0 and 255. The second buffer size index 0 means that there is no data available for transmission in logical channels belonging to the corresponding logical channel group. The second buffer size index 254 means that the amount of data for transmission of the logical channels belonging to the corresponding logical channel group is greater than the size of the 253-th second buffer size. The second buffer size index 1 means that the amount of data for transmission of the logical channels belonging to the corresponding logical channel group is greater than 0 and less than or equal to the first second buffer size. The second buffer size index n (2<=n<=253) indicates that the amount of data for transmission of the logical channels belonging to the corresponding logical channel group is greater than the (n−1)th buffer size and less than or equal to the nth buffer size. The second buffer size index 255 is not used. The 252 second buffer sizes are defined in the specification. The first BSR MAC CE is referred to as a BSR to which the first format is applied or the first format BSR. The second BSR MAC CE is referred to as a BSR to which the second format is applied or the second format BSR. Logical channel group is configured when logical channel is configured. A logical channel and a logical channel group are configured with an RRC control message. In general, a buffer size index reflecting the amount of data available for transmission of the RLC layer and the amount of data available for transmission of the PDCP layer is set in buffer size field. FIG.3Ais a diagram illustrating the overall operation of a terminal, a base station, and an LMF. In step3a-11, the terminal selects a NR cell and camps on it. The terminal may select an NR cell in which downlink reference signal received power and downlink reference signal received quality exceed a predetermined threshold. The terminal does not consider neighboring cell information included in the System Information Block in cell selection. In step3a-13, the terminal receives system information from the base station in the selected NR cell. The terminal receives the MIB first and receives SIB1 based on the information of the MIB. The terminal receives the remaining system information by referring to the scheduling information of SIB1. In steps3a-15, the terminal establishes an RRC connection with the base station. 
The terminal and the base station exchange RRCRequest messages, RRCSetup messages, and RRCSetupComplete messages through the random access process. When the terminal receives the RRCSetup message from the base station, the RRC connection is established. A terminal that has established an RRC connection may perform a positioning preparation procedure and a positioning execution procedure with a base station or LMF. The positioning preparation procedure consists of a UE capability reporting phase3a-17and an assistance data delivery phase3a-19. The positioning execution procedure3a-21,3a-23consists of a terminal and a base station performing positioning measurement using an uplink signal and a downlink signal and reporting it to the LMF. The UE capability reporting phase is performed only in the RRC connected state, but the assistance data delivery phase and the positioning execution procedure may be performed not only in the RRC connected state but also in the RRC inactive state. When the terminal receives assistance data and report configuration from the base station or the LMF, it measures for positioning based on the assistance data, and reports the measurement result to the LMF based on the report configuration. The terminal may receive the first type assistance data in assistanceDataProvide and may receive Report Configuration in positioningDataRequest. Upon receiving the positioningDataRequest, the terminal performs positioning measurement based on the assistance data of the first type assistance data of assistanceDataProvide and reports the measurement result to the LMF based on the Report Configuration of positioningDataRequest. The terminal can receive the second type assistance data including Report Configuration and assistance data validity in one assistanceDataProvide. When the validity of the assistance data is satisfied, the terminal performs the measurement for positioning based on the second type assistance data of the assistanceDataProvide and reports the measurement result to the LMF based on the Report Configuration of the same assistanceDataProvide. FIG.3Bis a diagram illustrating a terminal capability reporting procedure. In step3b-11, the first base station3a-03instructs capability reporting by transmitting a UECapabilityEnquiry RRC message to the terminal3a-01. In step3b-13, the terminal reports the capability by sending a UECapabilityInformation RRC message to the first base station. UECapabilityInformation includes the first capability information and the third capability information. The base station may determine the positioning measurement configuration for the terminal by referring to the first capability information and the third capability information. In step3b-15, the first base station delivers the first capability information and the third capability information to the AMF3a-04, and in step3b-17, the AMF stores the first capability information and the third capability information for future use. In step3b-21, the first LMF3a-05instructs capability reporting by sending an LMF message called requestCapabilities to the terminal. The message includes information indicating for which positioning method the terminal should report capability. In step3b-23, the terminal reports the capability by sending the LMF message provideCapabilities to the first LMF. provideCapabilities includes the second capability information and the third capability information. 
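The reporting paths just described (the first capability information over RRC to the base station, the second over LPP to the LMF, and the third over both) can be summarized in a brief sketch. The dictionary layout and field names are illustrative only, and the individual entries correspond to the capability items detailed in the paragraphs that follow.

# Illustrative split of the positioning-related UE capabilities by reporting path.
ue_capabilities = {
    # first capability information: reported only to the base station over RRC,
    # inside UECapabilityInformation.
    "first": {"parallel_srs_pucch_pusch": True,
              "srs_positioning_rrc_connected_per_band_combination": {"[A, B]": [True, False]},
              "max_pathloss_reference_rs": 4,
              "prs_measurement_gap_patterns": [0, 1],
              "small_data_transfer_via_srb2": True},
    # second capability information: reported only to the LMF over LPP,
    # inside provideCapabilities.
    "second": {"positioning_modes": ["UE-assisted and LMF-based", "UE-based"],
               "lpp_segmentation": {"receive": True, "transmit": True},
               "prs_measurement_in_inactive": True,
               "measurement_report_in_inactive": True},
    # third capability information: reported both to the base station (RRC) and
    # to the LMF (LPP).
    "third": {"srs_positioning_rrc_inactive_per_band": {"A": True, "B": False},
              "olpc_for_positioning_srs": True,
              "spatial_relations_for_positioning_srs": True},
}

def rrc_capability_report(caps):
    # UECapabilityInformation carries the first and the third capability information.
    return {k: caps[k] for k in ("first", "third")}

def lpp_capability_report(caps):
    # provideCapabilities carries the second and the third capability information.
    return {k: caps[k] for k in ("second", "third")}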
The first LMF refers to the second capability information and the third capability information to instruct the terminal to perform positioning measurement and provides the assistance data required by the terminal. In step3b-25, the first LMF transfers the second capability information and the third capability information to the AMF, and in step3b-27, the AMF stores the second capability information and the third capability information for future use. Later, the terminal establishes an RRC connection with the second base station3a-07. When the location service for the terminal is started, instead of the second base station and the second LMF directly acquiring the relevant capability information from the terminal, the AMF provides the stored first capability information and third capability information to the second base station in step3b-31, and in step3b-33, the AMF provides the stored second capability information and third capability information to the second LMF. The first capability information is capability information that the terminal reports to the base station through the RRC control message. It is capability information that only the base station requires and that the LMF does not require. The following IEs are applicable. The first capability information is information necessary for the base station to configure positioning measurement and relates to capability closely tied to the radio interface. The first capability information1: it indicates whether the UE supports parallel transmission of SRS and PUCCH/PUSCH. The first capability information2: information indicating whether the terminal supports SRS for positioning in the connected state (indicating support of SRS for positioning in RRC_CONNECTED). It is defined for each band of the band combination (or defined within the band combination) and is reported as part of the band combination specific capability information. The terminal reports band specific capability information for each band it supports. For each band combination supported by the terminal, the terminal reports band combination specific capability information that is valid only within that band combination. Whether SRS for positioning in the connected state is supported is indicated for each band in the band combination. For example, if the terminal supports band A, band B, and band combination [A, B], the terminal reports to the base station band A specific capability information applied to band A, band B specific capability information applied to band B, band A capability information within the band combination [A, B], and band B capability information within the band combination [A, B]. That is, the terminal reports whether SRS for positioning is supported in connected mode as per-band capability information of the band combination. The first capability information3: it indicates the maximum number of configured pathloss reference RSs for PUSCH/PUCCH/SRS for pathloss reference RS update. The first capability information4: it indicates the measurement gap pattern(s) optionally supported by the UE for PRS measurement. The first capability information5: it indicates support of small data transfer via SRB2. The second capability information is capability information that the terminal reports to the LMF through the LPP control message. It is the capability information that the LMF needs and the base station does not need. The following IEs are applicable. The second capability information is information required for the LMF to configure positioning measurement and positioning reporting.
It is information on capability closely related to the positioning function. The second capability information1: It indicate several positioning modes using a bit map. positioning mode information indicates a mode supported by the UE among UE-assisted and LMF-based mode, LMF-based mode, LMF-assisted and UE based mode, UE based mode and UE standalone mode. The second capability information2: It indicates the target device's LPP message segmentation capabilities. If bit0 is 1, it indicates that the target device can receive the segmented LPP message. If bit1 is 1, it indicates that the target device can transmit a segmented LPP message. The second capability information3: It indicates whether the target device can perform positioning measurement using PRS for a predetermined positioning method in an inactive state. The predetermined positioning method may be, for example, DL-AoD or DL-TDOA. That is, it indicates whether the terminal can measure PRS in the inactive state. The second capability information4: It indicates whether the target device can report the positioning measurement result in the inactive state. The third capability information is capability information that the terminal reports to the LMF through the LPP control message and to the base station through the RRC control message. It is the capability information required by both the LMF and the base station, and the following IEs are applicable. The third capability information1: It indicates support of SRS for positioning in RRC_INACTIVE. It is defined per band and reported as part of band specific capability information. The third capability information2: It is outer loop power control related information. It indicates whether the UE supports OLPC for SRS for positioning. The third capability information3: It indicates whether the UE supports spatial relations for SRS for positioning. The first capability information2 (indicating whether positioning SRS is supported in CONNECTED state) is reported to base station per band combination (or per feature set). The third capability information1 (indicating whether positioning SRS is supported in INACTIVE state) is reported per band to base station and to LMF. The definition of FeatureSet can be referred to 3GPP specification 38.331 and 38.306. Capability information on positioning SRS in INACTIVE state is reported both to base station and to LMF. Capability information on PRS in INACTIVE state is reported to LMF only. FIG.3Cis illustrating assistance data delivery phase. The assistance data is classified into immediate assistance data (first type assistance data) and conditional assistance data (second type assistance data). The base station may provide assistance data using the positioning SIB. The LMF sets the contents of the assistance data included in the positioning SIB. The LMF can provide assistance data to the terminal using the LPP control message. The terminal may acquire assistance data through system information in the idle state as in steps3a-13or may acquire assistance data through system information after RRC connection state transition3a-15. When the location service is started, the terminal may initiate a procedure for obtaining assistance data. The location service may be started regardless of the RRC state of the terminal. In step3c-11, the terminal receives SIB1 from the base station. The terminal stores SI scheduling Information and positioning SI scheduling Information. 
The terminal transitions to the connected state through steps3a-15and3a-17and performs the terminal capability reporting step. If the location service is started, the terminal performs steps3a-19to obtain assistance data. In step3c-13, the terminal receives the SI including the positioning SIB from the base station and determines whether required assistance data is provided in the corresponding cell. Required assistance data means assistance data for a positioning method supported by a terminal or assistance data for a positioning method to be used in a disclosed location service. The terminal determines, through the positioning SI scheduling information of SIB1, the required assistance data directly or indirectly provided from the corresponding cell and the required assistance data not provided from the corresponding cell. The assistance data currently being transmitted from the corresponding cell, that is, the assistance data of the positioning SIB in which the positioning SI broadcast status is set to being broadcast, is assistance data directly provided from the corresponding cell. Assistance data that is not currently transmitted from the corresponding cell but may be transmitted in the future, that is, the assistance data of the positioning SIB in which the positioning SI broadcast status is set to non-broadcast, is assistance data that is indirectly provided from the corresponding cell. The terminal receives the positioning SI including the positioning SIB provided directly in step3c-13as follows.1: Determining the time interval in which the positioning SI/positioning SIB can be transmitted based on the SI window length in the SI scheduling information and positioning SIB mapping information and the order of the SI scheduling information in the positioning SI scheduling information obtained from SIB1.2: Monitoring SI-RNTI in the time interval3: Receive a MAC PDU scheduled through SI-RNTI in the time interval4: Acquire positioning SI included in the MAC PDU In order to obtain the necessary positioning SIB provided indirectly, the terminal generates a positioning system information request2 requesting the positioning SIB to the base station. In step3c-15, the terminal sends positioning system information request2 to the base station. The terminal sets the requested positioning SIB list as follows.1: Identifying the positioning SI mapped with the required positioning SIB2: Identifying the positioning SI in which the positioning SI broadcast status is non-broadcast among the positioning SIs3: Determining the positioning SIB mapped to the positioning SI4: Including the positioning SIB type information in the requested positioning SIB list That is, the terminal includes, in the requested positioning SIB list, a positioning SIB mapped to a positioning SI in which the positioning SI broadcast status is set to non-broadcast among the required positioning SIBs. In step3c-17, the terminal receives the requested indirect positioning SIB/positioning SI from the base station. The indirect positioning SIB includes immediate assistance data1. The immediate assistance data1 may be, for example, GNSS-related assistance data included in positioning SIB1-x or positioning SIB2-x. Alternatively, immediate assistance data1 may be NR-DL-PRS-AssistanceData included in positioning SIB 6-1. In step3c-21, the terminal receives the indirect positioning SIB/positioning SI requested from the base station. The indirect positioning SIB includes conditional assistance data1. 
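A minimal sketch of the selection logic of steps 1 to 4 of step3c-15 above is given below; the function and field names are illustrative assumptions, with the positioning SI scheduling Information reduced to a list of per-SI entries.

def build_requested_pos_sib_list(required_pos_sibs, pos_si_scheduling_entries):
    # required_pos_sibs: the posSIB types the terminal needs for its positioning methods.
    # pos_si_scheduling_entries: simplified positioning SI scheduling Information,
    # one dict per positioning SI with "broadcast" and "pos_sibs" fields.
    requested = []
    for entry in pos_si_scheduling_entries:
        if entry["broadcast"]:
            continue                       # provided directly; no request needed
        for pos_sib in entry["pos_sibs"]:
            if pos_sib in required_pos_sibs and pos_sib not in requested:
                requested.append(pos_sib)  # provided indirectly; must be requested
    return requested

entries = [
    {"broadcast": True,  "pos_sibs": ["posSibType6-1"]},
    {"broadcast": False, "pos_sibs": ["posSibType6-4"]},
]
requested_list = build_requested_pos_sib_list({"posSibType6-1", "posSibType6-4"}, entries)
# requested_list == ["posSibType6-4"]: only the posSIB carried in a non-broadcast
# positioning SI is placed in the requested positioning SIB list of positioning
# system information request2.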
The conditional assistance data1 may be, for example, a conditional assistance data set included in positioning SIB 6-4. The base station includes the immediate assistance data and the conditional assistance data in different positioning SIBs and maps the positioning SIB corresponding to the immediate assistance data and the positioning SIB corresponding to the conditional assistance data to different positioning SIs. Through this, terminals requiring only immediate assistance data and terminals requiring only conditional assistance data can each receive only the assistance data they require. In addition, assistance data can be provided more flexibly; for example, immediate assistance data is transmitted in a directly provided positioning SIB/direct positioning SI and conditional assistance data is transmitted in an indirectly provided positioning SIB/indirect positioning SI. In step3c-23, the terminal transmits an LPP message called requestAssistanceData, which requests assistance data, to the base station. The LPP message is delivered to the LMF through the base station. requestAssistanceData is transmitted to the base station through SRB2/DCCH. RequestAssistanceData contains the fields below. 1: PCI of the PCell. The LMF identifies the cell in which the terminal is located by referring to the PCI of the PCell and determines the assistance data valid for the cell and the adjacent area. 2: Type of required assistance information. It indicates the type of assistance data requested by the terminal. This field indicates the relevant positioning method. For example, if this field indicates GNSS, the LMF determines that the terminal requests GNSS-related assistance data. 3: Identifier of the conditional assistance data1 requiring activation. It is an identifier of the conditional assistance data1 that the terminal wants to have activated among the conditional assistance data1 obtained through the positioning SIB or the like. The terminal indicates the assistance data id2d-07of the desired conditional assistance data2d-05among the plurality of conditional assistance data2d-05included in the conditional assistance data set2d-01. 4: Information indicating that the required (or requested) assistance data is conditional assistance data. The terminal includes this field if the conditional assistance data1 received from the base station does not include the conditional assistance data for the positioning method it wants. In step3c-25, the LMF transmits an LPP message called ProvideAssistanceData, which provides assistance data, to the terminal. ProvideAssistanceData contains the fields below. 1: Immediate assistance data. Among the immediate assistance data requested by the terminal, this is the immediate assistance data that the LMF can provide. 2: Activated conditional assistance data id. The activated conditional assistance data is indicated among the conditional assistance data1 for which the terminal has requested activation. It is indicated by the assistance data id. 3: Conditional assistance data2. Among the conditional assistance data requested by the terminal, this is the conditional assistance data that the LMF can provide. When a predetermined condition is met, the terminal performs positioning measurement by applying the conditional assistance data2 and reports the positioning measurement result to the LMF. 4: Inactive positioning. Information indicating whether the terminal should perform positioning-related operations in the inactive state.
It may be at least one of the following three pieces of information.4-1: positioning measurement continuation indicator: 1-bit information indicating whether to continue the currently performed positioning measurement operation after transitioning to the inactive state.4-2: conditional assistance data based positioning measurement: 1-bit information instructing to perform positioning measurement by applying available conditional assistance data when transitioning to an inactive state. The available conditional assistance data may be a plurality of conditional assistance data included in conditional assistance data1 and a plurality of conditional assistance data included in conditional assistance data2.4-3: inactive positioning measurement method list: A list of positioning measurement methods to be performed by the terminal when transitioning to inactive state. It may be composed of a bitmap in which each bit is mapped with a predetermined positioning measurement method. The terminal may perform positioning measurement by measuring PRSs indicated in immediate assistance data and PRSs indicated in activated conditional assistance data1. The terminal reports the PCI to the LMF in requestAssistanceData. The LMF may provide conditional assistance data validity information composed of multiple NR CGIs to the terminal in ProvideAssistanceData. Alternatively, the LMF may provide conditional assistance data validity information composed of a plurality of CellIdentity to the terminal in ProvideAssistanceData. Alternatively, the LMF may provide conditional assistance data validity information composed of a plurality of cell identities and a plurality of base station identifier (gNB identifier) length information to the terminal in ProvideAssistanceData. LMF considers PCI and determines which cell's assistance data to provide to the terminal. The terminal determines in which cell the assistance data is valid by considering the cell identifier provided by the LMF. The NR CGI consists of MCC (Mobile Country Code) and MNC (Mobile Network Code), which are information indicating the PLMN, and Cell Identity, which is information indicating the cell. Cell Identity has a size of 36 bits, and the leftmost n bits are the base station indicator (gNB identifier). The n has a variable size between 22 and 32 and may be known to the terminal as separate information called base station identifier length information. PCI is an integer between 0 and 1007. PCI is an indicator that specifies a cell within a relatively narrow area, NR CGI is an indicator that specifies a cell globally, and Cell Identity is an indicator that specifies a cell within one PLMN. FIG.3Dis a diagram illustrating an uplink positioning process of an inactive terminal. In the uplink positioning process, the terminal in the RRC connected state receives the SRS configuration from the base station and transmits the SRS, the base station measures the SRS and reports the measurement result to the LMF, and the LMF calculates the terminal's position based on the measurement result. Although the SRS measurement can be performed by several base stations, only one base station is illustrated inFIG.3Dfor convenience. In step3d-01, the terminal receives an RRCReconfiguration message including SRS configuration from the base station. The SRS configuration may be provided for each UL BWP, and the SRS configuration consists of one or more SRS-PosResourceSet (hereinafter, SRS positioning resource set). 
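The relationship between the 36-bit Cell Identity and the base station identifier (gNB identifier) described above can be illustrated with a few lines of bit arithmetic. The sketch below assumes only what is stated above: the leftmost n bits of the Cell Identity form the gNB identifier, and n (between 22 and 32) is signalled separately as the base station identifier length information.

```python
# Sketch: split a 36-bit NR Cell Identity into the gNB identifier and the
# remaining cell-specific part. The leftmost n bits (22 <= n <= 32) are the gNB identifier.

CELL_IDENTITY_BITS = 36

def split_cell_identity(cell_identity: int, gnb_id_length: int) -> tuple[int, int]:
    if not (22 <= gnb_id_length <= 32):
        raise ValueError("base station identifier length must be between 22 and 32")
    if not (0 <= cell_identity < (1 << CELL_IDENTITY_BITS)):
        raise ValueError("Cell Identity must fit in 36 bits")
    local_bits = CELL_IDENTITY_BITS - gnb_id_length
    gnb_id = cell_identity >> local_bits              # leftmost n bits
    local_cell_id = cell_identity & ((1 << local_bits) - 1)
    return gnb_id, local_cell_id

# The NR CGI additionally prepends the PLMN (MCC/MNC) to the Cell Identity,
# while the PCI is a separate short identifier in the range 0..1007.
gnb_id, local_id = split_cell_identity(0x123456789, gnb_id_length=24)
print(hex(gnb_id), hex(local_id))  # 0x123456 0x789
```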
One SRS positioning resource set consists of one or more SRS-PosResource (hereinafter, SRS positioning resource). The SRS positioning resource is defined by srs-PosResourceId (SRS positioning resource identifier), startPosition, nrofSymbols, freqDomainShift, freqHopping, periodicityAndOffset-sp, spatialRelationInfoPos, and the like. StartPosition and nrofSymbols indicate the start position of the symbol in which the SRS is transmitted and the number of symbols in which the SRS is transmitted in the positioning SRS slot. FreqDomainShift and freqHopping define the frequency resource through which the SRS is transmitted in relation to the frequency domain of the corresponding BWP. PeriodicityAndOffset-sp indicates the periodicity and the slot at which the positioning SRS slot starts. The positioning SRS slot means a slot in which a positioning SRS resource is configured or a slot in which a positioning SRS is transmitted. SpatialRelationInfoPos defines a spatial domain transmission filter to be applied to positioning SRS transmission and may be set to a downlink reference signal index of a serving cell, an SSB index of a neighboring cell, and the like. The SRS positioning resource set consists of an SRS positioning resource set identifier, an SRS positioning resource identifier list, ResourceType, alpha, p0, and pathlossReferenceRS-Pos. The SRS positioning resource identifier list is the list of SRS positioning resource identifiers composing the SRS positioning resource set. ResourceType indicates one of "periodic", "semi-persistent", and "aperiodic". In the present disclosure, a semi-persistent SRS positioning resource set will be described as an example. For an SRS positioning resource set whose ResourceType is indicated as semi-persistent, SRS transmission of the SRS positioning resource set starts only after a specific control message instructs transmission. Alpha, p0, and pathlossReferenceRS-Pos are parameters for transmission power control of the positioning SRS. Alpha and p0 are parameters applied when determining the positioning SRS transmission power, and pathlossReferenceRS-Pos indicates the reference signal that provides the path loss estimate used when determining the positioning SRS transmission power. In step3d-03, the terminal receives, from the base station, a Positioning SRS Activation/Deactivation MAC CE instructing it to start transmission of a specific SRS positioning resource set. The Positioning SRS Activation/Deactivation MAC CE consists of an A/D field, a Cell ID field, a BWP ID field, a SUL field, and a Positioning SRS Resource Set ID field. The A/D field indicates whether to activate or deactivate the indicated SRS positioning resource set. The Cell ID field indicates the identifier of the serving cell to which the SRS positioning resource set to be activated/deactivated belongs. The BWP ID field indicates the identifier of the BWP to which the SRS positioning resource set to be activated/deactivated belongs. The SUL field indicates whether the MAC CE is applied to the NUL carrier configuration or the SUL carrier configuration. In other words, it indicates whether the activated or deactivated SRS positioning resource set is an SRS positioning resource set of the SUL or an SRS positioning resource set of the NUL. The Positioning SRS Resource Set ID field is the identifier of the SRS positioning resource set to be activated or deactivated. NUL is the normal uplink and SUL is the supplementary uplink. One serving cell may have only the NUL or may have both the NUL and the SUL. The SUL is configured in a lower frequency band compared to the NUL to increase the uplink coverage of the cell.
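The fields of the Positioning SRS Activation/Deactivation MAC CE described above may be sketched as a simple parser. The bit widths and octet layout used in this Python sketch are assumptions chosen for illustration; the normative layout of such MAC CEs is defined in the MAC specification (38.321).

```python
# Sketch: unpack the Positioning SRS Activation/Deactivation MAC CE fields.
# The bit ordering/widths below are illustrative assumptions, not the 38.321 layout.

from dataclasses import dataclass

@dataclass
class PosSrsActDeactMacCe:
    activate: bool           # A/D field: True = activate, False = deactivate
    serving_cell_id: int     # Cell ID field
    bwp_id: int              # BWP ID field
    sul: bool                # SUL field: True = SUL carrier, False = NUL carrier
    srs_pos_resource_set_id: int

def parse_mac_ce(first_octet: int, second_octet: int) -> PosSrsActDeactMacCe:
    """Assumed packing: |A/D|CellID(5)|BWP(2)| and |SUL|ResourceSetID(4)|reserved(3)|."""
    return PosSrsActDeactMacCe(
        activate=bool((first_octet >> 7) & 0x1),
        serving_cell_id=(first_octet >> 2) & 0x1F,
        bwp_id=first_octet & 0x3,
        sul=bool((second_octet >> 7) & 0x1),
        srs_pos_resource_set_id=(second_octet >> 3) & 0xF,
    )

# Example: activate resource set 2 of BWP 1 on serving cell 3, NUL carrier.
print(parse_mac_ce(0b1_00011_01, 0b0_0010_000))
```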
In step3d-05, the terminal transmits a positioning SRS in the activated SRS positioning resource set. The terminal transmits the positioning SRS on the SRS positioning resources belonging to the SRS positioning resource set by applying the transmission power control parameters of the SRS resource set. The SRS positioning resources are generated periodically according to periodicityAndOffset-sp. In step3d-07, the terminal receives the RRCRelease message from the base station. The base station may change the state of the terminal to RRC_INACTIVE or RRC_IDLE in consideration of the terminal's traffic condition, the cell load condition, and the RRM condition of the terminal. If the uplink positioning has not yet been completed, the base station instructs the terminal to transition to the RRC_INACTIVE state while continuing to transmit the positioning SRS. The base station transmits an RRCRelease message including an inactive SRS IE, a stop condition IE, and a SuspendConfig IE to the terminal. The terminal stores the SRS configuration in the Inactive Access Stratum Context. The terminal receiving the message performs cell selection. At this time, the terminal preferentially selects the first cell if it is possible to select the first cell. If the reference signal received quality of the first cell is higher than a predetermined threshold, the terminal preferentially selects the first cell and camps on it. The first cell is one of the serving cell in which the terminal receives the RRCRelease message, the PCell at the time point when the terminal receives the RRCRelease message, or the serving cell in which the SRS positioning resource set is activated. Alternatively, the first cell may be a cell belonging to the first cell list. The first cell list includes a plurality of cell information entries, and each cell information entry includes a PCI and an Absolute Radio Frequency Channel Number (ARFCN). The first cell list may be included in the RRCRelease message and transmitted to the terminal. The ARFCN is defined in specification 38.101, and each ARFCN corresponds to a specific frequency. In step3d-09, the terminal determines whether to continue positioning SRS transmission and, if so, in which SRS positioning resource set to transmit. The terminal determines whether to transmit the positioning SRS in consideration of the inactive SRS IE and whether the newly selected cell is the first cell. The inactive SRS IE includes one of an inactive SRS transmission continuation indicator, the first SRS resource set IE, and the second SRS resource set IE. The inactive SRS IE may also include an SRS transmission stop condition IE. The inactive SRS IE may also include an SRS transmission condition IE. The inactive SRS transmission continuation indicator is an indicator specifying that, among the currently activated SRS positioning resource sets, the SRS positioning resource sets of the NUL continue transmission and the SRS positioning resource sets of the SUL stop transmission. The terminal performs the above operation if the indicator is included. The first SRS resource set IE consists of an identifier of an SRS positioning resource set, a cell identifier, a BWP identifier, and the like. After the terminal transitions to the inactive state, it transmits the positioning SRS by activating the SRS positioning resource set specified by the cell identifier, the BWP identifier, and the SRS positioning resource set identifier. The SRS positioning resource set to be activated is limited to an SRS positioning resource set in a BWP of the NUL.
In other words, when the NUL BWP and the SUL BWP having the same BWP identifier exist, the SRS positioning resource set identifier is an identifier indicating the SRS positioning resource set in the NUL BWP. The identifier of the SRS positioning resource set indicates a specific SRS positioning resource set of a specific BWP of the NUL of a specific serving cell, and the SRS positioning resource set corresponding to the SRS positioning resource set identifier is defined in the SRS configuration provided for the specific BWP. Alternatively, the first SRS resource set IE may include an identifier of an SRS positioning resource set, a cell identifier, a BWP identifier, and a SUL indicator. If the SUL indicator is not included in the first SRS resource set IE, the inactive state terminal transmits a positioning SRS in the NUL, and when the SUL indicator is included in the first SRS resource set IE, the inactive state terminal transmits a positioning SRS in the SUL. The second SRS resource set IE consists of an SRS positioning resource set IE, a cell identifier, a BWP identifier, and the like. After transitioning to the inactive state, the terminal transmits the positioning SRS in the SRS positioning resource specified by the SRS positioning resource set IE in the frequency domain indicated by the cell identifier and the BWP identifier. At this time, if there are two BWPs corresponding to the BWP identifier, a BWP of NUL is selected. Alternatively, the second SRS resource set IE may include an SRS positioning resource set IE, a cell identifier, a BWP identifier, a SUL indicator, and the like. If the SUL indicator is not included in the second SRS resource set IE, the inactive state terminal transmits a positioning SRS in the NUL, and when the SUL indicator is included in the second SRS resource set IE, the inactive state terminal transmits a positioning SRS in the SUL. The SRS transmission stop condition IE defines a condition for stopping the transmission of the positioning SRS, which the terminal was transmitting in the inactive state. The SRS transmission stop condition may be the number of positioning SRS transmissions, a time point to stop positioning SRS transmission, and the like. The SRS transmission condition IE defines the conditions that must be satisfied in order for the terminal to transmit the positioning SRS in the inactive state. The SRS transmission condition may be defined as the first time point and the second time point. The terminal starts transmitting positioning SRS at the first time point in the inactive state and stops transmitting positioning SRS at the second time point. The first time point and the second time point may be indicated by the SFN and subframe number of the first cell. The first time point and the second time point can be expressed in absolute times such as UTC. If the newly selected cell is the first cell and the inactive SRS IE exists, the terminal transmits the positioning SRS as described above even in the inactive state. If the newly selected cell is not the first cell, the terminal removes the SRS configuration from the inactive AS context and does not transmit the positioning SRS in the inactive state. In step3d-11, the terminal periodically transmits the positioning SRS in the inactive state. The terminal continues to transmit the previously activated SRS positioning resource set. 
Alternatively, the terminal deactivates the previously activated SRS positioning resource set, activates the SRS positioning resource set indicated in the first SRS resource set IE, and transmits the positioning SRS on that SRS positioning resource set. Alternatively, the terminal deactivates the previously activated SRS positioning resource set, activates the SRS positioning resource set indicated in the second SRS resource set IE, and transmits the positioning SRS on that SRS positioning resource set. The base station collects location-related measurement information by receiving the positioning SRS transmitted by the terminal in the inactive state. In step3d-13, the base station transmits a MEASUREMENT RESPONSE message including the SRS measurement result to the LMF. The LMF calculates the position of the terminal using the measurement result. When positioning of the terminal is completed, the LMF notifies the base station that positioning is complete. In step3d-15, the base station receives the POSITIONING DEACTIVATION message from the LMF and recognizes that the uplink positioning has been completed. In step3d-17, the base station transmits a downlink control message instructing the terminal to stop transmitting the positioning SRS. The downlink control message may be, for example, a paging message. The base station may include the terminal's I-RNTI (Inactive Radio Network Temporary Identifier) and positioning SRS transmission stop information in the paging message. The I-RNTI is assigned in the RRCRelease message. The RRCRelease message allocates two I-RNTIs: a full I-RNTI and a short I-RNTI. The terminal determines whether an I-RNTI matching its full I-RNTI is included in the paging. Upon receiving a paging message including its I-RNTI, the terminal determines whether information related to SRS transmission stop, for example, positioning SRS transmission stop information, is included in the paging message. The terminal performs one of the following actions according to this determination. 1: If the paging message including its I-RNTI does not contain information related to SRS stop and inactive SRS transmission is being performed, the terminal stops SRS transmission and initiates the RRC connection resumption procedure. 2: If the paging message including its I-RNTI does not include information related to SRS stop and inactive SRS transmission is not being performed, the terminal initiates the RRC connection resumption procedure. 3: If information related to SRS stop is included in the paging message including its I-RNTI and inactive SRS transmission is being performed, the terminal stops SRS transmission and does not initiate the RRC connection resumption procedure. 4: If the paging message including its I-RNTI includes information related to SRS stop and inactive SRS transmission is not being performed, the terminal ignores the paging message and does not initiate the RRC connection resumption procedure. To perform the resumption procedure, the terminal performs random access and transmits a predetermined uplink RRC control message. In step3d-19, the terminal stops inactive SRS transmission or initiates the resumption procedure with reference to the information included in the paging message. A terminal in the inactive state stops transmitting the positioning SRS in the following cases. 1: The cell selected after receiving the RRCRelease message is not the first cell. 2: The terminal reselects another cell from the first cell. 3: The SRS transmission stop condition is satisfied. 4: The resumption procedure is started. 5: The terminal receives a paging message indicating to stop inactive SRS transmission.
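The four paging-handling cases described above reduce to two inputs: whether the information related to SRS stop is present in the matching paging message and whether inactive SRS transmission is ongoing. The following Python sketch mirrors that decision table; it is an illustrative summary, not terminal implementation code.

```python
# Sketch: UE behaviour on receiving a paging record matching its full I-RNTI.
# Mirrors cases 1-4 described above.

def handle_matching_paging(srs_stop_info_present: bool, inactive_srs_ongoing: bool):
    actions = []
    if not srs_stop_info_present:
        if inactive_srs_ongoing:
            actions.append("stop SRS transmission")           # case 1
        actions.append("initiate RRC connection resumption")  # cases 1 and 2
    else:
        if inactive_srs_ongoing:
            actions.append("stop SRS transmission")           # case 3
        else:
            actions.append("ignore paging message")           # case 4
        # In cases 3 and 4 the RRC connection resumption procedure is not initiated.
    return actions

# Enumerate all four cases.
for stop_info in (False, True):
    for ongoing in (True, False):
        print(stop_info, ongoing, handle_matching_paging(stop_info, ongoing))
```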
One paging message includes a plurality of pagingRecords, and each pagingRecord among the plurality of pagingRecords includes one terminal identifier field and one second information field. In each pagingRecord among the plurality of pagingRecords, the terminal identifier field is mandatorily present and the second information field is optionally present. The terminal identifier field is set to the full I-RNTI, and the second information field is enumerated with a single value indicating an SRS stop. An IE that is optionally present and enumerated with a single value means that the single value is applied if the IE is present and is not applied if the IE is not present. FIG.3Eis a diagram illustrating a downlink positioning process of an inactive terminal. A terminal that has obtained immediate assistance data, conditional assistance data1, and conditional assistance data2 through steps3c-13to3c-25performs an operation related to downlink positioning by using the assistance data. An operation related to downlink positioning is, for example, measuring the reception time difference of PRSs transmitted from a plurality of TRPs and reporting the result to the LMF, or measuring the received power of PRSs transmitted from a plurality of TRPs and reporting the result to the LMF. In step3e-01, the terminal generates an RRC control message called UEAssistanceInformation to report to the base station that downlink positioning should be performed even in the RRC_INACTIVE state and transmits it to the base station. The control message may include an inactive positioning2 IE indicating the type of positioning method that the terminal can perform in the inactive state. The control message can include information requesting to configure small data transfer via SRB2. The control message may include time pattern information of the PRS for positioning. The terminal performs steps3e-03if the inactive positioning IE is included in the ProvideAssistanceData received in steps3c-25. In step3e-03, the base station sends an RRCRelease message to the terminal. The base station may change the state of the terminal to RRC_INACTIVE or RRC_IDLE in consideration of the terminal's traffic condition, the cell load condition, and the RRM condition of the terminal. If the base station determines that the terminal needs to perform positioning measurement in the inactive state, the base station may provide information related to downlink positioning measurement while instructing the terminal to transition to the RRC_INACTIVE state. Information related to downlink positioning measurement may include, for example, offset information for shifting the paging monitoring period of the terminal so that the paging monitoring time interval of the terminal does not overlap with the PRS measurement period. The base station can configure small data transfer through SRB2 for the terminal. The small data transfer configuration may consist of a list of data bearers for which small data transfer is configured and 1-bit information indicating whether small data transfer can be configured for SRB2. When small data transfer is applied to SRB2, the terminal can transmit the data of SRB2 to the base station through the small data transfer procedure. The small data transfer procedure is a procedure in which an RRC_INACTIVE terminal transmits small data through the RRC connection resumption procedure without transitioning to RRC_CONNECTED. Upon receiving the RRCRelease message including the information related to downlink positioning measurement, the terminal performs cell selection.
At this time, if the reference signal received power of the second cell is greater than or equal to a predetermined threshold, the terminal preferentially selects the second cell to camp on. The second cell may be the serving cell in which the RRCRelease message is received or the PCell at the time point of receiving the RRCRelease message. In step3e-05, the terminal that has selected the new cell monitors whether the assistance data validity is met. If the newly selected cell is the second cell, the terminal considers both conditional assistance data1 and conditional assistance data2. The terminal considers only conditional assistance data2 if the newly selected cell is not the second cell. The terminal monitors whether at least one assistance data validity is fulfilled among the assistance data validity entries whose data status is broadcast, included in either conditional assistance data1 or conditional assistance data2. In step3e-06, when the assistance data of the conditional assistance data for which the assistance data validity is satisfied is determined to be valid, the terminal starts measuring the downlink PRSs specified in the assistance data. The terminal measures the arrival time difference of PRSs transmitted by a plurality of TRPs. When the PRS measurement is completed, the terminal generates an LPP ProvideLocationInformation message including the measurement result. The terminal initiates a small data transfer procedure to transmit the LPP message. If necessary, the ProvideLocationInformation message can be segmented into a plurality of segments and transmitted. The ProvideLocationInformation message includes information on the arrival time difference of PRSs transmitted by a plurality of TRPs, one assistance data identifier, and a plurality of downlink positioning reference signal identifiers (DL-PRS id). The downlink positioning reference signal identifier is an identifier of the measured PRSs, and the assistance data identifier is an identifier of the assistance data providing the configuration of the measured PRSs. If the PRS measurement is made based on the first type assistance data, the ProvideLocationInformation message includes a plurality of measurement results and a plurality of downlink positioning reference signal identifiers. If the PRS measurement is made based on the second type assistance data, the ProvideLocationInformation message includes a plurality of measurement results, a plurality of downlink positioning reference signal identifiers, and one assistance data id. In step3e-07, the terminal transmits a MAC PDU including a ResumeRequest, an LPP segment message, and a Buffer Status Report (BSR) to the base station. The LPP segment message includes the first segment of the LPP ProvideLocationInformation message. The BSR includes information on the size of the remaining segments of the LPP ProvideLocationInformation message. The ResumeRequest belongs to SRB0 and the LPP segment message belongs to SRB2. The ResumeRequest of SRB0 is not ciphered, the LPP segment message of SRB2 is ciphered, and the BSR is not ciphered. The ciphering is performed with a new security key calculated from the NCC value received by the terminal in the RRCRelease message and the security key stored by the terminal. In principle, all RRC messages are ciphered, but the RRC message of SRB0 is not ciphered because it is a message that the base station must process without prior information. Since the BSR is information processed by the MAC layer of the base station, it is not ciphered.
As a result, the MAC PDU transmitted to report the positioning measurement result in the inactive state includes three MAC subPDUs; the first MAC subPDU and the third MAC subPDU include an unciphered payload, and the second MAC subPDU includes a ciphered payload. The terminal reports the amount of data available for transmission through the BSR. The RRC_CONNECTED terminal determines the BSR format in consideration of the number of logical channel groups in which data available for transmission exists. That is, the RRC_CONNECTED terminal uses the first BSR if the number of logical channel groups in which data available for transmission exists is one and uses the second BSR if it is more than one. The RRC_INACTIVE terminal determines the BSR format without considering the number of logical channel groups in which data available for transmission exists. That is, the RRC_INACTIVE terminal uses the first BSR even if the number of logical channel groups in which data available for transmission exists is more than one. The RRC_INACTIVE terminal sets, in the logical channel group identifier field2h-01, the identifier of the logical channel group with the highest priority among the logical channel groups in which data available for transmission exists, and sets, in the first buffer size field2h-03, the first buffer size index corresponding to the amount of data available for transmission across all the logical channels. The RRC_INACTIVE terminal uses the logical channel group identifier predefined in the specification instead of the logical channel group identifier configured in the RRC_CONNECTED state. In the RRC_INACTIVE state, the terminal uses the preconfigured configuration instead of the terminal-specific configuration because the base station does not know the terminal's buffer status reporting configuration. The RRC_CONNECTED terminal determines the buffer size index to be set in the buffer size field of the BSR by considering only the data of the PDCP layer and the data of the RLC layer. If the RRC_INACTIVE terminal operated in the same manner, the remaining LPP segments stored in the LPP layer would not be considered. To overcome this problem, the RRC_INACTIVE terminal determines the buffer size index to be set in the buffer size field by considering the amount of data of the PDCP layer, the data of the RLC layer, and the data of the LPP layer (or the upper layers of the PDCP layer, or the upper layers of the RRC layer). That is, a buffer size index corresponding to the sum of all the data amounts is selected. In step3e-09, the base station transmits a locationInformation segment to the LMF. In step3e-11, the terminal transmits a MAC PDU including the LPP segment message and information indicating that there is no more data for transmission. The LPP segment message includes the last segment of the LPP ProvideLocationInformation message. The information indicating that there is no more data for transmission may be the first BSR in which buffer size index 0 is set. In step3e-13, the base station transmits a locationInformation segment to the LMF. After receiving the last segment, the LMF assembles the segments to make a location information message and determines the location of the terminal by referring to the positioning measurement result of the location information message. In step3e-15, the terminal monitors whether the assistance data validity is met.
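The buffer size determination described above, in which the RRC_INACTIVE terminal also counts data buffered above the PDCP layer, may be sketched as follows. The buffer size level table used in this Python sketch is an abbreviated placeholder and does not reproduce the table of the MAC specification.

```python
# Sketch: RRC_INACTIVE buffer size index selection including LPP-layer data.
# The level table below is an abbreviated placeholder, not the 38.321 table.

import bisect

BUFFER_SIZE_LEVELS = [0, 10, 14, 20, 28, 38, 53, 74, 102, 142, 198, 276, 384, 535, 745]

def buffer_size_index(pdcp_bytes: int, rlc_bytes: int, lpp_bytes: int = 0,
                      rrc_inactive: bool = True) -> int:
    """RRC_CONNECTED counts PDCP and RLC data only; RRC_INACTIVE also adds the
    pending LPP segments (data of layers above the PDCP layer)."""
    total = pdcp_bytes + rlc_bytes + (lpp_bytes if rrc_inactive else 0)
    # Smallest index whose level covers the amount of data available for transmission.
    return bisect.bisect_left(BUFFER_SIZE_LEVELS, total)

print(buffer_size_index(0, 60, lpp_bytes=300, rrc_inactive=True))   # LPP data counted
print(buffer_size_index(0, 60, lpp_bytes=300, rrc_inactive=False))  # PDCP + RLC only
```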
In step3e-16, when the assistance data of the conditional assistance data for which the assistance data validity is satisfied is determined to be valid, the terminal starts measuring the downlink PRSs specified in the assistance data. In step3e-17, the terminal transmits a MAC PDU including a ResumeRequest, an LPP segment message, and a Buffer Status Report (BSR) to the base station. In step3e-19, the base station transmits a locationInformation segment to the LMF. In step3e-21, the terminal transmits a MAC PDU including the LPP segment message and information indicating that there is no more data for transmission. In step3e-23, the base station transmits an LPP segment message to the LMF. After receiving the last segment, the LMF assembles the segments to generate a location information message and determines the location of the terminal by referring to the positioning measurement result of the locationInformation message. If the terminal transitions to RRC_IDLE or RRC_CONNECTED or the assistance data validity is not met, the terminal stops measuring the downlink PRS for location measurement and stops reporting the measurement result. FIG.4is a flow diagram illustrating an operation of a terminal. In4a-01, the UE transmits to a base station a UECapabilityInformation including a first information indicating support of SRS transmission for positioning in RRC_INACTIVE and a second information indicating support of SRS transmission for positioning in RRC_CONNECTED. In4a-03, the UE receives from the base station an RRCReconfiguration including a first SRS configuration of a first BWP of a first cell. In4a-05, the UE receives from the base station a Positioning SRS MAC CE activating the first SRS configuration of the first BWP of the first cell. In4a-07, the UE performs SRS transmission for positioning in the first BWP of the first cell according to the first SRS configuration. In4a-09, the UE receives, from the base station, an RRCRelease including a configuration for RRC_INACTIVE and a third information for SRS transmission in RRC_INACTIVE. In4a-11, the UE enters RRC_INACTIVE. In4a-13, the UE performs SRS transmission in RRC_INACTIVE. In4a-15, the UE receives a Paging, addressed by the P-RNTI (common identity), including a ue-Identity field matching the UE's stored I-RNTI. In4a-17, the UE stops SRS transmission in RRC_INACTIVE without initiating the RRC resume procedure if the Paging includes a PagingRecord including a ue-Identity field matching the UE's stored I-RNTI and a fourth information for SRS transmission. In4a-19, the UE stops SRS transmission in RRC_INACTIVE and initiates the RRC connection resume procedure if the Paging includes a PagingRecord including a ue-Identity field matching the UE's stored I-RNTI and not including the fourth information for SRS transmission. The paging includes one or more paging records; each of the one or more paging records includes a UE identifier and a fourth information; the UE identifier is mandatorily present and the fourth information is optionally present in each of the one or more paging records; the UE identifier is set to the full I-RNTI and the fourth information is enumerated with a single value indicating SRS transmission stop. The third information is enumerated with a single value indicating that the UE is to perform SRS transmission in the INACTIVE state. FIG.5Ais a block diagram illustrating the internal structure of a UE to which the disclosure is applied. Referring to the diagram, the UE includes a controller5a-01, a storage unit5a-02, a transceiver5a-03, a main processor5a-04and an I/O unit5a-05.
The controller5a-01controls the overall operations of the UE in terms of mobile communication. For example, the controller5a-01receives/transmits signals through the transceiver5a-03. In addition, the controller5a-01records and reads data in the storage unit5a-02. To this end, the controller5a-01includes at least one processor. For example, the controller5a-01may include a communication processor (CP) that performs control for communication and an application processor (AP) that controls an upper layer, such as an application program. The controller controls the storage unit and the transceiver such that the UE operations illustrated inFIG.2AandFIG.2BandFIG.3Aare performed. The storage unit5a-02stores data for operation of the UE, such as a basic program, an application program, and configuration information. The storage unit5a-02provides stored data at a request of the controller5a-01. The transceiver5a-03consists of an RF processor, a baseband processor, and a plurality of antennas. The RF processor performs functions for transmitting/receiving signals through a wireless channel, such as signal band conversion, amplification, and the like. Specifically, the RF processor up-converts a baseband signal provided from the baseband processor into an RF band signal, transmits the same through an antenna, and down-converts an RF band signal received through the antenna into a baseband signal. The RF processor may include a transmission filter, a reception filter, an amplifier, a mixer, an oscillator, a digital-to-analog converter (DAC), an analog-to-digital converter (ADC), and the like. The RF processor may perform MIMO and may receive multiple layers when performing the MIMO operation. The baseband processor performs a function of conversion between a baseband signal and a bit string according to the physical layer specification of the system. For example, during data transmission, the baseband processor encodes and modulates a transmission bit string, thereby generating complex symbols. In addition, during data reception, the baseband processor demodulates and decodes a baseband signal provided from the RF processor, thereby restoring a reception bit string. The main processor5a-04controls the overall operations other than mobile communication operations. The main processor5a-04processes user input received from the I/O unit5a-05, stores data in the storage unit5a-02, controls the controller5a-01for required mobile communication operations, and forwards user data to the I/O unit5a-05. The I/O unit5a-05consists of equipment for inputting user data and for outputting user data, such as a microphone and a screen. The I/O unit5a-05performs inputting and outputting of user data based on the main processor's instruction. FIG.5Bis a block diagram illustrating the configuration of a base station according to the disclosure. As illustrated in the diagram, the base station includes a controller5b-01, a storage unit5b-02, a transceiver5b-03and a backhaul interface unit5b-04. The controller5b-01controls the overall operations of the main base station. For example, the controller5b-01receives/transmits signals through the transceiver5b-03or through the backhaul interface unit5b-04. In addition, the controller5b-01records and reads data in the storage unit5b-02. To this end, the controller5b-01may include at least one processor. The controller controls the transceiver, the storage unit, and the backhaul interface such that the base station operations illustrated inFIG.2AandFIG.2Bare performed.
The storage unit5b-02stores data for operation of the main base station, such as a basic program, an application program, and configuration information. Particularly, the storage unit5b-02may store information regarding a bearer allocated to an accessed UE, a measurement result reported from the accessed UE, and the like. In addition, the storage unit5b-02may store information serving as a criterion to determine whether to provide the UE with multi-connection or to discontinue the same. In addition, the storage unit5b-02provides stored data at a request of the controller5b-01. The transceiver5b-03consists of an RF processor, a baseband processor, and a plurality of antennas. The RF processor performs functions for transmitting/receiving signals through a wireless channel, such as signal band conversion, amplification, and the like. Specifically, the RF processor up-converts a baseband signal provided from the baseband processor into an RF band signal, transmits the same through an antenna, and down-converts an RF band signal received through the antenna into a baseband signal. The RF processor may include a transmission filter, a reception filter, an amplifier, a mixer, an oscillator, a DAC, an ADC, and the like. The RF processor may perform a downlink MIMO operation by transmitting at least one layer. The baseband processor performs a function of conversion between a baseband signal and a bit string according to the physical layer specification of the first radio access technology. For example, during data transmission, the baseband processor encodes and modulates a transmission bit string, thereby generating complex symbols. In addition, during data reception, the baseband processor demodulates and decodes a baseband signal provided from the RF processor, thereby restoring a reception bit string. The backhaul interface unit5b-04provides an interface for communicating with other nodes inside the network. The backhaul interface unit5b-04converts a bit string transmitted from the base station to another node, for example, another base station or a core network, into a physical signal, and converts a physical signal received from the other node into a bit string.
102,483
11863485
DESCRIPTION OF EXEMPLARY EMBODIMENTS In the present specification, “A or B” may mean “only A”, “only B” or “both A and B”. In other words, in the present specification, “A or B” may be interpreted as “A and/or B”. For example, in the present specification, “A, B, or C” may mean “only A”, “only B”, “only C”, or “any combination of A, B, C”. A slash (/) or comma used in the present specification may mean “and/or”. For example, “A/B” may mean “A and/or B”. Accordingly, “A/B” may mean “only A”, “only B”, or “both A and B”. For example, “A, B, C” may mean “A, B, or C”. In the present specification, “at least one of A and B” may mean “only A”, “only B”, or “both A and B”. In addition, in the present specification, the expression “at least one of A or B” or “at least one of A and/or B” may be interpreted as “at least one of A and B”. In addition, in the present specification, “at least one of A, B, and C” may mean “only A”, “only B”, “only C”, or “any combination of A, B, and C”. In addition, “at least one of A, B, or C” or “at least one of A, B, and/or C” may mean “at least one of A, B, and C”. In addition, a parenthesis used in the present specification may mean “for example”. Specifically, when indicated as “control information (EHT-signal)”, it may denote that “EHT-signal” is proposed as an example of the “control information”. In other words, the “control information” of the present specification is not limited to “EHT-signal”, and “EHT-signal” may be proposed as an example of the “control information”. In addition, when indicated as “control information (i.e., EHT-signal)”, it may also mean that “EHT-signal” is proposed as an example of the “control information”. Technical features described individually in one figure in the present specification may be individually implemented, or may be simultaneously implemented. The following example of the present specification may be applied to various wireless communication systems. For example, the following example of the present specification may be applied to a wireless local area network (WLAN) system. For example, the present specification may be applied to the IEEE 802.11a/g/n/ac standard or the IEEE 802.11ax standard. In addition, the present specification may also be applied to the newly proposed EHT standard or IEEE 802.11be standard. In addition, the example of the present specification may also be applied to a new WLAN standard enhanced from the EHT standard or the IEEE 802.11be standard. In addition, the example of the present specification may be applied to a mobile communication system. For example, it may be applied to a mobile communication system based on long term evolution (LTE) depending on a 3rdgeneration partnership project (3GPP) standard and based on evolution of the LTE. In addition, the example of the present specification may be applied to a communication system of a 5G NR standard based on the 3GPP standard. Hereinafter, in order to describe a technical feature of the present specification, a technical feature applicable to the present specification will be described. FIG.1shows an example of a transmitting apparatus and/or receiving apparatus of the present specification. In the example ofFIG.1, various technical features described below may be performed.FIG.1relates to at least one station (STA). 
For example, STAs110and120of the present specification may also be called in various terms such as a mobile terminal, a wireless device, a wireless transmit/receive unit (WTRU), a user equipment (UE), a mobile station (MS), a mobile subscriber unit, or simply a user. The STAs110and120of the present specification may also be called in various terms such as a network, a base station, a node-B, an access point (AP), a repeater, a router, a relay, or the like. The STAs110and120of the present specification may also be referred to as various names such as a receiving apparatus, a transmitting apparatus, a receiving STA, a transmitting STA, a receiving device, a transmitting device, or the like. For example, the STAs110and120may serve as an AP or a non-AP. That is, the STAs110and120of the present specification may serve as the AP and/or the non-AP. The STAs110and120of the present specification may support various communication standards together in addition to the IEEE 802.11 standard. For example, a communication standard (e.g., LTE, LTE-A, 5G NR standard) or the like based on the 3GPP standard may be supported. In addition, the STA of the present specification may be implemented as various devices such as a mobile phone, a vehicle, a personal computer, or the like. In addition, the STA of the present specification may support communication for various communication services such as voice calls, video calls, data communication, and self-driving (autonomous-driving), or the like. The STAs110and120of the present specification may include a medium access control (MAC) conforming to the IEEE 802.11 standard and a physical layer interface for a radio medium. The STAs110and120will be described below with reference to a sub-figure (a) ofFIG.1. The first STA110may include a processor111, a memory112, and a transceiver113. The illustrated process, memory, and transceiver may be implemented individually as separate chips, or at least two blocks/functions may be implemented through a single chip. The transceiver113of the first STA performs a signal transmission/reception operation. Specifically, an IEEE 802.11 packet (e.g., IEEE 802.11a/b/g/n/ac/ax/be, etc.) may be transmitted/received. For example, the first STA110may perform an operation intended by an AP. For example, the processor111of the AP may receive a signal through the transceiver113, process a reception (RX) signal, generate a transmission (TX) signal, and provide control for signal transmission. The memory112of the AP may store a signal (e.g., RX signal) received through the transceiver113, and may store a signal (e.g., TX signal) to be transmitted through the transceiver. For example, the second STA120may perform an operation intended by a non-AP STA. For example, a transceiver123of a non-AP performs a signal transmission/reception operation. Specifically, an IEEE 802.11 packet (e.g., IEEE 802.11a/b/g/n/ac/ax/be packet, etc.) may be transmitted/received. For example, a processor121of the non-AP STA may receive a signal through the transceiver123, process an RX signal, generate a TX signal, and provide control for signal transmission. A memory122of the non-AP STA may store a signal (e.g., RX signal) received through the transceiver123, and may store a signal (e.g., TX signal) to be transmitted through the transceiver. For example, an operation of a device indicated as an AP in the specification described below may be performed in the first STA110or the second STA120. 
For example, if the first STA110is the AP, the operation of the device indicated as the AP may be controlled by the processor111of the first STA110, and a related signal may be transmitted or received through the transceiver113controlled by the processor111of the first STA110. In addition, control information related to the operation of the AP or a TX/RX signal of the AP may be stored in the memory112of the first STA110. In addition, if the second STA120is the AP, the operation of the device indicated as the AP may be controlled by the processor121of the second STA120, and a related signal may be transmitted or received through the transceiver123controlled by the processor121of the second STA120. In addition, control information related to the operation of the AP or a TX/RX signal of the AP may be stored in the memory122of the second STA120. For example, in the specification described below, an operation of a device indicated as a non-AP (or user-STA) may be performed in the first STA110or the second STA120. For example, if the second STA120is the non-AP, the operation of the device indicated as the non-AP may be controlled by the processor121of the second STA120, and a related signal may be transmitted or received through the transceiver123controlled by the processor121of the second STA120. In addition, control information related to the operation of the non-AP or a TX/RX signal of the non-AP may be stored in the memory122of the second STA120. For example, if the first STA110is the non-AP, the operation of the device indicated as the non-AP may be controlled by the processor111of the first STA110, and a related signal may be transmitted or received through the transceiver113controlled by the processor111of the first STA110. In addition, control information related to the operation of the non-AP or a TX/RX signal of the non-AP may be stored in the memory112of the first STA110. In the specification described below, a device called a (transmitting/receiving) STA, a first STA, a second STA, a STA1, a STA2, an AP, a first AP, a second AP, an AP1, an AP2, a (transmitting/receiving) terminal, a (transmitting/receiving) device, a (transmitting/receiving) apparatus, a network, or the like may imply the STAs110and120ofFIG.1. For example, a device indicated as, without a specific reference numeral, the (transmitting/receiving) STA, the first STA, the second STA, the STA1, the STA2, the AP, the first AP, the second AP, the AP1, the AP2, the (transmitting/receiving) terminal, the (transmitting/receiving) device, the (transmitting/receiving) apparatus, the network, or the like may imply the STAs110and120ofFIG.1. For example, in the following example, an operation in which various STAs transmit/receive a signal (e.g., a PPDU) may be performed in the transceivers113and123ofFIG.1. In addition, in the following example, an operation in which various STAs generate a TX/RX signal or perform data processing and computation in advance for the TX/RX signal may be performed in the processors111and121ofFIG.1. 
For example, an example of an operation for generating the TX/RX signal or performing the data processing and computation in advance may include: 1) an operation of determining/obtaining/configuring/computing/decoding/encoding bit information of a sub-field (SIG, STF, LTF, Data) included in a PPDU; 2) an operation of determining/configuring/obtaining a time resource or frequency resource (e.g., a subcarrier resource) or the like used for the sub-field (SIG, STF, LTF, Data) included the PPDU; 3) an operation of determining/configuring/obtaining a specific sequence (e.g., a pilot sequence, an STF/LTF sequence, an extra sequence applied to SIG) or the like used for the sub-field (SIG, STF, LTF, Data) field included in the PPDU; 4) a power control operation and/or power saving operation applied for the STA; and 5) an operation related to determining/obtaining/configuring/decoding/encoding or the like of an ACK signal. In addition, in the following example, a variety of information used by various STAs for determining/obtaining/configuring/computing/decoding/decoding a TX/RX signal (e.g., information related to a field/subfield/control field/parameter/power or the like) may be stored in the memories112and122ofFIG.1. The aforementioned device/STA of the sub-figure (a) ofFIG.1may be modified as shown in the sub-figure (b) ofFIG.1. Hereinafter, the STAs110and120of the present specification will be described based on the sub-figure (b) ofFIG.1. For example, the transceivers113and123illustrated in the sub-figure (b) ofFIG.1may perform the same function as the aforementioned transceiver illustrated in the sub-figure (a) ofFIG.1. For example, processing chips114and124illustrated in the sub-figure (b) ofFIG.1may include the processors111and121and the memories112and122. The processors111and121and memories112and122illustrated in the sub-figure (b) ofFIG.1may perform the same function as the aforementioned processors111and121and memories112and122illustrated in the sub-figure (a) ofFIG.1. A mobile terminal, a wireless device, a wireless transmit/receive unit (WTRU), a user equipment (UE), a mobile station (MS), a mobile subscriber unit, a user, a user STA, a network, a base station, a Node-B, an access point (AP), a repeater, a router, a relay, a receiving unit, a transmitting unit, a receiving STA, a transmitting STA, a receiving device, a transmitting device, a receiving apparatus, and/or a transmitting apparatus, which are described below, may imply the STAs110and120illustrated in the sub-figure (a)/(b) ofFIG.1, or may imply the processing chips114and124illustrated in the sub-figure (b) ofFIG.1. That is, a technical feature of the present specification may be performed in the STAs110and120illustrated in the sub-figure (a)/(b) ofFIG.1, or may be performed only in the processing chips114and124illustrated in the sub-figure (b) ofFIG.1. For example, a technical feature in which the transmitting STA transmits a control signal may be understood as a technical feature in which a control signal generated in the processors111and121illustrated in the sub-figure (a)/(b) ofFIG.1is transmitted through the transceivers113and123illustrated in the sub-figure (a)/(b) ofFIG.1. Alternatively, the technical feature in which the transmitting STA transmits the control signal may be understood as a technical feature in which the control signal to be transferred to the transceivers113and123is generated in the processing chips114and124illustrated in the sub-figure (b) ofFIG.1. 
For example, a technical feature in which the receiving STA receives the control signal may be understood as a technical feature in which the control signal is received by means of the transceivers113and123illustrated in the sub-figure (a) ofFIG.1. Alternatively, the technical feature in which the receiving STA receives the control signal may be understood as the technical feature in which the control signal received in the transceivers113and123illustrated in the sub-figure (a) ofFIG.1is obtained by the processors111and121illustrated in the sub-figure (a) ofFIG.1. Alternatively, the technical feature in which the receiving STA receives the control signal may be understood as the technical feature in which the control signal received in the transceivers113and123illustrated in the sub-figure (b) ofFIG.1is obtained by the processing chips114and124illustrated in the sub-figure (b) ofFIG.1. Referring to the sub-figure (b) ofFIG.1, software codes115and125may be included in the memories112and122. The software codes115and126may include instructions for controlling an operation of the processors111and121. The software codes115and125may be included as various programming languages. The processors111and121or processing chips114and124ofFIG.1may include an application-specific integrated circuit (ASIC), other chipsets, a logic circuit and/or a data processing device. The processor may be an application processor (AP). For example, the processors111and121or processing chips114and124ofFIG.1may include at least one of a digital signal processor (DSP), a central processing unit (CPU), a graphics processing unit (GPU), and a modulator and demodulator (modem). For example, the processors111and121or processing chips114and124ofFIG.1may be SNAPDRAGON™ series of processors made by Qualcomm®, EXYNOS™ series of processors made by Samsung®, A series of processors made by Apple®, HELIO™ series of processors made by MediaTek®, ATOM™ series of processors made by Intel® or processors enhanced from these processors. In the present specification, an uplink may imply a link for communication from a non-AP STA to an SP STA, and an uplink PPDU/packet/signal or the like may be transmitted through the uplink. In addition, in the present specification, a downlink may imply a link for communication from the AP STA to the non-AP STA, and a downlink PPDU/packet/signal or the like may be transmitted through the downlink. FIG.2is a conceptual view illustrating the structure of a wireless local area network (WLAN). An upper part ofFIG.2illustrates the structure of an infrastructure basic service set (BSS) of institute of electrical and electronic engineers (IEEE) 802.11. Referring the upper part ofFIG.2, the wireless LAN system may include one or more infrastructure BSSs200and205(hereinafter, referred to as BSS). The BSSs200and205as a set of an AP and a STA such as an access point (AP)225and a station (STA1)200-1which are successfully synchronized to communicate with each other are not concepts indicating a specific region. The BSS205may include one or more STAs205-1and205-2which may be joined to one AP230. The BSS may include at least one STA, APs providing a distribution service, and a distribution system (DS)210connecting multiple APs. The distribution system210may implement an extended service set (ESS)240extended by connecting the multiple BSSs200and205. The ESS240may be used as a term indicating one network configured by connecting one or more APs225or230through the distribution system210. 
The APs included in one ESS240may have the same service set identification (SSID). A portal220may serve as a bridge which connects the wireless LAN network (IEEE 802.11) and another network (e.g., 802.X). In the BSS illustrated in the upper part ofFIG.2, a network between the APs225and230and a network between the APs225and230and the STAs200-1,205-1, and205-2may be implemented. However, a network may also be configured between the STAs without the APs225and230to perform communication. A network in which the communication is performed by configuring the network even between the STAs without the APs225and230is defined as an Ad-Hoc network or an independent basic service set (IBSS). A lower part ofFIG.2is a conceptual view illustrating the IBSS. Referring to the lower part ofFIG.2, the IBSS is a BSS that operates in an Ad-Hoc mode. Since the IBSS does not include the access point (AP), a centralized management entity that performs a management function at the center does not exist. That is, in the IBSS, STAs 250-1, 250-2, 250-3, 255-4, and 255-5 are managed in a distributed manner. In the IBSS, all STAs 250-1, 250-2, 250-3, 255-4, and 255-5 may be constituted by movable STAs and are not permitted to access the DS to constitute a self-contained network. FIG.3illustrates a general link setup process. In S310, a STA may perform a network discovery operation. The network discovery operation may include a scanning operation of the STA. That is, to access a network, the STA needs to discover a participating network. The STA needs to identify a compatible network before participating in a wireless network, and a process of identifying a network present in a particular area is referred to as scanning. Scanning methods include active scanning and passive scanning. FIG.3illustrates a network discovery operation including an active scanning process. In active scanning, a STA performing scanning transmits a probe request frame and waits for a response to the probe request frame in order to identify which AP is present nearby while moving between channels. A responder transmits a probe response frame as a response to the probe request frame to the STA having transmitted the probe request frame. Here, the responder may be a STA that transmits the last beacon frame in a BSS of a channel being scanned. In the BSS, since an AP transmits a beacon frame, the AP is the responder. In an IBSS, since STAs in the IBSS transmit a beacon frame in turns, the responder is not fixed. For example, when the STA transmits a probe request frame via channel 1 and receives a probe response frame via channel 1, the STA may store BSS-related information included in the received probe response frame, may move to the next channel (e.g., channel 2), and may perform scanning (e.g., transmits a probe request and receives a probe response via channel 2) by the same method. Although not shown inFIG.3, scanning may be performed by a passive scanning method. In passive scanning, a STA performing scanning may wait for a beacon frame while moving between channels. A beacon frame is one of management frames in IEEE 802.11 and is periodically transmitted to indicate the presence of a wireless network and to enable the STA performing scanning to find the wireless network and to participate in the wireless network. In a BSS, an AP serves to periodically transmit a beacon frame. In an IBSS, STAs in the IBSS transmit a beacon frame in turns.
Upon receiving the beacon frame, the STA performing scanning stores information related to a BSS included in the beacon frame and records beacon frame information in each channel while moving to another channel. The STA having received the beacon frame may store BSS-related information included in the received beacon frame, may move to the next channel, and may perform scanning in the next channel by the same method. After discovering the network, the STA may perform an authentication process in S320. The authentication process may be referred to as a first authentication process to be clearly distinguished from the following security setup operation in S340. The authentication process in S320may include a process in which the STA transmits an authentication request frame to the AP and the AP transmits an authentication response frame to the STA in response. The authentication frames used for an authentication request/response are management frames. The authentication frames may include information related to an authentication algorithm number, an authentication transaction sequence number, a status code, a challenge text, a robust security network (RSN), and a finite cyclic group. The STA may transmit the authentication request frame to the AP. The AP may determine whether to allow the authentication of the STA based on the information included in the received authentication request frame. The AP may provide the authentication processing result to the STA via the authentication response frame. When the STA is successfully authenticated, the STA may perform an association process in S330. The association process includes a process in which the STA transmits an association request frame to the AP and the AP transmits an association response frame to the STA in response. The association request frame may include, for example, information related to various capabilities, a beacon listen interval, a service set identifier (SSID), a supported rate, a supported channel, RSN, a mobility domain, a supported operating class, a traffic indication map (TIM) broadcast request, and an interworking service capability. The association response frame may include, for example, information related to various capabilities, a status code, an association ID (AID), a supported rate, an enhanced distributed channel access (EDCA) parameter set, a received channel power indicator (RCPI), a received signal-to-noise indicator (RSNI), a mobility domain, a timeout interval (association comeback time), an overlapping BSS scanning parameter, a TIM broadcast response, and a QoS map. In S340, the STA may perform a security setup process. The security setup process in S340may include a process of setting up a private key through four-way handshaking, for example, through an extensible authentication protocol over LAN (EAPOL) frame. FIG.4illustrates an example of a PPDU used in an IEEE standard. As illustrated, various types of PHY protocol data units (PPDUs) are used in IEEE a/g/n/ac standards. Specifically, an LTF and a STF include a training signal, a SIG-A and a SIG-B include control information for a receiving STA, and a data field includes user data corresponding to a PSDU (MAC PDU/aggregated MAC PDU). FIG.4also includes an example of an HE PPDU according to IEEE 802.11ax. The HE PPDU according toFIG.4is an illustrative PPDU for multiple users. An HE-SIG-B may be included only in a PPDU for multiple users, and an HE-SIG-B may be omitted in a PPDU for a single user. 
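The link setup flow of S310 to S340 described above (scanning, authentication, association, and security setup) can be summarized with a short sketch. The sketch below is illustrative only; the helper functions (send_probe_request, send_auth_request, send_assoc_request, run_4way_handshake) are hypothetical stand-ins for the actual frame exchanges and are not part of any real driver API.

```python
# Illustrative sketch of the S310-S340 link setup flow described above.
# The four helper functions are hypothetical stand-ins for real frame exchanges.

def send_probe_request(channel):          # S310: active scanning on one channel
    return [{"bssid": "AP-on-ch%d" % channel, "channel": channel, "rssi": -40 - channel}]

def send_auth_request(bss):               # S320: authentication request/response
    return True                           # assume the AP accepts the authentication

def send_assoc_request(bss):              # S330: association request/response
    return {"aid": 1}                     # assume the AP assigns AID 1

def run_4way_handshake(bss):              # S340: security setup (EAPOL 4-way handshake)
    return True

def link_setup(channels):
    discovered = []
    for ch in channels:                   # probe each channel and collect BSS information
        discovered.extend(send_probe_request(ch))
    if not discovered:
        return None
    bss = max(discovered, key=lambda b: b["rssi"])   # one possible selection policy
    if not send_auth_request(bss):
        return None
    assoc = send_assoc_request(bss)
    if assoc is None:
        return None
    run_4way_handshake(bss)
    return assoc["aid"]                   # association ID assigned by the AP

print(link_setup([1, 6, 11]))             # e.g. scans channels 1, 6 and 11, then associates
```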
As illustrated inFIG.4, the HE-PPDU for multiple users (MUs) may include a legacy-short training field (L-STF), a legacy-long training field (L-LTF), a legacy-signal (L-SIG), a high efficiency-signal A (HE-SIG A), a high efficiency-signal-B (HE-SIG B), a high efficiency-short training field (HE-STF), a high efficiency-long training field (HE-LTF), a data field (alternatively, an MAC payload), and a packet extension (PE) field. The respective fields may be transmitted for illustrated time periods (i.e., 4 or 8 μs). Hereinafter, a resource unit (RU) used for a PPDU is described. An RU may include a plurality of subcarriers (or tones). An RU may be used to transmit a signal to a plurality of STAs according to OFDMA. Further, an RU may also be defined to transmit a signal to one STA. An RU may be used for an STF, an LTF, a data field, or the like. FIG.5illustrates a layout of resource units (RUs) used in a band of 20 MHz. As illustrated inFIG.5, resource units (RUs) corresponding to different numbers of tones (i.e., subcarriers) may be used to form some fields of an HE-PPDU. For example, resources may be allocated in illustrated RUs for an HE-STF, an HE-LTF, and a data field. As illustrated in the uppermost part ofFIG.5, a 26-unit (i.e., a unit corresponding to 26 tones) may be disposed. Six tones may be used for a guard band in the leftmost band of the 20 MHz band, and five tones may be used for a guard band in the rightmost band of the 20 MHz band. Further, seven DC tones may be inserted in a center band, that is, a DC band, and a 26-unit corresponding to 13 tones on each of the left and right sides of the DC band may be disposed. A 26-unit, a 52-unit, and a 106-unit may be allocated to other bands. Each unit may be allocated for a receiving STA, that is, a user. The layout of the RUs inFIG.5may be used not only for a multiple users (MUs) but also for a single user (SU), in which case one 242-unit may be used and three DC tones may be inserted as illustrated in the lowermost part ofFIG.5. AlthoughFIG.5proposes RUs having various sizes, that is, a 26-RU, a 52-RU, a 106-RU, and a 242-RU, specific sizes of RUs may be extended or increased. Therefore, the present embodiment is not limited to the specific size of each RU (i.e., the number of corresponding tones). FIG.6illustrates a layout of RUs used in a band of 40 MHz. Similarly toFIG.5in which RUs having various sizes are used, a 26-RU, a 52-RU, a 106-RU, a 242-RU, a 484-RU, and the like may be used in an example ofFIG.6. Further, five DC tones may be inserted in a center frequency, 12 tones may be used for a guard band in the leftmost band of the 40 MHz band, and 11 tones may be used for a guard band in the rightmost band of the 40 MHz band. As illustrated inFIG.6, when the layout of the RUs is used for a single user, a 484-RU may be used. The specific number of RUs may be changed similarly toFIG.5. FIG.7illustrates a layout of RUs used in a band of 80 MHz. Similarly toFIG.5andFIG.6in which RUs having various sizes are used, a 26-RU, a 52-RU, a 106-RU, a 242-RU, a 484-RU, a 996-RU, and the like may be used in an example ofFIG.7. Further, seven DC tones may be inserted in the center frequency, 12 tones may be used for a guard band in the leftmost band of the 80 MHz band, and 11 tones may be used for a guard band in the rightmost band of the 80 MHz band. In addition, a 26-RU corresponding to 13 tones on each of the left and right sides of the DC band may be used. 
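As a quick reference, the guard-band and DC-tone counts stated above for the OFDMA layouts ofFIG.5toFIG.7can be collected in one place. The snippet below only restates those numbers; the dictionary layout itself is an illustrative choice and not a structure defined by the standard.

```python
# Guard/DC tone counts for the OFDMA RU layouts of FIG. 5 to FIG. 7, as stated in the text.
OFDMA_TONE_PLAN = {
    20: {"left_guard": 6,  "right_guard": 5,  "dc": 7},   # layout with up to nine 26-RUs
    40: {"left_guard": 12, "right_guard": 11, "dc": 5},
    80: {"left_guard": 12, "right_guard": 11, "dc": 7},
}

def overhead_tones(bandwidth_mhz: int) -> int:
    """Return the number of guard + DC tones for a given OFDMA layout."""
    plan = OFDMA_TONE_PLAN[bandwidth_mhz]
    return plan["left_guard"] + plan["right_guard"] + plan["dc"]

print(overhead_tones(20))  # 18 tones of guard band and DC for the 20 MHz OFDMA layout
```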
As illustrated inFIG.7, when the layout of the RUs is used for a single user, a 996-RU may be used, in which case five DC tones may be inserted. The RU described in the present specification may be used in uplink (UL) communication and downlink (DL) communication. For example, when UL-MU communication which is solicited by a trigger frame is performed, a transmitting STA (e.g., an AP) may allocate a first RU (e.g., 26/52/106/242-RU, etc.) to a first STA through the trigger frame, and may allocate a second RU (e.g., 26/52/106/242-RU, etc.) to a second STA. Thereafter, the first STA may transmit a first trigger-based PPDU based on the first RU, and the second STA may transmit a second trigger-based PPDU based on the second RU. The first/second trigger-based PPDU is transmitted to the AP at the same (or overlapped) time period. For example, when a DL MU PPDU is configured, the transmitting STA (e.g., AP) may allocate the first RU (e.g., 26/52/106/242-RU, etc.) to the first STA, and may allocate the second RU (e.g., 26/52/106/242-RU, etc.) to the second STA. That is, the transmitting STA (e.g., AP) may transmit HE-STF, HE-LTF, and Data fields for the first STA through the first RU in one MU PPDU, and may transmit HE-STF, HE-LTF, and Data fields for the second STA through the second RU. Information related to a layout of the RU may be signaled through HE-SIG-B. FIG.8illustrates a structure of an HE-SIG-B field. As illustrated, an HE-SIG-B field810includes a common field820and a user-specific field830. The common field820may include information commonly applied to all users (i.e., user STAs) which receive SIG-B. The user-specific field830may be called a user-specific control field. When the SIG-B is transferred to a plurality of users, the user-specific field830may be applied to only any one of the plurality of users. As illustrated inFIG.8, the common field820and the user-specific field830may be separately encoded. The common field820may include RU allocation information of N*8 bits. For example, the RU allocation information may include information related to a location of an RU. For example, when a 20 MHz channel is used as shown inFIG.5, the RU allocation information may include information related to a specific frequency band to which a specific RU (26-RU/52-RU/106-RU) is arranged. An example of a case in which the RU allocation information consists of 8 bits is as follows.

TABLE 1
8 bits indices (B7 B6 B5 B4 B3 B2 B1 B0): RU sizes at positions #1 to #9 (number of entries)
00000000: 26 26 26 26 26 26 26 26 26 (1)
00000001: 26 26 26 26 26 26 26 52 (1)
00000010: 26 26 26 26 26 52 26 26 (1)
00000011: 26 26 26 26 26 52 52 (1)
00000100: 26 26 52 26 26 26 26 26 (1)
00000101: 26 26 52 26 26 26 52 (1)
00000110: 26 26 52 26 52 26 26 (1)
00000111: 26 26 52 26 52 52 (1)
00001000: 52 26 26 26 26 26 26 26 (1)

As shown in the example ofFIG.5, up to nine 26-RUs may be allocated to the 20 MHz channel. When the RU allocation information of the common field820is set to “00000000” as shown in Table 1, the nine 26-RUs may be allocated to a corresponding channel (i.e., 20 MHz). In addition, when the RU allocation information of the common field820is set to “00000001” as shown in Table 1, seven 26-RUs and one 52-RU are arranged in a corresponding channel. That is, in the example ofFIG.5, the 52-RU may be allocated to the rightmost side, and the seven 26-RUs may be allocated to the left thereof. The example of Table 1 shows only some of the RU locations that the RU allocation information is capable of indicating. For example, the RU allocation information may include an example of Table 2 below.
TABLE 2
8 bits indices (B7 B6 B5 B4 B3 B2 B1 B0): RU sizes at positions #1 to #9 (number of entries)
01000y2y1y0: 106 26 26 26 26 26 (8)
01001y2y1y0: 106 26 26 26 52 (8)

“01000y2y1y0” relates to an example in which a 106-RU is allocated to the leftmost side of the 20 MHz channel, and five 26-RUs are allocated to the right side thereof. In this case, a plurality of STAs (e.g., user-STAs) may be allocated to the 106-RU, based on a MU-MIMO scheme. Specifically, up to 8 STAs (e.g., user-STAs) may be allocated to the 106-RU, and the number of STAs (e.g., user-STAs) allocated to the 106-RU is determined based on 3-bit information (y2y1y0). For example, when the 3-bit information (y2y1y0) is set to N, the number of STAs (e.g., user-STAs) allocated to the 106-RU based on the MU-MIMO scheme may be N+1. In general, a plurality of STAs (e.g., user STAs) different from each other may be allocated to a plurality of RUs. However, the plurality of STAs (e.g., user STAs) may be allocated to one or more RUs having at least a specific size (e.g., 106 subcarriers), based on the MU-MIMO scheme. As shown inFIG.8, the user-specific field830may include a plurality of user fields. As described above, the number of STAs (e.g., user STAs) allocated to a specific channel may be determined based on the RU allocation information of the common field820. For example, when the RU allocation information of the common field820is “00000000”, one user STA may be allocated to each of nine 26-RUs (e.g., nine user STAs may be allocated). That is, up to 9 user STAs may be allocated to a specific channel through an OFDMA scheme. In other words, up to 9 user STAs may be allocated to a specific channel through a non-MU-MIMO scheme. For example, when RU allocation is set to “01000y2y1y0”, a plurality of STAs may be allocated to the 106-RU arranged at the leftmost side through the MU-MIMO scheme, and five user STAs may be allocated to five 26-RUs arranged to the right side thereof through the non-MU-MIMO scheme. This case is specified through an example ofFIG.9. FIG.9illustrates an example in which a plurality of user STAs are allocated to the same RU through a MU-MIMO scheme. For example, when RU allocation is set to “01000010” as shown inFIG.9, a 106-RU may be allocated to the leftmost side of a specific channel, and five 26-RUs may be allocated to the right side thereof. In addition, three user STAs may be allocated to the 106-RU through the MU-MIMO scheme. As a result, since eight user STAs are allocated, the user-specific field830of HE-SIG-B may include eight user fields. The eight user fields may be expressed in the order shown inFIG.9. In addition, as shown inFIG.8, two user fields may be implemented with one user block field. The user fields shown inFIG.8andFIG.9may be configured based on two formats. That is, a user field related to a MU-MIMO scheme may be configured in a first format, and a user field related to a non-MU-MIMO scheme may be configured in a second format. Referring to the example ofFIG.9, a user field1to a user field3may be based on the first format, and a user field4to a user field8may be based on the second format. The first format or the second format may include bit information of the same length (e.g., 21 bits). Each user field may have the same size (e.g., 21 bits). For example, the user field of the first format (the format of the MU-MIMO scheme) may be configured as follows. For example, a first bit (i.e., B0-B10) in the user field (i.e., 21 bits) may include identification information (e.g., STA-ID, partial AID, etc.)
of a user STA to which a corresponding user field is allocated. In addition, a second bit (i.e., B11-B14) in the user field (i.e., 21 bits) may include information related to a spatial configuration. Specifically, an example of the second bit (i.e., B11-B14) may be as shown in Table 3 and Table 4 below.

TABLE 3
Nuser | B3...B0 | NSTS[1] to NSTS[8] | Total NSTS | Number of entries
2 | 0000-0011 | NSTS[1]=1-4, NSTS[2]=1 | 2-5 | 10
2 | 0100-0110 | NSTS[1]=2-4, NSTS[2]=2 | 4-6 |
2 | 0111-1000 | NSTS[1]=3-4, NSTS[2]=3 | 6-7 |
2 | 1001 | NSTS[1]=4, NSTS[2]=4 | 8 |
3 | 0000-0011 | NSTS[1]=1-4, NSTS[2]=1, NSTS[3]=1 | 3-6 | 13
3 | 0100-0110 | NSTS[1]=2-4, NSTS[2]=2, NSTS[3]=1 | 5-7 |
3 | 0111-1000 | NSTS[1]=3-4, NSTS[2]=3, NSTS[3]=1 | 7-8 |
3 | 1001-1011 | NSTS[1]=2-4, NSTS[2]=2, NSTS[3]=2 | 6-8 |
3 | 1100 | NSTS[1]=3, NSTS[2]=3, NSTS[3]=2 | 8 |
4 | 0000-0011 | NSTS[1]=1-4, NSTS[2]=1, NSTS[3]=1, NSTS[4]=1 | 4-7 | 11
4 | 0100-0110 | NSTS[1]=2-4, NSTS[2]=2, NSTS[3]=1, NSTS[4]=1 | 6-8 |
4 | 0111 | NSTS[1]=3, NSTS[2]=3, NSTS[3]=1, NSTS[4]=1 | 8 |
4 | 1000-1001 | NSTS[1]=2-3, NSTS[2]=2, NSTS[3]=2, NSTS[4]=1 | 7-8 |
4 | 1010 | NSTS[1]=2, NSTS[2]=2, NSTS[3]=2, NSTS[4]=2 | 8 |

TABLE 4
Nuser | B3...B0 | NSTS[1] to NSTS[8] | Total NSTS | Number of entries
5 | 0000-0011 | NSTS[1]=1-4, NSTS[2]=1, NSTS[3]=1, NSTS[4]=1, NSTS[5]=1 | 5-8 | 7
5 | 0100-0101 | NSTS[1]=2-3, NSTS[2]=2, NSTS[3]=1, NSTS[4]=1, NSTS[5]=1 | 7-8 |
5 | 0110 | NSTS[1]=2, NSTS[2]=2, NSTS[3]=2, NSTS[4]=1, NSTS[5]=1 | 8 |
6 | 0000-0010 | NSTS[1]=1-3, NSTS[2]=1, NSTS[3]=1, NSTS[4]=1, NSTS[5]=1, NSTS[6]=1 | 6-8 | 4
6 | 0011 | NSTS[1]=2, NSTS[2]=2, NSTS[3]=1, NSTS[4]=1, NSTS[5]=1, NSTS[6]=1 | 8 |
7 | 0000-0001 | NSTS[1]=1-2, NSTS[2]=1, NSTS[3]=1, NSTS[4]=1, NSTS[5]=1, NSTS[6]=1, NSTS[7]=1 | 7-8 | 2
8 | 0000 | NSTS[1]=1, NSTS[2]=1, NSTS[3]=1, NSTS[4]=1, NSTS[5]=1, NSTS[6]=1, NSTS[7]=1, NSTS[8]=1 | 8 | 1

As shown in Table 3 and/or Table 4, the second bit (e.g., B11-B14) may include information related to the number of spatial streams allocated to the plurality of user STAs which are allocated based on the MU-MIMO scheme. For example, when three user STAs are allocated to the 106-RU based on the MU-MIMO scheme as shown inFIG.9, N_user is set to “3”. Therefore, values of N_STS[1], N_STS[2], and N_STS[3] may be determined as shown in Table 3. For example, when a value of the second bit (B11-B14) is “0011”, it may be set to N_STS[1]=4, N_STS[2]=1, N_STS[3]=1. That is, in the example ofFIG.9, four spatial streams may be allocated to the user field1, one spatial stream may be allocated to the user field2, and one spatial stream may be allocated to the user field3. As shown in the example of Table 3 and/or Table 4, information (i.e., the second bit, B11-B14) related to the number of spatial streams for the user STA may consist of 4 bits. In addition, the information (i.e., the second bit, B11-B14) on the number of spatial streams for the user STA may support up to eight spatial streams. In addition, the information (i.e., the second bit, B11-B14) on the number of spatial streams for the user STA may support up to four spatial streams for one user STA. In addition, a third bit (i.e., B15-18) in the user field (i.e., 21 bits) may include modulation and coding scheme (MCS) information. The MCS information may be applied to a data field in a PPDU including corresponding SIG-B. An MCS, MCS information, an MCS index, an MCS field, or the like used in the present specification may be indicated by an index value. For example, the MCS information may be indicated by an index 0 to an index 11. The MCS information may include information related to a constellation modulation type (e.g., BPSK, QPSK, 16-QAM, 64-QAM, 256-QAM, 1024-QAM, etc.) and information related to a coding rate (e.g., 1/2, 2/3, 3/4, 5/6, etc.). Information related to a channel coding type (e.g., BCC or LDPC) may be excluded in the MCS information. In addition, a fourth bit (i.e., B19) in the user field (i.e., 21 bits) may be a reserved field. In addition, a fifth bit (i.e., B20) in the user field (i.e., 21 bits) may include information related to a coding type (e.g., BCC or LDPC). That is, the fifth bit (i.e., B20) may include information related to a type (e.g., BCC or LDPC) of channel coding applied to the data field in the PPDU including the corresponding SIG-B. The aforementioned example relates to the user field of the first format (the format of the MU-MIMO scheme). An example of the user field of the second format (the format of the non-MU-MIMO scheme) is as follows.
A first bit (e.g., B0-B10) in the user field of the second format may include identification information of a user STA. In addition, a second bit (e.g., B11-B13) in the user field of the second format may include information related to the number of spatial streams applied to a corresponding RU. In addition, a third bit (e.g., B14) in the user field of the second format may include information related to whether a beamforming steering matrix is applied. A fourth bit (e.g., B15-B18) in the user field of the second format may include modulation and coding scheme (MCS) information. In addition, a fifth bit (e.g., B19) in the user field of the second format may include information related to whether dual carrier modulation (DCM) is applied. In addition, a sixth bit (i.e., B20) in the user field of the second format may include information related to a coding type (e.g., BCC or LDPC). FIG.10illustrates an operation based on UL-MU. As illustrated, a transmitting STA (e.g., an AP) may perform channel access through contending (e.g., a backoff operation), and may transmit a trigger frame1030. That is, the transmitting STA may transmit a PPDU including the trigger frame1030. Upon receiving the PPDU including the trigger frame, a trigger-based (TB) PPDU is transmitted after a delay corresponding to SIFS. TB PPDUs1041and1042may be transmitted at the same time period, and may be transmitted from a plurality of STAs (e.g., user STAs) having AIDs indicated in the trigger frame1030. An ACK frame1050for the TB PPDU may be implemented in various forms. A specific feature of the trigger frame is described with reference toFIG.11toFIG.13. Even if UL-MU communication is used, an orthogonal frequency division multiple access (OFDMA) scheme or a MU MIMO scheme may be used, and the OFDMA and MU-MIMO schemes may be simultaneously used. FIG.11illustrates an example of a trigger frame. The trigger frame ofFIG.11allocates a resource for uplink multiple-user (MU) transmission, and may be transmitted, for example, from an AP. The trigger frame may be configured of a MAC frame, and may be included in a PPDU. Each field shown inFIG.11may be partially omitted, and another field may be added. In addition, a length of each field may be changed to be different from that shown in the figure. A frame control field1110ofFIG.11may include information related to a MAC protocol version and extra additional control information. A duration field1120may include time information for NAV configuration or information related to an identifier (e.g., AID) of a STA. In addition, an RA field1130may include address information of a receiving STA of a corresponding trigger frame, and may be optionally omitted. A TA field1140may include address information of a STA (e.g., an AP) which transmits the corresponding trigger frame. A common information field1150includes common control information applied to the receiving STA which receives the corresponding trigger frame. For example, a field indicating a length of an L-SIG field of an uplink PPDU transmitted in response to the corresponding trigger frame or information for controlling content of a SIG-A field (i.e., HE-SIG-A field) of the uplink PPDU transmitted in response to the corresponding trigger frame may be included. In addition, as common control information, information related to a length of a CP of the uplink PPDU transmitted in response to the corresponding trigger frame or information related to a length of an LTF field may be included. 
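Returning to the two 21-bit user field layouts described above, the bit positions can be illustrated with a small parser. This is only a sketch of the field boundaries given in the text (B0-B10 for the STA identifier, B11-B14 or B11-B13 for the stream information, B15-B18 for the MCS, and so on); the assumption that B0 is the least significant bit of the integer, and all function and field names, are illustrative choices rather than definitions taken from the standard.

```python
# Sketch of the two 21-bit user field layouts described above.
# Assumption: bit B0 is the least significant bit of the 21-bit integer.

def bits(value: int, lo: int, hi: int) -> int:
    """Extract bits B<lo>..B<hi> (inclusive) from an integer."""
    width = hi - lo + 1
    return (value >> lo) & ((1 << width) - 1)

def parse_user_field(field: int, mu_mimo: bool) -> dict:
    parsed = {"sta_id": bits(field, 0, 10)}          # B0-B10: STA-ID / partial AID
    if mu_mimo:                                      # first format (MU-MIMO allocation)
        parsed.update({
            "spatial_config": bits(field, 11, 14),   # B11-B14: spatial configuration (Table 3/4)
            "mcs": bits(field, 15, 18),              # B15-B18: MCS index
            "reserved": bits(field, 19, 19),         # B19: reserved
            "coding": bits(field, 20, 20),           # B20: BCC or LDPC
        })
    else:                                            # second format (non-MU-MIMO allocation)
        parsed.update({
            "nsts": bits(field, 11, 13),             # B11-B13: number of spatial streams
            "beamforming": bits(field, 14, 14),      # B14: beamforming steering matrix flag
            "mcs": bits(field, 15, 18),              # B15-B18: MCS index
            "dcm": bits(field, 19, 19),              # B19: dual carrier modulation flag
            "coding": bits(field, 20, 20),           # B20: BCC or LDPC
        })
    return parsed
```

In the example ofFIG.9, parse_user_field(value, mu_mimo=True) would apply to the user field1to the user field3, and parse_user_field(value, mu_mimo=False) to the user field4to the user field8.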
In addition, per user information fields1160#1to1160#N corresponding to the number of receiving STAs which receive the trigger frame ofFIG.11are preferably included. The per user information field may also be called an “allocation field”. In addition, the trigger frame ofFIG.11may include a padding field1170and a frame check sequence field1180. Each of the per user information fields1160#1to1160#N shown inFIG.11may include a plurality of subfields. FIG.12illustrates an example of a common information field of a trigger frame. A subfield ofFIG.12may be partially omitted, and an extra subfield may be added. In addition, a length of each subfield illustrated may be changed. A length field1210illustrated has the same value as a length field of an L-SIG field of an uplink PPDU transmitted in response to a corresponding trigger frame, and a length field of the L-SIG field of the uplink PPDU indicates a length of the uplink PPDU. As a result, the length field1210of the trigger frame may be used to indicate the length of the corresponding uplink PPDU. In addition, a cascade identifier field1220indicates whether a cascade operation is performed. The cascade operation implies that downlink MU transmission and uplink MU transmission are performed together in the same TXOP. That is, it implies that downlink MU transmission is performed and thereafter uplink MU transmission is performed after a pre-set time (e.g., SIFS). During the cascade operation, only one transmitting device (e.g., AP) may perform downlink communication, and a plurality of transmitting devices (e.g., non-APs) may perform uplink communication. A CS request field1230indicates whether a wireless medium state or a NAV or the like is necessarily considered in a situation where a receiving device which has received a corresponding trigger frame transmits a corresponding uplink PPDU. An HE-SIG-A information field1240may include information for controlling content of a SIG-A field (i.e., HE-SIG-A field) of the uplink PPDU in response to the corresponding trigger frame. A CP and LTF type field1250may include information related to a CP length and LTF length of the uplink PPDU transmitted in response to the corresponding trigger frame. A trigger type field1260may indicate a purpose of using the corresponding trigger frame, for example, typical triggering, triggering for beamforming, a request for block ACK/NACK, or the like. It may be assumed that the trigger type field1260of the trigger frame in the present specification indicates a trigger frame of a basic type for typical triggering. For example, the trigger frame of the basic type may be referred to as a basic trigger frame. FIG.13illustrates an example of a subfield included in a per user information field. A user information field1300ofFIG.13may be understood as any one of the per user information fields1160#1to1160#N mentioned above with reference toFIG.11. A subfield included in the user information field1300ofFIG.13may be partially omitted, and an extra subfield may be added. In addition, a length of each subfield illustrated may be changed. A user identifier field1310ofFIG.13indicates an identifier of a STA (i.e., receiving STA) corresponding to per user information. An example of the identifier may be the entirety or part of an association identifier (AID) value of the receiving STA. In addition, an RU allocation field1320may be included. 
That is, when the receiving STA identified through the user identifier field1310transmits a TB PPDU in response to the trigger frame, the TB PPDU is transmitted through an RU indicated by the RU allocation field1320. In this case, the RU indicated by the RU allocation field1320may be an RU shown inFIG.5,FIG.6, andFIG.7. The subfield ofFIG.13may include a coding type field1330. The coding type field1330may indicate a coding type of the TB PPDU. For example, when BCC coding is applied to the TB PPDU, the coding type field1330may be set to ‘1’, and when LDPC coding is applied, the coding type field1330may be set to ‘0’. In addition, the subfield ofFIG.13may include an MCS field1340. The MCS field1340may indicate an MCS scheme applied to the TB PPDU. Hereinafter, a UL OFDMA-based random access (UORA) scheme will be described. FIG.14describes a technical feature of the UORA scheme. A transmitting STA (e.g., an AP) may allocate six RU resources through a trigger frame as shown inFIG.14. Specifically, the AP may allocate a 1st RU resource (AID 0, RU 1), a 2nd RU resource (AID 0, RU 2), a 3rd RU resource (AID 0, RU 3), a 4th RU resource (AID 2045, RU 4), a 5th RU resource (AID 2045, RU 5), and a 6th RU resource (AID 3, RU 6). Information related to the AID 0, AID 3, or AID 2045 may be included, for example, in the user identifier field1310ofFIG.13. Information related to the RU 1 to RU 6 may be included, for example, in the RU allocation field1320ofFIG.13. AID=0 may imply a UORA resource for an associated STA, and AID=2045 may imply a UORA resource for an un-associated STA. Accordingly, the 1st to 3rd RU resources ofFIG.14may be used as a UORA resource for the associated STA, the 4th and 5th RU resources ofFIG.14may be used as a UORA resource for the un-associated STA, and the 6th RU resource ofFIG.14may be used as a typical resource for UL MU. In the example ofFIG.14, an OFDMA random access backoff (OBO) of a STA1 is decreased to 0, and the STA1 randomly selects the 2nd RU resource (AID 0, RU 2). In addition, since an OBO counter of a STA2/3 is greater than 0, an uplink resource is not allocated to the STA2/3. In addition, regarding a STA4 inFIG.14, since an AID (e.g., AID=3) of the STA4 is included in a trigger frame, a resource of the RU 6 is allocated without backoff. Specifically, since the STA1 ofFIG.14is an associated STA, the total number of eligible RA RUs for the STA1 is 3 (RU 1, RU 2, and RU 3), and thus the STA1 decreases an OBO counter by 3 so that the OBO counter becomes 0. In addition, since the STA2 ofFIG.14is an associated STA, the total number of eligible RA RUs for the STA2 is 3 (RU 1, RU 2, and RU 3), and thus the STA2 decreases the OBO counter by 3 but the OBO counter is greater than 0. In addition, since the STA3 ofFIG.14is an un-associated STA, the total number of eligible RA RUs for the STA3 is 2 (RU 4, RU 5), and thus the STA3 decreases the OBO counter by 2 but the OBO counter is greater than 0. FIG.15illustrates an example of a channel used/supported/defined within a 2.4 GHz band. The 2.4 GHz band may be called in other terms such as a first band. In addition, the 2.4 GHz band may imply a frequency domain in which channels of which a center frequency is close to 2.4 GHz (e.g., channels of which a center frequency is located within 2.4 to 2.5 GHz) are used/supported/defined.
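Looking back at the UORA example ofFIG.14, the OBO handling can be sketched as follows. The sketch assumes only the behavior described above (the OBO counter is decreased by the number of eligible RA RUs in the trigger frame, and an RA RU is chosen at random once the counter reaches 0); the constant, function, and parameter names are illustrative.

```python
import random

# Sketch of the UORA OBO procedure described for FIG. 14.
ASSOCIATED_RA_AID = 0        # RA RU advertised for associated STAs
UNASSOCIATED_RA_AID = 2045   # RA RU advertised for un-associated STAs

def uora_select_ru(obo_counter: int, associated: bool, trigger_rus: list) -> tuple:
    """Return (new_obo_counter, selected_ru or None) for one received trigger frame."""
    wanted_aid = ASSOCIATED_RA_AID if associated else UNASSOCIATED_RA_AID
    eligible = [ru for aid, ru in trigger_rus if aid == wanted_aid]
    obo_counter -= len(eligible)              # decrease OBO by the number of eligible RA RUs
    if obo_counter <= 0 and eligible:
        return 0, random.choice(eligible)     # counter reached 0: pick one RA RU at random
    return obo_counter, None                  # otherwise wait for the next trigger frame

# The six RU resources of FIG. 14: (AID, RU index)
trigger_rus = [(0, 1), (0, 2), (0, 3), (2045, 4), (2045, 5), (3, 6)]
print(uora_select_ru(3, True, trigger_rus))   # e.g. STA1: OBO of 3 reaches 0, one of RU 1-3 is chosen
```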
A plurality of 20 MHz channels may be included in the 2.4 GHz band. The 20 MHz channels within the 2.4 GHz band may have a plurality of channel indices (e.g., an index 1 to an index 14). For example, a center frequency of a 20 MHz channel to which a channel index 1 is allocated may be 2.412 GHz, a center frequency of a 20 MHz channel to which a channel index 2 is allocated may be 2.417 GHz, and a center frequency of a 20 MHz channel to which a channel index N is allocated may be (2.407+0.005*N) GHz. The channel index may be called in various terms such as a channel number or the like. Specific numerical values of the channel index and center frequency may be changed. FIG.15exemplifies 4 channels within a 2.4 GHz band. Each of 1st to 4th frequency domains1510to1540shown herein may include one channel. For example, the 1st frequency domain1510may include a channel 1 (a 20 MHz channel having an index 1). In this case, a center frequency of the channel 1 may be set to 2412 MHz. The 2nd frequency domain1520may include a channel 6. In this case, a center frequency of the channel 6 may be set to 2437 MHz. The 3rd frequency domain1530may include a channel 11. In this case, a center frequency of the channel 11 may be set to 2462 MHz. The 4th frequency domain1540may include a channel 14. In this case, a center frequency of the channel 14 may be set to 2484 MHz. FIG.16illustrates an example of a channel used/supported/defined within a 5 GHz band. The 5 GHz band may be called in other terms such as a second band or the like. The 5 GHz band may imply a frequency domain in which channels of which a center frequency is greater than or equal to 5 GHz and less than 6 GHz (or less than 5.9 GHz) are used/supported/defined. Alternatively, the 5 GHz band may include a plurality of channels between 4.5 GHz and 5.5 GHz. A specific numerical value shown inFIG.16may be changed. A plurality of channels within the 5 GHz band include an unlicensed national information infrastructure (UNII)-1, a UNII-2, a UNII-3, and an ISM. The UNII-1 may be called UNII Low. The UNII-2 may include frequency domains called UNII Mid and UNII-2 Extended. The UNII-3 may be called UNII-Upper. A plurality of channels may be configured within the 5 GHz band, and a bandwidth of each channel may be variously set to, for example, 20 MHz, 40 MHz, 80 MHz, 160 MHz, or the like. For example, 5170 MHz to 5330 MHz frequency domains/ranges within the UNII-1 and UNII-2 may be divided into eight 20 MHz channels. The 5170 MHz to 5330 MHz frequency domains/ranges may be divided into four channels through a 40 MHz frequency domain. The 5170 MHz to 5330 MHz frequency domains/ranges may be divided into two channels through an 80 MHz frequency domain. Alternatively, the 5170 MHz to 5330 MHz frequency domains/ranges may be divided into one channel through a 160 MHz frequency domain. FIG.17illustrates an example of a channel used/supported/defined within a 6 GHz band. The 6 GHz band may be called in other terms such as a third band or the like. The 6 GHz band may imply a frequency domain in which channels of which a center frequency is greater than or equal to 5.9 GHz are used/supported/defined. A specific numerical value shown inFIG.17may be changed. For example, the 20 MHz channel ofFIG.17may be defined starting from 5.940 GHz. Specifically, among 20 MHz channels ofFIG.17, the leftmost channel may have an index 1 (or a channel index, a channel number, etc.), and 5.945 GHz may be assigned as a center frequency.
That is, a center frequency of a channel of an index N may be determined as (5.940+0.005*N) GHz. Accordingly, an index (or channel number) of the 20 MHz channel ofFIG.17may be 1, 5, 9, 13, 17, 21, 25, 29, 33, 37, 41, 45, 49, 53, 57, 61, 65, 69, 73, 77, 81, 85, 89, 93, 97, 101, 105, 109, 113, 117, 121, 125, 129, 133, 137, 141, 145, 149, 153, 157, 161, 165, 169, 173, 177, 181, 185, 189, 193, 197, 201, 205, 209, 213, 217, 221, 225, 229, 233. In addition, according to the aforementioned (5.940+0.005*N) GHz rule, an index of the 40 MHz channel ofFIG.17may be 3, 11, 19, 27, 35, 43, 51, 59, 67, 75, 83, 91, 99, 107, 115, 123, 131, 139, 147, 155, 163, 171, 179, 187, 195, 203, 211, 219, 227. Although 20, 40, 80, and 160 MHz channels are illustrated in the example ofFIG.17, a 240 MHz channel or a 320 MHz channel may be additionally added. Hereinafter, a PPDU transmitted/received in a STA of the present specification will be described. FIG.18illustrates an example of a PPDU used in the present specification. The PPDU ofFIG.18may be called in various terms such as an EHT PPDU, a TX PPDU, an RX PPDU, a first type or N-th type PPDU, or the like. For example, in the present specification, the PPDU or the EHT PPDU may be called in various terms such as a TX PPDU, a RX PPDU, a first type or N-th type PPDU, or the like. In addition, the EHT PPDU may be used in an EHT system and/or a new WLAN system enhanced from the EHT system. The PPDU ofFIG.18may indicate the entirety or part of a PPDU type used in the EHT system. For example, the example ofFIG.18may be used for both of a single-user (SU) mode and a multi-user (MU) mode. In other words, the PPDU ofFIG.18may be a PPDU for one receiving STA or a plurality of receiving STAs. When the PPDU ofFIG.18is used for a trigger-based (TB) mode, the EHT-SIG ofFIG.18may be omitted. In other words, a STA which has received a trigger frame for uplink-MU (UL-MU) may transmit the PPDU in which the EHT-SIG is omitted in the example ofFIG.18. InFIG.18, an L-STF to an EHT-LTF may be called a preamble or a physical preamble, and may be generated/transmitted/received/obtained/decoded in a physical layer. A subcarrier spacing of the L-STF, L-LTF, L-SIG, RL-SIG, U-SIG, and EHT-SIG fields ofFIG.18may be determined as 312.5 kHz, and a subcarrier spacing of the EHT-STF, EHT-LTF, and Data fields may be determined as 78.125 kHz. That is, a tone index (or subcarrier index) of the L-STF, L-LTF, L-SIG, RL-SIG, U-SIG, and EHT-SIG fields may be expressed in unit of 312.5 kHz, and a tone index (or subcarrier index) of the EHT-STF, EHT-LTF, and Data fields may be expressed in unit of 78.125 kHz. In the PPDU ofFIG.18, the L-LTF and the L-STF may be the same as the conventional fields. The L-SIG field ofFIG.18may include, for example, bit information of 24 bits. For example, the 24-bit information may include a rate field of 4 bits, a reserved bit of 1 bit, a length field of 12 bits, a parity bit of 1 bit, and a tail bit of 6 bits. For example, the length field of 12 bits may include information related to a length or time duration of a PPDU. For example, the length field of 12 bits may be determined based on a type of the PPDU. For example, when the PPDU is a non-HT, HT, VHT PPDU or an EHT PPDU, a value of the length field may be determined as a multiple of 3. For example, when the PPDU is an HE PPDU, the value of the length field may be determined as “a multiple of 3”+1 or “a multiple of 3”+2.
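The length-field rule just described (a multiple of 3 for non-HT/HT/VHT/EHT PPDUs, and a multiple of 3 plus 1 or 2 for HE PPDUs) can be illustrated with a short check. This is only a sketch of that single rule; a real receiver combines it with the other conditions discussed below (a BPSK first symbol, detection of the RL-SIG, and the U-SIG PHY version identifier).

```python
# Sketch of the L-SIG length-field rule described above.
def length_field_family(l_sig_length: int) -> str:
    """Classify a PPDU family from the L-SIG length field value alone."""
    remainder = l_sig_length % 3
    if remainder == 0:
        # non-HT, HT, VHT, or EHT PPDU: length is a multiple of 3
        return "non-HT/HT/VHT/EHT"
    # HE PPDU: length is a multiple of 3 plus 1 or plus 2
    return "HE"

print(length_field_family(1002))  # 1002 % 3 == 0 -> "non-HT/HT/VHT/EHT"
print(length_field_family(1000))  # 1000 % 3 == 1 -> "HE"
```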
In other words, for the non-HT, HT, VHT PPDU or the EHT PPDU, the value of the length field may be determined as a multiple of 3, and for the HE PPDU, the value of the length field may be determined as “a multiple of 3”+1 or “a multiple of 3”+2. For example, the transmitting STA may apply BCC encoding based on a 1/2 coding rate to the 24-bit information of the L-SIG field. Thereafter, the transmitting STA may obtain a BCC coding bit of 48 bits. BPSK modulation may be applied to the 48-bit coding bit, thereby generating 48 BPSK symbols. The transmitting STA may map the 48 BPSK symbols to positions except for a pilot subcarrier{subcarrier index −21, −7, +7, +21} and a DC subcarrier{subcarrier index 0}. As a result, the 48 BPSK symbols may be mapped to subcarrier indices −26 to −22, −20 to −8, −6 to −1, +1 to +6, +8 to +20, and +22 to +26. The transmitting STA may additionally map a signal of {−1, −1, −1, 1} to a subcarrier index{−28, −27, +27, +28}. The aforementioned signal may be used for channel estimation on a frequency domain corresponding to {−28, −27, +27, +28}. The transmitting STA may generate an RL-SIG generated in the same manner as the L-SIG. BPSK modulation may be applied to the RL-SIG. The receiving STA may know that the RX PPDU is the HE PPDU or the EHT PPDU, based on the presence of the RL-SIG. A universal SIG (U-SIG) may be inserted after the RL-SIG ofFIG.18. The U-SIG may be called in various terms such as a first SIG field, a first SIG, a first type SIG, a control signal, a control signal field, a first (type) control signal, or the like. The U-SIG may include information of N bits, and may include information for identifying a type of the EHT PPDU. For example, the U-SIG may be configured based on two symbols (e.g., two contiguous OFDM symbols). Each symbol (e.g., OFDM symbol) for the U-SIG may have a duration of 4 us. Each symbol of the U-SIG may be used to transmit the 26-bit information. For example, each symbol of the U-SIG may be transmitted/received based on 52 data tones and 4 pilot tones. Through the U-SIG (or U-SIG field), for example, A-bit information (e.g., 52 un-coded bits) may be transmitted. A first symbol of the U-SIG may transmit first X-bit information (e.g., 26 un-coded bits) of the A-bit information, and a second symbol of the U-SIG may transmit the remaining Y-bit information (e.g., 26 un-coded bits) of the A-bit information. For example, the transmitting STA may obtain 26 un-coded bits included in each U-SIG symbol. The transmitting STA may perform convolutional encoding (i.e., BCC encoding) based on a rate of R=1/2 to generate 52-coded bits, and may perform interleaving on the 52-coded bits. The transmitting STA may perform BPSK modulation on the interleaved 52-coded bits to generate 52 BPSK symbols to be allocated to each U-SIG symbol. One U-SIG symbol may be transmitted based on 56 tones (subcarriers) from a subcarrier index −28 to a subcarrier index +28, except for a DC index 0. The 52 BPSK symbols generated by the transmitting STA may be transmitted based on the remaining tones (subcarriers) except for pilot tones, i.e., tones −21, −7, +7, +21. For example, the A-bit information (e.g., 52 un-coded bits) generated by the U-SIG may include a CRC field (e.g., a field having a length of 4 bits) and a tail field (e.g., a field having a length of 6 bits). The CRC field and the tail field may be transmitted through the second symbol of the U-SIG.
The CRC field may be generated based on 26 bits allocated to the first symbol of the U-SIG and the remaining 16 bits except for the CRC/tail fields in the second symbol, and may be generated based on the conventional CRC calculation algorithm. In addition, the tail field may be used to terminate trellis of a convolutional decoder, and may be set to, for example, “000000”. The A-bit information (e.g., 52 un-coded bits) transmitted by the U-SIG (or U-SIG field) may be divided into version-independent bits and version-dependent bits. For example, the version-independent bits may have a fixed or variable size. For example, the version-independent bits may be allocated only to the first symbol of the U-SIG, or the version-independent bits may be allocated to both of the first and second symbols of the U-SIG. For example, the version-independent bits and the version-dependent bits may be called in various terms such as a first control bit, a second control bit, or the like. For example, the version-independent bits of the U-SIG may include a PHY version identifier of 3 bits. For example, the PHY version identifier of 3 bits may include information related to a PHY version of a TX/RX PPDU. For example, a first value of the PHY version identifier of 3 bits may indicate that the TX/RX PPDU is an EHT PPDU. In other words, when the transmitting STA transmits the EHT PPDU, the PHY version identifier of 3 bits may be set to a first value. In other words, the receiving STA may determine that the RX PPDU is the EHT PPDU, based on the PHY version identifier having the first value. For example, the version-independent bits of the U-SIG may include a UL/DL flag field of 1 bit. A first value of the UL/DL flag field of 1 bit relates to UL communication, and a second value of the UL/DL flag field relates to DL communication. For example, the version-independent bits of the U-SIG may include information related to a TXOP length and information related to a BSS color ID. For example, when the EHT PPDU is divided into various types (e.g., various types such as an EHT PPDU related to an SU mode, an EHT PPDU related to a MU mode, an EHT PPDU related to a TB mode, an EHT PPDU related to extended range transmission, or the like), information related to the type of the EHT PPDU may be included in the version-dependent bits of the U-SIG. For example, the U-SIG may include: 1) a bandwidth field including information related to a bandwidth; 2) a field including information related to an MCS scheme applied to EHT-SIG; 3) an indication field including information regarding whether a dual subcarrier modulation (DCM) scheme is applied to EHT-SIG; 4) a field including information related to the number of symbol used for EHT-SIG; 5) a field including information regarding whether the EHT-SIG is generated across a full band; 6) a field including information related to a type of EHT-LTF/STF; and 7) information related to a field indicating an EHT-LTF length and a CP length. Preamble puncturing may be applied to the PPDU ofFIG.18. The preamble puncturing implies that puncturing is applied to part (e.g., a secondary 20 MHz band) of the full band. For example, when an 80 MHz PPDU is transmitted, a STA may apply puncturing to the secondary 20 MHz band out of the 80 MHz band, and may transmit a PPDU only through a primary 20 MHz band and a secondary 40 MHz band. For example, a pattern of the preamble puncturing may be configured in advance. 
For example, when a first puncturing pattern is applied, puncturing may be applied only to the secondary 20 MHz band within the 80 MHz band. For example, when a second puncturing pattern is applied, puncturing may be applied to only any one of two secondary 20 MHz bands included in the secondary 40 MHz band within the 80 MHz band. For example, when a third puncturing pattern is applied, puncturing may be applied to only the secondary 20 MHz band included in the primary 80 MHz band within the 160 MHz band (or 80+80 MHz band). For example, when a fourth puncturing is applied, puncturing may be applied to at least one 20 MHz channel not belonging to a primary 40 MHz band in the presence of the primary 40 MHz band included in the 80 MHz band within the 160 MHz band (or 80+80 MHz band). Information related to the preamble puncturing applied to the PPDU may be included in U-SIG and/or EHT-SIG. For example, a first field of the U-SIG may include information related to a contiguous bandwidth, and second field of the U-SIG may include information related to the preamble puncturing applied to the PPDU. For example, the U-SIG and the EHT-SIG may include the information related to the preamble puncturing, based on the following method. When a bandwidth of the PPDU exceeds 80 MHz, the U-SIG may be configured individually in unit of 80 MHz. For example, when the bandwidth of the PPDU is 160 MHz, the PPDU may include a first U-SIG for a first 80 MHz band and a second U-SIG for a second 80 MHz band. In this case, a first field of the first U-SIG may include information related to a 160 MHz bandwidth, and a second field of the first U-SIG may include information related to a preamble puncturing (i.e., information related to a preamble puncturing pattern) applied to the first 80 MHz band. In addition, a first field of the second U-SIG may include information related to a 160 MHz bandwidth, and a second field of the second U-SIG may include information related to a preamble puncturing (i.e., information related to a preamble puncturing pattern) applied to the second 80 MHz band. Meanwhile, an EHT-SIG contiguous to the first U-SIG may include information related to a preamble puncturing applied to the second 80 MHz band (i.e., information related to a preamble puncturing pattern), and an EHT-SIG contiguous to the second U-SIG may include information related to a preamble puncturing (i.e., information related to a preamble puncturing pattern) applied to the first 80 MHz band. Additionally or alternatively, the U-SIG and the EHT-SIG may include the information related to the preamble puncturing, based on the following method. The U-SIG may include information related to a preamble puncturing (i.e., information related to a preamble puncturing pattern) for all bands. That is, the EHT-SIG may not include the information related to the preamble puncturing, and only the U-SIG may include the information related to the preamble puncturing (i.e., the information related to the preamble puncturing pattern). The U-SIG may be configured in unit of 20 MHz. For example, when an 80 MHz PPDU is configured, the U-SIG may be duplicated. That is, four identical U-SIGs may be included in the 80 MHz PPDU. PPDUs exceeding an 80 MHz bandwidth may include different U-SIGs. The EHT-SIG ofFIG.18may include control information for the receiving STA. The EHT-SIG may be transmitted through at least one symbol, and one symbol may have a length of 4 us. 
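The four example puncturing patterns above can be summarized in code. The sketch below simply enumerates, per pattern, which 20 MHz subchannels are punctured, using descriptive labels of my own (secondary20, primary40, and so on); it is an illustration of the description above, not a definitive encoding from the specification.

```python
# Illustrative summary of the four example preamble puncturing patterns described above.
# Subchannel labels are descriptive only.
PUNCTURING_EXAMPLES = {
    # 80 MHz PPDU: puncture the secondary 20 MHz band
    "pattern_1": {"bandwidth_mhz": 80, "punctured": ["secondary20"]},
    # 80 MHz PPDU: puncture one of the two 20 MHz bands inside the secondary 40 MHz band
    "pattern_2": {"bandwidth_mhz": 80, "punctured": ["one 20 MHz half of secondary40"]},
    # 160 MHz (or 80+80 MHz) PPDU: puncture the secondary 20 MHz band inside the primary 80 MHz band
    "pattern_3": {"bandwidth_mhz": 160, "punctured": ["secondary20 of primary80"]},
    # 160 MHz (or 80+80 MHz) PPDU: puncture at least one 20 MHz channel outside the primary 40 MHz band
    "pattern_4": {"bandwidth_mhz": 160, "punctured": [">=1 x 20 MHz channel not in primary40"]},
}

def punctured_channels(pattern: str) -> list:
    return PUNCTURING_EXAMPLES[pattern]["punctured"]

print(punctured_channels("pattern_1"))  # ['secondary20']
```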
Information related to the number of symbols used for the EHT-SIG may be included in the U-SIG. The EHT-SIG may include a technical feature of the HE-SIG-B described with reference toFIG.8andFIG.9. For example, the EHT-SIG may include a common field and a user-specific field as in the example ofFIG.8. The common field of the EHT-SIG may be omitted, and the number of user-specific fields may be determined based on the number of users. As in the example ofFIG.8, the common field of the EHT-SIG and the user-specific field of the EHT-SIG may be individually coded. One user block field included in the user-specific field may include information for two users, but a last user block field included in the user-specific field may include information for one user. That is, one user block field of the EHT-SIG may include up to two user fields. As in the example ofFIG.9, each user field may be related to MU-MIMO allocation, or may be related to non-MU-MIMO allocation. As in the example ofFIG.8, the common field of the EHT-SIG may include a CRC bit and a tail bit. A length of the CRC bit may be determined as 4 bits. A length of the tail bit may be determined as 6 bits, and may be set to ‘000000’. As in the example ofFIG.8, the common field of the EHT-SIG may include RU allocation information. The RU allocation information may imply information related to a location of an RU to which a plurality of users (i.e., a plurality of receiving STAs) are allocated. The RU allocation information may be configured in unit of 8 bits (or N bits), as in Table 1. The example of Table 5 to Table 7 is an example of 8-bit (or N-bit) information for various RU allocations. An index shown in each table may be modified, and some entries in Table 5 to Table 7 may be omitted, and entries (not shown) may be added. The example of Table 5 to Table 7 relates to information related to a location of an RU allocated to a 20 MHz band. For example, ‘an index 0’ of Table 5 may be used in a situation where nine 26-RUs are individually allocated (e.g., in a situation where nine 26-RUs shown inFIG.5are individually allocated). Meanwhile, a plurality of RUs may be allocated to one STA in the EHT system. For example, regarding ‘an index 60’ of Table 6, one 26-RU may be allocated for one user (i.e., receiving STA) to the leftmost side of the 20 MHz band, one 26-RU and one 52-RU may be allocated to the right side thereof, and five 26-RUs may be individually allocated to the right side thereof.
TABLE 5
Indices: RU sizes at positions #1 to #9 (number of entries)
0: 26 26 26 26 26 26 26 26 26 (1)
1: 26 26 26 26 26 26 26 52 (1)
2: 26 26 26 26 26 52 26 26 (1)
3: 26 26 26 26 26 52 52 (1)
4: 26 26 52 26 26 26 26 26 (1)
5: 26 26 52 26 26 26 52 (1)
6: 26 26 52 26 52 26 26 (1)
7: 26 26 52 26 52 52 (1)
8: 52 26 26 26 26 26 26 26 (1)
9: 52 26 26 26 26 26 52 (1)
10: 52 26 26 26 52 26 26 (1)
11: 52 26 26 26 52 52 (1)
12: 52 52 26 26 26 26 26 (1)
13: 52 52 26 26 26 52 (1)
14: 52 52 26 52 26 26 (1)
15: 52 52 26 52 52 (1)
16: 26 26 26 26 26 106 (1)
17: 26 26 52 26 106 (1)
18: 52 26 26 26 106 (1)
19: 52 52 26 106 (1)

TABLE 6
Indices: RU sizes at positions #1 to #9 (number of entries)
20: 106 26 26 26 26 26 (1)
21: 106 26 26 26 52 (1)
22: 106 26 52 26 26 (1)
23: 106 26 52 52 (1)
24: 52 52 (center empty) 52 52 (1)
25: 242-tone RU empty (with zero users) (1)
26: 106 26 106 (1)
27-34: 242 (8)
35-42: 484 (8)
43-50: 996 (8)
51-58: 2*996 (8)
59: 26 26 26 26 26 52+26 26 (1)
60: 26 26+52 26 26 26 26 26 (1)
61: 26 26+52 26 26 26 52 (1)
62: 26 26+52 26 52 26 26 (1)
63: 26 26 52 26 52+26 26 (1)
64: 26 26+52 26 52+26 26 (1)
65: 26 26+52 26 52 52 (1)

TABLE 7
Indices: RU sizes at positions #1 to #9 (number of entries)
66: 52 26 26 26 52+26 26 (1)
67: 52 52 26 52+26 26 (1)
68: 52 52+26 52 52 (1)
69: 26 26 26 26 26+106 (1)
70: 26 26+52 26 106 (1)
71: 26 26 52 26+106 (1)
72: 26 26+52 26+106 (1)
73: 52 26 26 26+106 (1)
74: 52 52 26+106 (1)
75: 106+26 26 26 26 26 (1)
76: 106+26 26 26 52 (1)
77: 106+26 52 26 26 (1)
78: 106 26 52+26 26 (1)
79: 106+26 52+26 26 (1)
80: 106+26 52 52 (1)
81: 106+26 106 (1)
82: 106 26+106 (1)

In Table 5 to Table 7, a “+” between RU sizes indicates RUs that are allocated together to one user (i.e., one receiving STA). A mode in which the common field of the EHT-SIG is omitted may be supported. The mode in which the common field of the EHT-SIG is omitted may be called a compressed mode. When the compressed mode is used, a plurality of users (i.e., a plurality of receiving STAs) may decode the PPDU (e.g., the data field of the PPDU), based on non-OFDMA. That is, the plurality of users of the EHT PPDU may decode the PPDU (e.g., the data field of the PPDU) received through the same frequency band. Meanwhile, when a non-compressed mode is used, the plurality of users of the EHT PPDU may decode the PPDU (e.g., the data field of the PPDU), based on OFDMA. That is, the plurality of users of the EHT PPDU may receive the PPDU (e.g., the data field of the PPDU) through different frequency bands. The EHT-SIG may be configured based on various MCS schemes. As described above, information related to an MCS scheme applied to the EHT-SIG may be included in U-SIG. The EHT-SIG may be configured based on a DCM scheme. For example, among N data tones (e.g., 52 data tones) allocated for the EHT-SIG, a first modulation scheme may be applied to half of contiguous tones, and a second modulation scheme may be applied to the remaining half of the contiguous tones. That is, a transmitting STA may use the first modulation scheme to modulate specific control information through a first symbol and allocate it to half of the contiguous tones, and may use the second modulation scheme to modulate the same control information by using a second symbol and allocate it to the remaining half of the contiguous tones. As described above, information (e.g., a 1-bit field) regarding whether the DCM scheme is applied to the EHT-SIG may be included in the U-SIG. The EHT-STF ofFIG.18may be used for improving automatic gain control estimation in a multiple input multiple output (MIMO) environment or an OFDMA environment. The EHT-LTF ofFIG.18may be used for estimating a channel in the MIMO environment or the OFDMA environment. The EHT-STF ofFIG.18may be set in various types. For example, a first type of STF (e.g., 1×STF) may be generated based on a first type STF sequence in which a non-zero coefficient is arranged with an interval of 16 subcarriers. An STF signal generated based on the first type STF sequence may have a period of 0.8 μs, and a periodicity signal of 0.8 μs may be repeated 5 times to become a first type STF having a length of 4 μs.
For example, a second type of STF (e.g., 2×STF) may be generated based on a second type STF sequence in which a non-zero coefficient is arranged with an interval of 8 subcarriers. An STF signal generated based on the second type STF sequence may have a period of 1.6 μs, and a periodicity signal of 1.6 μs may be repeated 5 times to become a second type STF having a length of 8 μs. Hereinafter, an example of a sequence for configuring an EHT-STF (i.e., an EHT-STF sequence) is proposed. The following sequence may be modified in various ways. The EHT-STF may be configured based on the following sequence M. M={−1,−1,−1,1,1,1,−1,1,1,1,−1,1,1,−1,1}   <Equation 1> The EHT-STF for the 20 MHz PPDU may be configured based on the following equation. The following example may be a first type (i.e., 1×STF) sequence. For example, the first type sequence may be included not in a trigger-based (TB) PPDU but in an EHT PPDU. In the following equation, (a:b:c) may imply a duration defined as b tone intervals (i.e., a subcarrier interval) from a tone index (i.e., subcarrier index) ‘a’ to a tone index ‘c’. For example, the equation 2 below may represent a sequence defined as 16 tone intervals from a tone index −112 to a tone index 112. Since a subcarrier spacing of 78.125 kHz is applied to the EHT-STF, the 16 tone intervals may imply that an EHT-STF coefficient (or element) is arranged with an interval of 78.125*16=1250 kHz. In addition, * implies multiplication, and sqrt( ) implies a square root. In addition, j implies an imaginary number. EHT-STF(−112:16:112)={M}*(1+j)/sqrt(2)   <Equation 2> EHT-STF(0)=0 The EHT-STF for the 40 MHz PPDU may be configured based on the following equation. The following example may be the first type (i.e., 1×STF) sequence. EHT-STF(−240:16:240)={M,0,−M}*(1+j)/sqrt(2)   <Equation 3> The EHT-STF for the 80 MHz PPDU may be configured based on the following equation. The following example may be the first type (i.e., 1×STF) sequence. EHT-STF(−496:16:496)={M,1,−M,0,−M,1,−M}*(1+j)/sqrt(2)   <Equation 4> The EHT-STF for the 160 MHz PPDU may be configured based on the following equation. The following example may be the first type (i.e., 1×STF) sequence. EHT-STF(−1008:16:1008)={M,1,−M,0,−M,1,−M,0,−M,−1,M,0,−M,1,−M}*(1+j)/sqrt(2)   <Equation 5> In the EHT-STF for the 80+80 MHz PPDU, a sequence for lower 80 MHz may be identical to Equation 4. In the EHT-STF for the 80+80 MHz PPDU, a sequence for upper 80 MHz may be configured based on the following equation. EHT-STF(−496:16:496)={−M,−1,M,0,−M,1,−M}*(1+j)/sqrt(2)   <Equation 6> Equation 7 to Equation 11 below relate to an example of a second type (i.e., 2×STF) sequence. EHT-STF(−120:8:120)={M,0,−M}*(1+j)/sqrt(2)   <Equation 7> The EHT-STF for the 40 MHz PPDU may be configured based on the following equation. EHT-STF(−248:8:248)={M,−1,−M,0,M,−1,M}*(1+j)/sqrt(2)   <Equation 8> EHT-STF(−248)=0 EHT-STF(248)=0 The EHT-STF for the 80 MHz PPDU may be configured based on the following equation. EHT-STF(−504:8:504)={M,−1,M,−1,−M,−1,M,0,−M,1,M,1,−M,1,−M}*(1+j)/sqrt(2)   <Equation 9> The EHT-STF for the 160 MHz PPDU may be configured based on the following equation. EHT-STF(−1016:8:1016)={M,−1,M,−1,−M,−1,M,0,−M,1,M,1,−M,1,−M,0,−M,1,−M,1,M,1,−M,0,−M,1,M,1,−M,1,−M}*(1+j)/sqrt(2)   <Equation 10> EHT-STF(−8)=0, EHT-STF(8)=0, EHT-STF(−1016)=0, EHT-STF(1016)=0 In the EHT-STF for the 80+80 MHz PPDU, a sequence for lower 80 MHz may be identical to Equation 9.
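As an illustration of Equation 1 and Equation 2 above, the frequency-domain coefficients of the first type (1×) EHT-STF for a 20 MHz PPDU can be generated as follows. This is a minimal sketch that only reproduces those two equations (the sequence M mapped to tone indices −112:16:112, scaled by (1+j)/sqrt(2), with the DC tone forced to 0); it is not a full transmitter implementation, and the function name is an illustrative choice.

```python
import math

# M sequence of Equation 1.
M = [-1, -1, -1, 1, 1, 1, -1, 1, 1, 1, -1, 1, 1, -1, 1]

def eht_stf_1x_20mhz() -> dict:
    """Frequency-domain 1x EHT-STF coefficients for 20 MHz per Equation 2."""
    scale = (1 + 1j) / math.sqrt(2)
    tones = range(-112, 113, 16)                     # tone indices -112:16:112 (15 tones)
    stf = {k: m * scale for k, m in zip(tones, M)}   # EHT-STF(-112:16:112) = {M} * (1+j)/sqrt(2)
    stf[0] = 0                                       # EHT-STF(0) = 0
    return stf

coeffs = eht_stf_1x_20mhz()
print(len(coeffs), coeffs[-112])                     # 15 tones; first coefficient is -(1+j)/sqrt(2)
```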
In the EHT-STF for the 80+80 MHz PPDU, a sequence for upper 80 MHz may be configured based on the following equation.

EHT-STF(−504:8:504)={−M,1,−M,1,M,1,−M,0,−M,1,M,1,−M,1,−M}*(1+j)/sqrt(2)   <Equation 11>
EHT-STF(−504)=0, EHT-STF(504)=0

The EHT-LTF may have first, second, and third types (i.e., 1×, 2×, 4×LTF). For example, the first/second/third type LTF may be generated based on an LTF sequence in which a non-zero coefficient is arranged with an interval of 4/2/1 subcarriers. The first/second/third type LTF may have a time length of 3.2/6.4/12.8 μs. In addition, a GI (e.g., 0.8/1.6/3.2 μs) having various lengths may be applied to the first/second/third type LTF. Information related to a type of STF and/or LTF (including information related to a GI applied to the LTF) may be included in a SIG-A field and/or SIG-B field or the like ofFIG.18.

A PPDU (e.g., EHT-PPDU) ofFIG.18may be configured based on the example ofFIG.5andFIG.6. For example, an EHT PPDU transmitted on a 20 MHz band, i.e., a 20 MHz EHT PPDU, may be configured based on the RU ofFIG.5. That is, a location of an RU of EHT-STF, EHT-LTF, and data fields included in the EHT PPDU may be determined as shown inFIG.5. An EHT PPDU transmitted on a 40 MHz band, i.e., a 40 MHz EHT PPDU, may be configured based on the RU ofFIG.6. That is, a location of an RU of EHT-STF, EHT-LTF, and data fields included in the EHT PPDU may be determined as shown inFIG.6. Since the RU location ofFIG.6corresponds to 40 MHz, a tone-plan for 80 MHz may be determined when the pattern ofFIG.6is repeated twice. That is, an 80 MHz EHT PPDU may be transmitted based on a new tone-plan in which the RU ofFIG.6, not the RU ofFIG.7, is repeated twice. When the pattern ofFIG.6is repeated twice, 23 tones (i.e., 11 guard tones+12 guard tones) may be configured in a DC region. That is, a tone-plan for an 80 MHz EHT PPDU allocated based on OFDMA may have 23 DC tones. In contrast, an 80 MHz EHT PPDU allocated based on non-OFDMA (i.e., a non-OFDMA full bandwidth 80 MHz PPDU) may be configured based on a 996-RU, and may include 5 DC tones, 12 left guard tones, and 11 right guard tones. A tone-plan for 160/240/320 MHz may be configured in such a manner that the pattern ofFIG.6is repeated several times.

The PPDU ofFIG.18may be determined (or identified) as an EHT PPDU based on the following method. A receiving STA may determine a type of an RX PPDU as the EHT PPDU, based on the following aspect. For example, the RX PPDU may be determined as the EHT PPDU: 1) when a first symbol after an L-LTF signal of the RX PPDU is a BPSK symbol; 2) when RL-SIG in which the L-SIG of the RX PPDU is repeated is detected; and 3) when a result of applying "modulo 3" to a value of a length field of the L-SIG of the RX PPDU is detected as "0". When the RX PPDU is determined as the EHT PPDU, the receiving STA may detect a type of the EHT PPDU (e.g., an SU/MU/Trigger-based/Extended Range type), based on bit information included in a symbol after the RL-SIG ofFIG.18. In other words, the receiving STA may determine the RX PPDU as the EHT PPDU, based on: 1) a first symbol after an L-LTF signal, which is a BPSK symbol; 2) RL-SIG contiguous to the L-SIG field and identical to L-SIG; 3) L-SIG including a length field in which a result of applying "modulo 3" is set to "0"; and 4) a 3-bit PHY version identifier of the aforementioned U-SIG (e.g., a PHY version identifier having a first value). For example, the receiving STA may determine the type of the RX PPDU as the HE PPDU, based on the following aspect.
For example, the RX PPDU may be determined as the HE PPDU: 1) when a first symbol after an L-LTF signal is a BPSK symbol; 2) when RL-SIG in which the L-SIG is repeated is detected; and 3) when a result of applying "modulo 3" to a value of a length field of the L-SIG is detected as "1" or "2". For example, the receiving STA may determine the type of the RX PPDU as a non-HT, HT, and VHT PPDU, based on the following aspect. For example, the RX PPDU may be determined as the non-HT, HT, and VHT PPDU: 1) when a first symbol after an L-LTF signal is a BPSK symbol; and 2) when RL-SIG in which L-SIG is repeated is not detected. In addition, even if the receiving STA detects that the RL-SIG is repeated, when a result of applying "modulo 3" to the length value of the L-SIG is detected as "0", the RX PPDU may be determined as the non-HT, HT, and VHT PPDU.

In the following example, a signal represented as a (TX/RX/UL/DL) signal, a (TX/RX/UL/DL) frame, a (TX/RX/UL/DL) packet, a (TX/RX/UL/DL) data unit, (TX/RX/UL/DL) data, or the like may be a signal transmitted/received based on the PPDU ofFIG.18. The PPDU ofFIG.18may be used to transmit/receive frames of various types. For example, the PPDU ofFIG.18may be used for a control frame. An example of the control frame may include a request to send (RTS), a clear to send (CTS), a power save-poll (PS-poll), BlockACKReq, BlockAck, a null data packet (NDP) announcement, and a trigger frame. For example, the PPDU ofFIG.18may be used for a management frame. An example of the management frame may include a beacon frame, a (re-)association request frame, a (re-)association response frame, a probe request frame, and a probe response frame. For example, the PPDU ofFIG.18may be used for a data frame. For example, the PPDU ofFIG.18may be used to simultaneously transmit at least two or more of the control frame, the management frame, and the data frame.

FIG.19illustrates an example of a modified transmission device and/or receiving device of the present specification. Each device/STA of the sub-figure (a)/(b) ofFIG.1may be modified as shown inFIG.19. A transceiver630ofFIG.19may be identical to the transceivers113and123ofFIG.1. The transceiver630ofFIG.19may include a receiver and a transmitter. A processor610ofFIG.19may be identical to the processors111and121ofFIG.1. Alternatively, the processor610ofFIG.19may be identical to the processing chips114and124ofFIG.1. A memory620ofFIG.19may be identical to the memories112and122ofFIG.1. Alternatively, the memory620ofFIG.19may be a separate external memory different from the memories112and122ofFIG.1.

Referring toFIG.19, a power management module611manages power for the processor610and/or the transceiver630. A battery612supplies power to the power management module611. A display613outputs a result processed by the processor610. A keypad614receives inputs to be used by the processor610. The keypad614may be displayed on the display613. A SIM card615may be an integrated circuit which is used to securely store an international mobile subscriber identity (IMSI) and its related key, which are used to identify and authenticate subscribers on mobile telephony devices such as mobile phones and computers. Referring toFIG.19, a speaker640may output a result related to a sound processed by the processor610. A microphone641may receive an input related to a sound to be used by the processor610.
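Returning to the PPDU type determination described above, the rules for the EHT, HE, and non-HT/HT/VHT cases can be summarized in the following sketch. This is only a simplified illustration of the quoted conditions, not a complete receiver procedure; the input flags and the assumption that the "first value" of the 3-bit PHY version identifier is 0 are hypothetical.

# Simplified sketch of the RX PPDU type determination rules quoted above.
# The boolean inputs and the PHY version value are hypothetical placeholders
# for the corresponding receiver measurements; this is not a full procedure.
def classify_rx_ppdu(first_symbol_is_bpsk, rl_sig_detected,
                     l_sig_length, phy_version_id):
    if not first_symbol_is_bpsk:
        return "other"
    if not rl_sig_detected:
        return "non-HT/HT/VHT"          # no repeated L-SIG detected
    if l_sig_length % 3 in (1, 2):
        return "HE"
    # length % 3 == 0: EHT only if the U-SIG carries the expected PHY
    # version identifier (assumed here to be 0, the "first value");
    # otherwise treated as non-HT/HT/VHT per the rules above.
    return "EHT" if phy_version_id == 0 else "non-HT/HT/VHT"

# Example: BPSK first symbol, repeated L-SIG, length % 3 == 0, version 0
print(classify_rx_ppdu(True, True, 300, 0))   # -> "EHT"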
A 20 MHz-band 1× HE-LTF specified in the existing 802.11ax, i.e., HE, is as follows.HELTF−122.122={0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, 0, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0,−1.0, 0, 0, +1, 0.0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0.0, −1, 0, 0, 0, −1, 0.0} A 40 MHz-band 1×HELTF specified in the existing 802.11ax, i.e., HE, is as follows.HELTF−244.244={+1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, 0, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, +1, 0, 0, 0,+1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1} An 80 MHz-band 1×HELTF specified in the existing 802.11ax, i.e., HE, is as follows. 
80 NHz:HELTF−500.500={−1, 0, 0, 0, −1, 0, 0, −0, +1, 0, 0, −0, +1, 0, 0, 0, −1, 0, 0, 0, −+1, 0, −0, 0, +1, 0, 0, 0, −−1, −0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, 0, 0, 0, +1, 0, 0, 0, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, 0, 0, 0, −1, 0, 0, 0, 1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, 0, 0, 0, +1, 0, 0, 0, −0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +0, 0, 0, +0, 0, 0, −1, 0, 0, 0, +0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, +0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, 1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, 0, 0, 0, +1, 0, 0, 0, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, 0, 0, 0, 0, 0, +1, 0, 0, 0, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, 0, 0, 0, 0, −1, 0, 0, 0, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0,0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 1, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 1, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +−1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +0, 1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, 1, 0, 0, 0, +1} A 160 MHz-band 1× HE-LTF specified in the existing 802.11ax, i.e., HE, is as follows. 160 MHz:HELTF−1012,1012={LTF80MHz_lower_1x, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, LTF80MHz_upper_1x}LTF80MHz_lower_1x={LTF80MHz_left_1x. 0. 
LTF80MHz_right_1x} shall be used in the lower 80 MHz frequency segmentLTF80MHz_upper_1x={LTF80MHz_left_1x. 0. LTF80MHz_right_1x} shall be used in the upper 80 MHz frequency segmentLTF80MHz_left_1x={−1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, +1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 1, −1, 0, 1, 0, −1, 0, 0, 0, +1, 0, 0, 0, 1, 0, +1, −1, 0, 0, 0, +1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, −1, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, 0, 0, 0, +1, 0, 0, 0, +1, 0, +1, 0, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, 0, 0, 0, 0, 0, −1, 0, 0, 0, 1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, 0, 0, 0, −1, 0, 0, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, 1, 0, 0, 0, −1, 0, 0, +1, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, +1, 0, −1, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −−1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, −1, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +, 1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, +1, −1, 0, 0, 0, 1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, +1, 0, 0, +1, 0, +1, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, +1, −1, 0, 0, 0, +1, 0, 1, 1, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0}LTF80MHz_right_1x={0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, +1, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, 1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, +1, 0, +1, 0, 0, 0, +1, 0, 0, 0, 0, 0, 0, −1, 0, 0, 0, 0, 0, +1, 1, 1, 0, 0, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, +1, 0, −1, 0, 0, 0, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, 0, 0, 0, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, −1, +1, 0, 0, 0, 1, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, 1, 0, −1, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, −1, 0, 0, 0, +1, 0, 0, 0, +1} In case of 80+80 MHz transmission using the 1× HE-LTF, a lower 80 MHz frequency segment shall use the 80 MHz 1×HE-LTF sequence of HELTF-−500,500-=LTF80MHz_lower_1x, and 
an upper 80 MHz frequency segment shall use the 80 MHz 1×HE-LTF sequence of HELTF-−500,500-=LTF80MHz_upper_1x. A 20 MHz-band 2× HE-LTF specified in the existing 802.11ax, i.e., HE, is as follows.HELTF−122.122={−1, 6, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, 0, −1, 0, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −−1, 0, −1, 0, +1, 0−1, 0, +1, 0, −1, 0, −1, 0, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, 0, 0, +1, 0, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1} A 40 MHz-band 2×HE-LTF specified in the existing 802.11ax, i.e., HE, is as follows.HELTF−244.244={+1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, 0, +1, 0, −1, 0, −1, 0, 0, −1, 0, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, 0, +1, 0, +1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, −1, 0, 0, 0, 0, 0, 0, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, +1, 0, −−1, 0, +1, 0, +1, 0, −−1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, −1, 0, +1} An 80 MHz-band 2×HE-LTF specified in the existing 802.11ax, i.e., HE, is as follows.HELTF−500.500={+1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, 1, 0, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, 0, +1, 0, −1, 0, +1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 
0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, 1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, 1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, 0, +1, 0, −1, 0, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, 1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, 0, +1, 0, −1, 0, −1, 0, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0,+1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, 0, 0, 0, 0, 0, 0, +1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, 1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, 1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, 0, −1, 0, 1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, 1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, 1, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +, 1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1} A 160 MHz-band 2×HE-LTF specified in the existing 802.11ax, i.e., HE, is as follows.HELTF−1012,1012={LTF80MHz_lower_2x, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, LTF80MHz_upper_2x}LTF80MHz_lower_2x={LTF80MHz_part1_2x, LTF80MHz_part2_2x, LTF80MHz_part3_2x, LTF80MHz_part4_2x, LTF80MHz_part5_2x} shall be used in the lower 80 MHz frequency subblockLTF80MHz_upper_2x={LTF80MHz_part1_2x, LTF80MHz_part2_2x, LTF80MHz_part3_2x, LTF80MHz_part4_2x, LTF80MHz_part5_2x} shall be used in the upper 80 MHz frequency subblockLTF80MHz_part1_2x={+1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, 1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −+1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, 
+1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, +1, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0}LTF80MHz_part2_2x={+1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −+1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, 1, 0, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, −1, +1, 0, +1, 0, +1, 0}LTF80MHz_part3_2x={+1, 0, −1, 0, −1, 0, −1, 0, 0, +1, 0, +1, 0, 0, 0, 0, 0, 0, 0, +1, 0, −1, 0, 0, −1, 0, +1, 0, −1, 0, +1}LTF80MHz_part4_2x={0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, 1, 0, +−1, 0, +1, 0, +1, 0, +1, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, 0, −1, 0, −1, 0, 1, 0, +1, 0, +1, 0, −1}LTF80MHz_part5_2x={0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, 1, 0, −1, 0, 0, −1, 0, +1, 0, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +0, −1, 0, +1, 0, +1, 0, −1, 0, −1, 0, +1, 0, −1, 0, −1, 0, −1, 0, +1, 0, +1, 0, +1, 0, +1, 0, −1, 0, +1, 0, +1, 0, 0,−1, 0, +1, 0, +1} In case of 80+80 MHz transmission using the 2×HE-LTF, a lower 80 MHz frequency segment shall use the 80 MHz 2×HE-LTF sequence of HELTF-−500,500-=LTF80MHz_lower_2x, and an upper 80 MHz frequency segment shall use the 80 MHz 2×HE-LTF sequence of 
HELTF-−500,500-=LTF80MHz_upper_2x. A 20 MHz-band 4×HE-LTF specified in the existing 802.11ax, i.e., HE, is as follows.HELTF−122.122={−1, −1, +1, −1, −1, +1, +1, +1, −1, +−+1, +1, −1, +1, −, −1, −1, −1, +1, +1, −1, −1, −1, −1, +1, +1, −1, +1, −1, +1, +1, +1, +1, −1, +1, −1, −1, +1, +1, −1, +1, +1, +1, +1, −1, −1, +1, −1, −1, −1, +1, +1, +1, +1, −1, +1, +1, −1, −1, −1, −1, +1, −1, −1, +1, +1, −1, +1, −1, −1, −1, −1, 1, −1, +1, −1, −1, −1, −1, −1, −1, +1, +1, −1, −1, −1, −1, −1, +1, −1, −1, +1, +1, +1, −1, +1, +1, +1, −1, +1, −1, +1, −1, −1, −1, −1, −1, +1, +1, +1, −1, −1, −1, +1, −1, +1, +1, +1, 0, 0, 0, −1, +1, −1, +1, −1, +1, +1, −1, +1, +1, +1, −1, −1, +1, −1, −1, +1, −1, +1, −1, +1, +1, +1, −1, +1, +1, +1, −1, −1, +1, −1, −1, −1, −1, −1, +1, +1, −1, −1, −1, −1, −1, −1, +1, −1, +1, −1, −1, −1, −1, +1, −1, +1, +1, −1, −1, +1, −1, −1, −1, −1, +1, +1, −1, +1, +1, +1, +1, +1, +1, +1, −1, +1, +1, −1, −1, −1, −1, +1, −1, −1, +1, +1, −1, −1, −1, −1, −1, +1, −1, +1, −1, −1, +1, +1, +1, +1, −1, −1, +1, +1, +1, +1, +1, −1, +1+1, −1, −1, −1, +1, −1, −1, −1, +1, −1, +1, −1, +1+1} A 40 MHz-band 4×HE-LTF specified in the existing 802.11ax, i.e., HE, is as follows.HELTF−244,244={+1, −1, −1, −1, −1, +1, −1, −1, +1, +1, −1, +1, −1, +1, −1, +1, +1, −1, +1, −1, −1, −1, +1, +1, −1, −1, −1, −1, −1, +1, −1, −1, +1, +1, −1, +1, −1, −1, −1, −1, +1, −1, +1, +1, +1, −1, −1, +1, +1, +1, −1, −1, +1, +1, +1, +1, −1, +1, −1, +1, −1, +1, −1, +1, −1, −1, +1, −1, +1, +1, +1, −1, −1, +1, +1, +1, −1, −1, −1, −1, +1, −1, −1, +1, +1, −1, +1, −1, −1, −1, −1, +1, −1, +1, −1, −1, −1, −1, +1+1, +1, +1, +1, −1, −1, +1, +1, −1, −1, +1, −1, +1, +1, +1, +1, +1, −1, +1, −1, −1, −1, +1, +1, −1, −1, −1, −1, −1, −1, −1, −1, +1, −1, −1, +1, +1, −1, +1, −1, −1, +1, −1, +1, +1, −1, +1, −1, −1, +1, +1, −1, −1, −1, −1, −1, −1, −1, +1, −1, −1, +1, +1, −1, +1, −1, −1, −1, −1, −1, +1, −1, +1, +1, +1, −1, −1, +1, +1, +1, −1, −1, −1, −1, −1, +1, −1, −1, +1, +1, −1, +1, −1, +1, +1, +1, −1, +1, −1, −1, −1, +1, +1, −1, −1, −1, +1, +1, +1, +1, −1+1, +1, −1, −1, +1, −1, +1, +1, +1, +1, +1, −1, +1, −1, −1, −1, +1, +1, −1, −1, −1, +1, 0, 0, 0, 0, 0, −1, +1, −1, +1, +1, −1, +1, +1, −1, −1, +1, −1, +1, −1, +1, −1, −1, +1, −1, +1, +1, +1, −1, −1, +1, +1, +1, +1, +1, +1, +1, +1, +1, −1, −1, +1, −1, +1, +1, +1, +1, −1, +1, −1, −1, +1, +1, −1, −1, +1, +1, −1, −1, −1, −1, +1, −1, +1, +1, −1, +1, −1, +1, −1, +1, +1, +1, −1, −1, −1, +1, +1, −1, −1, −1, +1, +1, +1, +1, −1, +1, +1, −1, −1, +1, +1, +1, +1, +1, +1, −1, −1, −1, −1, −1, +1, +1, −1, −1, −1, +1, −1, −1, −1, −1, +1, −1, −1, +1, +1, −1, +1, −1, +1, −1, +1, +1, −1, +1, −1, −1, +1, +1, −1, −1, −1, +1, −1, −1, −1−1, +1, −1, −1, −1, +1, −1, +1, −1, +1, −1, +1, +1, −1, +1, −1, −1, −1, +1, +1, −1, −1, −1, −1, −1, +1, −1, −1, +1, +1, −1, +1, −1, −1, −1, +1, −1, +1, +1, −1, −1, +1, +1, +1, −1, +1, −1, −1, −1, −1, +1, −1, −1, +1, +1, −1, +1, −1, +1, −1, +1, +1, −1, +1, −1, −1, +1, +1, −1, −1, −1, +1, +1, +1, −−1, +1, +1, −1, +1, −1, +1, +1, ++1, +1, −1, +1, −1, −1, −1, +1, +1, −1, −1, −1, −1} An 80 MHz-band 4×HE-LTF specified in the existing 802.11ax, i.e., HE, is as follows.HELTF−500,500={+1, −1, +1, −1, +1, −1, −1, −1, +1, −1, −1, −1, +1, +1, −1, −1, +1, −1, −−1, +1, −1, −1, +1, +1, +1, −1, +1, −1, +1, −1, −1, +1, +1, −1, +1, +1, +1, −1, −1, +1, −1, −1, −1, −1, +1+1, +1, −1, −1, −1, −1, +1, +1, +1, +1, +1, +1, −1, +1, +1, +1, −1, +1, +1, −1, −1−1, +1, −1, +1, −1, −1, +1, +1, −1, +1, −1, +1, +1, +1, +1, +1, −1, −1, +1, +1, +1, −1, 1, +1, −1, −1, −1, +1, −1, +1, +1, −1, +1, +1, −1, +1, −1, −1, +1, −1, +1, +1, −1, 
−1, +1, +1, +1, +1, +1, −1, +1, +1, −1, −1, −1, +1, −1, −1, −1, +1, −1, +1, −1, +1, +1, −1, +1, −1, +1, −1, +1, +1, +1, −1, +1, +1, +1, −1, −1, +1, −1, −1, −1, −1, −1, +1, +1, −1, −1, −1, −1, −1, +1, −1, +1, +1, −1, −1, +1, −1, −1, −1, +1, +1, −1, +1, +1, +1, +1, −1, −1, −1, +1, +1, +1, +1, −1, +1, +1, +1, +1, +1, +1, +1, −1, +1, +1, +1, −1, +1, +1, −1, +1, −1, +1, −1, −1+1, +1, −1, +1, −1, +1, +1, +1, +1, +1, −1, −1, +1, +1, +1, −1, +1, +1, −1, −1, −1, +1, −1, +1, +1, −1, +1, +1, −1, +1, −1, −1, +1, −1, +1, −1, +1, −1, +1, +1, +1, −1, +1, +1, +1, −1, −1, +1, −1, −1, −1, −1, +1, +1, −1, −1, −1, −1, +1, −1, +1, −1, +1, −1, −1, +1, −1, −1, −1, +1, +1, −1, +1, +1, +1, +1, −1, −1, −1, +1, +1, +1, +1, −1, +1, −1, −1, −1, −1, −1, −1, +1, −1, −1, −1, +1, −1, −1, +1, +1, +1, −1, +1, −1, +1, +1, −1, −1, +1, −1, +1, −1, −1, −1, −1, −1, +1, +1, −1, −1, −1, +1, −1, −1, +1, +1, +1, +1, −1, −1, +1, −1, −1, +1, −1, +1, +1, +1, +1, +1, +1, −1, −1, +1, +1, +1, +1, +1, −1, +1, +1, −1, −1, −1, +1, −1, −1, +1, −1, +1, −1, +1, +1, −1, +1, −1, +1, −1, +1, +1, +1, −1, +1, +1, +1, −1, −1, +1, −1, −1, −1, −1, −1, +1, +1, −1, −1, −1, −1, +1, −1, +1, −1, +1, +1, −1, −1, +1, −1, −1, −1, +1, +1, −1, +1, +1, +1, +1, −1, −1, −1, +1, +1, +1, +1, −1, −1, +1, +1, +1, +1, +1, +1, −1, +1, +1, +1, −1, +1, +1, −1, −1, −1, +1, −1, +1, −1, −1, +1, +1, −1, +1, −1, +1, +1, +1, +1, +1, +1, +1, +1, −1, +1, +1, −1, −1, −1, +1, −1, +1, +1, −1, +1, +1, −1, +1, −1, −1, −1, +1, −1, +1, −1, −1, −1, −1, +1, +1, +1, −1, −1, +1, 0, 0, 0, 0, 0, +1, −1, −1, −1, −1.−1, −1, +1, +1, +1, −1, −1, +1, +1, −1, +1, −1, +1, +1, −1, −1, +1, +1, −1, −1, −1, +1, +1, −1, +1, +1, +1, −1, +1, +1, +1, +1, +1, +1, 41, −1, +1, −1, −1, +1, −1, −1, +1, −1, +1, +1, +1, −1, −1, +1, −1, −1, −1, +1, +1, −1, −1, −1, −1, −1, +1, −1, −1, −1, −1, −1, +1, +1, −1, −1, −1, −1, −1, +1, −1, −1, +1, +1, +1, −1, +1, +1, +1, −1, +1, +1, −1, −1, −1, −1, −1, +1, +1, +1, 1, −1, −1, −1, +1, −1, −1, +1, +1, +1, −1, +1, +1, −1, −1, +1, −1, +1, −1, −1, −1, −1, −1, −1, −1, +1, +, 1, −1, −1, −1, +1, −1, −1, 41, +1, +1, −1, +1, −1, −1, +1, −1, −1, +1, −1, +1, +1, +1, −1, +1, −1, +1, +1, −1, +1, −1, +1, +1, +1, −1, −1, +1, −1, −1, +1, −1, −1, −1, −1, −1, −1, −1, +1, −1, +1, +, 1, −1, +1, +1, −1, +−1, +1, +1, +1, +1, +1, +1, +1, +1, −1, −1, −1, +1, +1, −1, −1, −1, −1, −1, +1, −1, +1, +1, +1, −1, +1, +1, +1, +1, −1, −1, −1, −1, −1, +1, +1, −1, −1, −1, +1, +1, +1, −1, +1, +1, −1, −1, +1, −1, +1, −1, +1, +1, +1, −1, +1, −1, −1, +1, +1, −1, +1, −1, +1, +1, +1, −1, −1, +1, −1, −1, +1, −1, −1, −1, −1, −1, −1, −1, +1, −1, +1, +1, −1, +1, +1, −1, +1, −1, −1, −1, +1, +1, −1, −−1, +1, +1, −1, −1, +1, +1, +1, +1, +1, −1, +1, +1, +1, +1, 1, −1, −1, +1, +1, +1, +1, +1, +1, +1, −1, −1, −1, +1, −1, −1, −1, +1, −1, +1−1, +1, +1, +1, +1, −1, −1, −1, +1, +1, +1, +1, −1, +1, −1, −1, −1, −1, +1, −1, −1, +1, +1, −1, +1, −1, +1, −1, −1, −1, −1, −1, −1, +1, +1, −1, −1, −1, +1, −1, −1, +1, +1, +1, −1, +1, −1, −1, +1, −1, −1, +1, +1, −1, +1, −1, +1, −1, −1, +1, +1, −1, +1, +1, +1, +1, −1, −1, +1, −1, −1, −1, +1, −1, −1, −1, −1, −1, −1, −1, +1, −1, +1, +1, +1, +1, −1, +1, −1, −1, −1, +1, −1, +1, +1, +1, −1, −1, +1, +1, +1, +1, +1, −1, +1, −1, −1, −1, −1, +1, +1, −1, −1, −1, −1, −1, +1, −1, +1, +1, +1, −1, +1, +1, +1, −1, +1, −1, +1, −1, −1, −1, −1, −1, +1, +1, +1, −1, −1, −1, −1, +1, −1, −1, +1, +−1, +1, −1, +1, +1, −1, −1, +1, −1, +1, −1, +1} A 160 MHz-band 4×HE-LTF specified in the existing 802.11ax, i.e., HE, is as follows.HELTF−1012,1012={LTF80MHz_lower_4x, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, LTF80MHz_upper_4x}LTF80MHz_lower_4x={LTF80MHz_left_4x, 0, LTF80MHz_right_4x} shall be used in the lower 80 MHz frequency segmentLTF80MHz_upper_4x={LTF80MHz_left_4x, 0, LTF80MHz_right_4x} shall be used in the upper 80 MHz frequency segmentLTF80MHz_left_4x={+1, +1, −1, +1, −1, +1, −1, −1, −1, −1, −1, −1, +1, +1, −1, 1, −1, −1, +1, +1, +1, +1, −1, +1, −1, +1, −1, −1, +1, −1, +1, +1, +1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, +1, +1, +1, −1, −1, −1, −1, −1, −1, −1, −1, +1, −1, −1, −1, −1, −1, +1, +1, +1, +1, −1, −1, +1, +1, +1, −1, −1, −1, −1, −1, −1, −1, −1, −−1, −1, −1, +1, −1, −1, −1, −1, +1, +1, −1, −1, −1, +1, +1, −1, −1, 1, −1, −1, −1, −1, −−1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, +1, −1, −1, −1, −1, −1, +1, −1, −1, −1, −1, +1, −1, −1, +1, +1, −1, −1, −1, −1, −1, −1, +1, −1, −1, +1, −1, +1, +1, −1, −1, −1, +1, +1, −1, +1, −1, +1, +1, +1, −1, −1, −1, −1, +1, −1, −1, −1, −1, −1, −1, +1, −1, −1, −1, +1, −1, −1, +1, +1, +1, 1, +1, −1, −1, −1, +1, +1, −1, −1, −1, +1, −1, +1, +1, −1, −1, +1, −1, −1, −1, −1, −1, −1, +1, +1, +1, −1, −1, −1, +1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, +1, +1, −1, −1, −1, −1, −1, −1, −1, +1, +1, −1, −1, −1, +1, +1, +1, +1, −1, −1, −1, −1, −1, −1, −1, −1, +1, −1, −1, −1, +1, −1, −1, +1, −1, −1, +1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, +1, −1, −1, −1, 1, −1, −1, +1, +1, +1, −1, +1, −1, −1, −1, −1, −1, +1, −1, +1, −1, −1, −1, −1, +1, +1, +1, +1, +1, −1, +1, −1, −1, −1, −1, −1, −1, −1, −1, −1, +1, +1, +1, −1, −1, −1, +1, −1, +1, +1, +1, −1, +1, +1, −1, −1, −1, −1, −1, −1, −1, +1, +1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, +1, 1−1, −1, −1, −1, +1, −1, +1, +1, −1, −1, −1, −1, −1, +1, +1, −1, −1, +1, +1, +1, +1, +1, −1, −1, +1, −−1, −1, −1, −1, −1, +1, −1, +1, 1, −1, −1, +1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, +1, 0, 0}LTF80MHz_right_4x={0, 0, −1, −1, −1, −1, −1, −1, −1, +1, 1, −1, −1, −1, +1, +1, −1, +1, −1, +1, +1, −1, −1, +1, −1, +1, −1, −1, −1, +1, +1, −1, −1, −1, +1, +1, −1, +1, +1, −1, −1, +1, −1, −1, +1, −1, +1, +1, +1, −1, −1, +1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, +1, +1, −1, −1, −1, −1, −1, +1, −1, −1, +1, +1, +1, −1, +1, +1, +1, −1, +1, −1, −1, −1, −1, −1, −1, −1, +1, +1, +1, −1, −1, −1, +1, −1, −1, +1, +1, −1, +1, −1, +1, −1, −1, −1, −1, −1, −1, −1, +1, +1, −1, −1, −1, +1, −1, −1, +1, +1, +1, −1, +1, −1, −1, −1, −1, +1, −1, +1, +1, +1, −1, +1−−1, −1, +1, +1, −1, +1, −1, +1, +1, +1, −1, −1, +1, −1, −1, −1, +1, −1, −1, −1, −1, −1, −1, +1, −1, +1, +1, −1, +1, +1, −1, +1, −1, −1, −1, +1, +1, −1, +1, 1, +1, +1, +1, +1, +1, −1, +1, −1, −1, −1, +1, +1, −1, −1, −1, −1, −1, +1, −1, −1, +1, +1, +1, −1, +1, −1, +1, −1, +1, −1, +1, −1, −1, −1, −1, −1, +1, +1, +1, −1, −1, −1, −1, +1, −1, −1, +1, +1, +1, −1, +1, −1, −1, +1, −1, +1, −1, +1, +1, +1, −1, −1, −1, +1, +1, −1, +1, −1, +1, +1, +1, −1, −1, +1, −1, −1, −1, +1, −1, −1, −1, −1, −1, −1, −1, +1, −1, +1, +1, −1, +1, +1, −1, +1, −1, −1, −1, +1, +1, −1, +1, +1, +1, −1, −1, +1, +1, +1, +1, −1, +1, +1, −1, +1, +1, −1, −1, +1, +1, +1, +1, +1, −1, +1, +1, −1, −1, −1, +1, −1, −1, −1, +1, −1, 1, −1, +1, +1, +1, −1, −1, −1, +1, +1, +1, +1, −1, +1, +1, −1, −1, −1, +1, −1, −1, +1, +1, −1, +1, −1, +1, −1, −1, −1, −1, −1, −1, +1, −1, −1, −1, −1, +1, +1, +1, +1, −1, +1, −1, −1, +1, −1, −1, +1, −1, +1, −1, +1, −1, +1, −1, −1, −1, +1, −1, +1, −1, +1, +1, +1, −1, −1, +1, −1, −1, −1, +1, −1, −1, −1, −1, −1, −1, −1, +1, −1, +1, +1, −1, +1, +1, 1, +1, −1, −1, −1, +1, +1, −1, −1, 
+1, +1, −1, −1, +1, +1, +1, +1, +1, −1, +1, −1, −1, −1, −1, +1, +1, −1, −1, −1, −1, −1, +1, −1, −1, +1, +1, +1, −1, +1, +1, +1, −1, +1, −1, +1, −1, −1, −1, −1, −1, +1, +1, +1, −1, −1, −1, −1, +1, −1, −1, +1, +1, +1, −1, +1, +1, −1, −1, −1, −1, +1, −1, +1} In case of 80+80 MHz transmission using the 4×HE-LTF, a lower 80 MHz frequency segment shall use the 80 MHz 4×HE-LTF sequence of HELTF-−500,500-=LTF80MHz_lower_4x, and an upper 80 MHz frequency segment shall use the 80 MHz 4×HE-LTF sequence of HELTF-−500,500-=LTF80MHz_upper_4x. FIG.20is a diagram illustrating an embodiment of an 80 MHz OFDMA tone plan. Referring toFIG.20, an 80 MHz OFDMA tone plan may be configured by duplicating a 40 MHz OFDMA tone plan and shifting each +/−20 MHz. For example, 160 MHz, 240 MHz, and 320 MHz tone plans may be configured by duplicating the 80 MHz tone plan. The 80 MHz OFDMA tone plan may be configured as follows: {−256+[−244:−3 3:244], 256+[−244:−3 3:244]}=[−500:−259, −253:−12, 12:253, 259:500] Relative to 11ax, the new tone plan substantially has relatively shifted the ‘−253:−12’ and ‘12:253’ parts and small RUs. The 484RU can be similarly modified to have 5 empty tones in the middle. The 80 MHz OFDMA is configured by duplicating a 40 MHz, and 484 tone RUs in Table 8 are shifted by 256 tones to the right and left respectively. TABLE 8RU typeRU index and subcarrier range26-tone RURU 1RU 2RU 3RU 4RU 5[−243: −218][−217: −192][−189: −164][−163: −138][−136: −111]RU 6RU 7RU 8RU 9[−109: −84][−83: −58][−55: −30][−29: −4]RU 10RU 11RU 12RU 13RU 14[4: 29][30: 55][58: 33][84: 109][111: 136]RU 15RU 16RU 17RU 18[138: 163][164: 189][192: 217][218: 243]52-tone RURU 1RU 2RU 3RU4[−243: −192][−189: −138][−109: −58][−55: −4]RU 5RU 6RU 7RU 8[4: 55][58: 109][138: 189][192: 243]106-tone RURU 1RU 2RU 3RU 4[−243: −138][−109: −4][4: 109][138: 243]242-tone RURU 1RU 2[−244: −3][3: 244]484-tone RURU 1[−244: −3, 3: 244] When the tone plan is designed according to the proposed new tone plan, the location of the pilot subcarrier may be different. For example, in the case of 80 MHz configuration, four 242 tones can be included, and the second and third 242 tones are shifted by 5 tones toward DC tone(s). If the pilot subcarrier is also shifted by 5 tones like this, it can be located in an odd tone, which can cause a problem. Therefore, the present disclosure proposes a method for changing the position of the pilot. First, the existing tone plan for 80 MHz is shown in Table 9. TABLE 9Data and pilot subcarrier indices for RUs in a 80 MHz HE PPDUand in a non-OFDMA 80 MHz HE PPDU26-toneRU1(−499:−474), RU2(−473:−448), RU3(−445:−420),RURU4(−419:−394), RU5(−392:−367), RU6(−365:−340),RU7(−339:−314), RU8(−311:−286), RU9(−285:−260),RU10(−257:−232), RU11(−231:−206), RU12(−203:−178),RU13(−177:−152), RU14(−150:−125), RU15(−123:−98),RU16(−97:−72), RU17(−69:−44), RU18(−43:−18),RU19(−16:−4, 4:16), RU20(18:43), . . . , RU37(474:499)52-toneRU1(−499:−448), RU2(−445:−394), RU3(−365:−314),RURU4(−311:−260), RU5(−257:−206), RU6(−203:−152),RU7(−123:−72), RU8(−69:−18), RU9(18:69), . . . ,RU16(448:499)106-toneRU1(−499:−394), RU2(−365:−260), RU3(−257:−152),RURU4(−123:−18), RU5(18:123), . . . , RU8(394:499)242-toneRU1(−500:−259), RU2(−258:−17), RU3(17:258),RURU4(259:500)484-toneRU1(−500:−17), RU2(17:500)RU996-toneRU1(−500:−3, 3:500)RU An example of a new tone plan in 80 MHz is shown in Table 10. 
TABLE 10Data and pilot subcarrier indices for RUs in a 80 MHz HE PPDUand in a non-OFDMA 80 MHz HE PPDU26-toneRU1(−499:−474), RU2(−473:−448), RU3(−445:−420),RURU4(−419:−394), RU5(−392:−367), RU6(−365:−340),RU7(−339:−314), RU8(−311:−286), RU9(−285:−260),RU10(−252:−227), RU11(−226:−201), RU12(−198,−173),RU13(−172:−147), RU14(−145:−120), RU15(−118:−93),RU16(−92:−67), RU17(−64:−39), RU18(−38:−13),RU19(13:38), . . . , RU36(474:499)52-toneRU1(−499:−448), RU2(−445:−394), RU3(−365:−314),RURU4(−311:−260), RU5(−252:−201), RU6(−198:−147),RU7(−118:−67), RU8(−64:−13), RU9(13:64), . . . ,RU16(448:499)106-toneRU1(−499:−394), RU2(−365:−260), RU3(−252:−147),RURU4(−118:−13), RU5(13,118), . . . , RU8(394:499)242-toneRU1(−500:−259), RU2(−253:−12), RU3(12:253),RURU4(259:500)484-toneRU1(−500:−259 −253:−12), RU2(12:253 259:500)RU996-toneRU1(−500:−3, 3:500)RU According to the new tone plan, the position of the pilot subcarrier should be changed, but if the existing method is maintained, it will be mapped to an odd tone. In this case, when the STF/LTF is mapped only to even tones, it may cause a problem if the pilot is mapped to an odd tone. TABLE 11pilot subcarrier indices for RUs in a 80 MHz HE PPDUand in a non-OFDMA 80 MHZ HE PPDU26-tone{−494, −480}, {−468, −454}, {−440, −426}, {−414, −400},RU{−386, −372}, {−360, −346}, {−334, −320}, {−306, −292},{−280, −266}, {−246, −232}, {−220, −206}, {−192, −178},{−166, −152}, {−138, −124}, {−112, −98}, {−86, −72},{−58, −44}, {−32, −18}, {18, 32}, {44, 58}, {72, 86},{98, 112}, {124, 138}, {152, 166}, {178, 192}, {206, 220},{232, 246}, {266, 280}, {292, 306}, {320, 334}, {346, 360},{372, 386}, {400, 414}, {426, 440}, {454, 468}, {480, 494}52-tone{−494, −480, −468, −454}, {−440, −426, −414, −400},RU{−360, −346, −334, −320}, {−306, −292, −280, −266},{−246, −232, −220, −206}, {−192, − 178, −166, −152},{−112, −98, −86, −72}, {−58, −44, −32, −18},{18, 32, 44, 58}, {72, 86, 98, 112}, {152, 166, 178, 192},{206, 220, 232, 246}, {266, 280, 292, 306},{320, 334, 346, 360}, {400, 414, 426, 440}, {454, 468, 480,494}106-tone{−494, −468, −426, −400}, {−360, −334, −292, −266},RU{−246, −220, −178, −152}, {−112, −86, −44, −18}, {18, 44,86, 112}, {152, 178, 220, 246}, {266, 292, 334, 360}, {400,426, 468, 494}242-tone{−494, −468, −426, −400, −360, −334, −292, −266, −246},RU{−220, −178, −152, −112, −86, −44, −18}, {18, 44, 86, 112,152, 178, 220, 246}, {266, 292, 334, 360, 400, 426, 468, 494}484-tone{−494, −468, −426, −400, −360, −334, −292, −266, −246,RU−220, −178, −152, −112, −86, −44, −18}, {18, 44, 86, 112,152, 178, 220, 246, 266, 292, 334, 360, 400, 426, 468, 494}996-tone{−468, −400, −334, −266, −220, −152, −86, −18, 18, 86,RU152, 220, 266, 334, 400, 468} Here, the pilots for the 14thRU and 23rd26RU may use {440, −126} and {126, 140} instead of {438, −124} and {124, 138}. This is to align the pilot tone with the positions of the 6thto 7thtones or the 20thto 21sttones among 1stto 26thtones within a 26-tone RU. Also, in the case of a new tone plan, since no change has made for the 996 RU, when the 996 RU or a RU that is a multiple of the 996 RU is used, the position of the existing pilot may be maintained as it is. If the above pilot tones are referred to as [80_Pilot_idx], in the case of 160/240/320 MHz, the pilot tones can be expressed as follows. 
For 160 MHz: [80_Pilot_idx]-512, [80_Pilot_idx]+512 For 240 MHz: [80_Pilot_idx]-1024, [80_Pilot_idx], [80_Pilot_jdx]+1024 For 320 MHz: [80_Pilot_idx]-1536, [80_Pilot_idx]-512, [80_Pilot_idx]+512, [80_Pilot_idx]+1536 New aggregated RUs adapted in 11be (hereinafter referred to as MRUs) or MRUs that may be added are as follows. For 80 MHz: 26+52 MRU, 26+106 MRU, 484+242 MRU For 160 MHz: 26+52 MRU, 26+106 MRU, 484+996 MRU, 242+484+996 MRU For 240 MHz: 26+52 MRU, 26+106 MRU, 2*996 MRU, 2*996+484 MRU, 996+484 MRU For 320 MHz: 26+52 MRU, 26+106 MRU, 3*996 MRU, 3*996+484 MRU, 484+996 MRU For these MRUs, the pilot tone for the new tone plan can be defined in the following two methods. Method 1: In case of ‘X+Y’ MRU, pilot tones for each of X RU and Y RU are used. For example, in the case of 26+52 tone, pilot indexes for the 26-tone RU and the pilot indexes for the 52-tone RU defined in the above table may be applied, respectively. In the case of 242+484+996 MRU, pilot indexes for the 242-tone RU, the 484-tone RU, and the 996-tone RU can be used, respectively. Even when a plurality of 996 RUs are included, the pilot indexes for each 996-tone RU can be used. Method 2: In the case of ‘X+Y’ MRU, pilot tones for the smallest RU among RUs greater than the ‘X+Y’ value may be used. That is, in the case of 26+52 MRU, pilot indexes for a 106-tone RU are used, and pilot index(es) belonging to a 26-tone that is not included among the 106-tone RU is not included. In case of 26+106 MRU, pilot indexes of a 242-tone RU are used and pilot index(es) belonging to tone indexes not included in the 242-tone RU is not included. In the case of 484+242 MRU, pilot indexes of the 996-tone RU are used, but pilot index(es) belonging to tone indexes that are not included in the 996-tone RU is not included. However, if it is larger than the 996-tone RU, pilot indexes belonging to each 996-tone RU may be used. For example, in the case of 3*996+484 MRU in the case of 320 MHz, pilot indexes belonging to 320 MHz are used and the pilot indexes not included in a corresponding 996-tone RU are not used. FIG.21is a diagram illustrating an embodiment of a tone plan. Referring toFIG.21, with respect to the definition (hatched portion) of 26+52 MRUs for 80 MHz, the pilot tones of Methods 1 and 2 are illustrated in Table 12. TABLE 1226 + 52−360, −346, −334, −320, −306, −292, −220, −206,MRU−192, −178, −166, −152, 152, 166, 178, 192, 206,(Method 1)220, 292, 306, 320, 334, 346, 36026 + 52−360, −334, −292, −220, −178, −152, 152, 178,MRU220, 292, 334, 360(Method 2) If the example pilot tones are referred to as [80_Pilot_idx], in the case of 160/240/320 MHz, the pilot tones can be expressed as follows. For 160 MHz: [80_Pilot_idx]-512, [80_Pilot_idx]+512 For 240 MHz: [80_Pilot_idx]-1024, [80_Pilot_idx], [80_Pilot_idx]+1024 For 320 MHz: [80_Pilot_idx]-1536, [80_Pilot_idx]-512, [80_Pilot_idx]+512, [80_Pilot_idx]+1536 An example of a new tone plan for 80 MHz is shown in Table 13. TABLE 13Data and pilot subcarrier indices for RUs in a 80 MHz HE PPDU andin a non-OFDMA 80 MHZ HE PPDU26-toneRU1(−499:−474), RU2(−473:−448), RU3(−445:−420),RURU4(−419:−394), RU5(−392:−367), RU6(−365:−340),RU7(−339:−314), RU8(−311:−286), RU9(−285:−260),RU10(−252:−227), RU11(−226:−201), RU12(−198,−173),RU13(−172:−147), RU14(−145:−120), RU15(−118:−93),RU16(−92:−67), RU17(−64:−39), RU18(−38:−13),RU19(13:38), . . . , RU36(474:499)52-toneRU1(−499:−448), RU2(−445:−394), RU3(−365:−314),RURU4(−311:−260), RU5(−252:−201), RU6(−198:−147),RU7(−118:−67), RU8(−64:−13), RU9(13:64), . . . 
,RU16(448:499)106-toneRU1(−499:−394), RU2(−365:−260), RU3(−252:−147),RURU4(−118:−13), RU5(13,118), . . . , RU8(394:499)242-toneRU1(−500:−259), RU2(−253:−12), RU3(12:253),RURU4(259:500)484-toneRU1(−500:−259 −253:−12), RU2(12:253 259:500)RU996-toneRU1(−500:−259 −253:−12 12:253 259:500)RU According to the new tone plan, the position of the pilot subcarrier should be changed, but if the existing method is maintained, it will be mapped to an odd tone. In this case, when the STF/LTF is mapped only to an even tone, it may cause a problem if the pilot is mapped to an odd tone. TABLE 14pilot subcarrier indices for RUs in a 80 MHz HE PPDUand in a non-OFDMA 80 MHz HE PPDU26-tone−494, −480, −468, −454, −440, −426, −414, −400, −386, −372,RU−360, −346, −334, −320, −306, −292, −280, −266, −246, −232,−220, −206, −192, −178, − 166, −152, −138, −124, −112, −98,−86, −72, −58, −44, −32, −18, 18, 32, 44, 58, 72, 86, 98, 112,124, 138, 152, 166, 178, 192, 206, 220, 232, 246, 266, 280,292, 306, 320, 334, 346, 360, 372, 386, 400, 414, 426, 440,454, 468, 480, 49452-tone−494, −480, −468, −454, −440, −426, −414, −400, −386, −372,RU−360, −346, −334, −320, −306, −292, −280, −266, −246, −232,−220, −206, −192, −178, −166, − 152, −112, −98, −86, −72,−58, −44, −32, −18, 18, 32, 44, 58, 72, 86, 98, 112, 152, 166,178, 192, 206, 220, 232, 246, 266, 280, 292, 306, 320, 334,346, 360, 372, 386, 400, 414, 426, 440, 454, 468, 480, 494106-tone−494, −468, −426, −400, −360, −334, −292, −266, −246, −220,RU−178, −152, −112, −86, −44, −18, 18, 44, 86, 112, 152, 178,220, 246, 266, 292, 334, 360, 400, 426, 468, 494242-tone−494, −468, −426, −400, −360, −334, −292, −266, −246, −220,RU−178, −152, −112, −86, −44, −18, 18, 44, 86, 112, 152, 178,220, 246, 266, 292, 334, 360, 400, 426, 468, 494484-tone−494, −468, −426, −400, −360, −334, −292, −266, −246, −220,RU−178, −152, −112, −86, −44, −18, 18, 44, 86, 112, 152, 178,220, 246, 266, 292, 334, 360, 400, 426, 468, 494996-tone−468, −400, −334, −266, −220, −152, −86, −18, 18, 86, 152,RU220, 266, 334, 400, 468 If the above pilot tones are referred to as [80_Pilot_idx], in the case of 160/240/320 MHz, the pilot tones can be expressed as follows. For 160 MHz: [80_Pilot_idx]-512, [80_Pilot_idx]+512 For 240 MHz: [80_Pilot_idx]-768, [80_Pilot_idx], [80_Pilot_idx]+768 For 320 MHz: [80_Pilot_idx]-1024, [80_Pilot_idx]-512, [80_Pilot_idx]+512, [80_Pilot_idx]+1024 Alternatively, for example, if the above pilot tones are referred to as [80_Pilot_idx], in the case of 160/240/320 MHz, the pilot tones may be expressed as follows. For 160 MHz: [80_Pilot_idx]-512, [80_Pilot_idx]+512 For 240 MHz: [80_Pilot_idx]-768, [80_Pilot_idx], [80_Pilot_idx]+768 For 320 MHz: [80_Pilot_idx]-1024, [80_Pilot_idx]-512, [80_Pilot_idx]+512, [80_Pilot_idx]+1024 Table 15 shows an embodiment of the location of the pilot subcarrier. 
TABLE 15pilot subcarrier indices for RUs in a 80MHz HE PPDUand in a non-OFDMA 80MHz HE PPDU26-tone−494, −480, −468, −454, −440, −426, −414, −400, −386, −372,RU−360, −346, −334, −320, −306, −292, −280, −266, −246, −232,−220, −206, −192, −178, −166, −152, −138, −124, −112, −98,−86, −72, −58, −44, −32, −18, 18, 32, 44, 58, 72, 86, 98, 112,124, 138, 152, 166, 178, 192, 206, 220, 232, 246, 266, 280,292, 306, 320, 334, 346, 360, 372, 386, 400, 414, 426, 440,454, 468, 480, 49452-tone−494, −480, −468, −454, −440, −426, −414, −400, −360, −346,RU−334, −320, −306, −292, −280, −266, −246, −232, −220, −206,−192, −178, −166, −152, −112, −98, −86, −72, −58, −44, −32,−18, 18, 32, 44, 58, 72, 86, 98, 112, 152, 166, 178, 192, 206,220, 232, 246, 266, 280, 292, 306, 320, 334, 346, 360, 400,414, 426, 440, 454, 468, 480, 494106-tone−494, −468, −426, −400, −360, −334, −292, −266, −246, −220,RU−178, −152, −112, −86, −44, −18, 18, 44, 86, 112, 152, 178,220, 246, 266, 292, 334, 360, 400, 426, 468, 494242-tone−494, −468, −426, −400, −360, −334, −292, −266, −246, −220,RU−178, −152, −112, −86, −44, −18, 18, 44, 86, 112, 152, 178,220, 246, 266, 292, 334, 360, 400, 426, 468, 494484-tone−494, −468, −426, −400, −360, −334, −292, −266, −246, −220,RU−178, −152, −112, −86, −44, −18, 18, 44, 86, 112, 152, 178,220, 246, 266, 292, 334, 360, 400, 426, 468, 494996-tone−468, −400, −334, −266, −220, −152, −86, −18, 18, 86, 152,RU220, 266, 334, 400, 468 If the above pilot tones are referred to as [80_Pilot_idx], in the case of 160/240/320 MHz, the pilot tones can be expressed as follows. For 160 MHz: [80_Pilot_idx]-512, [80_Pilot_idx]+512 For 240 MHz: [80_Pilot_idx]-768, [80_Pilot_idx], [80_Pilot_idx]+768 For 320 MHz: [80_Pilot_idx]-1024, [80_Pilot_idx]-512, [80_Pilot_idx]+512, [80_Pilot_idx]+1024 FIG.22is a diagram illustrating an embodiment of a method of operating a transmitting STA. Referring toFIG.22, the transmitting STA may generate a PPDU (S2210). For example, the transmitting STA may generate a first PPDU. For example, the first PPDU may include a first data field transmitted through a 996 tone resource unit (RU). The first data field may include first pilot subcarriers for the 996 tone RU. The indices of the first pilot subcarriers may be {−468, −400, −334, −266, −220, −152, −86, −18, 18, 86, 152, 220, 266, 334, 400, 468}. The transmitting STA may transmit a PPDU (S2220). For example, the transmitting STA may transmit the first PPDU through an 80 MHz band. For example, the transmitting STA may generate a second PPDU and transmit the second PPDU through an 80 MHz band. The second PPDU may include a second data field transmitted through the 26-tone RU. The second data field may include second pilot subcarriers for the 26-tone RU. The indices of the second pilot subcarriers may be {−494, −480}, {−468, −454}, {−440, −426}, {−414, −400}, {−386, −372}, {−360, −346}, {−334, −320}, {−306, −292}, {−280, −266}, {−246, −232}, {−220, −206}, {−192, −178}, {−166, −152}, {−140, −126}, {−112, −98}, {−86, −72}, {−58, −44}, {−32, −18}, {18, 32}, {44, 58}, {72, 86}, {98, 112}, {126, 140}, {152, 166}, {178, 192}, {206, 220}, {232, 246}, {266, 280}, {292, 306}, {320, 334}, {346, 360}, {372, 386}, {400, 414}, {426, 440}, {454, 468}, or {480, 494}. For example, the transmitting STA may generate a third PPDU and transmit the third PPDU through an 80 MHz band. The third PPDU may include a third data field transmitted through a 52-tone RU. The third data field may include third pilot subcarriers for the 52-tone RU. 
The indices of the third pilot subcarriers may be {−494, −480, −468, −454}, {−440, −426, −414, −400}, {−360, −346, −334, −320}, {−306, −292, −280, −266}, {−246, −232, −220, −206}, {−192, −178, −166, −152}, {−112, −98, −86, −72}, {−58, −44, −32, −18}, {18, 32, 44, 58}, {72, 86, 98, 112}, {152, 166, 178, 192}, {206, 220, 232, 246}, {266, 280, 292, 306}, {320, 334, 346, 360}, {400, 414, 426, 440}, or {454, 468, 480, 494}. For example, the transmitting STA may generate a fourth PPDU and transmit the fourth PPDU through an 80 MHz band. The fourth PPDU may include a fourth data field transmitted through the 106-tone RU. The fourth data field may include fourth pilot subcarriers for the 106-tone RU. The indices of the fourth pilot subcarriers may be {−494, −468, −426, −400}, {−360, −334, −292, −266}, {−246, −220, −178, −152}, {−112, −86, −44, −18}, {18, 44, 86, 112}, {152, 178, 220, 246}, {266, 292, 334, 360}, or {400, 426, 468, 494}. For example, the transmitting STA may generate a fifth PPDU and transmit the fifth PPDU through an 80 MHz band. The fifth PPDU may include a fifth data field transmitted through the 242-tone RU. The fifth data field may include fifth pilot subcarriers for the 242-tone RU. The indices of the fifth pilot subcarriers may be {−494, −468, −426, −400, −360, −334, −292, −266}, {−246, −220, −178, −152, −112, −86, −44, −18}, {18, 44, 86, 112, 152, 178, 220, 246}, or {266, 292, 334, 360, 400, 426, 468, 494}. For example, the transmitting STA may generate a sixth PPDU and transmit the sixth PPDU through an 80 MHz band. The sixth PPDU may include a sixth data field transmitted through a 484 tone RU. The sixth data field may include sixth pilot subcarriers for the 484 tone RU. The indices of the sixth pilot subcarriers may be {−494, −468, −426, −400, −360, −334, −292, −266, −246, −220, −178, −152, −112, −86, −44, −18}, or {18, 44, 86, 112, 152, 178, 220, 246, 266, 292, 334, 360, 400, 426, 468, 494}.

FIG.23is a diagram illustrating an embodiment of a method of operating a receiving STA. Referring toFIG.23, a receiving STA may receive a PPDU (S2310). For example, the receiving STA may receive the first PPDU through an 80 MHz band. For example, the first PPDU may include a first data field transmitted through a 996 tone resource unit (RU). The first data field may include first pilot subcarriers for the 996 tone RU. The indices of the first pilot subcarriers may be {−468, −400, −334, −266, −220, −152, −86, −18, 18, 86, 152, 220, 266, 334, 400, 468}. The receiving STA may decode the PPDU (S2320). For example, the receiving STA may decode the first PPDU. For example, the receiving STA may receive a second PPDU through an 80 MHz band and decode the second PPDU. The second PPDU may include a second data field transmitted through the 26-tone RU. The second data field may include second pilot subcarriers for the 26-tone RU. The indices of the second pilot subcarriers may be {−494, −480}, {−468, −454}, {−440, −426}, {−414, −400}, {−386, −372}, {−360, −346}, {−334, −320}, {−306, −292}, {−280, −266}, {−246, −232}, {−220, −206}, {−192, −178}, {−166, −152}, {−140, −126}, {−112, −98}, {−86, −72}, {−58, −44}, {−32, −18}, {18, 32}, {44, 58}, {72, 86}, {98, 112}, {126, 140}, {152, 166}, {178, 192}, {206, 220}, {232, 246}, {266, 280}, {292, 306}, {320, 334}, {346, 360}, {372, 386}, {400, 414}, {426, 440}, {454, 468}, or {480, 494}. For example, the receiving STA may receive a third PPDU through an 80 MHz band and decode the third PPDU.
The third PPDU may include a third data field transmitted through a 52-tone RU. The third data field may include third pilot subcarriers for the 52-tone RU. The indices of the third pilot subcarriers may be {−494, −480, −468, −454}, {−440, −426, −414, −400}, {−360, −346, −334, −320}, {−306, −292, −280, −266}, {−246, −232, −220, −206}, {−192, −178, −166, −152}, {−112, −98, −86, −72}, {−58, −44, −32, −18}, {18, 32, 44, 58}, {72, 86, 98, 112}, {152, 166, 178, 192}, {206, 220, 232, 246}, {266, 280, 292, 306}, {320, 334, 346, 360}, {400, 414, 426, 440}, or {454, 468, 480, 494}.
For example, the receiving STA may receive a fourth PPDU through an 80 MHz band and decode the fourth PPDU. The fourth PPDU may include a fourth data field transmitted through the 106-tone RU. The fourth data field may include fourth pilot subcarriers for the 106-tone RU. The indices of the fourth pilot subcarriers may be {−494, −468, −426, −400}, {−360, −334, −292, −266}, {−246, −220, −178, −152}, {−112, −86, −44, −18}, {18, 44, 86, 112}, {152, 178, 220, 246}, {266, 292, 334, 360}, or {400, 426, 468, 494}.
For example, the receiving STA may receive a fifth PPDU through an 80 MHz band and decode the fifth PPDU. The fifth PPDU may include a fifth data field transmitted through the 242-tone RU. The fifth data field may include fifth pilot subcarriers for the 242-tone RU. The indices of the fifth pilot subcarriers may be {−494, −468, −426, −400, −360, −334, −292, −266}, {−246, −220, −178, −152, −112, −86, −44, −18}, {18, 44, 86, 112, 152, 178, 220, 246}, or {266, 292, 334, 360, 400, 426, 468, 494}.
For example, the receiving STA may receive a sixth PPDU through an 80 MHz band and decode the sixth PPDU. The sixth PPDU may include a sixth data field transmitted through a 484-tone RU. The sixth data field may include sixth pilot subcarriers for the 484-tone RU. The indices of the sixth pilot subcarriers may be {−494, −468, −426, −400, −360, −334, −292, −266, −246, −220, −178, −152, −112, −86, −44, −18}, or {18, 44, 86, 112, 152, 178, 220, 246, 266, 292, 334, 360, 400, 426, 468, 494}.
Some of the detailed steps shown in the example of FIGS. 22 and 23 may not be essential steps and may be omitted. In addition to the steps shown in FIGS. 22 and 23, other steps may be added, and the order of the steps may vary. Some of the above steps may have their own separate technical meaning.
The technical features of the present specification described above may be applied to various devices and methods. For example, the above-described technical features of the present specification may be performed/supported through the apparatus of FIGS. 1 and/or 19. For example, the technical features of the present specification described above may be applied only to a part of FIGS. 1 and/or 19. For example, the technical features of the present specification described above are implemented based on the processing chip(s) 114 and 124 of FIG. 1, or implemented based on the processor(s) 111 and 121 and the memories 112 and 122 of FIG. 1, or may be implemented based on the processor 610 and the memory 620 of FIG. 19.
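As a purely illustrative aid, the pilot index sets and the [80_Pilot_idx] offset expressions described above with reference to Table 15 and FIGS. 22 and 23 can be sketched in Python; the constant and function names (PILOT_80MHZ_996TONE, BANDWIDTH_OFFSETS, expand_pilots) are assumptions introduced here for demonstration only and are not part of the specification.

# Illustrative sketch only: expanding the 80 MHz pilot subcarrier indices quoted
# above to 160/240/320 MHz using the [80_Pilot_idx] +/- offset expressions.
PILOT_80MHZ_996TONE = [-468, -400, -334, -266, -220, -152, -86, -18,
                       18, 86, 152, 220, 266, 334, 400, 468]

BANDWIDTH_OFFSETS = {
    160: (-512, 512),
    240: (-768, 0, 768),
    320: (-1024, -512, 512, 1024),
}

def expand_pilots(pilot_idx_80mhz, bandwidth_mhz):
    """Shift the 80 MHz pilot set by each per-segment offset for the given bandwidth."""
    if bandwidth_mhz == 80:
        return sorted(pilot_idx_80mhz)
    offsets = BANDWIDTH_OFFSETS[bandwidth_mhz]
    return sorted(p + off for off in offsets for p in pilot_idx_80mhz)

for bw in (80, 160, 240, 320):
    print(bw, "MHz:", len(expand_pilots(PILOT_80MHZ_996TONE, bw)), "pilot subcarriers")

The same helper can be applied, under the same assumptions, to the 26-, 52-, 106-, 242-, and 484-tone index sets of Table 15.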
For example, an apparatus of the present specification may include a memory; and a processor operatively coupled to the memory, wherein the processor is adapted to: generate a first physical protocol data unit (PPDU); and transmit the first PPDU through an 80 MHz band, wherein the first PPDU includes a first data field transmitted through a 996-tone resource unit (RU), wherein the first data field includes first pilot subcarriers for the 996-tone RU, and indices of the first pilot subcarriers are as follows: {−468, −400, −334, −266, −220, −152, −86, −18, 18, 86, 152, 220, 266, 334, 400, 468}.
The technical features of the present specification may be implemented based on a computer readable medium (CRM). For example, the CRM proposed by the present specification may be at least one computer readable medium (CRM) storing instructions that, based on being executed by at least one processor of a transmitting station (STA) in a wireless local area network (WLAN) system, perform operations comprising: generating a first physical protocol data unit (PPDU); and transmitting the first PPDU through an 80 MHz band, wherein the first PPDU includes a first data field transmitted through a 996-tone resource unit (RU), wherein the first data field includes first pilot subcarriers for the 996-tone RU, and indices of the first pilot subcarriers are as follows: {−468, −400, −334, −266, −220, −152, −86, −18, 18, 86, 152, 220, 266, 334, 400, 468}.
The instructions that are stored in the CRM of the present specification may be executed by at least one processor. At least one processor being related to the CRM of the present specification may be the processor(s) (111, 121) or processing chip(s) (114, 124) of FIG. 1, or the processor (610) of FIG. 19. Meanwhile, the CRM of the present specification may be the memory(s) (112, 122) of FIG. 1, or the memory (620) of FIG. 19, or a separate external memory/storage medium/disc, and so on.
The foregoing technical features of this specification are applicable to various applications or business models. For example, the foregoing technical features may be applied for wireless communication of a device supporting artificial intelligence (AI).
Artificial intelligence refers to a field of study on artificial intelligence or on methodologies for creating artificial intelligence, and machine learning refers to a field of study on methodologies for defining and solving various issues in the area of artificial intelligence. Machine learning is also defined as an algorithm for improving the performance of an operation through steady experience of the operation.
An artificial neural network (ANN) is a model used in machine learning and may refer to an overall problem-solving model that includes artificial neurons (nodes) forming a network by combining synapses. The artificial neural network may be defined by a pattern of connection between neurons of different layers, a learning process of updating a model parameter, and an activation function generating an output value. The artificial neural network may include an input layer, an output layer, and optionally one or more hidden layers. Each layer includes one or more neurons, and the artificial neural network may include synapses that connect neurons. In the artificial neural network, each neuron may output a function value of an activation function of input signals input through a synapse, weights, and deviations.
A model parameter refers to a parameter determined through learning and includes a weight of synapse connection and a deviation of a neuron. A hyper-parameter refers to a parameter to be set before learning in a machine learning algorithm and includes a learning rate, the number of iterations, a mini-batch size, and an initialization function.
Learning an artificial neural network may be intended to determine a model parameter for minimizing a loss function. The loss function may be used as an index for determining an optimal model parameter in a process of learning the artificial neural network.
Machine learning may be classified into supervised learning, unsupervised learning, and reinforcement learning. Supervised learning refers to a method of training an artificial neural network with a label given for training data, wherein the label may indicate a correct answer (or result value) that the artificial neural network needs to infer when the training data is input to the artificial neural network. Unsupervised learning may refer to a method of training an artificial neural network without a label given for training data. Reinforcement learning may refer to a training method for training an agent defined in an environment to choose an action or a sequence of actions to maximize a cumulative reward in each state.
Machine learning implemented with a deep neural network (DNN) including a plurality of hidden layers among artificial neural networks is referred to as deep learning, and deep learning is part of machine learning. Hereinafter, machine learning is construed as including deep learning.
The foregoing technical features may be applied to wireless communication of a robot. Robots may refer to machinery that automatically processes or operates a given task with its own ability. In particular, a robot having a function of recognizing an environment and autonomously making a judgment to perform an operation may be referred to as an intelligent robot. Robots may be classified into industrial, medical, household, and military robots, and the like, according to uses or fields. A robot may include an actuator or a driver including a motor to perform various physical operations, such as moving a robot joint. In addition, a movable robot may include a wheel, a brake, a propeller, and the like in a driver to run on the ground or fly in the air through the driver.
The foregoing technical features may be applied to a device supporting extended reality. Extended reality collectively refers to virtual reality (VR), augmented reality (AR), and mixed reality (MR). VR technology is a computer graphic technology of providing a real-world object and background only as a CG image, AR technology is a computer graphic technology of providing a virtual CG image on a real object image, and MR technology is a computer graphic technology of providing virtual objects mixed and combined with the real world.
MR technology is similar to AR technology in that a real object and a virtual object are displayed together. However, a virtual object is used as a supplement to a real object in AR technology, whereas a virtual object and a real object are used with equal status in MR technology.
XR technology may be applied to a head-mount display (HMD), a head-up display (HUD), a mobile phone, a tablet PC, a laptop computer, a desktop computer, a TV, digital signage, and the like. A device to which XR technology is applied may be referred to as an XR device.
The claims recited in the present specification may be combined in a variety of ways. For example, the technical features of the method claims of the present specification may be combined to be implemented as a device, and the technical features of the device claims of the present specification may be combined to be implemented by a method. In addition, the technical characteristics of the method claim of the present specification and the technical characteristics of the device claim may be combined to be implemented as a device, and the technical characteristics of the method claim of the present specification and the technical characteristics of the device claim may be combined to be implemented by a method.
144,018
11863486
DESCRIPTION OF EMBODIMENTS The following describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. It should be noted that the technical solutions and features in the embodiments of the present invention may be mutually combined in the case of no conflict. In the embodiments of the present invention, “one” means an individual, but this does not indicate that “one” can only be the individual and cannot be applied to another individual. For example, in the embodiments of the present invention, “one terminal device” is described for a specific terminal device, but this does not mean that “one terminal device” can be applied only to a particular terminal device. The terms “system” and “network” may be interchangeably used in this application. In this application, “one embodiment” (or “one implementation”) or “an embodiment” (or “an implementation”) means that a particular characteristic, structure, feature, and the like that are described in combination with an embodiment are included in at least one embodiment. Therefore, “in one embodiment” or “in an embodiment” that appears throughout this specification does not represent a same embodiment. Further, in the embodiments of the present invention, the terms “and/or” and “at least one” used in cases of “A and/or B” and “at least one of A and B” include any one of three solutions: a solution in which A is included but B is excluded, a solution in which B is included but A is excluded, and a solution in which both options A and B are included. For another example, such phrases in cases of “A, B, and/or C” and “at least one of A, B, and/or C” include any one of seven solutions: a solution in which A is included but B and C are excluded, a solution in which B is included but A and C are excluded, a solution in which C is included but A and B are excluded, a solution in which A and B are included but C is excluded, a solution in which B and C are included but A is excluded, a solution in which A and C are included but B is excluded, and a solution in which all the three options A, B, and C are included. As easily understood by a person of ordinary skill in the art and a related art, all other similar descriptions can be understood in the foregoing manner in the embodiments of the present invention. FIG.1is a schematic communication diagram of a wireless device and a wireless communication system. The wireless communication system may include systems using various radio access technologies (RAT), for example, a Code Division Multiple Access (CDMA) system, a Time Division Multiple Access (TDMA) system, a Frequency Division Multiple Access (FDMA) system, an orthogonal frequency division multiple access (OFDMA) system, and a single carrier frequency division multiple access (SC-FDMA) system. For example, the wireless communication system may be a Long Term Evolution (LTE) system, a CDMA system, a Wideband Code Division Multiple Access (WCDMA) system, a Global System for Mobile Communications (GSM) system, a wireless local area network (WLAN) system, a New Radio (NR) system, various evolved or convergent systems, and a system oriented to a future communication technology. 
A system architecture and a service scenario that are described in the embodiments of the present invention are intended to more clearly describe the technical solutions in the embodiments of the present invention, and constitute no limitation on the technical solutions provided in the embodiments of the present invention. A person of ordinary skill in the art may learn that as the network architecture evolves and a new service scenario emerges, the technical solutions provided in the embodiments of the present invention are also applicable to a similar technical problem. For brevity,FIG.1shows communication between one network device (for example, an access network device)102and two wireless devices (for example, terminal devices)104. Generally, the wireless communication system may include any quantity of network devices and terminal devices. The wireless communication system may further include one or more core network devices, a device configured to carry a virtualized network function, or the like. The access network device102may provide services for the wireless devices by using one or more carriers. In this application, both the access network device and the terminal device are referred to as a wireless apparatus. In this application, the access network device102is an apparatus that is deployed in a radio access network to provide a wireless communication function for the terminal devices. The access network device may include a macro base station (BS), a micro base station (also referred to as a small cell), a relay node, an access point, or the like that is in various forms. A device with a radio access function may have different names in systems using different radio access technologies. For example, the device having the radio access function is referred to as an evolved NodeB (eNB or eNodeB) in an LTE system, and is referred to as a NodeB in a 3rd Generation (3G) system. For ease of description, in this application, the device having the radio access function is referred to as an access network device, and is also referred to as a base station sometimes. The wireless device in the embodiments of the present invention may include various handheld devices, in-vehicle devices, wearable devices, or computing devices that have a wireless communication function, or another processing device connected to a wireless modem. The wireless device may be referred to as a terminal device, or may be referred to as a mobile station (MS), a terminal, user equipment (UE), or the like. The wireless device may include a subscriber unit, a cellular phone, a smartphone, a wireless data card, a personal digital assistant (PDA) computer, a tablet computer, a modem or a modem processor, a handheld device, a laptop computer, a netbook, a cordless phone or a wireless local loop (WLL) station, a Bluetooth device, a machine type communication (MTC) terminal, and the like. For ease of description, these devices are referred to as a terminal device or UE in this application. The wireless device may support one or more wireless technologies for wireless communication, such as 5G, LTE, WCDMA, CDMA, 1X, Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), GSM, and 802.11. The wireless device may also support a carrier aggregation technology. A plurality of wireless devices may perform a same service or different services, for example, a mobile broadband service, an Enhanced Mobile Broadband (eMBB) service, or an ultra-reliable and low latency communication (URLLC) service for a terminal device. 
Further, a possible schematic structural diagram of the access network device102may be shown inFIG.2. The access network device102can perform a method provided in the embodiments of the present invention. The access network device102may include a controller or a processor201(the processor201is used as an example below for description) and a transceiver202. The controller/processor201is also referred to as a modem processor sometimes. The modem processor201may include a baseband processor (BBP) (not shown). The baseband processor processes a digitized received signal, to extract an information or data bit conveyed in the signal. In this way, as required or as expected, the BBP is usually implemented in one or more digital signal processors (DSP) in the modem processor201, or is implemented as separate integrated circuits (IC). The transceiver202may be configured to: support to receive or send information between the access network device102and the terminal devices, and support radio communication between the terminal devices. The processor201may be further configured to perform various functions of communication between the terminal device and other network devices. In an uplink, an uplink signal from the terminal device is received by using an antenna, demodulated by the transceiver202, and further processed by the processor201, to retrieve service data and/or signaling information that are/is sent by the terminal device. In a downlink, service data and/or a signaling message are/is processed by the processor201and modulated by the transceiver202, to generate a downlink signal, and the downlink signal is transmitted to the UE by using the antenna. The access network device102may further include a memory203that may be configured to store program code and/or data of the access network device102. The transceiver202may include an independent receiving circuit and an independent transmitting circuit, or may include one circuit for implementing sending and receiving functions. The access network device102may further include a communication unit204configured to support communication between the access network device102and another network entity. For example, the communication unit204is configured to support communication between the access network device102and a network device in a core network. Optionally, the access network device may further include a bus. The transceiver202, the memory203, and the communication unit204may be connected to the processor201by using the bus. For example, the bus may be a peripheral component interconnect (PCI) bus, or an extended industry standard architecture (EISA) bus. The bus may include an address bus, a data bus, a control bus, and the like. FIG.3is a possible schematic structural diagram of the terminal device in the foregoing wireless communication system. The terminal device can perform the method provided in the embodiments of the present invention. The terminal device may be either of the two terminal devices104. The terminal device includes a transceiver301, an application processor302, a memory303, and a modem processor304. The transceiver301may adjust (for example, perform analog conversion, filtering, amplification, and up-conversion on) output samples, and generate an uplink signal. The uplink signal is transmitted to the base station in the foregoing embodiment by using an antenna. In a downlink, the antenna receives a downlink signal transmitted by the access network device. 
The transceiver301may adjust (for example, perform filtering, amplification, down-conversion, and digitization on) the signal received from the antenna, and provide input samples. The modem processor304is also referred to as a controller or a processor sometimes, and may include a baseband processor (baseband processor, BBP) (not shown). The baseband processor processes a digitized received signal, to extract an information or data bit conveyed in the signal. As required or as expected, the BBP is usually implemented in one or more digital signal processors in the modem processor304, or is implemented as separate integrated circuits (IC). In a design, the modem processor304may include an encoder3041, a modulator3042, a decoder3043, and a demodulator3044. The encoder3041is configured to encode a to-be-sent signal. For example, the encoder3041may be configured to: receive service data and/or a signaling message that are/is to be sent in an uplink, and process (for example, format, encode, or interleave) the service data and the signaling message. The modulator3042is configured to modulate an output signal of the encoder3041. For example, the modulator may perform processing such as symbol mapping and/or modulation on the output signal (data and/or signaling) of the encoder, and provide output samples. The demodulator3044is configured to perform demodulation processing on an input signal. For example, the demodulator3044processes input samples and provides symbol estimation. The decoder3043is configured to decode a demodulated input signal. For example, the decoder3043performs processing such as de-interleaving and/or decoding on the demodulated input signal, and outputs a decoded signal (data and/or signaling). The encoder3041, the modulator3042, the demodulator3044, and the decoder3043may be implemented by the combined modem processor304. These units perform processing based on a radio access technology used in a radio access network. The modem processor304receives, from the application processor302, digitized data that may represent voice, data, or control information, and processes the digitized data for transmission. The modem processor may support one or more of a plurality of wireless communication protocols in a plurality of communication systems, such as LTE, New Radio, Universal Mobile Telecommunications System (UMTS), and High Speed Packet Access (HSPA). Optionally, the modem processor304may further include one or more memories. Optionally, the modem processor304and the application processor302may be integrated into one processor chip. The memory303is configured to store program code (also referred to as a program, an instruction, software, or the like sometimes) and/or data that are/is used to support communication of the terminal device. It should be noted that the memory203or the memory303may include one or more storage units. For example, the storage unit may be an internal storage unit of the processor201, the modem processor304, or the application processor302for storing program code, or may be an external storage unit independent of the processor201, the modem processor304, or the application processor302, or may be an internal storage unit of the processor201, the modem processor304, or the application processor302and an external storage unit independent of the processor201, the modem processor304, or the application processor302. The processor201and the modem processor304may be processors of a same type, or may be processors of different types. 
For example, the processor 201 and the modem processor 304 may be implemented in a central processing unit (CPU), a general processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, another integrated circuit, or any combination thereof. The processor 201 and the modem processor 304 may implement or execute various example logic blocks, modules, and circuits described with reference to the content disclosed in the embodiments of the present invention. Alternatively, the processor may be a combination of components implementing a computing function, for example, a combination including one or more microprocessors, a combination of a DSP and a microprocessor, or a system-on-a-chip (SOC).
A person skilled in the art can understand that various explanatory logical blocks, modules, circuits, and algorithms described with reference to the various aspects disclosed in this application may be implemented as electronic hardware, an instruction that is stored in a memory or another computer readable medium and that is executed by a processor or another processing device, or a combination thereof. For example, the device described in this specification may be used in any circuit, hardware component, IC, or IC chip. The memory disclosed in this application may be any type of memory of any size, and may be configured to store any type of required information. To clearly explain such interchangeability, various explanatory components, blocks, modules, circuits, and steps have been generally described above based on functionality. How to implement such functionality depends on specific applications, design selection, and/or design constraints imposed on an entire system. A person skilled in the art may use different manners to implement the described functionality for each particular application, but it should not be considered that such implementation goes beyond the scope of the present invention.
Currently, an NR system already supports DMRS-based channel estimation for both a physical uplink control channel (PUCCH) and a physical uplink shared channel (PUSCH). In a time-domain symbol for sending a DMRS corresponding to the PUSCH, the DMRS is mapped to equally-spaced frequency-domain subcarriers. For example, a spacing may be a one-subcarrier spacing. In a time-domain symbol for sending a DMRS corresponding to the PUCCH, the DMRS is mapped to consecutive frequency-domain subcarriers. The embodiments of the present invention further consider the cross-correlation between a sending signal obtained by equally-spaced mapping of an existing sequence and a sending signal obtained by continuous mapping of another sequence.
An embodiment of the present invention provides a sequence group, and the sequence group includes a sequence {xn} and a sequence {ym}. In this embodiment, xn represents an element in the sequence {xn} and satisfies xn = u·e^(j·π·sn/4), where u is a non-zero complex number and sn is an element in a sequence {sn}. ym represents an element in the sequence {ym} and satisfies ym = kq(m mod Mprime), where kq(i) = e^(−j·π·q·i·(i+1)/Mprime), i is an integer, 0 ≤ i ≤ Mprime − 1, and Mprime is the largest prime number smaller than M. Further, a length of the sequence {xn} is N, a length of the sequence {ym} is M, n and m are integers, 0 ≤ n ≤ N − 1, and 0 ≤ m ≤ M − 1. Optionally, N=12 and M=36.
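These definitions can be illustrated with a minimal Python sketch. It assumes the reconstructed phase formula xn = u·e^(j·π·sn/4) with u = 1 for simplicity, and it borrows the length-12 {sn} entry paired with q=1 in the first optional implementation listed later; the helper names are introduced here for illustration only and are not part of the specification.

import cmath

def largest_prime_below(m):
    # Mprime in the text: the largest prime number smaller than M.
    def is_prime(k):
        return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))
    return next(k for k in range(m - 1, 1, -1) if is_prime(k))

def short_sequence(s, u=1.0):
    # xn = u * exp(j*pi*sn/4): a phase-rotated sequence built from {sn}.
    return [u * cmath.exp(1j * cmath.pi * sn / 4) for sn in s]

def long_sequence(q, length):
    # ym = kq(m mod Mprime), with kq(i) = exp(-j*pi*q*i*(i+1)/Mprime).
    m_prime = largest_prime_below(length)
    return [cmath.exp(-1j * cmath.pi * q * (m % m_prime) * ((m % m_prime) + 1) / m_prime)
            for m in range(length)]

s_example = [1, 3, -3, 1, 3, 3, 3, 1, -1, 1, -1, 3]   # a length-12 {sn} paired with q = 1 below
x = short_sequence(s_example)                          # length N = 12
y = long_sequence(q=1, length=36)                      # length M = 36, Mprime = 31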
When M=36, ym satisfies ym = kq(m mod 31), where kq(i) = e^(−j·π·q·i·(i+1)/31), i is an integer, and 0 ≤ i ≤ 30.
It should be noted that the sequence group in this embodiment may further include a sequence with another length. For example, the sequence group may further include a sequence with another length that is an integer multiple of 12, such as a sequence with a length of 48. A structure of this sequence may refer to a base sequence generation manner of a reference signal sequence with a corresponding length in an LTE system. Details are not described herein. Therefore, a value of M may be an integer multiple of 12. Further, M may be a positive integer that is greater than or equal to 36 and that is an integer multiple of 12.
It should be noted that the sequence group may include a sequence {xn} with a length N=6 and does not include a sequence {xn} with a length N=12, 18, or 24; or may include a sequence {xn} with a length N=12 and does not include a sequence {xn} with a length N=6, 18, or 24; or may include a sequence {xn} with a length N=18 and does not include a sequence {xn} with a length N=6, 12, or 24; or may include a sequence {xn} with a length N=24 and does not include a sequence {xn} with a length N=6, 18, or 12; or may include a sequence {xn} with a length N=6 and a sequence {xn} with a length N=12 and does not include a sequence {xn} with a length N=18 or 24; or may include a sequence {xn} with a length N=6 and a sequence {xn} with a length N=18 and does not include a sequence {xn} with a length N=12 or 24; or may include a sequence {xn} with a length N=6 and a sequence {xn} with a length N=24 and does not include a sequence {xn} with a length N=12 or 18; or may include a sequence {xn} with a length N=12 and a sequence {xn} with a length N=18 and does not include a sequence {xn} with a length N=6 or 24; or may include a sequence {xn} with a length N=12 and a sequence {xn} with a length N=24 and does not include a sequence {xn} with a length N=6 or 18; or may include a sequence {xn} with a length N=12, a sequence {xn} with a length N=18, and a sequence {xn} with a length N=24, and does not include a sequence {xn} with a length N=6; or may include a sequence {xn} with a length N=6, a sequence {xn} with a length N=18, and a sequence {xn} with a length N=24, and does not include a sequence {xn} with a length N=12; or may include a sequence {xn} with a length N=6, a sequence {xn} with a length N=12, and a sequence {xn} with a length N=24, and does not include a sequence {xn} with a length N=18; or may include a sequence {xn} with a length N=6, a sequence {xn} with a length N=12, and a sequence {xn} with a length N=18, and does not include a sequence {xn} with a length N=24; or may include a sequence {xn} with a length N=6, a sequence {xn} with a length N=12, a sequence {xn} with a length N=18, and a sequence {xn} with a length N=24. Certainly, the sequence group may further include a sequence {xn} with another length.
In addition, the sequence {ym} included in the sequence group may be a sequence {ym} with any length M, for example, M=60; or may be sequences {ym} with a plurality of lengths M that satisfy the foregoing condition, for example, sequences {ym} with lengths M=36, 48, and 60. This is not limited in this embodiment of the present invention. M may be a positive integer that is greater than or equal to 36 and that is an integer multiple of 12.
Further, the value of M may be a value in a first set, and the first set includes a part or all of the following integers: 36, 48, 60, 72, 84, 96, 108, 120, 144, 156, 168, 180, 192, 216, 228, 240, 264, 288, 312, 336, 360, 384, 396, 408, 432, 456, 480, 504, 528, 552, 576, 624, 648, 672, 720, 768, 792, 816. Mprimeis the largest prime number smaller than M. It should be noted that a first sequence and a second sequence may be a same sequence. In the foregoing embodiment, composition of the sequence {sn} may be shown in Table 1. TABLE 1Composition of the sequence {sn} with a length N = 12Indexs(0), . . . , s(11)01−1311−1−1−113−311−1−1−1−11−3−133−1−312−31−3−3−33−3−1111−33−3313−311113−334−313−1−1−3−3−1−131−35−111−1133−1−1−31−36−3−3−1333−33−31−1−37−33−333−3−1−1331−38−3−1−3−1−1−333−1−11−39−3333−1−3−3−1−313−31013−313331−11−1311−1−33−1−3−3−3−11−11−31231313−3−1131−1−313−3−3333−3−11−331−314−3−11−31333−1−33315−3−331−3−3−3−13−11316−113−31−11−1−1−31−117−3−1−11311−11−1−3118−3−13−3−3−1−31−1−33319−3−33−3−1333−1−31−320−31−1−133−3−1−1−3−1−321−3133−1−1−333−33−322−3−1−1−3−3−1−3313−1−323−3−131−3−1−33133124−3331−33−113−33−3253−1−33−3−1333−3−1−3261−13−1−1−1−3−1111−327−331−313−1−1133328−33−33−3−33−1−113−329−331−133−31−11−11 The first column in Table 1 represents indexes of the sequence {sn}, and s(0), . . . , and s(11) represent elements in the sequence {sn}. TABLE 2Composition of the sequence {sn} with a length N = 18Indexs(0), . . . , s(17)03−33−113−3−1−3−3−1−331−13−3313−3113−11−1−1−311−133−33−12−33−1−3−1−311−3−3−1−13−31311311−1−1−3−11−3−3−31−3−1−11−131411−33313−33−111−11−3−3−135−3−31−3333−1311−3−3−33−3−1−16−13−1−331−3−13−3−1−1111−1−1−17−31−3−31−3−331−3−1−3−3−3−111381−3−1−333−1−31−3−3−1−3−113339−331−1−1−1−11−133−3−113−13−110−3−31−1−111−3−13333−1313111−3−333−313−1−31−1−33−3−1−1−1312−3−33331−31331−3−33−1−3−1113−33−1131−3−111−3133−1−3−3−314−31−3−1−131−3−3−3−1−3−3111−1−115−3−3333−1−1−3−1−1−131−3−3−13−116−3−133−13−1−3−11−1−3−1−1−133117−3−1−3−1−313−3−13331−1−33−1−318−331−1−13−3−111111−13−1−3−1193−1−31−3−3−333−11−3−13113320333−3−1−3−13−11−1−31−3−3−133213−131−3−3−11−3−333313−33−322−311−3113−3−1−3−13−33−1−1−1−323−3−1−1−31−33−1−1−333−3−13−1−1−124−3−3−31−33113−3−313−13−3−332511−3−3−3−313−3331−3−13−1−31263−1−11−3−1−3−1−3−3−1−3111−3−332731−31−333−1−3−3−1−3−33−3−11328−1−31−3−3−31133−333−3−13−3129−3−1−3−311−1−3−1−3−1−133−1313 The first column in Table 2 represents indexes of the sequence {sn}, and s(0), . . . , and s(17) represent elements in the sequence {sn}. TABLE 3Composition of the sequence {sn} with a length N = 24Indexs(0), . . . 
, s(23)0−1−3311−31−3−31−3−1−13−3333−3133−3−31−1−33−1313−11−3−1−3−113−3−1−3333−3−3−32−3313−11−31−31−1−3−1−3−3−3−3−1−1−111−3−333−13−11−311−3−33−3−1−1−1−1−1−3−3−111−3−341−33−1−3−1331−1113−3−1−3−3−3−13−3−1−3−353−11−13−3113−1−331−33−1−1−1−11−3−3−3−36−33−131−1−1−133111331−3−3−11−313−37−3−11−3−311−33−1−1−3131−1−3−1−31−3−3−3−38−31−31−3−31−31−3−3−3−3−31−3−311−311−3−393−3−3−133−3−131113−13−3−13−131−1−3−310−3−3−1−1−1−31−1−3−13−31−33−3331−1−11−3−311−3−3331−1−1−11−3−11−13−3−1−3−1−11−33−1−312−3−31−133−3−11−1−111−1−13−31−31−1−1−1−313−31−33−1−1−1−331−1−3−113−11−11−3−3−3−3−314−3−3−3−13−33131−3−1−1−31131−1−3313−31511−1−3−111−31−11−33−3−33−1−313−31−3−316−33−13−13311−313−33−3−3−113−3−1−1−3−317−1−3−31−1−1−313−1−3−1−1−31131−3−1−13−3−318−31−31−31131−3−3−113−1−331−1−3−3−3−3−3193−33−1−3131−1−1−3−13−33−1−133−3−33−3−320−13−3−3−13−1−11313−1−1−3131−1−31−1−3−321−31−3−1−1313−31−133−1−33−3−1−1−3−3−33−322−3−1−1−31−3−3−1−13−11−131−3−1311−1−1−3−323−31−33−31−331−1−3−1−3−3−3−313−11333−324−3−11−3−1−111133−11−11−1−1−3−3−331−1−3253−3−113−1−1−3−13−1−3−1−33−1311−33−3−3−326−313−11−13−33−1−3−1−33−1−1−1−3−1−1−333−327−33−1−3−1−1−13−1−13−3−13−33−3−1311−1−3−328−31−1−3−3−11−3−1−311−111333−11−11−1−329−13−1−133−1−1−13−1−31311−3−3−3−1−3−1−3−3 The first column in Table 3 represents indexes of the sequence {sn}, and s(0), . . . , and s(23) represent elements in the sequence {sn}. In a combination manner, when M=36, and the sequence group includes a sequence {xn} with a length N=12, a sequence {xn} with a length N=18, a sequence {xn} with a length N=24, the sequence {ym}, and a sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=└ū+½┘, and ū=Jprime·q/31. A length of the sequence {hj} is J, a value of J is 48, 60, or 72 and satisfies hj=ku(j mod Jprime), ku(i)=e-j⁢π·u·i·(i+1)Mprime, i is an integer, 0≤i≤Jprime−1, and prime is a largest prime number smaller than J. The sequence {xn} with a length N=12, the sequence {xn} with a length N=18, and the sequence {xn} with a length N=24 that have a same index are in a same sequence group. A sequence {xn} corresponding to an index v and a sequence {ym} corresponding to q=v+1 are in a same sequence group, and 0≤v≤29. In this combination manner, the sequence {xn}, the sequence {ym}, and the sequence {hj} are respectively mapped to N, M, and J subcarriers. A center-frequency spacing of any two adjacent subcarriers in the N, M, or J subcarriers is t times a subcarrier spacing. A quantity of sequence pairs, namely, the sequence {xn} with a length N=12 and a sequence with another length in a different sequence group, whose cross-correlation value is greater than 0.8 and a maximum cross-correlation value are shown in Table 4. The cross-correlation value is obtained through calculation according to the first cross-correlation value calculation method in the 3GPP contribution R1-163437. TABLE 4Length-18Length-24Length-36Length-48Length-60Length-72sequencesequencesequencesequencesequencesequenceLength-1218897109sequenceMaximum cross-0.89330.8550.8720.86770.89220.8749correlation value It should be noted that all cross-correlation values mentioned in this specification are obtained through calculation according to the foregoing method, and this is not described in the following again. A quantity of sequence pairs, namely, the sequence {xn} with a length N=18 and a sequence with another length in a different sequence group, whose cross-correlation value is greater than 0.7 and a maximum cross-correlation value are shown in Table 5. 
TABLE 5Length-24Length-36Length-48Length-60Length-72sequencesequencesequencesequencesequenceLength-1887365sequenceMaximum0.74120.73260.85970.74770.7651cross-correlationvalue A quantity of sequence pairs, namely, the sequence {xn} with a length N=24 and a sequence with another length in a different sequence group, whose cross-correlation value is greater than 0.6 and a maximum cross-correlation value are shown in Table 6. TABLE 6Length-36Length-48Length-60Length-72sequencesequencesequencesequenceLength-186056sequenceMaximum-cross-0.69510.59630.66140.7293correlation value In this combination manner, the sequence {xn} with a length N=12 is mapped to 12 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 12 subcarriers is 2t times a subcarrier spacing. The sequence {xn} with a length N=12, the sequence {ym}, and the sequence {hj} are respectively mapped to N, M, and J subcarriers. A center-frequency spacing of any two adjacent subcarriers in the N, M, and J subcarriers is t times a subcarrier spacing. A quantity of sequence pairs, namely, the sequence {xn} with a length N=12 and a sequence with another length in a different sequence group, whose cross-correlation value is greater than 0.8 and a maximum cross-correlation value are shown in Table 7. TABLE 7Length-24Length-36Length-48Length-60Length-72sequencesequencesequencesequencesequenceLength-121499510sequenceMaximum0.85780.87190.87070.87330.9175Cross-correlationvalue It may be learned from the foregoing that, the sequence {xn} and the sequence {ym} included in the sequence group in this embodiment are respectively corresponding to the sequence {sn} and q; in other words, the sequence {xn} is corresponding to the sequence {sn}, and the sequence {ym} is corresponding to q. 
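As a toy illustration of the kind of comparison underlying Tables 4 to 11, the following sketch places a length-12 sequence {xn} on every other subcarrier (the 2t comb described above, with t = 1) and a length-36 sequence {ym} on contiguous subcarriers, and then evaluates a simple normalized inner product. This is only a simplified stand-in for intuition; it is not the cross-correlation calculation method of 3GPP contribution R1-163437 that produced the tabulated values, and the grid size and alignment chosen here are assumptions.

import numpy as np

s12 = np.array([1, 3, -3, 1, 3, 3, 3, 1, -1, 1, -1, 3])   # a length-12 {sn}
x = np.exp(1j * np.pi * s12 / 4)                           # xn = e^(j*pi*sn/4), u = 1

q, M, M_prime = 1, 36, 31
m = np.arange(M) % M_prime
y = np.exp(-1j * np.pi * q * m * (m + 1) / M_prime)        # ym = kq(m mod Mprime)

grid = np.zeros(M, dtype=complex)
grid[: 2 * len(x) : 2] = x            # comb mapping: 12 pilots, one every 2 subcarriers
contiguous = y                        # contiguous mapping: 36 adjacent subcarriers

corr = abs(np.vdot(grid, contiguous)) / (np.linalg.norm(grid) * np.linalg.norm(contiguous))
print(f"normalized inner product: {corr:.3f}")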
In a first optional implementation, when N=12, a combination of the sequence {sn} and q is at least one of the following combinations:the sequence {sn} is {1, 3, −3, 1, 3, 3, 3, 1, −1, 1, −1, 3}, and q=1; orthe sequence {sn} is {−1, 1, 3, −3, 1, −1, 1, −1, −1, −3, 1, −1}, and q=2; orthe sequence {sn} is {−3, −3, 3, −3, −1, 3, 3, 3, −1, −3, 1, −3}, and q=3; orthe sequence {sn} is {3, 1, 3, 1, 3, −3, −1, 1, 3, 1, −1, −3}, and q=4; orthe sequence {sn} is {−3, −1, −3, −1, −1, −3, 3, 3, −1, −1, 1, −3}, and q=5; orthe sequence {sn} is {3, −1, −3, 3, −3, −1, 3, 3, 3, −3, −1, −3}, and q=6; orthe sequence {sn} is {−3, −3, 3, 3, 3, −3, −1, 1, −3, 3, 1, −3}, and q=7; orthe sequence {sn} is {−3, 3, 1, −1, 3, 3, −3, 1, −1, 1, −1, 1}, and q=8; orthe sequence {sn} is {−3, −3, 3, 1, −3, −3, −3, −1, 3, −1, 1, 3}, and q=9; orthe sequence {sn} is {−3, 3, 1, 3, −3, 1, 1, 1, 1, 3, −3, 3}, and q=10; orthe sequence {sn} is {−3, −1, −1, 1, 3, 1, 1, −1, 1, −1, −3, 1}, and q=11; orthe sequence {sn} is {−3, 1, 3, −1, −1, −3, −3, −1, −1, 3, 1, −3}, and q=12; orthe sequence {sn} is {−3, −3, −1, 3, 3, 3, −3, 3, −3, 1, −1, −3}, and q=13; orthe sequence {sn} is {−3, −1, −1, −3, −3, −1, −3, 3, 1, 3, −1, −3}, and q=14; orthe sequence {sn} is {1, −1, 3, 1, 1, −1, −1, −1, 1, 3, −3, 1}, and q=15; orthe sequence {sn} is {−3, 3, 1, −3, 1, 3, −1, −1, 1, 3, 3, 3}, and q=16; orthe sequence {sn} is {−3, −1, 3, 1, −3, −1, −3, 3, 1, 3, 3, 1}, and q=17; orthe sequence {sn} is {−1, 1, 1, −1, 1, 3, 3, −1, −1, −3, 1, −3}, and q=18; orthe sequence {sn} is {−3, 3, 3, 1, −3, 3, −1, 1, 3, −3, 3, −3}, and q=19; orthe sequence {sn} is {−1, −1, −1, −1, 1, −3, −1, 3, 3, −1, −3, 1}, and q=20; orthe sequence {sn} is {−3, 1, −3, −3, −3, 3, −3, −1, 1, 1, 1, −3}, and q=21; orthe sequence {sn} is {−3, −1, 1, −3, 1, 3, 3, 3, −1, −3, 3, 3}, and q=22; orthe sequence {sn} is {−3, 3, −3, 3, −3, −3, 3, −1, −1, 1, 3, −3}, and q=23; orthe sequence {sn} is {−3, 1, −1, −1, 3, 3, −3, −1, −1, −3, −1, −3}, and q=24; orthe sequence {sn} is {1, −1, 3, −1, −1, −1, −3, −1, 1, 1, 1, −3}, and q=25; orthe sequence {sn} is {−3, 3, −3, 3, 3, −3, −1, −1, 3, 3, 1, −3}, and q=26; orthe sequence {sn} is {−3, −1, 3, −3, −3, −1, −3, 1, −1, −3, 3, 3}, and q=27; orthe sequence {sn} is {−3, 1, 3, 3, −1, −1, −3, 3, 3, −3, 3, −3}, and q=28; orthe sequence {sn} is {−3, 3, 3, 3, −1, −3, −3, −1, −3, 1, 3, −3}, and q=29; orthe sequence {sn} is {−1, −3, 3, −1, −3, −3, −3, −1, 1, −1, 1, −3}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. Based on this optional implementation, when M=36, and the sequence group includes the sequence {xn}, the sequence {ym}, and the sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=└ū+½┘, and ū=Jprime·q/31. A length of the sequence (hj) is J, a value of J is 48, 60, or 72 and satisfies hj=ku(j mod Jprime), ku(i)=e-j⁢π·u·i·(i+1)Jprime, i is an integer, 0≤i≤Jprime−1, and prime is a largest prime number smaller than J. In addition, a sequence {fn} is mapped to N subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the N subcarriers is 2t times a subcarrier spacing. 
A sequence {gm} and the sequence {hj} are respectively mapped to M and J subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M and J subcarriers is t times a subcarrier spacing, where t is a positive integer. A quantity of sequence pairs, namely, a sequence {xn} and a sequence {ym} or a sequence {hj} in different sequence groups, whose cross-correlation value is greater than 0.8 and a maximum cross-correlation value are shown in Table 8. TABLE 8Length-36Length-48Length-60Length-72sequencesequencesequencesequenceLength-124532sequenceMaximum cross-0.86180.87070.84120.824correlation value It should be noted that all cross-correlation values mentioned in this specification are obtained through calculation according to the foregoing method, and this is not described in the following again. In a second optional implementation, when N=12, a combination of the sequence {sn} and q is at least one of the following combinations:the sequence {sn} is {−3, 1, −3, −3, −3, 3, −3, −1, 1, 1, 1, −3}, and q=1; orthe sequence {sn} is {−3, 3, 1, 3, −3, 1, 1, 1, 1, 3, −3, 3}, and q=2; orthe sequence {sn} is {−1, 1, 1, −1, 1, 3, 3, −1, −1, −3, 1, −3}, and q=3; orthe sequence {sn} is {−3, −3, −1, 3, 3, 3, −3, 3, −3, 1, −1, −3}, and q=4; orthe sequence {sn} is {−3, 3, −3, 3, 3, −3, −1, −1, 3, 3, 1, −3}, and q=5; orthe sequence {sn} is {3, −1, −3, 3, −3, −1, 3, 3, 3, −3, −1, −3}, and q=6; orthe sequence {sn} is {−3, −3, 3, 3, 3, −3, −1, 1, −3, 3, 1, −3}, and q=7; orthe sequence {sn} is {−3, −1, −3, −1, −1, −3, 3, 3, −1, −1, 1, −3}, and q=8; orthe sequence {sn} is {−3, −3, 3, 1, −3, −3, −3, −1, 3, −1, 1, 3}, and q=9; orthe sequence {sn} is {1, 3, −3, 1, 3, 3, 3, 1, −1, 1, −1, 3}, and q=10; orthe sequence {sn} is {−1, −3, 3, −1, −3, −3, −3, −1, 1, −1, 1, −3}, and q=11; orthe sequence {sn} is {−3, 1, 3, −1, −1, −3, −3, −1, −1, 3, 1, −3}, and q=12; orthe sequence {sn} is {3, 1, 3, 1, 3, −3, −1, 1, 3, 1, −1, −3}, and q=13; orthe sequence {sn} is {−1, 1, 3, −3, 1, −1, 1, −1, −1, −3, 1, −1}, and q=14; orthe sequence {sn} is {1, −1, 3, 1, 1, −1, −1, −1, 1, 3, −3, 1}, and q=15; orthe sequence {sn} is {−3, 3, 1, −3, 1, 3, −1, −1, 1, 3, 3, 3}, and q=16; orthe sequence {sn} is {−3, −1, −1, 1, 3, 1, 1, −1, 1, −1, −3, 1}, and q=17; orthe sequence {sn} is {−3, −1, 3, −3, −3, −1, −3, 1, −1, −3, 3, 3}, and q=18; orthe sequence {sn} is {−3, −3, 3, −3, −1, 3, 3, 3, −1, −3, 1, −3}, and q=19; orthe sequence {sn} is {−1, −1, −1, −1, 1, −3, −1, 3, 3, −1, −3, 1}, and q=20; orthe sequence {sn} is {−3, 1, −1, −1, 3, 3, −3, −1, −1, −3, −1, −3}, and q=21; orthe sequence {sn} is {−3, −1, 1, −3, 1, 3, 3, 3, −1, −3, 3, 3}, and q=22; orthe sequence {sn} is {−3, 1, 3, 3, −1, −1, −3, 3, 3, −3, 3, −3}, and q=23; orthe sequence {sn} is {−3, 3, 3, 1, −3, 3, −1, 1, 3, −3, 3, −3}, and q=24; orthe sequence {sn} is {1, −1, 3, −1, −1, −1, −3, −1, 1, 1, 1, −3}, and q=25; orthe sequence {sn} is {−3, −1, −1, −3, −3, −1, −3, 3, 1, 3, −1, −3}, and q=26; orthe sequence {sn} is {−3, −1, 3, 1, −3, −1, −3, 3, 1, 3, 3, 1}, and q=27; orthe sequence {sn} is {−3, 3, −3, 3, −3, −3, 3, −1, −1, 1, 3, −3}, and q=28; orthe sequence {sn} is {−3, 3, 3, 3, −1, −3, −3, −1, −3, 1, 3, −3}, and q=29; orthe sequence {sn} is {−3, 3, 1, −1, 3, 3, −3, 1, −1, 1, −1, 1}, and q=30. It should be noted that in this optional implementation, all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. 
Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. Based on this optional implementation, when M=36, and the sequence group includes the sequence {xn}, the sequence {ym}, and the sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=└ū+½┘, and ū=Jprime·q/31. A length of the sequence {hj} is J, a value of J is 48, 60, or 72 and satisfies hj=ku(j mod Jprime), ku(i)=e-j⁢π·u·i·(i+1)Jprime, i is an integer, 0≤i≤Jprime−1, and Jprimeis a largest prime number smaller than J. In addition, a sequence {fn} is mapped to N subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the N subcarriers is 2t times a subcarrier spacing. A sequence {gm} and the sequence {hj} are respectively mapped to M and J subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M and J subcarriers is t times a subcarrier spacing, where t is a positive integer. A quantity of sequence pairs, namely, a sequence {xn} and a sequence {ym} or a sequence {hj} in different sequence groups, whose cross-correlation value is greater than 0.8 and a maximum cross-correlation value are shown in Table 9. TABLE 9Length-36Length-48Length-60Length-72sequencesequencesequencesequenceLength-123541sequenceMaximum cross-0.86180.87070.84120.8073correlation value In a third optional implementation, when N=12, a combination of the sequence {sn} and q is at least one of the following combinations:the sequence {sn} is {1, 3, −3, 1, 3, 3, 3, 1, −1, 1, −1, 3}, and q=1; orthe sequence {sn} is {−3, −1, 3, 1, −3, −1, −3, 3, 1, 3, 3, 1}, and q=2; orthe sequence {sn} is {−3, −3, 3, −3, −1, 3, 3, 3, −1, −3, 1, −3}, and q=3; orthe sequence {sn} is {−1, −1, −1, −1, 1, −3, −1, 3, 3, −1, −3, 1}, and q=4; orthe sequence {sn} is {−3, −1, −3, −1, −1, −3, 3, 3, −1, −1, 1, −3}, and q=5; orthe sequence {sn} is {3, −1, −3, 3, −3, −1, 3, 3, 3, −3, −1, −3}, and q=6; orthe sequence {sn} is {−3, −3, 3, 3, 3, −3, −1, 1, −3, 3, 1, −3}, and q=7; orthe sequence {sn} is {−3, 3, 1, −1, 3, 3, −3, 1, −1, 1, −1, 1}, and q=8; orthe sequence {sn} is {−3, 1, −3, −3, −3, 3, −3, −1, 1, 1, 1, −3}, and q=9; orthe sequence {sn} is {−3, −1, −1, −3, −3, −1, −3, 3, 1, 3, −1, −3}, and q=10; orthe sequence {sn} is {−3, −1, −1, 1, 3, 1, 1, −1, 1, −1, −3, 1}, and q=11; orthe sequence {sn} is {−3, 1, 3, −1, −1, −3, −3, −1, −1, 3, 1, −3}, and q=12; orthe sequence {sn} is {−3, −3, −1, 3, 3, 3, −3, 3, −3, 1, −1, −3}, and q=13; orthe sequence {sn} is {−3, −1, 1, −3, 1, 3, 3, 3, −1, −3, 3, 3}, and q=14; orthe sequence {sn} is {1, −1, 3, 1, 1, −1, −1, −1, 1, 3, −3, 1, and q=15; orthe sequence {sn} is {−3, 3, 1, −3, 1, 3, −1, −1, 1, 3, 3, 3}, and q=16; orthe sequence {sn} is {−3, −3, 3, 1, −3, −3, −3, −1, 3, −1, 1, 3}, and q=17; orthe sequence {sn} is {−1, 1, 1, −, 1, 3, 3, −1, −1, −3, 1, −3}, and q=18; orthe sequence {sn} is {−3, −1, 3, −3, −3, −1, −3, 1, −1, −3, 3, 3}, and q=19; orthe sequence {sn} is {−1, 1, 3, −3, 1, −1, 1, −1, −1, −3, 1, −1}, and q=20; orthe sequence {sn} is {−3, 1, −1, −1, 3, 3, −3, −1, −1, −3, −1, −3}, and q=21; orthe sequence {sn} is {−3, 3, 1, 3, −3, 1, 1, 1, 1, 3, −3, 3}, and q=22; orthe sequence {sn} is {−3, 3, −3, 3, −3, −3, 3, −1, −1, 1, 3, −3}, and q=23; orthe sequence {sn} is {−3, 3, 3, 1, −3, 3, −1, 1, 3, −3, 3, −3}, and q=24; orthe sequence {sn} is {1, −1, 3, −1, −1, −1, −3, −1, 1, 1, 1, −3}, and q=25; orthe sequence 
{sn} is {−3, 1, 3, 3, −1, −1, −3, 3, 3, −3, 3, −3}, and q=26; orthe sequence {sn} is {3, 1, 3, 1, 3, −3, −1, 1, 3, 1, −1, −3}, and q=27; orthe sequence {sn} is {−3, 3, −3, 3, 3, −3, −1, −1, 3, 3, 1, −3}, and q=28; orthe sequence {sn} is {−3, 3, 3, 3, −1, −3, −3, −1, −3, 1, 3, −3}, and q=29; orthe sequence {sn} is {−1, −3, 3, −1, −3, −3, −3, −1, 1, −1, 1, −3}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. Based on this optional implementation, when M=36, and the sequence group includes the sequence {xn}, the sequence {ym}, and the sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=└ū+½┘, and ū=Jprime·q/31. A length of the sequence {hj}, is J, a value of J is 48, 60, or 72 and satisfies hj=ku(j mod Jprime), ku(i)=e-j⁢π·u·i·(i+1)Jprime, i is an integer, 0≤i≤Jprime−1, and Jprimeis a largest prime number smaller than J. In addition, a sequence {fn} is mapped to N subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the N subcarriers is 2t times a subcarrier spacing. A sequence {gm} and the sequence (hj) are respectively mapped to M and J subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M and J subcarriers is t times a subcarrier spacing, where t is a positive integer. A quantity of sequence pairs, namely, a sequence {xn} and a sequence {ym} or a sequence {hj} in different sequence groups, whose cross-correlation value is greater than 0.8 and a maximum cross-correlation value are shown in Table 10. 
TABLE 10Length-36Length-48Length-60Length-72sequencesequencesequencesequenceLength-123833sequenceMaximum cross-0.86180.87070.84120.8202correlation value In a fourth optional implementation, when N=12, a combination of the sequence {sn} and q is at least one of the following combinations:the sequence {sn} is {1, 3, −3, 1, 3, 3, 3, 1, −1, 1, −1, 3}, and q=1; orthe sequence {sn} is {−3, −1, 3, 1, −3, −1, −3, 3, 1, 3, 3, 1}, and q=2; orthe sequence {sn} is {−3, −3, 3, −3, −1, 3, 3, 3, −1, −3, 1, −3}, and q=3; orthe sequence {sn} is {−3, 1, 3, −1, −1, −3, −3, −1, −1, 3, 1, −3}, and q=4; orthe sequence {sn} is {−3, −1, −3, −1, −1, −3, 3, 3, −1, −1, 1, −3}, and q=5; orthe sequence {sn} is {3, −1, −3, 3, −3, −1, 3, 3, 3, −3, −1, −3}, and q=6; orthe sequence {sn} is {−3, −3, 3, 3, 3, −3, −1, 1, −3, 3, 1, −3}, and q=7; orthe sequence {sn} is {−3, 3, 1, −1, 3, 3, −3, 1, −1, 1, −1, 1}, and q=8; orthe sequence {sn} is {−3, 1, −3, −3, −3, 3, −3, −1, 1, 1, 1, −3}, and q=9; orthe sequence {sn} is {−3, −1, −1, −3, −3, −1, −3, 3, 1, 3, −1, −3}, and q=10; orthe sequence {sn} is {−3, −1, −1, 1, 3, 1, 1, −1, 1, −1, −3, 1}, and q=11; orthe sequence {sn} is {−1, −1, −1, −1, 1, −3, −1, 3, 3, −1, −3, 1}, and q=12; orthe sequence {sn} is {−3, −3, −1, 3, 3, 3, −3, 3, −3, 1, −1, −3}, and q=13; orthe sequence {sn} is {−3, −1, 1, −3, 1, 3, 3, 3, −1, −3, 3, 3}, and q=14; orthe sequence {sn} is {−3, −1, 3, −3, −3, −1, −3, 1, −1, −3, 3, 3}, and q=15; orthe sequence {sn} is {−3, 3, 1, −3, 1, 3, −1, −1, 1, 3, 3, 3}, and q=16; orthe sequence {sn} is {−3, −3, 3, 1, −3, −3, −3, −1, 3, −1, 1, 3}, and q=17; orthe sequence {sn} is {−1, 1, 1, −1, 1, 3, 3, −1, −1, −3, 1, −3}, and q=18; orthe sequence {sn} is {−3, 3, −3, 3, 3, −3, −1, −1, 3, 3, 1, −3}, and q=19; orthe sequence {sn} is {−1, 1, 3, −3, 1, −1, 1, −1, −1, −3, 1, −1}, and q=20; orthe sequence {sn} is {1, −1, 3, 1, 1, −1, −1, −1, 1, 3, −3, 1}, and q=21; orthe sequence {sn} is {−3, 3, 1, 3, −3, 1, 1, 1, 1, 3, −3, 3}, and q=22; orthe sequence {sn} is {−3, 3, −3, 3, −3, −3, 3, −1, −1, 1, 3, −3}, and q=23; orthe sequence {sn} is {−3, 3, 3, 1, −3, 3, −1, 1, 3, −3, 3, −3}, and q=24; orthe sequence {sn} is {1, −1, 3, −1, −1, −1, −3, −1, 1, 1, 1, −3}, and q=25; orthe sequence {sn} is {−3, 1, −1, −1, 3, 3, −3, −1, −1, −3, −1, −3}, and q=26; orthe sequence {sn} is {3, 1, 3, 1, 3, −3, −1, 1, 3, 1, −1, −3}, and q=27; orthe sequence {sn} is {−3, 1, 3, 3, −1, −1, −3, 3, 3, −3, 3, −3}, and q=28; orthe sequence {sn} is {−3, 3, 3, 3, −1, −3, −3, −1, −3, 1, 3, −3}, and q=29; orthe sequence {sn} is {−1, −3, 3, −1, −3, −3, −3, −1, 1, −1, 1, −3}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. Based on this optional implementation, when M=36, and the sequence group includes the sequence {xn}, the sequence {ym}, and the sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=└ū+½┘, and ū=Jprime·q/31. A length of the sequence {hj} is J, a value of J is 48, 60, or 72 and satisfies hj=ku(j mod Jprime), ku(i)=e-j⁢π·u·i·(i+1)Jprime, i is an integer, 0≤i≤Jprime−1, and Jprimeis a largest prime number smaller than J. 
In addition, a sequence {fn} is mapped to N subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the N subcarriers is 2t times a subcarrier spacing. A sequence {gm} and the sequence {hj} are respectively mapped to M and J subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M and J subcarriers is t times a subcarrier spacing, where t is a positive integer. A quantity of sequence pairs, namely, a sequence {xn} and a sequence {ym} or a sequence {hj} in different sequence groups, whose cross-correlation value is greater than 0.8 and a maximum cross-correlation value are shown in Table 11. TABLE 11Length-36Length-48Length-60Length-72sequencesequencesequencesequenceLength-121935sequenceMaximum cross-0.83120.87070.83090.9175correlation value In a fifth optional implementation, when N=6, a combination of the sequence {sn} and q is at least one of the following combinations:the sequence {sn} is {1, 1, −3, −1, 3, 1}, and q=1; orthe sequence {sn} is {1, −1, −1, 3, −1}, and q=2; orthe sequence {sn} is {−3, 1, 3, −3, −3, −3}, and q=3; orthe sequence {sn} is {−3, −3, −3, 3, 1, −3}, and q=4; orthe sequence {sn} is {−3, −3, −3, 1, −3, −1}, and q=5; orthe sequence {sn} is {1, 1, 1, −1, 3, −3}, and q=6; orthe sequence {sn} is {1, 1, −3, 1, 3, 3}, and q=7; orthe sequence {sn} is {−1, −3, 1, 3, 3, 1}, and q 8; orthe sequence {sn} is {−3, −3, −1, 1, −1, −3}, and q=9; orthe sequence {sn} is {1, 1, 3, −1, 3, 3}, and q=10; orthe sequence {sn} is {1, 1, 1, −3, −1, 3}, and q=11; orthe sequence {sn} is {−3, 1, 3, 1, −3, −3}, and q=12; orthe sequence {sn} is {−3, 3, −1, −1, 3, −3}, and q=13; orthe sequence {sn} is {1, 1, −1, 3, 1, 3}, and q=14; orthe sequence {sn} is {1, 1, −3, 3, −1, 1}, and q=15; orthe sequence {sn} is {1, 1, −3, −1, 3, 1}, and q=16; orthe sequence {sn} is {1, 1, 3, −1, 1, −1}, and q=17; orthe sequence {sn} is {−3, −1, 3, 3, −1, −3}, and q=18; orthe sequence {sn} is {1, 1, −3, −3, 1, −3}, and q=19; orthe sequence {sn} is {1, 1, 1, −3, 3, −1}, and q=20; orthe sequence {sn} is {−3, −1, −1, −1, 3, −1}, and q=21; orthe sequence {sn} is {1, 1, −3, 3, 1, 3}, and q=22; orthe sequence {sn} is {1, 3, −1, −3, −3, −1}, and q=23; orthe sequence {sn} is {1, 1, −3, 1, −1, −1}, and q=24; orthe sequence {sn} is {1, 1, 3, −1, −3, 3}, and q=25; orthe sequence {sn} is {−3, 1, −3, −3, −3, −1}, and q=26; orthe sequence {sn} is {−3, −1, 1, −3, 1, −1}, and q=27; orthe sequence {sn} is {−3, 1, −1, −3, −3, −3}, and q=28; orthe sequence {sn} is {1, 1, 3, 3, −1, 3}, and q=29; orthe sequence {sn} is {1, 1, −3, 3, −1, 1}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. 
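For the combination manner recited in the implementations above, the relation u = ⌊ū + 1/2⌋ with ū = Jprime·q/31 selects the companion sequence {hj} of length J = 48, 60, or 72 for a group identified by q. The sketch below shows how u and {hj} could be computed under that reading; the helper names are introduced purely for illustration and are not part of the specification.

import cmath
import math

def largest_prime_below(n):
    # Jprime in the text: the largest prime number smaller than J.
    def is_prime(k):
        return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))
    return next(k for k in range(n - 1, 1, -1) if is_prime(k))

def root_index(q, j_prime):
    # u = floor(u_bar + 1/2), with u_bar = Jprime * q / 31.
    return math.floor(j_prime * q / 31 + 0.5)

def h_sequence(q, length):
    # hj = ku(j mod Jprime), with ku(i) = exp(-j*pi*u*i*(i+1)/Jprime).
    j_prime = largest_prime_below(length)
    u = root_index(q, j_prime)
    return [cmath.exp(-1j * cmath.pi * u * (j % j_prime) * ((j % j_prime) + 1) / j_prime)
            for j in range(length)]

for J in (48, 60, 72):                 # the lengths given for {hj}
    j_prime = largest_prime_below(J)
    print(f"J={J}: Jprime={j_prime}, u={root_index(1, j_prime)} (q = 1)")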
In a sixth optional implementation, when N=6, a combination of the sequence {sn} and q is at least one of the following combinations:the sequence {sn} is {−3, −1, 3, 3, −1, −3}, and q=1; orthe sequence {sn} is {−3, 3, −1, −1, 3, −3}, and q=2; orthe sequence {sn} is {−3, 1, 3, −3, −3, −3}, and q=3; orthe sequence {sn} is {−3, −3, −3, 3, 1, −3}, and q=4; orthe sequence {sn} is {1, 1, −3, −1, 3, 1}, and q=5; orthe sequence {sn} is {1, 1, −3, 3, −1, 1}, and q=6; orthe sequence {sn} is {1, 1, −1, −1, 3, −1}, and q=7; orthe sequence {sn} is {−1, −3, 1, 3, 3, 1}, and q=8; orthe sequence {sn} is {1, 1, −3, 1, −1, −1}, and q=9; orthe sequence {sn} is {1, 1, 1, −3, −1, 3}, and q=10; orthe sequence {sn} is {1, 1, 3, −1, −3, 3}, and q=11; orthe sequence {sn} is {−3, 1, 3, 1, −3, −3}, and q=12; orthe sequence {sn} is {1, 1, 3, 3, −1, 3}, and q=13; orthe sequence {sn} is {1, 1, 1, −3, 3, −1}, and q=14; orthe sequence {sn} is {1, 1, 1, −1, 3, −3}, and q=15; orthe sequence {sn} is {−3, −1, −1, −1, 3, −1}, and q=16; orthe sequence {sn} is {−3, −3, −1, 1, −1, −3}, and q=17; orthe sequence {sn} is {−3, −3, −3, 1, −3, −1}, and q=18; orthe sequence {sn} is {1, 1, −3, 3, −1, 1}, and q=19; orthe sequence {sn} is {1, 1, −3, −1, 3, 1}, and q=20; orthe sequence {sn} is {−3, 1, −3, −3, −3, −1}, and q=21; orthe sequence {sn} is {1, 1, −3, 3, 1, 3}, and q=22; orthe sequence {sn} is {1, 3, −1, −3, −3, −1}, and q=23; orthe sequence {sn} is {1, 1, −3, −3, 1, −3}, and q=24; orthe sequence {sn} is {1, 1, 3, −1, 3, 3}, and q=25; orthe sequence {sn} is {1, 1, −3, 1, 3, 3}, and q=26; orthe sequence {sn} is {−3, −1, 1, −3, 1, −1}, and q=27; orthe sequence {sn} is {−3, 1, −1, −3, −3, −3}, and q=28; orthe sequence {sn} is {1, 1, 3, −1, 1, −1}, and q=29; orthe sequence {sn} is {1, 1, −1, 3, 1, 3}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. 
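Whether all thirty combinations or only a part of them form the combination set, the set can be represented as a simple lookup from q to the corresponding sequence {sn}. The fragment below shows this with the first two entries of the sixth optional implementation; the name COMBINATION_SET_N6 is illustrative only.

```python
# Two entries copied from the sixth optional implementation above; the remaining
# (q, {sn}) pairs, or any subset of them, would be added in the same way.
COMBINATION_SET_N6 = {
    1: [-3, -1, 3, 3, -1, -3],
    2: [-3, 3, -1, -1, 3, -3],
}

def base_sequence(q, combination_set=COMBINATION_SET_N6):
    """Return the base sequence {sn} associated with the index q."""
    return combination_set[q]
```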
In a seventh optional implementation, when N=24, a combination of the sequence {sn} and q is at least one of the following combinations:the sequence {sn} is {−1, −3, 3, −1, 3, 1, 3, −1, 1, −3, −1, −3, −1, 1, 3, −3, −1, −3, 3, 3, 3, −3, −3, −3}, and q=1; orthe sequence {sn} is {−3, 3, 1, 3, −1, 1, −3, 1, −3, 1, −1, −3, −1, −3, −3, −3, −3, −1, −−1, 1, −1, 1, 1, −3, −3}, and q=2; orthe sequence {sn} is {−1, −3, 3, 1, 1, −3, 1, −3, −3, 1, −3, −1, −1, 3, −3, 3, 3, 3, −3, 1, 3, 3, −3, −3}, and q=3; orthe sequence {sn} is {1, −3, 3, −1, −3, −1, 3, 3, 1, −1, 1, 1, 3, −3, −1, −3, −3, −3, −1, 3, −3, −1, −3, −3}, and q=4; orthe sequence {sn} is {−1, 3, −3, −3, −1, 3, −1, −1, 1, 3, 1, 3, −, 1, −3, 1, 3, 1, −1, −3, 1, −1, −3, −3}, and q=5; orthe sequence {sn} is {−3, 1, −3, 3, −3, 1, −3, 3, 1, −1, −3, −1, −3, −3, −3, −3, 1, 3, −1, 1, 3, 3, 3, −3}, and q=6; orthe sequence {sn} is {3, −1, 1, −1, 3, −3, 1, 1, 3, −1, −3, 3, 1, −3, 3, −1, −1, −1, −1, 1, −3, −3, −3, −3}, and q=7; orthe sequence {sn} is {−3, 1, 3, −1, 1, −1, 3, −3, 3, −1, −3, −1, −3, 3, −1, −1, −1, −3, −1, −1, −3, 3, 3, −3}, and q=8; orthe sequence {sn} is {−3, 1, −3, 3, −1, −1, −1, −3, 3, 1, −1, −3, −1, 1, 3, −1, 1, −1, 1, −3, −3, −3, −3, −3}, and q=9; orthe sequence {sn} is {1, 1, −1, −3, −1, 1, 1, −3, 1, −1, 1, −3, 3, −3, −3, 3, −1, −3, 1, 3, −3, 1, −3, −3}, and q=10; orthe sequence {sn} is {−3, −3, −3, −1, 3, −3, 3, 1, 3, 1, −3, −1, −1, −3, 1, 1, 3, 1, −1, −3, 3, 1, 3, −3}, and q=11; orthe sequence {sn} is {−3, 3, −1, 3, 1, −1, −1, −1, 3, 3, 1, 1, 1, 3, 3, 1, −3, −3, −1, 1, −3, 1, 3, −3}, and q=12; orthe sequence {sn} is {3, −3, 3, −1, −3, 1, 3, 1, −1, −1, −3, −1, 3, −3, 3, −1, −1, 3, 3, −3, −3, 3, −3, −3}, and q=13; orthe sequence {sn} is {−3, 3, −1, 3, −1, 3, 3, 1, 1, −3, 1, 3, −3, 3, −3, −3, −1, 1, 3, −3, −1, −1, −3, −3}, and q=14; orthe sequence {sn} is {−3, 1, −3, −1, −1, 3, 1, 3, −3, 1, −1, 3, 3, −1, −3, 3, −3, −1, −1, −3, −3, −3, 3, −3}, and q=15; orthe sequence {sn} is {−3, −1, −1, −3, 1, −3, −3, −1, −1, 3, −1, 1, −1, 3, 1, −3, −1, 3, 1, 1, −1, −1, −3, −3}, and q=16; orthe sequence {sn} is {3, −3, −3, −1, 3, 3, −3, −1, 3, 1, 1, 1, 3, −1, 3, −3, −1, 3, −1, 3, 1, −1, −3, −3}, and q=17; orthe sequence {sn} is {3, −1, 3, −1, 1, −3, 1, 1, −3, −3, 3, −3, −1, −1, −1, −1, −1, −3, −3, −1, 1, 1, −3, −3}, and q=18; orthe sequence {sn} is {−3, 1, −3, 1, −3, −3, 1, −3, 1, −3, −3, −3, −3, −3, 1, −3, −3, 1, 1, −3, 1, 1, −3, −3}, and q=19; orthe sequence {sn} is {−3, −3, 3, 3, 1, −1, −1, −1, 1, −3, −1, 1, −1, 3, −3, −1, −3, −1, −1, 1, −3, 3, −1, −3}, and q=20; orthe sequence {sn} is {−3, −3, −1, −1, −1, −3, 1, −1, −3, −1, 3, −3, 1, −3, 3, −3, 3, 3, 1, −1, −1, 1, −3, −3}, and q=21; orthe sequence {sn} is {−3, −1, 1, −3, −1, −1, 1, 1, 1, 3, 3, −1, 1, −1, 1, −1, −1, −3, −3, −3, 3, 1, −1, −3}, and q=22; orthe sequence {sn} is {−1, 3, −1, −1, 3, 3, −1, −1, −1, 3, −1, −3, 1, 3, 1, 1, −3, −3, −3, −1, −3, −1, −3, −3}, and q=23; orthe sequence {sn} is {−1, −3, −3, 1, −1, −1, −3, 1, 3, −1, −3, −1, −1, −3, 1, 1, 3, 1, −3, −1, −1, 3, −3, −3}, and q=24; orthe sequence {sn} is {−3, −1, 1, −3, −3, 1, 1, −3, 3, −1, −1, −3, 1, 3, 1, −1, −3, −1, −3, 1, −3, −3, −3, −3}, and q=25; orthe sequence {sn} is {−3, 3, −1, −3, −1, −1, −1, 3, −1, −1, 3, −3, −1, 3, −3, 3, −3, −1, 3, 1, 1, −1, −3, −3}, and q=26; orthe sequence {sn} is {−3, 1, −1, −3, −3, −1, 1, −3, −1, −3, 1, 1, −1, 1, 1, 3, 3, 3, −1, 1, −1, 1, −1, −3}, and q=27; orthe sequence {sn} is {−3, −3, 1, −1, 3, 3, −3, −1, 1, −1, −1, 1, 1, −1, −1, 3, −3, 1, −3, 1, −1, −1, −1, −3}, and q=28; orthe sequence {sn} is {−3, 
1, −3, 1, −3, 1, 1, 3, 1, −3, −3, −1, 1, 3, −1, −3, 3, 1, −1, −3, −3, −3, −3, −3}, and q=29; orthe sequence {sn} is {3, −3, −1, 1, 3, −1, −1, −3, −1, 3, −1, −3, −1, −3, 3, −1, 3, 1, 1, −3, 3, −3, −3, −3}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. Based on this optional implementation, when M=36, and the sequence group includes the sequence {xn}, the sequence {ym}, and the sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=[ū+½], and ū=Jprime·q/31. A length of the sequence {hj} is J, a value of J is 48, 60, or 72 and satisfies hj=ku(j mod Jprime), ku(i)=e-j⁢π·u·i·(i+1)Jprime, i is an integer, 0≤i≤Jprime−1, and Jprimeis a largest prime number smaller than J. In addition, a sequence {fn} is mapped to N subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the N subcarriers is t times a subcarrier spacing. A sequence {gm} and the sequence {hj} are respectively mapped to M and J subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M and J subcarriers is t times a subcarrier spacing, where t is a positive integer. A quantity of sequence pairs, namely, a sequence {xn} and a sequence {ym} or a sequence {hj} in different sequence groups, whose cross-correlation value is greater than 0.6 and a maximum cross-correlation value are shown in Table 12. TABLE 12Length-36Length-48Length-60Length-72sequencesequencesequencesequenceLength-241021sequenceMaximum cross-0.6790.59390.64530.6121correlation value In an eighth optional implementation, when N=24, a combination of the sequence {sn} and q is at least one of the following combinations:the sequence {sn} is {−1, −3, 3, −1, 3, 1, 3, −1, 1, −3, −1, −3, −1, 1, 3, −3, −1, −3, 3, 3, 3, −3, −3, −3}, and q=1; orthe sequence {sn} is {−1, −3, 3, 1, 1, −3, 1, −3, −3, 1, −3, −1, −1, 3, −3, 3, 3, 3, −3, 1, 3, 3, −3, −3}, and q=2; orthe sequence {sn} is {−3, 3, 1, 3, −1, 1, −3, 1, −3, 1, −1, −3, −1, −3, −3, −3, −3, −1, −1, −1, 1, 1, −3, −3}, and q=3; orthe sequence {sn} is {1, −3, 3, −1, −3, −1, 3, 3, 1, −1, 1, 1, 3, −3, −1, −3, −3, −3, −1, 3, −3, −1, −3, −3}, and q=4; orthe sequence {sn} is {3, −1, 3, −1, 1, −3, 1, 1, −3, −3, 3, −3, −1, −1, −1, −1, −1, −3, −3, −1, 1, 1, −3, −3}, and q=5; orthe sequence {sn} is {3, −1, 1, −1, 3, −3, 1, 1, 3, −1, −3, 3, 1, −3, 3, −1, −1, −1, −1, 1, −3, −3, −3, −3}, and q=6; orthe sequence {sn} is {−3, −1, 1, −3, −3, 1, 1, −3, 3, −1, −1, −3, 1, 3, 1, −1, −3, −1, −3, 1, −3, −3, −3, −3}, and q=7; orthe sequence {sn} is {−3, 1, 3, −1, 1, −1, 3, −3, 3, −1, −3, −1, −3, 3, −1, −1, −1, −3, −1, −1, −3, 3, 3, −3}, and q=8; orthe sequence {sn} is {−3, 1, −3, 1, −3, −3, 1, −3, 1, −3, −3, −3, −3, −3, 1, −3, −3, 1, 1, −3, 1, 1, −3, −3}, and q=9; orthe sequence {sn} is {1, 1, −1, −3, −1, 1, 1, −3, 1, −1, 1, −3, 3, −3, −3, 3, −1, −3, 1, 3, −3, 1, −3, −3}, and q=10; orthe sequence {sn} is {3, −3, −3, −1, 3, 3, −3, −1, 3, 1, 1, 1, 3, −1, 3, −3, −1, 3, −1, 3, 1, −1, −3, −3}, and q=11; orthe sequence {sn} is {−3, −3, −1, 3, 1, −1, −1, −1, −1, 3, 3, 1, 1, 1, 3, 3, 1, −3, −3, −1, 1, −3, 1, 3, −3}, and q=12; orthe sequence {sn} is {−3, −3, 3, 3, 1, −1, −1, −1, 1, −3, −1, 1, −1, 3, −3, −1, 
−3, −1, −1, 1, −3, 3, −1, −3}, and q=13; orthe sequence {sn} is {−3, −3, 1, −1, 3, 3, −3, −1, 1, −1, −1, 1, 1, −1, −1, 3, −3, 1, −3, 1, −1, −1, −1, −3}, and q=14; orthe sequence {sn} is {−3, 1, −3, −1, −1, 3, 1, 3, −3, 1, −1, 3, 3, −1, −3, 3, −3, −1, −1, −3, −3, −3, 3, −3}, and q=15; orthe sequence {sn} is {−3, −1, −1, −3, 1, −3, −3, −1, −1, 3, −1, 1, −1, 3, 1, −3, −1, 3, 1, 1, −1, −1, −3, −3}, and q=16; orthe sequence {sn} is {−3, 1, −3, 3, −1, −1, −1, −3, 3, 1, −1, −3, −1, 1, 3, −1, 1, −1, 1, −3, −3, −3, −3, −3}, and q=17; orthe sequence {sn} is {−3, −3, −3, −1, 3, −3, 3, 1, 3, 1, −3, −1, −1, −3, 1, 1, 3, 1, −1, −3, 3, 1, 3, −3}, and q=18; orthe sequence {sn} is {−3, 3, −1, 3, −1, 3, 3, 1, 1, −3, 1, 3, −3, 3, −3, −3, −1, 1, 3, −3, −1, −1, −3, −3}, and q=19; orthe sequence {sn} is {−1, −3, −3, 1, −1, −1, −3, 1, 3, −1, −3, −1, −1, −3, 1, 1, 3, 1, −3, −1, −1, 3, −3, −3}, and q=20; orthe sequence {sn} is {−3, −3, −1, −1, −1, −3, 1, −1, −3, −1, 3, −3, 1, −3, 3, −3, 3, 3, 1, −1, −1, 1, −3, −3}, and q=21; orthe sequence {sn} is {3, −3, 3, −1, −3, 1, 3, 1, −1, −1, −3, −1, 3, −3, 3, −1, −1, 3, 3, −3, −3, 3, −3, −3}, and q=22; orthe sequence {sn} is {−1, 3, −3, −3, −1, 3, −1, −1, 1, 3, 1, 3, −, 1, −3, 1, 3, 1, −1, −3, 1, −1, −3, −3}, and q=23; orthe sequence {sn} is {−3, 1, −3, 3, −3, 1, −3, 3, 1, −1, −3, −1, −3, −3, −3, −3, 1, 3, −1, 1, 3, 3, 3, −3}, and q=24; orthe sequence {sn} is {−3, −1, 1, −3, −1, −1, 1, 1, 1, 3, 3, −1, 1, −1, 1, −1, −1, −3, −3, −3, 3, 1, −1, −3}, and q=25; orthe sequence {sn} is {−3, 3, −1, −3, −1, −1, −1, 3, −1, −1, 3, −3, −1, 3, −3, 3, −3, −1, 3, 1, 1, −1, −3, −3}, and q=26; orthe sequence {sn} is {−3, 1, −1, −3, −3, −1, 1, −3, −1, −3, 1, 1, −1, 1, 1, 3, 3, 3, −1, 1, −1, 1, −1, −3}, and q=27; orthe sequence {sn} is {−1, 3, −1, −1, 3, 3, −1, −1, −1, 3, −1, −3, 1, 3, 1, 1, −3, −3, −3, −1, −3, −1, −3, −3}, and q=28; orthe sequence {sn} is {−3, 1, −3, 1, −3, 1, 1, 3, 1, −3, −3, −1, 1, 3, −1, −3, 3, 1, −1, −3, −3, −3, −3, −3}, and q=29; orthe sequence {sn} is {3, −3, −1, 1, 3, −1, −1, −3, −1, 3, −1, −3, −1, −3, 3, −1, 3, 1, 1, −3, 3, −3, −3, −3}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. Based on this optional implementation, when M=36, and the sequence group includes the sequence {xn}, the sequence {ym}, and the sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=└ū+½┘, and ū=Jprime·q/31. A length of the sequence {hj} is J, a value of J is 48, 60, or 72 and satisfies hj=ku(j mod Jprime), ku(i)=e-j⁢π·u·i·(i+1)Jprime, i is an integer, 0≤i≤Jprime−1, and Jprimeis a largest prime number smaller than J. In addition, a sequence {fn} is mapped to N subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the N subcarriers is t times a subcarrier spacing. A sequence {gm} and the sequence {hj} are respectively mapped to M and J subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M and J subcarriers is t times a subcarrier spacing, where t is a positive integer. 
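The cross-correlation statistics reported in the tables of this section compare sequences of different lengths taken from different sequence groups. The exact evaluation procedure is not restated in this passage; the sketch below shows one plausible way to obtain such a value, under the assumption that both sequences are first placed on a common subcarrier grid with their respective comb spacings, converted to the time domain, and correlated over all cyclic shifts. The function name, the FFT size, and the starting subcarrier index are assumptions made for the illustration.

```python
import numpy as np

def max_cross_correlation(seq_a, step_a, seq_b, step_b, n_fft=2048):
    """Largest normalized circular cross-correlation magnitude between two
    frequency-domain sequences mapped onto a common grid (one possible metric,
    assumed for illustration rather than quoted from the specification)."""
    def to_time(seq, step):
        grid = np.zeros(n_fft, dtype=complex)
        grid[:len(seq) * step:step] = seq          # comb mapping starting at index 0
        return np.fft.ifft(grid)

    a = to_time(np.asarray(seq_a, dtype=complex), step_a)
    b = to_time(np.asarray(seq_b, dtype=complex), step_b)
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b)))  # all cyclic shifts at once
    return float(np.max(np.abs(corr)) / (np.linalg.norm(a) * np.linalg.norm(b)))
```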
A quantity of sequence pairs, namely, a sequence {xn} and a sequence {ym} or a sequence {hj} in different sequence groups, whose cross-correlation value is greater than 0.6 and a maximum cross-correlation value are shown in Table 13. TABLE 13Length-36Length-48Length-60Length-72sequencesequencesequencesequenceLength-241021sequenceMaximum cross-0.6790.59390.64530.6121correlation value In a ninth optional implementation, when N=12, combinations of the sequence {sn} and q are a part or all of the following combinations:the sequence {sn} is {−3, 1, −3, −3, −3, 3, −3, −1, 1, 1, 1, −3}, and q=1; orthe sequence {sn} is {−3, 3, 1, −3, 1, 3, −1, −1, 1, 3, 3, 3}, and q=2; orthe sequence {sn} is {−3, 3, 3, 1, −3, 3, −1, 1, 3, −3, 3, −3}, and q=3; orthe sequence {sn} is {−3, 3, 1, 3, −3, 1, 1, 1, 1, 3, −3, 3}, and q=4; orthe sequence {sn} is {−1, 1, 1, −1, 1, 3, 3, −1, −1, −3, 1, −3}, and q=5; orthe sequence {sn} is {−3, −3, 3, 1, −3, −3, −3, −1, 3, −1, 1, 3}, and q=6; orthe sequence {sn} is {1, −1, 3, −1, −1, −1, −3, −1, 1, 1, 1, −3}, and q=7; orthe sequence {sn} is {−3, −3, −1, 3, 3, 3, −3, 3, −3, 1, −1, −3}, and q=8; orthe sequence {sn} is {−3, 3, 3, 3, −1, −3, −3, −1, −3, 1, 3, −3}, and q=9; orthe sequence {sn} is {−3, 3, −3, 3, 3, −3, −1, −1, 3, 3, 1, −3}, and q=10; orthe sequence {sn} is {1, 3, −3, 1, 3, 3, 3, 1, −1, 1, −1, 3}, and q=11; orthe sequence {sn} is {−3, 1, −1, −1, 3, 3, −3, −1, −1, −3, −1, −3}, and q=12; orthe sequence {sn} is {−1, −3, 3, −1, −3, −3, −3, −1, 1, −1, 1, −3}, and q=13; orthe sequence {sn} is {3, 1, 3, 1, 3, −3, −1, 1, 3, 1, −1, −3}, and q=14; orthe sequence {sn} is {−3, −1, −3, −1, −1, −3, 3, 3, −1, −1, 1, −3}, and q=15; orthe sequence {sn} is {−3, 1, 3, −1, −1, −3, −3, −1, −1, 3, 1, −3}, and q=16; orthe sequence {sn} is {−1, −1, −1, −1, 1, −3, −1, 3, 3, −1, −3, 1}, and q=17; orthe sequence {sn} is {−3, −1, 3, −3, −3, −1, −3, 1, −1, −3, 3, 3}, and q=18; orthe sequence {sn} is {−1, 1, 3, −3, 1, −1, 1, −1, −1, −3, 1, −1}, and q=19; orthe sequence {sn} is {−3, −1, −1, 1, 3, 1, 1, −1, 1, −1, −3, 1}, and q=20; orthe sequence {sn} is {−3, −3, 3, −3, −1, 3, 3, 3, −1, −3, 1, −3}, and q=21; orthe sequence {sn} is {−3, 1, 3, 3, −1, −1, −3, 3, 3, −3, 3, −3}, and q=22; orthe sequence {sn} is {−3, −1, −1, −3, −3, −1, −3, 3, 1, 3, −1, −3}, and q=23; orthe sequence {sn} is {3, −1, −3, 3, −3, −1, 3, 3, 3, −3, −1, −3}, and q=24; orthe sequence {sn} is {−3, −1, 1, −3, 1, 3, 3, 3, −1, −3, 3, 3}, and q=25; orthe sequence {sn} is {−3, −1, 3, 1, −3, −1, −3, 3, 1, 3, 3, 1}, and q=26; orthe sequence {sn} is {−3, 3, −3, 3, −3, −3, 3, −1, −1, 1, 3, −3}, and q=27; orthe sequence {sn} is {−3, −3, 3, 3, 3, −3, −1, 1, −3, 3, 1, −3}, and q=28; orthe sequence {sn} is {1, −1, 3, 1, 1, −1, −1, −1, 1, 3, −3, 1}, and q=29; orthe sequence {sn} is {−3, 3, 1, −1, 3, 3, −3, 1, −1, 1, −1, 1}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. Based on this optional implementation, when M=36, and the sequence group includes the sequence {xn}, the sequence {ym}, and the sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=└ū+½┘, and ū=Jprime·q/31. 
A length of the sequence {hj} is J, a value of J is 48, 60, or 72 and satisfies hj=ku(j mod Jprime), ku(i)=e-j⁢π·u·i·(i+1)Jprime, i is an integer, 0≤i≤Jprime−1, and Jprimeis a largest prime number smaller than J. In addition, a sequence {fn} is mapped to N subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the N subcarriers is t times a subcarrier spacing. A sequence {gm} and the sequence {hj} are respectively mapped to M and J subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M and J subcarriers is t times a subcarrier spacing, where t is a positive integer. A quantity of sequence pairs, namely, a sequence {xn} and a sequence {ym} or a sequence {hj} in different sequence groups, whose cross-correlation value is greater than 0.8 and a maximum cross-correlation value are shown in Table 14. TABLE 14Length-36Length-48Length-60Length-72sequencesequencesequencesequenceLength-121111sequenceMaximum cross-0.8310.81070.80340.8074correlation value In a tenth optional implementation, when N=18, combinations of the sequence {sn} and q are a part or all of the following combinations:the sequence {sn} is {−1, 3, −1, −3, 3, 1, −3, −1, 3, −3, −1, −1, 1, 1, 1, −1, −1, −1}, and q=1; orthe sequence {sn} is {3, −3, 3, −1, 1, 3, −3, −1, −3, −3, −1, −3, 3, 1, −1, 3, −3, 3}, and q=2; orthe sequence {sn} is {−3, 3, 1, −1, −1, 3, −3, −1, 1, 1, 1, 1, 1, −1, 3, −1, −3, −1}, and q=3; orthe sequence {sn} is {−3, −3, 3, 3, −3, 1, 3, −1, −3, 1, −1, −3, 3, −3, −1, −1, −1, 3}, and q=4; orthe sequence {sn} is {1, 1, −1, −1, −3, −1, 1, −3, −3, −3, 1, −3, −1, −1, 1, −1, 3, 1, and q=5; orthe sequence {sn} is {−3, −3, 1, −1, −1, 1, 1, −3, −1, 3, 3, 3, 3, −1, 3, 1, 3, 1}, and q=6; orthe sequence {sn} is {−3, 3, −1, 1, 3, 1, −3, −1, 1, 1, −3, 1, 3, 3, −1, −3, −3, −3}, and q=7; orthe sequence {sn} is {3, −3, 1, 1, 3, −1, 1, −1, −1, −3, 1, 1, −, 3, 3, −3, 3, −1}, and q=8; orthe sequence {sn} is {−3, −1, 3, 3, −1, 3, −1, −3, −1, 1, −1, −3, −1, −1, −1, 3, 3, 1}, and q=9; orthe sequence {sn} is {3, −1, 3, 1, −3, −3, −1, 1, −3, −3, 3, 3, 3, 1, 3, −3, 3, −3}, and q=10; orthe sequence {sn} is {−3, −3, −3, 1, −3, 3, 1, 1, 3, −3, −3, 1, 3, −1, 3, −3, −3, 3}, and q=11; orthe sequence {sn} is {−3, −3, 3, 3, 3, −1, −1, −3, −1, −1, −1, 3, 1, −3, −3, −1, 3, −1}, and q=12; orthe sequence {sn} is {−3, −1, −3, −3, 1, 1, −1, −3, −1, −3, −1, −1, 3, 3, −1, 3, 1, 3}, and q=13; orthe sequence {sn} is {−3, −1, −3, −1, −3, 1, 3, −3, −1, 3, 3, 3, 1, −1, −3, 3, −1, −3}, and q=14; orthe sequence {sn} is {−3, 3, −1, −3, −1, −3, 1, 1, −3, −3, −1, −1, 3, −3, 1, 3, 1, 1}, and q=15; orthe sequence {sn} is {−3, −1, −1, −3, 1, −3, 3, −1, −, −3, 3, 3, −3, −1, 3, −1, −1, −1}, and q=16; orthe sequence {sn} is {−3, 1, −3, −3, 1, −3, −3, 3, 1, −3, −1, −3, −3, −3, −1, 1, 1, 3}, and q=17; orthe sequence {sn} is {−1, −3, 1, −3, −3, −3, 1, 1, 3, 3, −3, 3, 3, −3, −1, 3, −3, 1}, and q=18; orthe sequence {sn} is {−3, 1, −3, −1, −1, 3, 1, −3, −3, −3, −1, −3, −3, 1, 1, 1, −1, −1}, and q=19; orthe sequence {sn} is {3, 3, 3, −3, −1, −3, −1, 3, −1, 1, −1, −3, 1, −3, −3, −1, 3, 3}, and q=20; orthe sequence {sn} is {−3, −3, 3, 3, 3, 1, −3, 1, 3, 3, 1, −3, −3, 3, −1, −3, −1, 1}, and q=21; orthe sequence {sn} is {3, 1, −3, 1, −3, 3, 3, −1, −3, −3, −1, −3, −3, 3, −3, −1, 1, 3}, and q=22; orthe sequence {sn} is {3, −1, −3, 1, −3, −3, −3, 3, 3, −1, 1, −3, −1, 3, 1, 1, 3, 3}, and q=23; orthe sequence {sn} is {−3, −3, 1, −3, 3, 3, 3, −1, 3, 1, 1, −3, −3, −3, 3, −3, −1, −1}, and q=24; orthe sequence {sn} is {1, 1, −3, 3, 3, 1, 3, 
−3, 3, −1, 1, 1, −1, 1, −3, −3, −1, 3}, and q=25; or the sequence {sn} is {3, −1, −1, 1, −3, −1, −3, −1, −3, −3, −1, −3, 1, 1, 1, −3, −3, 3}, and q=26; or the sequence {sn} is {1, −3, −1, −3, 3, 3, −1, −3, 1, −3, −3, −1, −3, −1, 1, 3, 3, 3}, and q=27; or the sequence {sn} is {1, 1, −3, −3, −3, −3, 1, 3, −3, 3, 3, 1, −3, −1, 3, −1, −3, 1}, and q=28; or the sequence {sn} is {−3, 1, 1, −3, 1, 1, 3, −3, −1, −3, −1, 3, −3, 3, −1, −1, −1, −3}, and q=29; or the sequence {sn} is {−3, 3, 1, −1, −1, −1, −1, 1, −1, 3, 3, −3, −1, 1, 3, −1, 3, −1}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. Based on this optional implementation, when M=36, and the sequence group includes the sequence {xn}, the sequence {ym}, and the sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=└ū+½┘, and ū=Jprime·q/31. A length of the sequence {hj} is J, a value of J is 48, 60, or 72 and satisfies hj=ku(j mod Jprime), ku(i)=e^(−j·π·u·i·(i+1)/Jprime), i is an integer, 0≤i≤Jprime−1, and Jprime is a largest prime number smaller than J. In addition, a sequence {fn} is mapped to N subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the N subcarriers is t times a subcarrier spacing. A sequence {gm} and the sequence {hj} are respectively mapped to M and J subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M and J subcarriers is t times a subcarrier spacing, where t is a positive integer. A quantity of sequence pairs, namely, a sequence {xn} and a sequence {ym} or a sequence {hj} in different sequence groups, whose cross-correlation value is greater than 0.7 and a maximum cross-correlation value are shown in Table 15. 
TABLE 15Length-36Length-48Length-60Length-72sequencesequencesequencesequenceLength-180010sequenceMaximum cross-0.69350.69780.701480.6615correlation value In an eleventh optional implementation, when N=18, combinations of the sequence {sn} and q are a part or all of the following combinations:the sequence {sn} is {3, −3, 1, 1, 3, −1, 1, −1, −1, −3, 1, 1, −1, 3, 3, −3, 3, −1}, and q=1; orthe sequence {sn} is {3, −3, 3, −1, 1, 3, −3, −1, −3, −3, −1, −3, 3, 1, −1, 3, −3, 3}, and q=2; orthe sequence {sn} is {−3, 3, 1, −1, −1, 3, −3, −1, 1, 1, 1, 1, 1, −1, 3, −1, −3, −1}, and q=3; orthe sequence {sn} is {1, 1, −1, −1, −3, −1, 1, −3, −3, −3, 1, −3, −1, −1, 1, −1, 3, 1}, and q=4; orthe sequence {sn} is {1, 1, −3, 3, 3, 1, 3, −3, 3, −1, 1, 1, −1, 1, −3, −3, −1, 3}, and q=5; orthe sequence {sn} is {−3, −3, 1, −3, 3, 3, 3, −1, 3, 1, 1, −3, −3, −3, 3, −3, −1, −1}, and q=6; orthe sequence {sn} is {−3, 3, −1, 1, 3, 1, −3, −1, 1, 1, −3, 1, 3, 3, −1, −3, −3, −3}, and q=7; orthe sequence {sn} is {−1, 3, −1, −3, 3, 1, −3, −1, 3, −3, −1, −1, 1, 1, 1, −1, −1, −1}, and q=8; orthe sequence {sn} is {−3, 1, −3, −3, 1, −3, −3, 3, 1, −3, −1, −3, −3, −3, −1, 1, 1, 3}, and q=9; orthe sequence {sn} is {3, −1, 3, 1, −3, −3, −1, 1, −3, −3, 3, 3, 3, 1, 3, −3, 3, −3}, and q=10; orthe sequence {sn} is {1, −3, −1, −3, 3, 3, −1, −3, 1, −3, −3, −1, −3, −1, 1, 3, 3, 3}, and q=11; orthe sequence {sn} is {−3, −3, 3, 3, 3, −1, −1, −3, −1, −1, −1, 3, 1, −3, −3, −1, 3, −1}, and q=12; orthe sequence {sn} is {−3, −1, −3, −3, 1, 1, −1, −3, −1, −3, −1, −1, 3, 3, −1, 3, 1, 3}, and q=13; orthe sequence {sn} is {−3, −3, 1, −1, −1, 1, 1, −3, −1, 3, 3, 3, 3, −1, 3, 1, 3, 1}, and q=14; orthe sequence {sn} is {−3, 3, −1, −3, −1, −3, 1, 1, −3, −3, −1, −1, 3, −3, 1, 3, 1, 1}, and q=15; orthe sequence {sn} is {−3, −3, 3, 3, −3, 1, 3, −1, −3, 1, −1, −3, 3, −3, −1, −1, −1, 3}, and q=16; orthe sequence {sn} is {−3, −3, 3, 3, 3, 1, −3, 1, 3, 3, 1, −3, −3, 3, −1, −3, −1, 1}, and q=17; orthe sequence {sn} is {−3, −1, 3, 3, −1, 3, −1, −3, −1, 1, −1, −3, −1, −1, −1, 3, 3, 1}, and q=18; orthe sequence {sn} is {−3, 1, −3, −1, −1, 3, 1, −3, −3, −3, −1, −3, −3, 1, 1, 1, −1, −1}, and q=19; orthe sequence {sn} is {3, 3, 3, −3, −1, −3, −1, 3, −1, 1, −1, −3, 1, −3, −3, −1, 3, 3}, and q=20; orthe sequence {sn} is {−3, −1, −3, −1, −3, 1, 3, −3, −1, 3, 3, 3, 1, −1, −3, 3, −1, −3}, and q=21; orthe sequence {sn} is {3, −1, −3, 1, −3, −3, −3, 3, 3, −1, 1, −3, −1, 3, 1, 1, 3, 3}, and q=22; orthe sequence {sn} is {−3, 1, 1, −3, 1, 1, 3, −3, −1, −3, −1, 3, −3, 3, −1, −1, −1, −3}, and q=23; orthe sequence {sn} is {−3, −1, −1, −3, 1, −3, 3, −1, −1, −3, 3, 3, −3, −1, 3, −1, −1, −1}, and q=24; orthe sequence {sn} is {−3, −3, −3, 1, −3, 3, 1, 1, 3, −3, −3, 1, 3, −1, 3, −3, −3, 3}, and q=25; orthe sequence {sn} is {1, 1, −3, −3, −3, −3, 1, 3, −3, 3, 3, 1, −3, −1, 3, −1, −3, 1}, and q=26; orthe sequence {sn} is {3, −1, −1, 1, −3, −1, −3, −1, −3, −3, −1, −3, 1, 1, 1, −3, −3, 3}, and q=27; orthe sequence {sn} is {3, 1, −3, 1, −3, 3, 3, −1, −3, −3, −1, −3, −3, 3, −3, −1, 1, 3}, and q=28; orthe sequence {sn} is {−1, −3, 1, −3, −3, −3, 1, 1, 3, 3, −3, 3, 3, −3, −1, 3, −3, 1}, and q=29; orthe sequence {sn} is {−3, 3, 1, −1, −1, −1, −1, 1, −1, 3, 3, −3, −1, 1, 3, −1, 3, −1}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. 
Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. Based on this optional implementation, when M=36, and the sequence group includes the sequence {xn}, the sequence {ym}, and the sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=└ū+½┘, and ū=Jprime−q/31. A length of the sequence {hj} is J, a value of J is 48, 60, or 72 and satisfies hj=ku(j mod Jprime), ku(i)=e-j⁢π·u·i·(i+1)Jprime, i is an integer, 0≤i≤Jprime−1, and Jprime, is a largest prime number smaller than J. In addition, a sequence {fn} is mapped to N subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the N subcarriers is t times a subcarrier spacing. A sequence {gm} and the sequence {hj} are respectively mapped to M and J subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M and J subcarriers is t times a subcarrier spacing, where t is a positive integer. A quantity of sequence pairs, namely, a sequence {xn} and a sequence {ym} or a sequence {hj} in different sequence groups, whose cross-correlation value is greater than 0.7 and a maximum cross-correlation value are shown in Table 16. TABLE 16Length-36Length-48Length-60Length-72sequencesequencesequencesequenceLength-180010sequenceMaximum cross-0.69350.69780.701480.6615correlation value In a twelfth optional implementation, when N=12, combinations of the sequence {sn} and q are a part or all of the following combinations:the sequence {sn} is {−3, 1, −1, −1, 3, 3, −3, −1, −1, −3, −1, −3}, and q=1; orthe sequence {sn} is {−1, 1, 3, −3, 1, −1, 1, −1, −1, −3, 1, −1}, and q=2; orthe sequence {sn} is {−3, 3, 1, −1, 3, 3, −3, 1, −1, 1, −1, 1}, and q=3; orthe sequence {sn} is {−3, −3, 3, −3, −1, 3, 3, 3, −1, −3, 1, −3}, and q=4; orthe sequence {sn} is {−3, 3, −3, 3, −3, −3, 3, −1, −1, 1, 3, −3}, and q=5; orthe sequence {sn} is {3, −1, −3, 3, −3, −1, 3, 3, 3, −3, −1, −3}, and q=6; orthe sequence {sn} is {−3, −3, 3, 3, 3, −3, −1, 1, −3, 3, 1, −3}, and q=7; orthe sequence {sn} is {−3, 1, 3, 3, −1, −1, −3, 3, 3, −3, 3, −3}, and q=8; orthe sequence {sn} is {−3, −3, 3, 1, −3, −3, −3, −1, 3, −1, 1, 3}, and q=9; orthe sequence {sn} is {−3, −1, −1, 1, 3, 1, 1, −1, 1, −1, −3, 1}, and q=10; orthe sequence {sn} is {−3, 3, −3, 3, 3, −3, −1, −1, 3, 3, 1, −3}, and q=11; orthe sequence {sn} is {−3, 1, 3, −1, −1, −3, −3, −1, −1, 3, 1, −3}, and q=12; orthe sequence {sn} is {−3, −3, −1, 3, 3, 3, −3, 3, −3, 1, −1, −3}, and q=13; orthe sequence {sn} is {−3, −1, −1, −3, −3, −1, −3, 3, 1, 3, −1, −3}, and q=14; orthe sequence {sn} is {1, −1, 3, 1, 1, −1, −1, −1, 1, 3, −3, 1}, and q=15; orthe sequence {sn} is {−3, −1, −3, −1, −1, −3, 3, 3, −1, −1, 1, −3}, and q=16; orthe sequence {sn} is {−1, −3, 3, −1, −3, −3, −3, −1, 1, −1, 1, −3}, and q=17; orthe sequence {sn} is {−1, 1, 1, −1, 1, 3, 3, −1, −1, −3, 1, −3}, and q=18; orthe sequence {sn} is {−3, 3, 3, 1, −3, 3, −1, 1, 3, −3, 3, −3}, and q=19; orthe sequence {sn} is {−1, −1, −1, −1, 1, −3, −1, 3, 3, −1, −3, 1}, and q=20; orthe sequence {sn} is {3, 1, 3, 1, 3, −3, −1, 1, 3, 1, −1, −3}, and q=21; orthe sequence {sn} is {−3, −1, 1, −3, 1, 3, 3, 3, −1, −3, 3, 3}, and q=22; orthe sequence {sn} is {−3, −1, 3, 1, −3, −1, −3, 3, 1, 3, 3, 1}, and q=23; orthe sequence {sn} is {−3, 3, 1, −3, 1, 3, −1, −1, 1, 3, 3, 3}, and q=24; orthe sequence {sn} is {1, −1, 3, −1, −1, −1, −3, −1, 1, 1, 1, −3}, and q=25; 
orthe sequence {sn} is {−3, 3, 1, 3, −3, 1, 1, 1, 1, 3, −3, 3}, and q=26; orthe sequence {sn} is {−3, 1, −3, −3, −3, 3, −3, −1, 1, 1, 1, −3}, and q=27; orthe sequence {sn} is {−3, −1, 3, −3, −3, −1, −3, 1, −1, −3, 3, 3}, and q=28; orthe sequence {sn} is {−3, 3, 3, 3, −1, −3, −3, −1, −3, 1, 3, −3}, and q=29; orthe sequence {sn} is {1, 3, −3, 1, 3, 3, 3, 1, −1, 1, −1, 3}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. Based on this optional implementation, when M=36, and the sequence group includes the sequence {xn} with a length N=12, the sequence {xn} with a length N=24, the sequence {ym}, and the sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=└ū+½┘, and ū=Jprime·q/31; and a combination manner of the sequence {xn} with a length N=24 and the sequence {ym} satisfies the seventh optional implementation in the specification. A length of the sequence {hj} is J, a value of J is 48, 60, or 72 and satisfies hj=ku(j mod Jprime), ku(i)=e-j⁢π·u·i·(i+1)Jprime, i is an integer, 0≤i≤Jprime−1, and Jprimeis a largest prime number smaller than J. In addition, a sequence {fn} with a length N=12 is mapped to 12 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 12 subcarriers is 2t times a subcarrier spacing. A sequence {fn} with a length N=24 is mapped to 24 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 24 subcarriers is t times a subcarrier spacing. A sequence {gm} and the sequence {hj} are respectively mapped to M and J subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M and J subcarriers is t times a subcarrier spacing, where t is a positive integer. A quantity of sequence pairs, namely, a sequence {xn} with a length N=12 and a sequence with another length in different sequence groups, whose cross-correlation value is greater than 0.8 and a maximum cross-correlation value are shown in Table 17. 
TABLE 17Length-24Length-36Length-48Length-60Length-72sequencesequencesequencesequencesequenceLength-1255433sequenceMaximum0.84690.86180.87070.84120.824crosscorrelationvalue In a thirteenth optional implementation, when N=12, combinations of the sequence {sn} and q are a part or all of the following combinations:the sequence {sn} is {−3, 3, 1, 3, −3, 1, 1, 1, 1, 3, −3, 3}, and q=1; orthe sequence {sn} is {−3, 3, 1, −1, 3, 3, −3, 1, −1, 1, −1, 1}, and q=2; orthe sequence {sn} is {−3, 1, −1, −1, 3, 3, −3, −1, −1, −3, −1, −3}, and q=3; orthe sequence {sn} is {−3, −3, 3, −3, −1, 3, 3, 3, −1, −3, 1, −3}, and q=4; orthe sequence {sn} is {−1, 1, 3, −3, 1, −1, 1, −1, −1, −3, 1, −1}, and q=5; orthe sequence {sn} is {3, −1, −3, 3, −3, −1, 3, 3, 3, −3, −1, −3}, and q=6; orthe sequence {sn} is {−3, −3, 3, 3, 3, −3, −1, 1, −3, 3, 1, −3}, and q=7; orthe sequence {sn} is {−3, 1, 3, 3, −1, −1, −3, 3, 3, −3, 3, −3}, and q=8; orthe sequence {sn} is {−3, −3, 3, 1, −3, −3, −3, −1, 3, −1, 1, 3}, and q=9; orthe sequence {sn} is {−3, −1, −1, 1, 3, 1, 1, −1, 1, −1, −3, 1}, and q=10; orthe sequence {sn} is {−3, 3, −3, 3, 3, −3, −1, −1, 3, 3, 1, −3}, and q=11; orthe sequence {sn} is {−3, 1, 3, −1, −1, −3, −3, −1, −1, 3, 1, −3}, and q=12; orthe sequence {sn} is {−3, −3, −1, 3, 3, 3, −3, 3, −3, 1, −1, −3}, and q=13; orthe sequence {sn} is {−3, −1, 3, −3, −3, −1, −3, 1, −1, −3, 3, 3}, and q=14; orthe sequence {sn} is {1, −1, 3, 1, 1, −1, −1, −1, 1, 3, −3, 1}, and q=15; orthe sequence {sn} is {−3, −1, −3, −1, −1, −3, 3, 3, −1, −1, 1, −3}, and q=16; orthe sequence {sn} is {−1, −3, 3, −1, −3, −3, −3, −1, 1, −1, 1, −3}, and q=17; orthe sequence {sn} is {−1, 1, 1, −1, 1, 3, 3, −1, −1, −3, 1, −3}, and q=18; orthe sequence {sn} is {−3, 3, 3, 1, −3, 3, −1, 1, 3, −3, 3, −3}, and q=19; orthe sequence {sn} is {−1, −1, −1, −1, 1, −3, −1, 3, 3, −1, −3, 1}, and q=20; orthe sequence {sn} is {3, 1, 3, 1, 3, −3, −1, 1, 3, 1, −1, −3}, and q=21; orthe sequence {sn} is {−3, −1, 1, −3, 1, 3, 3, 3, −1, −3, 3, 3}, and q=22; orthe sequence {sn} is {−3, 3, −3, 3, −3, −3, 3, −1, −1, 1, 3, −3}, and q=23; orthe sequence {sn} is {−3, 3, 1, −3, 1, 3, −1, −1, 1, 3, 3, 3}, and q=24; orthe sequence {sn} is {1, −1, 3, −1, −1, −1, −3, −1, 1, 1, 1, −3}, and q=25; orthe sequence {sn} is {−3, −1, −1, −3, −3, −1, −3, 3, 1, 3, −1, −3}, and q=26; orthe sequence {sn} is {−3, 1, −3, −3, −3, 3, −3, −1, 1, 1, 1, −3}, and q=27; orthe sequence {sn} is {−3, −1, 3, 1, −3, −1, −3, 3, 1, 3, 3, 1}, and q=28; orthe sequence {sn} is {−3, 3, 3, 3, −1, −3, −3, −1, −3, 1, 3, −3}, and q=29; orthe sequence {sn} is {1, 3, −3, 1, 3, 3, 3, 1, −1, 1, −1, 3}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. Based on this optional implementation, when M=36, and the sequence group includes the sequence {xn} with a length N=12, the sequence {xn} with a length N=24, the sequence {ym}, and the sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=└ū+½┘, and ū=Jprime−q/31; and a combination manner of the sequence {xn} with a length N=24 and the sequence {ym} satisfies the eighth optional implementation in the specification. 
A length of the sequence {hj} is J, a value of J is 48, 60, or 72 and satisfies hj=ku(j mod Jprime), ku(i)=e-j⁢π·u·i·(i+1)Jprime, i is an integer, 0≤i≤Jprime−1, and Jprimeis a largest prime number smaller than J. In addition, a sequence {fn} with a length N=12 is mapped to 12 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 12 subcarriers is 2t times a subcarrier spacing. A sequence {fn} with a length N=24 is mapped to 24 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 24 subcarriers is t times a subcarrier spacing. A sequence {gm} and the sequence {hj} are respectively mapped to M and J subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M and J subcarriers is t times a subcarrier spacing, where t is a positive integer. A quantity of sequence pairs, namely, a sequence {xn} with a length N=12 and a sequence with another length in different sequence groups, whose cross-correlation value is greater than 0.8 and a maximum cross-correlation value are shown in Table 18. TABLE 18Length-24Length-36Length-48Length-60Length-72sequencesequencesequencesequencesequenceLength-1265433sequenceMaximum0.84690.86180.87070.84120.824crosscorrelationvalue In a fourteenth optional implementation, when N=12, combinations of the sequence {sn} and q are a part or all of the following combinations:the sequence {sn} is {−3, 3, −3, 3, 3, −3, −1, −1, 3, 3, 1, −3}, and q=1; orthe sequence {sn} is {−3, 3, 1, −3, 1, 3, −1, −1, 1, 3, 3, 3}, and q=2; orthe sequence {sn} is {−3, 3, 3, 1, −3, 3, −1, 1, 3, −3, 3, −3}, and q=3; orthe sequence {sn} is {−3, 3, 1, 3, −3, 1, 1, 1, 1, 3, −3, 3}, and q=4; orthe sequence {sn} is {1, 3, −3, 1, 3, 3, 3, 1, −1, 1, −1, 3}, and q=5; orthe sequence {sn} is {−3, −3, −1, 3, 3, 3, −3, 3, −3, 1, −1, −3}, and q=6; orthe sequence {sn} is {1, −1, 3, −1, −1, −1, −3, −1, 1, 1, 1, −3}, and q=7; orthe sequence {sn} is {−3, −1, −1, 1, 3, 1, 1, −1, 1, −1, −3, 1}, and q=8; orthe sequence {sn} is {−1, 1, 1, −1, 1, 3, 3, −1, −1, −3, 1, −3}, and q=9; orthe sequence {sn} is {−3, −3, 3, −3, −1, 3, 3, 3, −1, −3, 1, −3}, and q=10; orthe sequence {sn} is {−3, 1, −1, −1, 3, 3, −3, −1, −1, −3, −1, −3}, and q=11; orthe sequence {sn} is {−3, −3, 3, 1, −3, −3, −3, −1, 3, −1, 1, 3}, and q=12; orthe sequence {sn} is {−3, −1, −1, −3, −3, −1, −3, 3, 1, 3, −1, −3}, and q=13; orthe sequence {sn} is {3, 1, 3, 1, 3, −3, −1, 1, 3, 1, −1, −3}, and q=14; orthe sequence {sn} is {−3, 1, −3, −3, −3, 3, −3, −1, 1, 1, 1, −3}, and q=15; orthe sequence {sn} is {−3, 1, 3, −1, −1, −3, −3, −1, −1, 3, 1, −3}, and q=16; orthe sequence {sn} is {−1, −1, −1, −1, 1, −3, −1, 3, 3, −1, −3, 1}, and q=17; orthe sequence {sn} is {−1, 1, 3, −3, 1, −1, 1, −1, −1, −3, 1, −1}, and q=18; orthe sequence {sn} is {−3, 1, 3, 3, −1, −1, −3, 3, 3, −3, 3, −3}, and q=19; orthe sequence {sn} is {−3, −1, 3, 1, −3, −1, −3, 3, 1, 3, 3, 1}, and q=20; orthe sequence {sn} is {−1, −3, 3, −1, −3, −3, −3, −1, 1, −1, 1, −3}, and q=21; orthe sequence {sn} is {−3, −1, −3, −1, −1, −3, 3, 3, −1, −1, 1, −3}, and q=22; orthe sequence {sn} is {−3, 3, 3, 3, −1, −3, −3, −1, −3, 1, 3, −3}, and q=23; orthe sequence {sn} is {3, −1, −3, 3, −3, −1, 3, 3, 3, −3, −1, −3}, and q=24; orthe sequence {sn} is {−3, −1, 1, −3, 1, 3, 3, 3, −1, −3, 3, 3}, and q=25; orthe sequence {sn} is {−3, 3, 1, −1, 3, 3, −3, 1, −1, 1, −1, 1}, and q=26; orthe sequence {sn} is {−3, −1, 3, −3, −3, −1, −3, 1, −1, −3, 3, 3}, and q=27; orthe sequence {sn} is {−3, −3, 3, 3, 3, −3, −1, 1, −3, 3, 1, −3}, and q=28; 
or the sequence {sn} is {1, −1, 3, 1, 1, −1, −1, −1, 1, 3, −3, 1}, and q=29; or the sequence {sn} is {−3, 3, −3, 3, −3, −3, 3, −1, −1, 1, 3, −3}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. Based on this optional implementation, when M=36, and the sequence group includes the sequence {xn} with a length N=12, the sequence {xn} with a length N=18, the sequence {xn} with a length N=24, the sequence {ym}, and the sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=└ū+½┘, and ū=Jprime·q/31; a combination manner of the sequence {xn} with a length N=18 and the sequence {ym} satisfies the sixteenth optional implementation in the specification; and a combination manner of the sequence {xn} with a length N=24 and the sequence {ym} satisfies the seventh optional implementation in the specification. A length of the sequence {hj} is J, a value of J is 48, 60, or 72 and satisfies hj=ku(j mod Jprime), ku(i)=e^(−j·π·u·i·(i+1)/Jprime), i is an integer, 0≤i≤Jprime−1, and Jprime is a largest prime number smaller than J. In addition, a sequence {fn} with a length N=12 is mapped to 12 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 12 subcarriers is t times a subcarrier spacing. A sequence {fn} with a length N=18 and a sequence {fn} with a length N=24 are respectively mapped to 18 and 24 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 18 or 24 subcarriers is t times a subcarrier spacing. A sequence {gm} and the sequence {hj} are respectively mapped to M and J subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M and J subcarriers is t times a subcarrier spacing, where t is a positive integer. A quantity of sequence pairs, namely, a sequence {xn} with a length N=12 and a sequence with another length in different sequence groups, whose cross-correlation value is greater than 0.8 and a maximum cross-correlation value are shown in Table 19. 
TABLE 19Length-18Length-24Length-36Length-48Length-60Length-72sequencesequencesequencesequencesequencesequenceLength-121051221sequenceMaximum0.88920.83730.8310.81110.82410.8074cross-correlationvalue In a fifteenth optional implementation, when N=12, combinations of the sequence {sn} and q are a part or all of the following combinations:the sequence {sn} is {−3, 1, −3, −3, −3, 3, −3, −1, 1, 1, 1, −3}, and q=1; orthe sequence {sn} is {−3, 3, 1, −3, 1, 3, −1, −1, 1, 3, 3, 3}, and q=2; orthe sequence {sn} is {−3, 3, 3, 1, −3, 3, −1, 1, 3, −3, 3, −3}, and q=3; orthe sequence {sn} is {−3, −3, −1, 3, 3, 3, −3, 3, −3, 1, −1, −3}, and q=4; orthe sequence {sn} is {−3, 3, −3, 3, 3, −3, −1, −1, 3, 3, 1, −3}, and q=5; orthe sequence {sn} is {−3, −3, 3, 1, −3, −3, −3, −1, 3, −1, 1, 3}, and q=6; orthe sequence {sn} is {1, −1, 3, −1, −1, −1, −3, −1, 1, 1, 1, −3}, and q=7; orthe sequence {sn} is {1, 3, −3, 1, 3, 3, 3, 1, −1, 1, −1, 3}, and q=8; orthe sequence {sn} is {−3, −1, 3, 1, −3, −1, −3, 3, 1, 3, 3, 1}, and q=9; orthe sequence {sn} is {−1, 1, 3, −3, 1, −1, 1, −1, −1, −3, 1, −1}, and q=10; orthe sequence {sn} is {−3, −1, −1, 1, 3, 1, 1, −1, 1, −1, −3, 1}, and q=11; orthe sequence {sn} is {−3, −1, −3, −1, −1, −3, 3, 3, −1, −1, 1, −3}, and q=12; orthe sequence {sn} is {−3, −1, 3, −3, −3, −1, −3, 1, −1, −3, 3, 3}, and q=13; orthe sequence {sn} is {−3, 3, 1, 3, −3, 1, 1, 1, 1, 3, −3, 3}, and q=14; orthe sequence {sn} is {−3, −3, 3, −3, −1, 3, 3, 3, −1, −3, 1, −3}, and q=15; orthe sequence {sn} is {−3, 1, 3, −1, −1, −3, −3, −1, −1, 3, 1, −3}, and q=16; orthe sequence {sn} is {−1, −1, −1, −1, 1, −3, −1, 3, 3, −1, −3, 1}, and q=17; orthe sequence {sn} is {−1, 1, 1, −1, 1, 3, 3, −1, −1, −3, 1, −3}, and q=18; orthe sequence {sn} is {3, 1, 3, 1, 3, −3, −1, 1, 3, 1, −1, −3}, and q=19; orthe sequence {sn} is {−3, 1, −1, −1, 3, 3, −3, −1, −1, −3, −1, −3}, and q=20; orthe sequence {sn} is {−1, −3, 3, −1, −3, −3, −3, −1, 1, −1, 1, −3}, and q=21; orthe sequence {sn} is {−3, 1, 3, 3, −1, −1, −3, 3, 3, −3, 3, −3}, and q=22; orthe sequence {sn} is {−3, 3, 3, 3, −1, −3, −3, −1, −3, 1, 3, −3}, and q=23; orthe sequence {sn} is {3, −1, −3, 3, −3, −1, 3, 3, 3, −3, −1, −3}, and q=24; orthe sequence {sn} is {−3, −1, 1, −3, 1, 3, 33, −1, −3, 3, 3 and q=25; orthe sequence {sn} is {−3, 3, 1, −1, 3, 3, −3, 1, −1, 1, −1, 1}, and q=26; orthe sequence {sn} is {−3, −1, −1, −3, −3, −1, −3, 3, 1, 3, −1, −3}, and q=27; orthe sequence {sn} is {−3, −3, 3, 3, 3, −3, −1, 1, −3, 3, 1, −3}, and q=28; orthe sequence {sn} is {1, −1, 3, 1, 1, −1, −1, −1, 1, 3, −3, 1}, and q=29; orthe sequence {sn} is {−3, 3, −3, 3, −3, −3, 3, −1, −1, 1, 3, −3}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. 
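The index u used to pair a base sequence with the longer sequences follows from q through u=└ū+½┘ and ū=Jprime·q/31. The short check below illustrates this arithmetic for q=1; it assumes that the sympy library is available for the largest-prime lookup, and the function name u_for is illustrative.

```python
from math import floor
from sympy import prevprime   # prevprime(J) returns the largest prime smaller than J

def u_for(q, J):
    """u = floor(u_bar + 1/2) with u_bar = Jprime * q / 31."""
    return floor(prevprime(J) * q / 31 + 0.5)

# for q = 1: Jprime is 47, 59, 71 for J = 48, 60, 72, and u = 2 in every case
assert [u_for(1, J) for J in (48, 60, 72)] == [2, 2, 2]
```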
Based on this optional implementation, when M=36, and the sequence group includes the sequence {xn} with a length N=12, the sequence {xn} with a length N=18, the sequence {xn} with a length N=24, the sequence {ym}, and the sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=└ū+½┘, and ū=Jprime·q/31; a combination manner of the sequence {xn} with a length N=18 and the sequence {ym} satisfies the seventeenth optional implementation in the specification; and a combination manner of the sequence {xn} with a length N=24 and the sequence {ym} satisfies the eighth optional implementation in the specification. A length of the sequence {hj} is J, a value of J is 48, 60, or 72 and satisfies hj=ku(j mod Jprime), ku(i)=e^(−j·π·u·i·(i+1)/Jprime), i is an integer, 0≤i≤Jprime−1, and Jprime is a largest prime number smaller than J. In addition, a sequence {fn} with a length N=12 is mapped to 12 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 12 subcarriers is t times a subcarrier spacing. A sequence {fn} with a length N=18 and a sequence {fn} with a length N=24 are respectively mapped to 18 and 24 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 18 or 24 subcarriers is t times a subcarrier spacing. A sequence {gm} and the sequence {hj} are respectively mapped to M and J subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M and J subcarriers is t times a subcarrier spacing, where t is a positive integer. A quantity of sequence pairs, namely, a sequence {xn} with a length N=12 and a sequence with another length in different sequence groups, whose cross-correlation value is greater than 0.8 and a maximum cross-correlation value are shown in Table 20. 
TABLE 20Length-18Length-24Length-36Length-48Length-60Length-72sequencesequencesequencesequencesequencesequenceLength-121151111sequenceMaximum0.89330.83730.8310.81070.80340.8074cross-correlationvalue In a sixteenth optional implementation, when N=18, combinations of the sequence {sn} and q are a part or all of the following combinations:the sequence {sn} is {−1, 3, −1, −3, 3, 1, −3, −1, 3, −3, −1, −1, 1, 1, 1, −1, −1, −1}, and q=1; orthe sequence {sn} is {3, −3, 3, −1, 1, 3, −3, −1, −3, −3, −1, −3, 3, 1, −1, 3, −3, 3}, and q=2; orthe sequence {sn} is {−3, 3, 1, −1, −1, 3, −3, −1, 1, 1, 1, 1, 1, −1, 3, −1, −3, −1}, and q=3; orthe sequence {sn} is {1, −3, −1, −3, 3, 3, −1, −3, 1, −3, −3, −1, −3, −1, 1, 3, 3, 3}, and q=4; orthe sequence {sn} is {1, 1, −1, −1, −3, −1, 1, −3, −3, −3, 1, −3, −1, −1, 1, −1, 3, 1}, and q=5; orthe sequence {sn} is {−3, −3, 1, −1, −1, 1, 1, −3, −1, 3, 3, 3, 3, −1, 3, 1, 3, 1}, and q=6; orthe sequence {sn} is {−3, 3, −1, 1, 3, 1, −3, −1, 1, 1, −3, 1, 3, 3, −1, −3, −3, −3}, and q=7; orthe sequence {sn} is {−3, −3, 3, 3, 3, 1, −3, 1, 3, 3, 1, −3, −3, 3, −1, −3, −1, 1}, and q=8; orthe sequence {sn} is {−3, −1, 3, 3, −1, 3, −1, −3, −1, 1, −1, −3, −1, −1, −1, 3, 3, 1}, and q=9; orthe sequence {sn} is {3, −1, 3, 1, −3, −3, −1, 1, −3, −3, 3, 3, 3, 1, 3, −3, 3, −3}, and q=10; orthe sequence {sn} is {3, −1, −1, 1, −3, −1, −3, −1, −3, −3, −1, −3, 1, 1, 1, −3, −3, 3}, and q=11; orthe sequence {sn} is {−3, 1, −3, −3, 1, −3, −3, 3, 1, −3, −1, −3, −3, −3, −1, 1, 1, 3}, and q=12; orthe sequence {sn} is {−3, −1, −3, −3, 1, 1, −1, −3, −1, −3, −1, −1, 3, 3, −1, 3, 1, 3}, and q=13; orthe sequence {sn} is {1, 1, −3, −3, −3, −3, 1, 3, −3, 3, 3, 1, −3, −1, 3, −1, −3, 1}, and q=14; orthe sequence {sn} is {−3, 3, −1, −3, −1, −3, 1, 1, −3, −3, −1, −1, 3, −3, 1, 3, 1, 1}, and q=15; orthe sequence {sn} is {−3, −1, −3, −1, −3, 1, 3, −3, −1, 3, 3, 3, 1, −1, −3, 3, −1, −3}, and q=16; orthe sequence {sn} is {−3, −3, 3, 3, −3, 1, 3, −1, −3, 1, −1, −3, 3, −3, −1, −1, −1, 3}, and q=17; orthe sequence {sn} is {−1, −3, 1, −3, −3, −3, 1, 1, 3, 3, −3, 3, 3, −3, −1, 3, −3, 1}, and q=18; orthe sequence {sn} is {−3, 1, −3, −1, −1, 3, 1, −3, −3, −3, −1, −3, −3, 1, 1, 1, −1, −1}, and q=19; orthe sequence {sn} is {3, 3, 3, −3, −1, −3, −1, 3, −1, 1, −1, −3, 1, −3, −3, −1, 3, 3}, and q=20; orthe sequence {sn} is {−3, 1, 1, −3, 1, 1, 3, −3, −1, −3, −1, 3, −3, 3, −1, −1, −1, −3}, and q=21; orthe sequence {sn} is {−3, −3, 3, 3, 3, −1, −1, −3, −1, −1, −1, 3, 1, −3, −3, −1, 3, −1}, and q=22; orthe sequence {sn} is {−3, −1, −1, −3, 1, −3, −1, −, 3, 3, 3, −3, −1, −11, and q=23; orthe sequence {sn} is {−3, −3, 1, −3, 3, 3, 3, −1, 3, 1, 1, −3, −3, −3, 3, −3, −1, −1}, and q=24; orthe sequence {sn} is {3, −3, 1, 1, 3, −1, 1, −1, −1, −3, 1, 1, −1, 3, 3, −3, 3, −1}, and q=25; orthe sequence {sn} is {1, 1, −3, 3, 3, 1, 3, −3, 3, −1, 1, 1, −1, 1, −3, −3, −1, 3}, and q=26; orthe sequence {sn} is {3, 1, −3, 1, −3, 3, 3, −1, −3, −3, −1, −3, −3, 3, −3, −1, 1, 3}, and q=27; orthe sequence {sn} is {3, −1, −3, 1, −3, −3, −3, 3, 3, −1, 1, −3, −1, 3, 1, 1, 3, 3}, and q=28; orthe sequence {sn} is {−3, −3, −3, 1, −3, 3, 1, 1, 3, −3, −3, 1, 3, −1, 3, −3, −3, 3}, and q=29; orthe sequence {sn} is {−3, 3, 1, −1, −1, −1, −1, 1, −1, 3, 3, −3, −1, 1, 3, −1, 3, −1}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. 
Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. Based on this optional implementation, when M=36, and the sequence group includes the sequence {xn} with a length N=18, the sequence {xn} with a length N=24, the sequence {ym}, and the sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=└ū+½┘, and ū=Jprime·q/31; and a combination manner of the sequence {xn} with a length N=24 and the sequence {ym} satisfies the seventh optional implementation in the specification. A length of the sequence {hj} is J, a value of J is 48, 60, or 72 and satisfies hj=ku(j mod Jprime), ku⁢(i)=e-j⁢π·u·i·(i+1)Jprime, i is an integer, 0≤i≤Jprime−1, and Jprimeis a largest prime number smaller than J. In addition, a sequence {fn} with a length N=18 and a sequence {fn} with a length N=24 are respectively mapped to 18 and 24 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 18 or 24 subcarriers is t times a subcarrier spacing. A sequence {gm} and the sequence {hj} are respectively mapped to M and J subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the A and J subcarriers is t times a subcarrier spacing, where t is a positive integer. A quantity of sequence pairs, namely, a sequence {xn} with a length N=18 and a sequence with another length in different sequence groups, whose cross-correlation value is greater than 0.7 and a maximum cross-correlation value are shown in Table 21. TABLE 21Length-24Length-36Length-48Length-60Length-72sequencesequencesequencesequencesequenceLength-1821020sequenceMaximum0.7120.71920.69780.73870.6615cross-correlationvalue In a seventeenth optional implementation, when N=18, combinations of the sequence {sn} and q are a part or all of the following combinations:the sequence {sn} is {3, −3, 1, 1, 3, −1, 1, −1, −1, −3, 1, 1, −1, 3, 3, −3, 3, −1}, and q=1; orthe sequence {sn} is {3, −3, 3, −1, 1, 3, −3, −1, −3, −3, −1, −3, 3, 1, −1, 3, −3, 3}, and q=2; orthe sequence {sn} is {−3, 3, 1, −1, −1, 3, −3, −1, 1, 1, 1, 1, 1, −1, 3, −1, −3, −1}, and q=3; orthe sequence {sn} is {1, 1, −1, −1, −3, −1, 1, −3, −3, −3, 1, −3, −1, −1, 1, −1, 3, 1}, and q=4; orthe sequence {sn} is {1, 1, −3, 3, 3, 1, 3, −3, 3, −1, 1, 1, −1, 1, −3, −3, −1, 3}, and q=5; orthe sequence {sn} is {−3, −3, 1, −3, 3, 3, 3, −1, 3, 1, 1, −3, −3, −3, 3, −3, −1, −1}, and q=6; orthe sequence {sn} is {−3, 3, −1, 1, 3, 1, −3, −1, 1, 1, −3, 1, 3, 3, −1, −3, −3, −3}, and q=7; orthe sequence {sn} is {−1, 3, −1, −3, 3, 1, −3, −1, 3, −3, −1, −1, 1, 1, 1, −1, −1, −1}, and q=8; orthe sequence {sn} is {−3, 1, −3, −3, 1, −3, −3, 3, 1, −3, −1, −3, −3, −3, −1, 1, 1, 3}, and q=9; orthe sequence {sn} is {3, −1, 3, 1, −3, −3, −1, 1, −3, −3, 3, 3, 3, 1, 3, −3, 3, −3}, and q=10; orthe sequence {sn} is {−3, −3, 3, 3, −3, 1, 3, −1, −3, 1, −1, −3, 3, −3, −1, −1, −1, 3}, and q=11; orthe sequence {sn} is {−3, −3, 3, 3, 3, −1, −1, −3, −1, −1, −1, 3, 1, −3, −3, −1, 3, −1}, and q=12; orthe sequence {sn} is {−3, −1, −3, −3, 1, 1, −1, −3, −1, −3, −1, −1, 3, 3, −1, 3, 1, 3}, and q=13; orthe sequence {sn} is {1, −3, −1, −3, 3, 3, −1, −3, 1, −3, −3, −1, −3, −1, 1, 3, 3, 3}, and q=14; orthe sequence {sn} is {−3, 3, −1, −3, −1, −3, 1, 1, −3, −3, −1, −1, 3, −3, 1, 3, 1, 1}, and q=15; orthe sequence {sn} is {−3, −3, 1, −1, −1, 1, 1, −3, −1, 3, 3, 3, 3, −1, 3, 1, 3, 1}, and q=16; orthe sequence {sn} is 
{−3, −3, 3, 3, 3, 1, −3, 1, 3, 3, 1, −3, −3, 3, −1, −3, −1, 1}, and q=17; orthe sequence {sn} is {−3, −1, 3, 3, −1, 3, −1, −3, −1, 1, −1, −3, −1, −1, −1, 3, 3, 1}, and q=18; orthe sequence {sn} is {−3, 1, −3, −1, −1, 3, 1, −3, −3, −3, −1, −3, −3, 1, 1, 1, −1, −1}, and q=19; orthe sequence {sn} is {3, 3, 3, −3, −1, −3, −1, 3, −1, 1, −1, −3, 1, −3, −3, −1, 3, 3}, and q=20; orthe sequence {sn} is {−3, 1, 1, −3, 1, 1, 3, −3, −1, −3, −1, 3, −3, 3, −1, −1, −1, −3}, and q=21; orthe sequence {sn} is {−3, −1, −3, −1, −3, 1, 3, −3, −1, 3, 3, 3, 1, −1, −3, 3, −1, −3}, and q=22; orthe sequence {sn} is {3, −1, −3, 1, −3, −3, −3, 3, 3, −1, 1, −3, −1, 3, 1, 1, 3, 3}, and q=23; orthe sequence {sn} is {−3, −3, −3, 1, −3, 3, 1, 1, 3, −3, −3, 1, 3, −1, 3, −3, −3, 3}, and q=24; orthe sequence {sn} is {1, 1, −3, −3, −3, −3, 1, 3, −3, 3, 3, 1, −3, −1, 3, −1, −3, 1, and q=25; orthe sequence {sn} is {3, −1, −1, 1, −3, −1, −3, −1, −3, −3, −1, −3, 1, 1, 1, −3, −3, 3}, and q=26; orthe sequence {sn} is {3, 1, −3, 1, −3, 3, 3, −1, −3, −3, −1, −3, −3, 3, −3, −1, 1, 3}, and q=27; orthe sequence {sn} is {−3, −1, −1, −3, 1, −3, 3, −1, −1, −3, 3, 3, −3, −1, 3, −1, −1, −1}, and q=28; orthe sequence {sn} is {−1, −3, 1, −3, −3, −3, 1, 1, 3, 3, −3, 3, 3, −3, −1, 3, −3, 1}, and q=29; orthe sequence {sn} is {−3, 3, 1, −1, −1, −1, −1, 1, −1, 3, 3, −3, −1, 1, 3, −1, 3, −1}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. Based on this optional implementation, when M=36, and the sequence group includes the sequence {xn} with a length N=18, the sequence {xn} with a length N=24, the sequence {ym}, and the sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=└ū+½, and ū=Jprime·q/31; and a combination manner of the sequence {xn} with a length N=24 and the sequence {ym} satisfies the seventh optional implementation in the specification. A length of the sequence {hj} is J, a value of J is 48, 60, or 72 and satisfies hj=ku(j mod Jprime), ku(i)=e-j⁢π·u·i·(i+1)Jprime, i is an integer, 0≤i≤Jprime−1, and Jprimeis a largest prime number smaller than J. In addition, a sequence {fn} with a length N=18 and a sequence {fn} with a length N=24 are respectively mapped to 18 and 24 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 18 or 24 subcarriers is t times a subcarrier spacing. A sequence {gm} and the sequence {hj} are respectively mapped to M and J subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the A and J subcarriers is t times a subcarrier spacing, where t is a positive integer. A quantity of sequence pairs, namely, a sequence {xn} with a length N=18 and a sequence with another length in different sequence groups, whose cross-correlation value is greater than 0.7 and a maximum cross-correlation value are shown in Table 22. 
TABLE 22Length-24Length-36Length-48Length-60Length-72sequencesequencesequencesequencesequenceLength 1860010sequenceMaximum0.74120.69350.69780.70150.6615cross-correlationvalue In an eighteenth optional implementation, when N=24, a combination of the sequence {sn} and q is at least one of the following combinations:the sequence {sn} is {−1, −3, 3, −1, 3, 1, 3, −1, 1, −3, −1, −3, −1, 1, 3, −3, −1, −3, 3, 3, 3, −3, −3, −3}, and q=1; orthe sequence {sn} is {−1, −3, 3, 1, 1, −3, 1, −3, −3, 1, −3, −1, −1, 3, −3, 3, 3, 3, −3, 1, 3, 3, −3, −3}, and q=2; orthe sequence {sn} is {−1, −3, −3, 1, −1, −1, −3, 1, 3, −1, −3, −1, −1, −3, 1, 1, 3, 1, −3, −1, −1, 3, −3, −3}, and q=3; orthe sequence {sn} is {1, −3, 3, −1, −3, −1, 3, 3, 1, −1, 1, 1, 3, −3, −1, −3, −3, −3, −1, 3, −3, −1, −3, −3}, and q=4; orthe sequence {sn} is {−3, 3, 1, 3, −1, 1, −3, 1, −3, 1, −1, −3, −1, −3, −3, −3, −3, −1, −1, −1, 1, 1, −3, −3}, and q=5; orthe sequence {sn} is {−3, −1, 1, −3, −3, 1, 1, −3, 3, −1, −1, −3, 1, 3, 1, −1, −3, −1, −3, 1, −3, −3, −3, −3}, and q=6; orthe sequence {sn} is {−3, 1, −3, 1, −3, −3, 1, −3, 1, −3, −3, −3, −3, −3, 1, −3, −3, 1, 1, −3, 1, 1, −3, −3}, and q=7; orthe sequence {sn} is {−3, 1, 3, −1, 1, −1, 3, −3, 3, −1, −3, −1, −3, 3, −−1, −1, −1, −3, −1, −1, −3, 3, 3, −3}, and q=8; orthe sequence {sn} is {−3, −3, 3, 3, 1, −1, −1, −1, 1, −3, −1, 1, −1, 3, −3, −1, −3, −1, −1, 1, −3, 3, −1, −3}, and q=9; orthe sequence {sn} is {1, 1, −1, −3, −1, 1, 1, −3, 1, −1, 1, −3, 3, −3, −3, 3, −1, −3, 1, 3, −3, 1, −3, −3}, and q=10; orthe sequence {sn} is {−3, −3, 1, −1, 3, 3, −3, −1, 1, −1, −1, 1, 1, −1, −1, 3, −3, 1, −3, 1, −1, −1, −1, −3}, and q=11; orthe sequence {sn} is {−3, 3, −1, 3, 1, −1, −1, −1, 3, 3, 1, 1, 1, 3, 3, 1, −3, −3, −1, 1, −3, 1, 3, −3}, and q=12; orthe sequence {sn} is {3, −3, 3, −1, −3, 1, 3, 1, −1, −1, −3, −1, 3, −3, 3, −1, −1, 3, 3, −3, −3, 3, −3, −3}, and q=13; orthe sequence {sn} is {−3, 3, −1, 3, −1, 3, 3, 1, 1, −3, 1, 3, −3, 3, −3, −3, −1, 1, 3, −3, −1, −1, −3, −3}, and q=14; orthe sequence {sn} is {−3, 1, −3, −1, −1, 3, 1, 3, −3, 1, −1, 3, 3, −1, −3, 3, −3, −1, −1, −3, −3, −3, 3, −3}, and q=15; orthe sequence {sn} is {−3, −1, −1, −3, 1, −3, −3, −1, −1, 3, −1, 1, −1, 3, 1, −3, −1, 3, 1, 1, −1, −1, −3, −3}, and q=16; orthe sequence {sn} is {−3, 1, −3, 3, −1, −1, −1, −3, 3, 1, −1, −3, −1, 1, 3, −1, 1, −1, 1, −3, −3, −3, −3, −3}, and q=17; orthe sequence {sn} is {3, −1, 3, −1, 1, −3, 1, 1, −3, −3, 3, −3, −1, −1, −1, −1, −1, −3, −3, −1, 1, 1, −3, −3}, and q=18: orthe sequence {sn} is {−3, −3, −3, −1, 3, −3, 3, 1, 3, 1, −3, −1, −1, −3, 1, 1, 3, 1, −1, −3, 3, 1, 3, −3}, and q=19; orthe sequence {sn} is {−1, 3, −3, −3, −1, 3, −1, −1, 1, 3, 1, 3, −1, −1, −3, 1, 3, 1, −1, −3, 1, −1, −3, −3}, and q=20; orthe sequence {sn} is {−3, −3, −1, −1, −1, −3, 1, −1, −3, −1, 3, −3, 1, −3, 3, −3, 3, 3, 1, −1, −1, 1, −3, −3}, and q=21; orthe sequence {sn} is {3, −1, 1, −1, 3, −3, 1, 1, 3, −1, −3, 3, 1, −3, 3, −1, −1, −1, −1, 1, −3, −3, −3, −3}, and q=22: orthe sequence {sn} is {−3, 1, −3, 3, −3, 1, −3, 3, 1, −1, −3, −1, −3, −3, −3, −3, 1, 3, −1, 1, 3, 3, 3, −3}, and q=23; orthe sequence {sn} is {−3, −1, 1, −3, −1, −1, 1, 1, 1, 3, 3, −1, 1, −1, 1, −1, −1, −3, −3, −3, 3, 1, −1, −3}, and q=24; orthe sequence {sn} is {−3, 3, −1, −3, −1, −1, −1, 3, −1, −1, 3, −3, −1, 3, −3, 3, −3, −1, 3, 1, 1, −1, −3, −3}, and q=25; orthe sequence {sn} is {−3, 1, −1, −3, −3, −1, 1, −3, −1, −3, 1, 1, −1, 1, 1, 3, 3, 3, −1, 1, −1, 1, −1, −3}, and q=26; orthe sequence {sn} is {−1, 3, −1, −1, 3, 3, −1, −1, −1, 3, −1, −3, 1, 3, 1, 1, −3, −3, −3, 
−1, −3, −1, −3, −3}, and q=27; or the sequence {sn} is {3, −3, −3, −1, 3, 3, −3, −1, 3, 1, 1, 1, 3, −1, 3, −3, −1, 3, −1, 3, 1, −1, −3, −3}, and q=28; or the sequence {sn} is {−3, 1, −3, 1, −3, 1, 1, 3, 1, −3, −3, −1, 1, 3, −1, −3, 3, 1, −1, −3, −3, −3, −3, −3}, and q=29; or the sequence {sn} is {3, −3, −1, 1, 3, −1, −1, −3, −1, 3, −1, −3, −1, −3, 3, −1, 3, 1, 1, −3, 3, −3, −3, −3}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of the combinations may be considered as one combination set. Based on this optional implementation, when M=36, and the sequence group includes the sequence {xn}, the sequence {ym}, and the sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=└ū+½┘ and ū=Jprime·q/31. A length of the sequence {hj} is J, where J is 48, 60, or 72, and an element hj satisfies hj=ku(j mod Jprime), where ku(i)=e^(−j·π·u·i·(i+1)/Jprime), i is an integer, 0≤i≤Jprime−1, and Jprime is the largest prime number smaller than J. In addition, a sequence {fn} is mapped to N subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the N subcarriers is t times a subcarrier spacing. A sequence {gm} and the sequence {hj} are respectively mapped to M and J subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M and J subcarriers is t times a subcarrier spacing, where t is a positive integer. A quantity of sequence pairs, namely, a sequence {xn} and a sequence {ym} or a sequence {hj} in different sequence groups, whose cross-correlation value is greater than 0.6, and a maximum cross-correlation value are shown in Table 23.
TABLE 23Length-36Length-48Length-60Length-72sequencesequencesequencesequenceLength-241021sequenceMaximum cross-0.6790.59630.64530.6121correlation value In a nineteenth optional implementation, when N=18, combinations of the sequence {sn} and q are a part or all of the following combinations:the sequence {sn} is {1, 1, −1, −1, −3, −1, 1, −3, −3, −3, 1, −3, −1, −1, 1, −1, 3, 1}, and q=1; orthe sequence {sn} is {3, −3, 3, −1, 1, 3, −3, −1, −3, −3, −1, −3, 3, 1, −1, 3, −3, 3}, and q=2; orthe sequence {sn} is {−3, 3, 1, −1, −1, 3, −3, −1, 1, 1, 1, 1, 1, −1, 3, −1, −3, −1}, and q=3; orthe sequence {sn} is {1, 1, −3, 3, 3, 1, 3, −3, 3, −1, 1, 1, −1, 1, −3, −3, −1, 3}, and q=4; orthe sequence {sn} is {−3, −3, 1, −3, 3, 3, 3, −1, 3, 1, 1, −3, −3, −3, 3, −3, −1, −1}, and q=5; orthe sequence {sn} is {3, −3, 1, 1, 3, −1, 1, −1, −1, −3, 1, 1, −1, 3, 3, −3, 3, −1}, and q=6; orthe sequence {sn} is {3, 3, −1, 1, 3, 1, −3, −1, 1, 1, −3, 1, 3, 3, −1, −3, −3, −3}, and q=7; orthe sequence {sn} is {−1, 3, −1, −3, 3, 1, −3, −1, 3, −3, −1, −1, 1, 1, 1, −1, −1, −1}, and q=8; orthe sequence {sn} is {−3, 1, −3, −3, 1, −3, −3, 3, 1, −3, −1, −3, −3, −3, −1, 1, 1, 3}, and q=9; orthe sequence {sn} is {3, −1, 3, 1, −3, −3, −1, 1, −3, −3, 3, 3, 3, 1, 3, −3, 3, −3}, and q=10; orthe sequence {sn} is {1, −3, −1, −3, 3, 3, −1, −3, 1, −3, −3, −1, −3, −1, 1, 3, 3, 3}, and q=11; orthe sequence {sn} is {−3, −3, 3, 3, 3, −1, −1, −3, −1, −1, −1, 3, 1, −3, −3, −1, 3, −1}, and q=12; orthe sequence {sn} is {−3, −1, −3, −3, 1, 1, −1, −3, −1, −3, −1, −1, 3, 3, −1, 3, 1, 3}, and q=13; orthe sequence {sn} is {1, 1, −3, −3, −3, −3, 1, 3, −3, 3, 3, 1, −3, −1, 3, −1, −3, 1}, and q=14; orthe sequence {sn} is {−3, 3, −1, −3, −1, −3, 1, 1, −3, −3, −1, −1, 3, −3, 1, 3, 1, 1}, and q=15; orthe sequence {sn} is {−3, −3, 1, −1, −1, 1, 1, −3, −1, 3, 3, 3, 3, −1, 3, 1, 3, 1}, and q=16; orthe sequence {sn} is {−3, −3, 3, 3, 3, 1, −3, 1, 3, 3, 1, −3, −3, 3, −1, −3, −1, 1}, and q=17; orthe sequence {sn} is {−3, −1, 3, 3, −1, 3, −1, −3, −1, 1, −1, −3, −1, −1, −1, 3, 3, 1}, and q=18; orthe sequence {sn} is {−3, 1, −3, −1, −1, 3, 1, −3, −3, −3, −1, −3, −3, 1, 1, 1, −1, −1}, and q=19; orthe sequence {sn} is {3, 3, 3, −3, −1, −3, −1, 3, −1, 1, −1, −3, 1, −3, −3, −1, 3, 3}, and q=20; orthe sequence {sn} is {−3, 1, 1, −3, 1, 1, 3, −3, −1, −3, −1, 3, −3, 3, −1, −1, −1, −3}, and q=21; orthe sequence {sn} is {−3, −1, −3, −1, −3, 1, 3, −3, −1, 3, 3, 3, 1, −1, −3, 3, −1, −3}, and q=22; orthe sequence {sn} is {3, −1, −3, 1, −3, −3, −3, 3, 3, −1, 1, −3, −1, 3, 1, 1, 3, 3}, and q=23; orthe sequence {sn} is {−3, −3, −3, 1, −3, 3, 1, 1, 3, −3, −3, 1, 3, −1, 3, −3, −3, 3}, and q=24; orthe sequence {sn} is {3, −1, −1, 1, −3, −1, −3, −1, −3, −3, −1, −3, 1, 1, 1, −3, −3, 3}, and q=25; orthe sequence {sn} is {3, 1, −3, 1, −3, 3, 3, −1, −3, −3, −1, −3, −3, 3, −3, −1, 1, 3}, and q=26; orthe sequence {sn} is {−3, −1, −1, −3, 1, −3, 3, −1, −1, −3, 3, 3, −3, −1, 3, −1, −1, −1}, and q=27; orthe sequence {sn} is {−3, −3, 3, 3, −3, 1, 3, −1, −3, 1, −1, −3, 3, −3, −1, −1, −1, 3}, and q=28; orthe sequence {sn} is {−1, −3, 1, −3, −3, −3, 1, 1, 3, 3, −3, 3, 3, −3, −1, 3, −3, 1}, and q=29; orthe sequence {sn} is {−3, 3, 1, −1, −1, −1, −1, 1, −1, 3, 3, −3, −1, 1, 3, −1, 3, −1}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. 
Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. Based on this optional implementation, when M=36, and the sequence group includes the sequence {xn} with a length N=18, the sequence {xn} with a length N=24, the sequence {ym}, and the sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=└ū+½┘, and ū=Jprime·q/31; and a combination manner of the sequence {xn} with a length N=24 and the sequence {ym} satisfies the eighteenth optional implementation in the specification. A length of the sequence {hj} is J, a value of J is 48, 60, or 72 and satisfies hj=ku(j mod Jprime), ku⁢(i)=e-j⁢π·u·i·(i+1)Jprime, i is an integer, 0≤i≤Jprime−1, and Jprimeis a largest prime number smaller than J. In addition, a sequence {fn} with a length N=18 and a sequence {fn} with a length N=24 are respectively mapped to 18 and 24 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 18 or 24 subcarriers is t times a subcarrier spacing. A sequence {gm} and the sequence {hj} are respectively mapped to M and J subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M and J subcarriers is t times a subcarrier spacing, where t is a positive integer. A quantity of sequence pairs, namely, a sequence {xn} with a length N=18 and a sequence with another length in different sequence groups, whose cross-correlation value is greater than 0.7 and a maximum cross-correlation value are shown in Table 24. TABLE 24Length-24Length-36Length-48Length-60Length-72sequencesequencesequencesequencesequenceLength-1840010sequenceMaximum0.74120.69350.69780.70150.6615cross-correlationvalue In a twentieth optional implementation, when N=12, combinations of the sequence {sn} and q are a part or all of the following combinations:the sequence {sn} is {−3, 1, −3, −3, −3, 3, −3, −1, 1, 1, 1, −3}, and q=1; orthe sequence {sn} is {−3, 3, 1, −3, 1, 3, −1, −1, 1, 3, 3, 3}, and q=2; orthe sequence {sn} is {−3, 3, 3, 1, −3, 3, −1, 1, 3, −3, 3, −3}, and q=3; orthe sequence {sn} is {−3, −3, −1, 3, 3, 3, −3, 3, −3, 1, −1, −3}, and q=4; orthe sequence {sn} is {−3, 3, −3, 3, 3, −3, −1, −1, 3, 3, 1, −3}, and q=5; orthe sequence {sn} is {−3, −3, 3, 1, −3, −3, −3, −1, 3, −1, 1, 3}, and q=6; orthe sequence {sn} is {1, −1, 3, −1, −1, −1, −3, −1, 1, 1, 1, −3}, and q=7; orthe sequence {sn} is {1, 3, −3, 1, 3, 3, 3, 1, −1, 1, −1, 3}, and q=8; orthe sequence {sn} is {−3, −1, 3, 1, −3, −1, −3, 3, 1, 3, 3, 1}, and q=9; orthe sequence {sn} is {−1, 1, 3, −3, 1, −1, 1, −1, −1, −3, 1, −1}, and q=10; orthe sequence {sn} is {−3, −1, −1, 1, 3, 1, 1, −1, 1, −1, −3, 1}, and q=11; orthe sequence {sn} is {−3, −1, −3, −1, −1, −3, 3, 3, −1, −1, 1, −3}, and q=12; orthe sequence {sn} is {−3, −1, 3, −3, −3, −1, −3, 1, −1, −3, 3, 3}, and q=13; orthe sequence {sn} is {−3, 3, 1, 3, −3, 1, 1, 1, 1, 3, −3, 3}, and q=14; orthe sequence {sn} is {−3, −3, 3, −3, −1, 3, 3, 3, −1, −3, 1, −3}, and q=15; orthe sequence {sn} is {−3, 1, 3, −1, −1, −3, −3, −1, −1, 3, 1, −3}, and q=16; orthe sequence {sn} is {−1, −1, −1, −1, 1, −3, −1, 3, 3, −1, −3, 1}, and q=17; orthe sequence {sn} is {−1, 1, 1, −1, 1, 3, 3, −1, −1, −3, 1, −3}, and q=18; orthe sequence {sn} is {3, 1, 3, 1, 3, −3, −1, 1, 3, 1, −1, −3}, and q=19; orthe sequence {sn} is {−3, 1, −1, −1, 3, 3, −3, −1, −1, −3, −1, −3}, and q=20; orthe sequence {sn} is {−1, −3, 3, −1, −3, −3, −3, −1, 1, 
−1, 1, −3}, and q=21; orthe sequence {sn} is {−3, 1, 3, 3, −1, −1, −3, 3, 3, −3, 3, −3}, and q=22; orthe sequence {sn} is {−3, 3, 3, 3, −1, −3, −3, −1, −3, 1, 3, −3}, and q=23; orthe sequence {sn} is {3, −1, −3, 3, −3, −1, 3, 3, 3, −3, −1, −3}, and q=24; orthe sequence {sn} is {−3, −1, 1, −3, 1, 3, 3, 3, −1, −3, 3, 3}, and q=25; orthe sequence {sn} is {−3, 3, 1, −1, 3, 3, −3, 1, −1, 1, −1, 1}, and q=26; orthe sequence {sn} is {−3, −1, −1, −3, −3, −1, −3, 3, 1, 3, −1, −3}, and q=27; orthe sequence {sn} is {−3, −3, 3, 3, 3, −3, −1, 1, −3, 3, 1, −3}, and q=28; orthe sequence {sn} is {1, −1, 3, 1, 1, −1, −1, −1, 1, 3, −3, 1}, and q=29; orthe sequence {sn} is {−3, 3, −3, 3, −3, −3, 3, −1, −1, 1, 3, −3}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. Based on this optional implementation, when M=36, and the sequence group includes the sequence {xn} with a length N=12, the sequence {xn} with a length N=18, the sequence {xn} with a length N=24, the sequence {ym}, and the sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=└ū+½┘, and ū=Jprime·q/31; a combination manner of the sequence {xn} with a length N=18 and the sequence {ym} satisfies the nineteenth optional implementation in the specification; and a combination manner of the sequence {xn} with a length N=24 and the sequence {ym} satisfies the eighteenth optional implementation in the specification. A length of the sequence {hj} is J, a value of J is 48, 60, or 72 and satisfies hj=ku(j mod Jprime), ku(i)=e-j⁢π·u·i·(i+1)Jprime, i is an integer, 0≤i≤Jprime−1, and Jprimeis a largest prime number smaller than J. In addition, a sequence {fn} with a length N=12 is mapped to 12 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 12 subcarriers is t times a subcarrier spacing. A sequence {fn} with a length N=18 and a sequence {fn} with a length N=24 are respectively mapped to 18 and 24 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 18 or 24 subcarriers is t times a subcarrier spacing. A sequence {gm} and the sequence {hj} are respectively mapped to M and J subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the A and J subcarriers is t times a subcarrier spacing, where t is a positive integer. A quantity of sequence pairs, namely, a sequence {xn} with a length N=12 and a sequence with another length in different sequence groups, whose cross-correlation value is greater than 0.8 and a maximum cross-correlation value are shown in Table 25. 
TABLE 25Length-18Length-24Length-36Length-48Length-60Length-72sequencesequencesequencesequencesequencesequenceLength-121061111sequenceMaximum0.89330.8550.8310.81070.80340.8074cross-correlationvalue In a twenty-first optional implementation, when N=24, a combination of the sequence {sn} and q is at least one of the following combinations:the sequence {sn} is {−1, −3, 3, −1, 3, 1, 3, −1, 1, −3, −1, −3, −1, 1, 3, −3, −1, −3, 3, 3, 3, −3, −3, −3}, and q=1; orthe sequence {sn} is {−3, −1, 1, −3, −1, −1, 1, 1, 1, 3, 3, −1, 1, −1, 1, −1, −1, −3, −3, −3, 3, 1, −1, −3}), and q=2; orthe sequence {sn} is {−1, −3, −3, 1, −1, −1, −3, 1, 3, −1, −3, −1, −1, −3, 1, 1, 3, 1, −3, −1, −1, 3, −3, −3}, and q=3; orthe sequence {sn} is {1, −3, 3, −1, −3, −1, 3, 3, 1, −1, 1, 1, 3, −3, −1, −3, −3, −3, −1, 3, −3, −1, −3, −3}, and q=4; orthe sequence {sn} is {−1, 3, −3, −3, −1, 3, −1, −1, 1, 3, 1, 3, −1, −1, −3, 1, 3, 1, −1, −3, 1, −1, −3, −3}, and q=5; orthe sequence {sn} is {−3, 1, −3, 3, −3, 1, −3, 3, 1, −1, −3, −1, −3, −3, −3, −3, 1, 3, −1, 1, 3, 3, 3, −3}, and q=6; orthe sequence {sn} is {−3, 3, 1, 3, −1, 1, −3, 1, −3, 1, −1, −3, −1, −3, −3, −3, −3, −1, −1, −1, 1, 1, −3, −3}, and q=7; orthe sequence {sn} is {−3, 1, 3, −1, 1, −1, 3, −3, 3, −1, −3, −1, −3, 3, −1, −1, −1, −3, −1, −1, −3, 3, 3, −3}, and q=8; orthe sequence {sn} is {−3, 1, −3, 3, −1, −1, −1, −3, 3, 1, −1, −3, −1, 1, 3, −1, 1, −1, 1, −3, −3, −3, −3, −3}, and q=9; orthe sequence {sn} is {1, 1, −1, −3, −1, 1, 1, −3, 1, −1, 1, −3, 3, −3, −3, 3, −1, −3, 1, 3, −3, 1, −3, −3}, and q=10; orthe sequence {sn} is {−3, −3, −3, −1, 3, −3, 3, 1, 3, 1, −3, −1, −1, −3, 1, 1, 3, 1, −1, −3, 3, 1, 3, −3}, and q=11; orthe sequence {sn} is {−3, 3, −1, 3, 1, −1, −1, −1, 3, 3, 1, 1, 1, 3, 3, 1, −3, −3, −1, 1, −3, 1, 3, −3, and q=12; orthe sequence {sn} is {3, −3, 3, −1, −3, 1, 3, 1, −1, −1, −3, −1, 3, −3, 3, −1, −1, 3, 3, −3, −3, 3, −3, −3}, and q=13; orthe sequence {sn} is {−3, 3, −1, 3, −1, 3, 3, 1, 1, −3, 1, 3, −3, 3, −3, −3, −1, 1, 3, −3, −1, −1, −3, −3}, and q=14; orthe sequence {sn} is {−3, 1, −3, −1, −1, 3, 1, 3, −3, 1, −1, 3, 3, −1, −3, 3, −3, −1, −1, −3, −3, −3, 3, −3}, and q=15; orthe sequence {sn} is {−3, −1, −1, −3, 1, −3, −3, −1, −1, 3, −1, 1, −1, 3, 1, −3, −1, 3, 1, 1, −1, −1, −3, −3}, and q=16; orthe sequence {sn} is {−3, −3, 1, −1, 3, 3, −3, −1, 1, −1, −1, 1, 1, −1, −1, 3, −3, 1, −3, 1, −1, −1, −1, −3}, and q=17; orthe sequence {sn} is {3, −1, 3, −1, 1, −3, 1, 1, −3, −3, 3, −3, −1, −1, −1, −1, −1, −3, −3, −1, 1, 1, −3, −3}, and q=18; orthe sequence {sn} is {−3, 1, −3, 1, −3, −3, 1, −3, 1, −3, −3, −3, −3, −3, 1, −3, −3, 1, 1, −3, 1, 1, −3, −3}, and q=19; orthe sequence {sn} is {−3, −3, 3, 3, 1, −1, −1, −1, 1, −3, −1, 1, −1, 3, −3, −1, −3, −1, −1, 1, −3, 3, −1, −3}, and q=20; orthe sequence {sn} is {−3, −3, −1, −1, −1, −3, 1, −1, −3, −1, 3, −3, 1, −3, 3, −3, 3, 3, 1, −1, −1, 1, −3, −3}, and q=21; orthe sequence {sn} is {3, −1, 1, −1, 3, −3, 1, 1, 3, −1, −3, 3, 1, −3, 3, −1, −1, −1, −1, 1, −3, −3, −3, −3}, and q=22; orthe sequence {sn} is {−1, 3, −1, −1, 3, 3, −1, −1, −1, 3, −1, −3, 1, 3, 1, 1, −3, −3, −3, −1, −3, −1, −3, −3}, and q=23; orthe sequence {sn} is {−1, −3, 3, 1, 1, −3, 1, −3, −3, 1, −3, −1, −1, 3, −3, 3, 3, 3, −3, 1, 3, 3, −3, −3}, and q=24; orthe sequence {sn} is {−3, −1, 1, −3, −3, 1, 1, −3, 3, −1, −1, −3, 1, 3, 1, −1, −3, −1, −3, 1, −3, −3, −3, −3}, and q=25; orthe sequence {sn} is {−3, 3, −1, −3, −1, −1, −1, 3, −1, −1, 3, −3, −1, 3, −3, 3, −3, −1, 3, 1, 1, −1, −3, −3}, and q=26; orthe sequence {sn} is {−3, 1, −1, −3, −3, −1, 1, −3, −1, −3, 1, 
1, −1, 1, 1, 3, 3, 3, −1, 1, −1, 1, −1, −3}, and q=27; orthe sequence {sn} is {3, −3, −3, −1, 3, 3, −3, −1, 3, 1, 1, 1, 3, −1, 3, −3, −1, 3, −1, 3, 1, −1, −3, −3}, and q=28; orthe sequence {sn} is {−3, 1, −3, 1, −3, 1, 1, 3, 1, −3, −3, −1, 1, 3, −1, −3, 3, 1, −1, −3, −3, −3, −3, −3}, and q=29; orthe sequence {sn} is {3, −3, −1, 1, 3, −1, −1, −3, −1, 3, −1, −3, −1, −3, 3, −1, 3, 1, 1, −3, 3, −3, −3, −3}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. Based on this optional implementation, when M=36, and the sequence group includes the sequence {xn}, the sequence {ym}, and the sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=└ū+½┘, and ū=Jprime·q/31. A length of the sequence {hj} is J, a value of J is 48, 60, or 72 and satisfies hj=ku(j mod Jprime), ku(i)=e-j⁢π·u·i·(i+1)Jprime, i is an integer, 0≤i≤Jprime−1, and Jprimeis a largest prime number smaller than J. In addition, a sequence {fn} is mapped to N subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the N subcarriers is t times a subcarrier spacing. A sequence {gm} and the sequence {hj} are respectively mapped to M and J subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M and J subcarriers is t times a subcarrier spacing, where t is a positive integer. A quantity of sequence pairs, namely, a sequence {xn} and a sequence {ym} or a sequence {hj} in different sequence groups, whose cross-correlation value is greater than 0.6 and a maximum cross-correlation value are shown in Table 26. 
TABLE 26Length-36Length-48Length-60Length-72sequencesequencesequencesequenceLength-241021sequenceMaximum cross-0.6790.59630.64530.6121correlation value In a twenty-second optional implementation, when N=18, combinations of the sequence {sn} and q are a part or all of the following combinations:the sequence {sn} is {1, 1, −1, −1, −3, −1, 1, −3, −3, −3, 1, −3, −1, −1, 1, −1, 3, 1}, and q=1; orthe sequence {sn} is {3, −3, 3, −1, 1, 3, −3, −1, −3, −3, −1, −3, 3, 1, −1, 3, −3, 3}, and q=2; orthe sequence {sn} is {−3, 3, 1, −1, −1, 3, −3, −1, 1, 1, 1, 1, 1, −1, 3, −1, −3, −1}, and q=3; orthe sequence {sn} is {1, 1, −3, 3, 3, 1, 3, −3, 3, −1, 1, 1, −1, 1, −3, −3, −1, 3}, and q=4; orthe sequence {sn} is {−3, −3, 1, −3, 3, 3, 3, −1, 3, 1, 1, −3, −3, −3, 3, −3, −1, −1}, and q=5; orthe sequence {sn} is {−1, 3, −1, −3, 3, 1, −3, −1, 3, −3, −1, −1, 1, 1, 1, −1, −1, −1}, and q=6; orthe sequence {sn} is {−3, 3, −1, 1, 3, 1, −3, −1, 1, 1, −3, 1, 3, 3, −1, −3, −3, −3}, and q=7; orthe sequence {sn} is {−3, 1, −3, −3, 1, −3, −3, 3, 1, −3, −1, −3, −3, −3, −1, 1, 1, 3}, and q=8; orthe sequence {sn} is {1, −3, −1, −3, 3, 3, −1, −3, 1, −3, −3, −1, −3, −1, 1, 3, 3, 3}, and q=9; orthe sequence {sn} is {3, −1, 3, 1, −3, −3, −1, 1, −3, −3, 3, 3, 3, 1, 3, −3, 3, −3}, and q=10; orthe sequence {sn} is {−3, −3, 1, −1, −1, 1, 1, −3, −1, 3, 3, 3, 3, −1, 3, 1, 3, 1}, and q=11; orthe sequence {sn} is {−3, −3, 3, 3, 3, −1, −1, −3, −1, −1, −1, 3, 1, −3, −3, −1, 3, −1}, and q=12; orthe sequence {sn} is {−3, −1, −3, −3, 1, 1, −1, −3, −1, −3, −1, −1, 3, 3, −1, 3, 1, 3}, and q=13; orthe sequence {sn} is {1, 1, −3, −3, −3, −3, 1, 3, −3, 3, 3, 1, −3, −1, 3, −1, −3, 1}, and q=14; orthe sequence {sn} is {−3, 3, −1, −3, −1, −3, 1, 1, −3, −3, −1, −1, 3, −3, 1, 3, 1, 1}, and q=15; orthe sequence {sn} is {−3, −3, 3, 3, 3, 1, −3, 1, 3, 3, 1, −3, −3, 3, −1, −3, −1, 1}, and q=16; orthe sequence {sn} is {−3, −1, 3, 3, −1, 3, −1, −3, −1, 1, −1, −3, −1, −1, −1, 3, 3, 1}, and q=17; orthe sequence {sn} is {−3, −1, −3, −1, −3, 1, 3, −3, −1, 3, 3, 3, 1, −1, −3, 3, −1, −3}, and q=18; orthe sequence {sn} is {−3, 1, −3, −1, −1, 3, 1, −3, −3, −3, −1, −3, −3, 1, 1, 1, −1, −1}, and q=19; orthe sequence {sn} is {3, 3, 3, −3, −1, −3, −1, 3, −1, 1, −1, −3, 1, −3, −3, −1, 3, 3}, and q=20; orthe sequence {sn} is {−3, 1, 1, −3, 1, 1, 3, −3, −1, −3, −1, 3, −3, 3, −1, −1, −1, −3}, and q=21; orthe sequence {sn} is {3, −1, −3, 1, −3, −3, −3, 3, 3, −1, 1, −3, −1, 3, 1, 1, 3, 3}, and q=22; orthe sequence {sn} is {−3, −1, −1, −3, 1, −3, 3, −1, −1, −3, 3, 3, −3, −1, 3, −1, −1, −1}, and q=23; orthe sequence {sn} is {−3, −3, −3, 1, −3, 3, 1, 1, 3, −3, −3, 1, 3, −1, 3, −3, −3, 3}, and q=24; orthe sequence {sn} is {3, −3, 1, 1, 3, −1, 1, −1, −1, −3, 1, −1, 3, 3, −3, 3, −1}, and q=25; orthe sequence {sn} is {3, −1, −1, 1, −3, −1, −3, −1, −3, −3, −1, −3, 1, 1, 1, −3, −3, 3}, and q=26; orthe sequence {sn} is {3, 1, −3, 1, −3, 3, 3, −1, −3, −3, −1, −3, −3, 3, −3, −1, 1, 3}, and q=27; orthe sequence {sn} is {−3, −3, 3, 3, −3, 1, 3, −1, −3, 1, −1, −3, 3, −3, −1, −1, −1, 3}, and q=28; orthe sequence {sn} is {−1, −3, 1, −3, −3, −3, 1, 1, 3, 3, −3, 3, 3, −3, −1, 3, −3, 1}, and q=29; orthe sequence {sn} is {−3, 3, 1, −1, −1, −1, −1, 1, −1, 3, 3, −3, −1, 1, 3, −1, 3, −1}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. 
Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. Based on this optional implementation, when M=36, and the sequence group includes the sequence {xn} with a length N=18, the sequence {xn} with a length N=24, the sequence {ym}, and the sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=└ū+½┘, and ū=Jprime·q/31; and a combination manner of the sequence {xn} with a length N=24 and the sequence {ym} satisfies the twenty-first optional implementation in the specification. A length of the sequence {hj} is J, a value of J is 48, 60, or 72 and satisfies hj=ku(j mod·Jprime), ku⁢(i)=e-j⁢π·u·i·(i+1)Jprime, i is an integer, 0≤i≤Jprime−1, and Jprimeis a largest prime number smaller than 0.1. In addition, a sequence {fn} with a length N=18 and a sequence {fn} with a length N=24 are respectively mapped to 18 and 24 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 18 or 24 subcarriers is t times a subcarrier spacing. A sequence {gm} and the sequence {hj} are respectively mapped to M and J subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M and J subcarriers is t times a subcarrier spacing, where t is a positive integer. A quantity of sequence pairs, namely, a sequence {xn} with a length N=18 and a sequence with another length in different sequence groups, whose cross-correlation value is greater than 0.7 and a maximum cross-correlation value are shown in Table 27. TABLE 27Length-24Length-36Length-48Length-60Length-72sequencesequencesequencesequencesequenceLength-1840010sequenceMaximum0.74120.69350.69780.70150.6615cross-correlationvalue In a twenty-third optional implementation, when N=12, combinations of the sequence {sn} and q are a part or all of the following combinations:the sequence {sn} is {−3, 1, −3, −3, −3, 3, −3, −1, 1, 1, 1, −3}, and q=1; orthe sequence {sn} is {−3, 3, 1, −3, 1, 3, −1, −1, 1, 3, 3, 3}, and q=2; orthe sequence {sn} is {−3, 3, 3, 1, −3, 3, −1, 1, 3, −3, 3, −3}, and q=3; orthe sequence {sn} is {−1, 1, 1, −, 1, 3, 3, −1, −1, −3, 1, −3}, and q=4; orthe sequence {sn} is {−3, −3, −1, 3, 3, 3, −3, 3, −3, 1, −1, −3}, and q=5; orthe sequence {sn} is {−3, −3, 3, 1, −3, −3, −3, −1, 3, −1, 1, 3}, and q=6; orthe sequence {sn} is {1, −1, 3, −1, −1, −1, −3, −1, 1, 1, 1, −3}, and q=7; orthe sequence {sn} is {−3, −1, 3, 1, −3, −1, −3, 3, 1, 3, 3, 1}, and q=8; orthe sequence {sn} is {−3, 3, 1, 3, −3, 1, 1, 1, 1, 3, −3, 3}, and q=9; orthe sequence {sn} is {−3, 3, −3, 3, 3, −3, −, −1, 3, 3, 1, −3}, and q=10; orthe sequence {sn} is {1, 3, −3, 1, 3, 3, 3, 1, −1, 1, −1, 3}, and q=11; orthe sequence {sn} is {−3, −1, −3, −1, −1, −3, 3, 3, −1, −1, 1, −3}, and q=12; orthe sequence {sn} is {−1, 1, 3, −3, 1, −1, 1, −1, −1, −3, 1, −1}, and q=13; orthe sequence {sn} is {3, 1, 3, 1, 3, −3, −1, 1, 3, 1, −1, −3}, and q=14; orthe sequence {sn} is {−3, −1, −1, 1, 3, 1, 1, −1, 1, −1, −3, 1}, and q=15; orthe sequence {sn} is {−3, 1, 3, −1, −1, −3, −3, −1, −1, 3, 1, −3}, and q=16; orthe sequence {sn} is {−1, −1, −1, −1, 1, −3, −1, 3, 3, −1, −3, 1}, and q=17; orthe sequence {sn} is {−3, −1, 3, −3, −3, −1, −3, 1, −1, −3, 3, 3}, and q=18; orthe sequence {sn} is {−3, 1, 3, 3, −1, −1, −3, 3, 3, −3, 3, −3}, and q=19; orthe sequence {sn} is {−3, −3, 3, −3, −1, 3, 3, 3, −1, −3, 1, −3}, and q=20; orthe sequence {sn} is {−1, −3, 3, −1, −3, −3, −3, −1, 
1, −1, 1, −3}, and q=21; orthe sequence {sn} is {−3, 1, −1, −1, 3, 3, −3, −1, −1, −3, −1, −3}, and q=22; orthe sequence {sn} is {−3, 3, 3, 3, −1, −3, −3, −1, −3, 1, 3, −3}, and q=23; orthe sequence {sn} is {3, −1, −3, 3, −3, −1, 3, 3, 3, −3, −1, −3}, and q=24; orthe sequence {sn} is {−3, −1, 1, −3, 1, 3, 3, 3, −1, −3, 3, 3}, and q=25; orthe sequence {sn} is {−3, 3, 1, −1, 3, 3, −3, 1, −1, 1, −1, 1}, and q=26; orthe sequence {sn} is {−3, −1, −1, −3, −3, −1, −3, 3, 1, 3, −1, −3}, and q=27; orthe sequence {sn} is {−3, −3, 3, 3, 3, −3, −1, 1, −3, 3, 1, −3}, and q=28; orthe sequence {sn} is {1, −1, 3, 1, 1, −1, −1, −1, 1, 3, −3, 1}, and q=29; orthe sequence {sn} is {−3, 3, −3, 3, −3, −3, 3, −1, −1, 1, 3, −3}, and q=30. It should be noted that all the foregoing combinations of the sequence {sn} and q may be considered as a whole; in other words, all the combinations may be considered as one combination set. Certainly, a part of all the combinations of the sequence {sn} and q in this embodiment of the present invention may be considered as a whole; in other words, a part of combinations may be considered as one combination set. Based on this optional implementation, when M=36, and the sequence group includes the sequence {xn} with a length N=12, the sequence {xn} with a length N=18, the sequence {xn} with a length N=24, the sequence {ym}, and the sequence {hj}, a combination manner of the sequence {ym} and the sequence {hj} satisfies u=└ū+½┘, and ū=Jprime·q/31; a combination manner of the sequence {xn} with a length N=18 and the sequence {ym} satisfies the twenty-second optional implementation in the specification; and a combination manner of the sequence {xn} with a length N=24 and the sequence {ym} satisfies the twenty-first optional implementation in the specification. A length of the sequence {hj} is J, a value of J is 48, 60, or 72 and satisfies hj=ku(j mod Jprime), ku⁢(i)=e-j⁢π·u·i·(i+1)Jprime, i is an integer, 0≤i≤Jprime−1, and Jprimeis a largest prime number smaller than J. In addition, a sequence {fn} with a length N=12 is mapped to 12 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 12 subcarriers is t times a subcarrier spacing. A sequence {fn} with a length N=18 and a sequence {fn} with a length N=24 are respectively mapped to 18 and 24 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 18 or 24 subcarriers is t times a subcarrier spacing. A sequence {gm} and the sequence {hj} are respectively mapped to M and J subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M and J subcarriers is t times a subcarrier spacing, where t is a positive integer. A quantity of sequence pairs, namely, a sequence {xn} with a length N=12 and a sequence with another length in different sequence groups, whose cross-correlation value is greater than 0.8 and a maximum cross-correlation value are shown in Table 28. 
TABLE 28Length-18Length-24Length-36Length-48Length-60Length-72sequencesequencesequencesequencesequencesequenceLength-12115111sequenceMaximum0.89330.83730.8310.81070.80340.8074cross-correlationvalue That is, in the first optional implementation of the foregoing embodiment, the combination of the sequence {sn} and q is at least one of the following combinations:the sequence {sn} is {1, 3, −3, 1, 3, 3, 3, 1, −1, 1, −1, 3}, and q=1; orthe sequence {sn} is {−3, −1, 3, 1, −3, −1, −3, 3, 1, 3, 3, 1}, and q=2; orthe sequence {sn} is {−3, −3, 3, −3, −1, 3, 3, 3, −1, −3, 1, −3}, and q=3; orthe sequence {sn} is {−3, 1, 3, −1, −1, −3, −3, −1, −1, 3, 1, −3}, and q=4; orthe sequence {sn} is {−3, −1, −3, −1, −1, −3, 3, 3, −1, −1, 1, −3}, and q=5; orthe sequence {sn} is {3, −1, −3, 3, −3, −1, 3, 3, 3, −3, −1, −3}, and q=6; orthe sequence {sn} is {−3, −3, 3, 3, 3, −3, −1, 1, −3, 3, 1, −3}, and q=7; orthe sequence {sn} is {−3, 3, 1, −1, 3, 3, −3, 1, −1, 1, −1, 1}, and q=8; orthe sequence {sn} is {−3, 1, −3, −3, −3, 3, −3, −1, 1, 1, 1, −3}, and q=9; orthe sequence {sn} is {−3, −1, −1, −3, −3, −1, −3, 3, 1, 3, −1, −3}, and q=10; orthe sequence {sn} is {−3, −1, −1, 1, 3, 1, 1, −1, 1, −1, −3, 1}, and q=11; orthe sequence {sn} is {−1, −1, −1, −1, 1, −3, −1, 3, 3, −1, −3, 1}, and q=12; orthe sequence {sn} is {−3, −3, −1, 3, 3, 3, −3, 3, −3, 1, −1, −3}, and q=13; orthe sequence {sn} is {−3, −1, 1, −3, 1, 3, 3, 3, −1, −3, 3, 3}, and q=14; orthe sequence {sn} is {−3, −1, 3, −3, −3, −1, −3, 1, −1, −3, 3, 3}, and q=15; orthe sequence {sn} is {−3, 3, 1, −3, 1, 3, −1, −1, 1, 3, 3, 3}, and q=16; orthe sequence {sn} is {−3, −3, 3, 1, −3, −3, −3, −1, 3, −1, 1, 3}, and q=17; orthe sequence {sn} is {−1, 1, 1, −1, 1, 3, 3, −1, −1, −3, 1, −3}, and q=18; orthe sequence {sn} is {−3, 3, −3, 3, 3, −3, −1, −1, 3, 3, 1, −3}, and q=19; orthe sequence {sn} is {−1, 1, 3, −3, 1, −1, 1, −1, −1, −3, 1, −1}, and q=20; orthe sequence {sn} is {1, −1, 3, 1, 1, −1, −1, −1, 1, 3, −3, 1}, and q=21; orthe sequence {sn} is {−3, 3, 1, 3, −3, 1, 1, 1, 1, 3, −3, 3}, and q=22, orthe sequence {sn} is {−3, 3, −3, 3, −3, −3, 3, −1, −1, 1, 3, −3}, and q=23; orthe sequence {sn} is {−3, 3, 3, 1, −3, 3, −1, 1, 3, −3, 3, −3}, and q=24; orthe sequence {sn} is {1, −1, 3, −1, −1, −1, −3, −1, 1, 1, 1, −3}, and q=25; orthe sequence {sn} is {−3, 1, −1, −1, 3, 3, −3, −1, −1, −3, −1, −3}, and q=26; orthe sequence {sn} is {3, 1, 3, 1, 3, −3, −1, 1, 3, 1, −1, −3}, and q=27; orthe sequence {sn} is {−3, 1, 3, 3, −1, −1, −3, 3, 3, −3, 3, −3}, and q=28; orthe sequence {sn} is {−3, 3, 3, 3, −1, −3, −3, −1, −3, 1, 3, −3}, and q=29; orthe sequence {sn} is {−1, −3, 3, −1, −3, −3, −3, −1, 1, −1, 1, −3}, and q=30. 
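For readers who wish to reproduce the kind of figures reported in Tables 22 to 28, the following is a minimal Python sketch of a peak cyclic cross-correlation measurement between two frequency-domain base sequences. It is illustrative only: it assumes the common mapping xn=e^(j·π·sn/4) from the phase values {sn} listed above (the exact definition of {xn} is given earlier in this specification), it compares two sequences of equal length, and it does not reproduce the exact multi-length measurement methods behind the tables.

```python
import numpy as np

def peak_cyclic_xcorr(a, b):
    """Peak magnitude of the normalized cyclic cross-correlation of two
    equal-length complex sequences; 1.0 means the sequences coincide up to
    a cyclic shift and a complex scaling factor."""
    a = np.asarray(a, dtype=complex)
    b = np.asarray(b, dtype=complex)
    # Cyclic cross-correlation over all shifts at once via the FFT.
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b)))
    return np.max(np.abs(corr)) / (np.linalg.norm(a) * np.linalg.norm(b))

# Two length-12 phase sequences {sn} taken from the listing above (q=1 and q=2
# of the first optional implementation), mapped to {xn} with the assumed rule.
s_q1 = np.array([1, 3, -3, 1, 3, 3, 3, 1, -1, 1, -1, 3])
s_q2 = np.array([-3, -1, 3, 1, -3, -1, -3, 3, 1, 3, 3, 1])
x_q1 = np.exp(1j * np.pi * s_q1 / 4)  # assumed x_n = e^(j*pi*s_n/4)
x_q2 = np.exp(1j * np.pi * s_q2 / 4)
print(peak_cyclic_xcorr(x_q1, x_q2))
```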
That is, in the first optional implementation of the foregoing embodiment, the combination of the sequence {sn} and q is at least one of the following combinations:a sequence whose {sn} index is 10, and q=1; ora sequence whose {sn} index is 16, and q=2; ora sequence whose {sn} index is 19, and q=3; ora sequence whose {sn} index is 12, and q=4; ora sequence whose {sn} index is 8, and q=5; ora sequence whose {sn} index is 25, and q=6; ora sequence whose {sn} index is 13, and q=7; ora sequence whose {sn} index is 29, and q=8; ora sequence whose {sn} index is 15, and q=9; ora sequence whose {sn} index is 3, and q=10; ora sequence whose {sn} index is 17, and q=11; ora sequence whose {sn} index is 4, and q=12; ora sequence whose {sn} index is 6, and q=13; ora sequence whose {sn} index is 22, and q=14; ora sequence whose {sn} index is 0, and q=15; ora sequence whose {sn} index is 27, and q=16; ora sequence whose {sn} index is 23, and q=17; ora sequence whose {sn} index is 5, and q=18; ora sequence whose {sn} index is 24, and q=19; ora sequence whose {sn} index is 1, and q=20; ora sequence whose {sn} index is 2, and q=21; ora sequence whose {sn} index is 14, and q=22; ora sequence whose {sn} index is 28, and q=23; ora sequence whose {sn} index is 20, and q=24; ora sequence whose {sn} index is 26, and q=25; ora sequence whose {sn} index is 7, and q=26; ora sequence whose {sn} index is 18, and q=27; ora sequence whose {sn} index is 21, and q=28; ora sequence whose {sn} index is 9, and q=29; ora sequence whose {sn} index is 11, and q=30. That is, in the second optional implementation of the foregoing embodiment, the combination of the sequence {sn} and q is at least one of the following combinations:a sequence whose {sn} index is 2, and q=1; ora sequence whose {sn} index is 3, and q=2; ora sequence whose {sn} index is 5, and q=3; ora sequence whose {sn} index is 6, and q=4; ora sequence whose {sn} index is 7, and q=5; ora sequence whose {sn} index is 25, and q=6; ora sequence whose {sn} index is 13, and q=7; ora sequence whose {sn} index is 8, and q=8; ora sequence whose {sn} index is 15, and q=9; ora sequence whose {sn} index is 10, and q=10; ora sequence whose {sn} index is 11, and q=11: ora sequence whose {sn} index is 4, and q=12; ora sequence whose {sn} index is 12, and q=13; ora sequence whose {sn} index is 16, and q=14; ora sequence whose {sn} index is 0, and q=15; ora sequence whose {sn} index is 27, and q=16; ora sequence whose {sn} index is 17, and q=17; ora sequence whose {sn} index is 18, and q=18; ora sequence whose {sn} index is 19, and q=19; ora sequence whose {sn} index is 1, and q=20; ora sequence whose {sn} index is 20, and q=21; ora sequence whose {sn} index is 14, and q=22; ora sequence whose {sn} index is 21, and q=23; ora sequence whose {sn} index is 24, and q=24; ora sequence whose {sn} index is 26, and q=25; ora sequence whose {sn} index is 22, and q=26; ora sequence whose {sn} index is 23, and q=27; ora sequence whose {sn} index is 28, and q=28; ora sequence whose {sn} index is 9, and q=29; ora sequence whose {sn} index is 29, and q=30. 
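The preceding implementations repeatedly pair each value of q with a longer sequence {hj} of length J (J=48, 60, or 72) through hj=ku(j mod Jprime), ku(i)=e^(−j·π·u·i·(i+1)/Jprime), u=└ū+½┘, and ū=Jprime·q/31. The following minimal Python sketch reads that construction directly from these formulas; the helper names are illustrative only.

```python
import numpy as np

def largest_prime_below(J):
    """J_prime: the largest prime number strictly smaller than J."""
    def is_prime(n):
        return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))
    return next(n for n in range(J - 1, 1, -1) if is_prime(n))

def h_sequence(q, J):
    """Length-J sequence {h_j} with h_j = k_u(j mod J_prime),
    k_u(i) = exp(-j*pi*u*i*(i+1)/J_prime), u = floor(u_bar + 1/2),
    u_bar = J_prime*q/31, as stated in the implementations above."""
    J_prime = largest_prime_below(J)
    u = int(np.floor(J_prime * q / 31 + 0.5))
    i = np.arange(J) % J_prime  # j mod J_prime for j = 0, ..., J-1
    return np.exp(-1j * np.pi * u * i * (i + 1) / J_prime)

# Example: the length-48 companion of q = 7 (J_prime = 47, so u = floor(47*7/31 + 0.5) = 11).
h48 = h_sequence(q=7, J=48)
```

Which {sn} (and hence which shorter first sequence) accompanies each q within a sequence group is given by the listings above; the sketch reproduces only the q-to-{hj} construction.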
That is, in the third optional implementation of the foregoing embodiment, the combination of the sequence {sn} and q is at least one of the following combinations:a sequence whose {sn} index is 10, and q=1; ora sequence whose {sn} index is 23, and q=2; ora sequence whose {sn} index is 19, and q=3; ora sequence whose {sn} index is 1, and q=4; ora sequence whose {sn} index is 8, and q=5; ora sequence whose {sn} index is 25, and q=6; ora sequence whose {sn} index is 13, and q=7; ora sequence whose {sn} index is 29, and q=8; ora sequence whose {sn} index is 2, and q=9; ora sequence whose {sn} index is 22, and q=10; ora sequence whose {sn} index is 17, and q=11; ora sequence whose {sn} index is 4, and q=12; ora sequence whose {sn} index is 6, and q=13; ora sequence whose {sn} index is 14, and q=14; ora sequence whose {sn} index is 0, and q=15; ora sequence whose {sn} index is 27, and q=16; ora sequence whose {sn} index is 15, and q=17; ora sequence whose {sn} index is 5, and q=18; ora sequence whose {sn} index is 18, and q=19; ora sequence whose {sn} index is 16, and q=20; ora sequence whose {sn} index is 20, and q=21; ora sequence whose {sn} index is 3, and q=22; ora sequence whose {sn} index is 28, and q=23; ora sequence whose {sn} index is 24, and q=24; ora sequence whose {sn} index is 26, and q=25; ora sequence whose {sn} index is 21, and q=26; ora sequence whose {sn} index is 12, and q=27; ora sequence whose {sn} index is 7, and q=28; ora sequence whose {sn} index is 9, and q=29; ora sequence whose {sn} index is 11, and q=30. That is, in the fourth optional implementation of the foregoing embodiment, the combination of the sequence {sn} and q is at least one of the following combinations:a sequence whose {sn} index is 10, and q=1; ora sequence whose {sn} index is 23, and q=2; ora sequence whose {sn} index is 19, and q=3; ora sequence whose {sn} index is 4, and q=4; ora sequence whose {sn} index is 8, and q=5; ora sequence whose {sn} index is 25, and q=6; ora sequence whose {sn} index is 13, and q=7; ora sequence whose {sn} index is 29, and q=8; ora sequence whose {sn} index is 2, and q=9; ora sequence whose {sn} index is 22, and q=10; ora sequence whose {sn} index is 17, and q=11; ora sequence whose {sn} index is 1, and q=12; ora sequence whose {sn} index is 6, and q=13; ora sequence whose {sn} index is 14, and q=14; ora sequence whose {sn} index is 18, and q=15; ora sequence whose {sn} index is 27, and q=16; ora sequence whose {sn} index is 15, and q=17; ora sequence whose {sn} index is 5, and q=18; ora sequence whose {sn} index is 7, and q=19; ora sequence whose {sn} index is 16, and q=20; ora sequence whose {sn} index is 0, and q=21; ora sequence whose {sn} index is 3, and q=22; ora sequence whose {sn} index is 28, and q=23; ora sequence whose {sn} index is 24, and q=24; ora sequence whose {sn} index is 26, and q=25; ora sequence whose {sn} index is 20, and q=26; ora sequence whose {sn} index is 12, and q=27; ora sequence whose {sn} index is 21, and q=28; ora sequence whose {sn} index is 9, and q=29; ora sequence whose {sn} index is 11, and q=30. Optionally, in this embodiment of the present invention, the first sequence is corresponding to the second sequence. A sending device sends a signal generated based on the second sequence. For example, the sending device may be a terminal device or a modem processor in the terminal device. A receiving device processes a received first signal based on the second sequence. 
For example, the receiving device may be an access network device or a processor in the access network device. Optionally, when the first sequence is the sequence {xn}, the second sequence is the sequence {fn}. An element fn in the sequence {fn} satisfies fn=A·xn·e^(j·α·n). Further, in the formula fn=A·xn·e^(j·α·n), A may be 1 and/or α may be 0. The first sequence and the second sequence may be a same sequence. For example, when A is 1 and α is 0 in the formula fn=A·xn·e^(j·α·n), the first sequence and the second sequence are the same. When the first sequence is the sequence {ym}, the second sequence is the sequence {gm}. An element gm in the sequence {gm} satisfies gm=A·ym·e^(j·α·m). Further, in the formula gm=A·ym·e^(j·α·m), A may be 1 and/or α may be 0. The first sequence and the second sequence may be a same sequence. For example, when A is 1 and α is 0 in the formula gm=A·ym·e^(j·α·m), the first sequence and the second sequence are the same. fn is an element in the sequence {fn}, gm is an element in the sequence {gm}, a length of the sequence {fn} is N, a length of the sequence {gm} is M, n and m are integers, 0≤n≤N−1, and 0≤m≤M−1; and A is a non-zero complex number, α is a real number, and j=√(−1). Optionally, A may be a real number. Further, A may be 1. It should be noted that A and α in the formula fn=A·xn·e^(j·α·n) that the element fn satisfies and in the formula gm=A·ym·e^(j·α·m) that the element gm satisfies may be the same or different. For example, the formula that the element fn satisfies may be represented as fn=A·xn·e^(j·α·n), and the formula that the element gm satisfies may be represented as gm=B·ym·e^(j·β·m), where B and β follow the definitions of A and α above. For brevity, A and α are used in both formulas in this specification. Optionally, one or both of A and B may be modulated symbols. Alternatively, one or both of A and B may be constants. Alternatively, one or both of A and B may be values determined based on a power control parameter. Certainly, A and B may also be two different ones of the foregoing, namely, a modulated symbol, a constant, or a value determined based on the power control parameter. For example, A is a modulated symbol, and B is a constant. For example, A may be a power adjustment parameter of a to-be-sent signal. For another example, A may be a modulated symbol. In this case, A is obtained by modulating a data information bit or an uplink control information (UCI) bit, A is carried on the N elements included in the sequence to generate the second sequence, and A does not vary across the N elements. For another example, A is a constant, for example, A=1. For another example, A may be a symbol known to both the terminal device and a network device. For another example, A may represent an amplitude. It should be noted that A being a constant within one time unit does not mean that A is invariant; when signals are sent in different time units, A may vary. One time unit may be the duration of one OFDM symbol or one DFT-s-OFDM symbol. For example, for a subcarrier spacing of 15 kHz, the duration of one OFDM symbol or one DFT-s-OFDM symbol is 1/15000 second. For a subcarrier spacing of 30 kHz, the duration of one OFDM symbol or one DFT-s-OFDM symbol is 1/30000 second. For example, all N elements included in the sequence {fn} are a reference signal, and A is the amplitude of the reference signal. When the terminal device sends the signal in a first time unit, A may be equal to 1.
When the terminal device sends the signal in a second time unit, A may be equal to 2. Optionally, in this embodiment of the present invention, the sequence {fn} is mapped to N subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the N subcarriers is 2t times a subcarrier spacing. In a mapping manner with a spacing 2t times a subcarrier spacing, different users may be multiplexed in a same frequency range, thereby improving a multiplexing capability. Optionally, the sequence {gm} is mapped to M subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M subcarriers is t times a subcarrier spacing. In a mapping manner with a spacing t times a subcarrier spacing, when the signal is used for channel estimation, channel estimation performance can be improved. When N=12, in this embodiment of the present invention, the sequence {fn} is mapped to 12 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 12 subcarriers is 2t times a subcarrier spacing. When M=36, in this embodiment of the present invention, the sequence {gm} is mapped to 36 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 36 subcarriers is t times a subcarrier spacing. In a mapping manner with a spacing t times a subcarrier spacing, when the signal is used for channel estimation, channel estimation performance can be improved. The foregoing t is a positive integer. Further, t may be 1. In this case, the sequence {fn} is mapped to N subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the N subcarriers is two times a subcarrier spacing. The sequence {gm} is mapped to M subcarriers, and the M subcarriers are consecutive subcarriers. Therefore, after generating the second sequence, the terminal device maps the second sequence to the corresponding subcarriers. For example, the terminal device or the modem processor in the terminal device may map the sequence {fn} to N equally-spaced subcarriers. Alternatively, the terminal device or the modem processor in the terminal device may map the sequence {gm} to M consecutive subcarriers. The access network device processes a received signal based on the second sequence. The received signal is a signal mapped to N subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the N subcarriers is 2t times a subcarrier spacing. Alternatively, the received signal is mapped to M subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M subcarriers is t times a subcarrier spacing. Optionally, the M subcarriers are consecutive subcarriers. When N=12 and M=36, the sequence {fn} is mapped to 12 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 12 subcarriers is 2t times a subcarrier spacing. The sequence {gm} is mapped to 36 subcarriers, and the 36 subcarriers are consecutive subcarriers. Therefore, after generating the second sequence, the terminal device maps the second sequence to the corresponding subcarriers. For example, the terminal device or the modem processor in the terminal device may map the sequence {fn} to 12 equally-spaced subcarriers. Alternatively, the terminal device or the modem processor in the terminal device may map the sequence {gm} to 36 consecutive subcarriers. The access network device processes a received signal based on the second sequence. The generation of the second sequence and this subcarrier mapping are illustrated by the sketch below.
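The following Python sketch is illustrative only. It applies the relationships fn=A·xn·e^(j·α·n) and gm=A·ym·e^(j·α·m) and places {fn} on every 2t-th subcarrier of an allocation (t=1 here). The allocation size, the starting subcarrier, and the example values of xn are placeholders; xn is built from one of the length-12 {sn} listings above with the assumed rule xn=e^(j·π·sn/4), whose exact definition appears earlier in this specification.

```python
import numpy as np

def second_sequence(first_seq, A=1.0, alpha=0.0):
    """f_n = A * x_n * e^(j*alpha*n) (and likewise g_m = A * y_m * e^(j*alpha*m));
    with A = 1 and alpha = 0 the second sequence equals the first sequence."""
    n = np.arange(len(first_seq))
    return A * np.asarray(first_seq, dtype=complex) * np.exp(1j * alpha * n)

def map_to_comb(seq, num_subcarriers, start, step):
    """Place a frequency-domain sequence on every step-th subcarrier of an
    allocation (step = 2t for {f_n}; step = t, i.e. consecutive subcarriers
    when t = 1, for {g_m}). Unused subcarriers are left at zero."""
    grid = np.zeros(num_subcarriers, dtype=complex)
    grid[start + step * np.arange(len(seq))] = seq
    return grid

# Illustrative N = 12, t = 1 example: {f_n} occupies every second subcarrier of a
# 24-subcarrier allocation; a length-36 {g_m} would occupy 36 consecutive subcarriers.
s = np.array([-3, 1, -3, -3, -3, 3, -3, -1, 1, 1, 1, -3])  # a length-12 {s_n} from the listings above
x = np.exp(1j * np.pi * s / 4)                             # assumed x_n = e^(j*pi*s_n/4)
f = second_sequence(x, A=1.0, alpha=0.0)
grid_f = map_to_comb(f, num_subcarriers=24, start=0, step=2)
```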
The received signal is a signal mapped to 12 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 12 subcarriers is 2t times a subcarrier spacing. Alternatively, the received signal is mapped to 36 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 36 subcarriers is t times a subcarrier spacing. Optionally, the 36 subcarriers are consecutive subcarriers. Optionally, in this embodiment of the present invention, the sequence group is corresponding to one index. In an implementation, the index of the sequence group is determined based on an identity (identity, ID) configured by the access network device. For example, the identity may be an identity of the terminal device. Alternatively, the identity may be a physical uplink control channel (PUCCH) ID. Alternatively, the identity may be a reference signal (RS) ID or the like. For another example, the ID may be an ID used to determine an initialization parameter of some random sequences. For example, the random sequence is a random sequence corresponding to a sequence hopping pattern of a UCI sequence in a PUCCH format 1; or the random sequence is a random sequence corresponding to a cyclic shift hopping pattern of a UCI sequence in a PUCCH format 1; or the random sequence is a random sequence corresponding to a sequence hopping pattern of a DMRS sequence in a PUCCH format 1; or the random sequence is a random sequence corresponding to a cyclic shift hopping pattern of a DMRS sequence in a PUCCH format 1; or the random sequence is a random sequence corresponding to a sequence hopping pattern of a DMRS sequence in a PUCCH format 3 or a PUCCH format 4; or the random sequence is a random sequence corresponding to a cyclic shift hopping pattern of a DMRS sequence in a PUCCH format 3 or a PUCCH format 4; or the random sequence is a random sequence corresponding to a sequence hopping pattern of a sounding reference signal (SRS) sequence: or the random sequence is a random sequence corresponding to a cyclic shift hopping pattern of an SRS sequence. In still another implementation, the index of the sequence group is determined based on an identity of a first time unit. The first time unit is a time unit for sending the signal generated based on the second sequence. In this embodiment, the identity of the time unit is used, so that the index of the sequence group that is determined based on the time unit varies with time, and interference between neighboring cells can be more randomized in a period of time. In still another implementation, the index of the sequence group is determined based on the ID configured by the access network device and the identity of the time unit. The identity of the time unit may be an index of a slot or an index of a symbol. In this embodiment, the ID and the identity of the time unit are used, so that the determined index of the sequence group varies with the ID and time, and interference between neighboring cells can be more randomized in a period of time. 
In this implementation, optionally, the index u of the sequence group satisfies the following relationship: u=(fgh(ns)+fss) mod 30, where fgh(ns) is either 0 or (Σ_{i=0..7} c(8·ns+i)·2^i) mod 30, u is the index of the sequence group, ns is an index of a slot of a cell, such as an index of the first time unit, and fss is generated based on a reference signal (RS) ID configured by the access network device, for example, fss=nIDRS mod 30, where nIDRS indicates the RS ID. c(i) is a pseudo-random sequence, and its formula may be as follows: c(n)=(x1(n+NC)+x2(n+NC)) mod 2, x1(n+31)=(x1(n+3)+x1(n)) mod 2, and x2(n+31)=(x2(n+3)+x2(n+2)+x2(n+1)+x2(n)) mod 2, where NC=1600. An initial value of c(i) is determined based on cinit=⌊nIDRS/30⌋. Therefore, the index u of the sequence group may be determined based on the index of the slot and the RS ID configured by the network device (a worked sketch of this computation is provided after the sequence-group listings below). In still another implementation, the index of the sequence group is determined based on a cell identity. Optionally, for determining the index of the sequence group based on the cell identity, refer to the foregoing formula; that is, the RS ID is replaced with the cell identity. This embodiment provides a plurality of sequence groups, and the sequence group is one of the plurality of sequence groups. In an optional implementation, the plurality of sequence groups include a part or all of a first sequence group, a second sequence group, a third sequence group, a fourth sequence group, and a fifth sequence group. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the first sequence group is: the sequence {sn} is {3, −1, −3, 3, −3, −1, 3, 3, 3, −3, −1, −3}, and q=6; in other words, a sequence whose {sn} index is 25, and q=6. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the second sequence group is: the sequence {sn} is {−3, −3, 3, 3, 3, −3, −1, 1, −3, 3, 1, −3}, and q=7; in other words, a sequence whose {sn} index is 13, and q=7. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the third sequence group is: the sequence {sn} is {−3, 3, 1, −3, 1, 3, −1, −1, 1, 3, 3, 3}, and q=16; in other words, a sequence whose {sn} index is 27, and q=16. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the fourth sequence group is: the sequence {sn} is {1, −1, 3, −1, −1, −1, −3, −1, 1, 1, 1, −3}, and q=25; in other words, a sequence whose {sn} index is 26, and q=25. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the fifth sequence group is: the sequence {sn} is {−3, 3, 3, 3, −1, −3, −3, −1, −3, 1, 3, −3}, and q=29; in other words, a sequence whose {sn} index is 9, and q=29. Based on this implementation, in at least four cross-correlation measurement methods, it can be ensured that there is relatively high cross-correlation between a sending signal generated based on {sn} and a sending signal generated based on q in the five sequence groups, thereby reducing interference between neighboring cells. In still another optional implementation, the plurality of sequence groups include a part or all of a first sequence group, a second sequence group, a third sequence group, a fourth sequence group, a fifth sequence group, a sixth sequence group, a seventh sequence group, an eighth sequence group, a ninth sequence group, a tenth sequence group, and an eleventh sequence group.
A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the first sequence group is:the sequence {sn} is {3, −1, −3, 3, −3, −1, 3, 3, 3, −3, −1, −3}, and q=6; in other words,a sequence whose {sn} index is 25, and q=6. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {yn} that are included in the second sequence group is:the sequence {sn} is {−3, −3, 3, 3, 3, −3, −1, 1, −3, 3, 1, −3}, and q=7: in other words, a sequence whose {sn} index is 13, and q=7. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the third sequence group is:the sequence {sn} is {−3, −3, 3, 1, −3, −3, −3, −1, 3, −1, 1, 3}, and q=9; in other words, a sequence whose {sn} index is 15, and q=9. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the fourth sequence group is:the sequence (sn) is {−3, 3, 1, 3, −3, 1, 1, 1, 1, 3, −3, 3}, and q=10; in other words, a sequence whose {sn} index is 3, and q=10. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the fifth sequence group is:the sequence {sn} is {−3, 1, 3, −1, −1, −3, −3, −1, −1, 3, 1, −3}, and q=12; in other words, a sequence whose {sn} index is 4, and q=12. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the sixth sequence group is:the sequence {sn} is {1, −1, 3, 1, 1, −1, −1, −1, 1, 3, −3, 1}, and q=15; in other words, a sequence whose {sn} index is 0, and q=15. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the seventh sequence group is:the sequence {sn} is {−3, 3, 1, −3, 1, 3, −1, −1, 1, 3, 3, 3}, and q=16; in other words, a sequence whose {sn} index is 27, and q=16. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the eighth sequence group is:the sequence {sn} is {−1, −1, −1, −1, 1, −3, −1, 3, 3, −1, −3, 1}, and q=20; in other words, a sequence whose {sn} index is 1, and q=20. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the ninth sequence group is:the sequence {sn} is {−3, −1, 1, −3, 1, 3, 3, 3, −1, −3, 3, 3}, and q=22; in other words, a sequence whose {sn} index is 14, and q=22. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the tenth sequence group is:the sequence {sn} is {1, −1, 3, −1, −1, −1, −3, −1, 1, 1, 1, −3}, and q=25; in other words, a sequence whose {sn} index is 26, and q=25. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the eleventh sequence group is:the sequence {sn} is {−3, 3, 3, 3, −1, −3, −3, −1, −3, 1, 3, −3}, and q=29; in other words, a sequence whose {sn} index is 9, and q=29. Based on this implementation, in at least two cross-correlation measurement methods, it can be ensured that there is relatively high cross-correlation between a sending signal generated based on {sn} and a sending signal generated based on q in the 11 sequence groups, thereby reducing interference between neighboring cells. 
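As a worked illustration of the group-index relationship u=(fgh(ns)+fss) mod 30 given above, the following Python sketch generates the pseudo-random sequence c(n) and evaluates u. The initial states of x1 and x2 (x1 starting from 1, 0, ..., 0 and x2 loaded with the binary representation of cinit), and the use of a flag to select the fgh(ns)=0 branch, are conventional assumptions that this passage does not spell out.

```python
def pseudo_random_c(c_init, length, Nc=1600):
    """c(n) = (x1(n+Nc) + x2(n+Nc)) mod 2, with
    x1(n+31) = (x1(n+3) + x1(n)) mod 2 and
    x2(n+31) = (x2(n+3) + x2(n+2) + x2(n+1) + x2(n)) mod 2.
    Initial states here are assumptions: x1 = 1, 0, ..., 0; x2 = bits of c_init."""
    total = Nc + length + 31
    x1 = [0] * total
    x2 = [0] * total
    x1[0] = 1
    for i in range(31):
        x2[i] = (c_init >> i) & 1
    for n in range(total - 31):
        x1[n + 31] = (x1[n + 3] + x1[n]) % 2
        x2[n + 31] = (x2[n + 3] + x2[n + 2] + x2[n + 1] + x2[n]) % 2
    return [(x1[n + Nc] + x2[n + Nc]) % 2 for n in range(length)]

def sequence_group_index(n_id_rs, ns, group_hopping=True):
    """u = (f_gh(ns) + f_ss) mod 30, with f_ss = n_ID_RS mod 30 and
    f_gh(ns) = (sum_{i=0..7} c(8*ns + i) * 2^i) mod 30 when hopping applies
    (otherwise 0), where c is initialized with c_init = floor(n_ID_RS / 30)."""
    f_ss = n_id_rs % 30
    if not group_hopping:
        return f_ss
    c = pseudo_random_c(c_init=n_id_rs // 30, length=8 * ns + 8)
    f_gh = sum(c[8 * ns + i] << i for i in range(8)) % 30
    return (f_gh + f_ss) % 30

# Example: RS ID 37 in slot 3.
u = sequence_group_index(n_id_rs=37, ns=3)
```

When the index is instead derived from a cell identity, n_id_rs above is simply replaced by that identity, as noted earlier.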
In still another optional implementation, the sequence group is one of a plurality of sequence groups, and the plurality of sequence groups include a part or all of a first sequence group, a second sequence group, a third sequence group, a fourth sequence group, and a fifth sequence group. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the first sequence group is.the sequence {sn} is {−1, 1, 1, −1, 1, 3, 3, −1, −1, −3, 1, −3}, and q=9. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the second sequence group is:the sequence {sn} is {−3, −3, 3, 1, −3, −3, −3, −1, 3, −1, 1, 3}, and q=12. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the third sequence group is:the sequence {sn} is {−1, −3, 3, −1, −3, −3, −3, −1, 1, −1, 1, −3}, and q=21. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the fourth sequence group is:the sequence {sn} is {−3, −1, −3, −1, −1, −3, 3, 3, −1, −1, 1, −3}, and q=22. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the fifth sequence group is:the sequence {sn} is {−3, 3, 1, −1, 3, 3, −3, 1, −1, 1, −1, 1}, and q=26. Based on this implementation, in at least two cross-correlation measurement methods, it can be ensured that there is relatively high cross-correlation between a sending signal generated based on {sn} and a sending signal generated based on q in the five sequence groups, thereby reducing interference between neighboring cells. In still another optional implementation, the sequence group is one of a plurality of sequence groups, and the plurality of sequence groups include a part or all of a first sequence group, a second sequence group, a third sequence group, a fourth sequence group, a fifth sequence group, and a sixth sequence group. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the first sequence group is:the sequence {sn} is {−3, −3, 3, −3, −1, 3, 3, 3, −1, −3, 1, −3}, and q=4. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the second sequence group is:the sequence {sn} is {−3, −1, −1, 1, 3, 1, 1, −1, 1, −1, −3, 1}, and q=10. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the third sequence group is:the sequence {sn} is {−3, −1, −3, −1, −1, −3, 3, 3, −1, −1, 1, −3}, and q=16. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the fourth sequence group is:the sequence {sn} is {3, 1, 3, 1, 3, −3, −1, 1, 3, 1, −1, −3}, and q=21. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the fifth sequence group is:the sequence {sn} is {−3, 1, −3, −3, −3, 3, −3, −1, 1, 1, 1, −3}, and q=27. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the sixth sequence group is:the sequence {sn} is {1, 3, −3, 1, 3, 3, 3, 1, −1, 1, −1, 3}, and q=30. 
Based on this implementation, in at least two cross-correlation measurement methods, it can be ensured that there is relatively high cross-correlation between a sending signal generated based on {sn} and a sending signal generated based on q in the six sequence groups, thereby reducing interference between neighboring cells. In still another optional implementation, the sequence group is one of a plurality of sequence groups, and the plurality of sequence groups include a part or all of a first sequence group, a second sequence group, a third sequence group, a fourth sequence group, a fifth sequence group, a sixth sequence group, a seventh sequence group, an eighth sequence group, a ninth sequence group, a tenth sequence group, and an eleventh sequence group. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the first sequence group is:the sequence {sn} is {3, −3, 3, −1, 1, 3, −3, −1, −3, −3, −1, −3, 3, 1, −1, 3, −3, 3}, and q=2. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the second sequence group is:the sequence {sn} is {−3, 3, 1, −1, −1, 3, −3, −1, 1, 1, 1, 1, 1, −1, 3, −1, −3, −1}, and q=3. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the third sequence group is:the sequence {sn} is {−3, 3, −1, 1, 3, 1, −3, −1, 1, 1, −3, 1, 3, 3, −1, −3, −3, −3}, and q=7. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the fourth sequence group is:the sequence {sn} is {3, −1, 3, 1, −3, −3, −1, 1, −3, −3, 3, 3, 3, 1, 3, −3, 3, −3}, and q=10. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the fifth sequence group is:the sequence {sn} is {−3, −1, −3, −3, 1, 1, −1, −3, −1, −3, −1, −1, 3, 3, −1, 3, 1, 3}, and q=13. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the sixth sequence group is:the sequence {sn} is {−3, 3, −1, −3, −1, −3, 1, 1, −3, −3, −1, −1, 3, −3, 1, 3, 1, 1}, and q=15. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the seventh sequence group is:the sequence {sn} is {−3, 1, −3, −1, −1, 3, 1, −3, −3, −3, −1, −3, −3, 1, 1, 1, −1, −1}, and q=19. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the eighth sequence group is:the sequence {sn} is {3, 3, 3, −3, −1, −3, −1, 3, −1, 1, −1, −3, 1, −3, −3, −1, 3, 3}, and q=20. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the ninth sequence group is:the sequence {sn} is {−3, 1, 1, −3, 1, 1, 3, −3, −1, −3, −1, 3, −3, 3, −1, −1, −1, −3}, and q=21. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the tenth sequence group is:the sequence {sn} is {3, 1, −3, 1, −3, 3, 3, −1, −3, −3, −1, −3, −3, 3, −3, −1, 1, 3}, and q=27. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the eleventh sequence group is:the sequence {sn} is {−3, 3, 1, −1, −1, −1, −1, 1, −1, 3, 3, −3, −1, 1, 3, −1, 3, −1}, and q=30. 
Based on this implementation, in at least two cross-correlation measurement methods, it can be ensured that there is relatively high cross-correlation between a sending signal generated based on {sn} and a sending signal generated based on q in the 11 sequence groups, thereby reducing interference between neighboring cells. In still another optional implementation, the sequence group is one of a plurality of sequence groups, and the plurality of sequence groups include a part or all of a first sequence group, a second sequence group, a third sequence group, a fourth sequence group, a fifth sequence group, a sixth sequence group, a seventh sequence group, an eighth sequence group, a ninth sequence group, a tenth sequence group, an eleventh sequence group, and a twelfth sequence group. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the first sequence group is:the sequence {sn} is {−1, −3, 3, −1, 3, 1, 3, −1, 1, −3, −1, −3, −1, 1, 3, −3, −1, −3, 3, 3, 3, −3, −3, −3}, and q=1. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the second sequence group is:the sequence {sn} is {1, −3, 3, −1, −3, −1, 3, 3, 1, −1, 1, 1, 3, −3, −1, −3, −3, −3, −1, 3, −3, −1, −3, −3}, and q=4. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the third sequence group is:the sequence {sn} is {−3, 1, 3, −1, 1, −1, 3, −3, 3, −1, −3, −1, −3, 3, −1, −1, −1, −3, −1, −1, −3, 3, 3, −3}, and q=8. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the fourth sequence group is:the sequence {sn} is {1, 1, −1, −3, −1, 1, 1, −3, 1, −1, 1, −3, 3, −3, −3, 3, −1, −3, 1, 3, −3, 1, −3, −3}, and q=10. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the fifth sequence group is:the sequence {sn} is {−3, 3, −1, 3, 1, −1, −1, −1, 3, 3, 1, 1, 3, 3, 1, −3, −3, −1, 1, −3, 1, 3, −3}, and q=12. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the sixth sequence group is:the sequence {sn} is {−3, 1, −3, −1, −1, 3, 1, 3, −3, 1, −1, 3, 3, −1, −3, 3, −3, −1, −1, −3, −3, −3, 3, −3}, and q=15. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the seventh sequence group is:the sequence {sn} is {−3, −1, −1, −3, 1, −3, −3, −1, −1, 3, −1, 1, −1, 3, 1, −3, −1, 3, 1, 1, −1, −1, −3, −3}, and q=16. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the eighth sequence group is:the sequence {sn} is {−3, −3, −1, −1, −1, −3, 1, −1, −3, −1, 3, −3, 1, −3, 3, −3, 3, 3, 1, −1, −1, 1, −3, −3}, and q=21. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the ninth sequence group is:the sequence {sn} is {−3, 3, −1, −3, −1, −1, −1, 3, −1, −1, 3, −3, −1, 3, −3, 3, −3, −1, 3, 1, 1, −1, −3, −3}, and q=26. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the tenth sequence group is:the sequence {sn} is {−3, 1, −1, −3, −3, −1, 1, −3, −1, −3, 1, 1, −1, 1, 1, 3, 3, 3, −1, 1, −1, 1, −1, −3}, and q=27. 
A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the eleventh sequence group is:the sequence {sn} is {−3, 1, −3, 1, −3, 1, 1, 3, 1, −3, −3, −1, 1, 3, −1, −3, 3, 1, −1, −3, −3, −3, −3, −3}, and q=29. A combination of {sn} and q that are corresponding to a sequence {xn} and a sequence {ym} that are included in the twelfth sequence group is:the sequence {sn} is {3, −3, −1, 1, 3, −1, −1, −3, −1, 3, −1, −3, −1, −3, 3, −1, 3, 1, 1, −3, 3, −3, −3, −3}, and q=30. Based on this implementation, in at least two cross-correlation measurement methods, it can be ensured that there is relatively high cross-correlation between a sending signal generated based on {sn} and a sending signal generated based on q in the twelve sequence groups, thereby reducing interference between neighboring cells. Further, the sequence group in this embodiment of the present invention further includes a sequence {zm}. In this case, when the first sequence is the sequence {zm}, the second sequence is a sequence {hm}. An element zmin the sequence {zm} satisfies zm=lp(m mod Mprime), lp(i)=e-j⁢π·p·i·(i+1)Mprime, i is an integer, 0≤i≤Mprime−1, and Mprimeis a largest prime number smaller than M. An element hmin the sequence {hm} satisfies hm=A·zm·ej·α·m. A combination of {sn} and a value of p is at least one of the following combinations:the sequence {sn} is {−3, 3, 1, −1, 3, 3, −3, 1, −1, 1, −1, 1}, and p=1; orthe sequence {sn} is {−3, 3, 1, −3, 1, 3, −1, −1, 1, 3, 3, 3}, and p=2; orthe sequence {sn} is {−3, 3, 3, 1, −3, 3, −1, 1, 3, −3, 3, −3}, and p=3; orthe sequence {sn} is {−3, 3, 1, 3, −3, 1, 1, 1, 1, 3, −3, 3}, and p=4; orthe sequence {sn} is {−1, −3, 3, −1, −3, −3, −3, −1, 1, −1, 1, −3}, and p=5; orthe sequence {sn} is {−3, −3, 3, 1, −3, −3, −3, −1, 3, −1, 1, 3}, and p=6; orthe sequence {sn} is {1, −1, 3, −1, −1, −1, −3, −1, 1, 1, 1, −3}, and p=7; orthe sequence {sn} is {−3, −1, 3, 1, −3, −1, −3, 3, 1, 3, 3, 1}, and p=8; orthe sequence {sn} is {−3, 3, −3, 3, −3, −3, 3, −1, −1, 1, 3, −3}, and p=9; orthe sequence {sn} is {−1, 1, 1, −1, 1, 3, 3, −1, −1, −3, 1, −3}, and p=10; orthe sequence {sn} is {−3, 3, −3, 3, 3, −3, −1, −1, 3, 3, 1, −3}, and p=11; orthe sequence {sn} is {−3, −3, 3, −3, −1, 3, 3, 3, −1, −3, 1, −3}, and p=12; orthe sequence {sn} is {−3, −1, −1, −3, −3, −1, −3, 3, 1, 3, −1, −3}, and p=13; orthe sequence {sn} is {−3, −1, −1, 1, 3, 1, 1, −1, 1, −1, −3, 1}, and p=14; orthe sequence {sn} is {3, 1, 3, 1, 3, −3, −1, 1, 3, 1, −1, −3}, and p=15; orthe sequence {sn} is {−3, 1, 3, −1, −1, −3, −3, −1, −1, 3, 1, −3}, and p=16; orthe sequence {sn} is {−1, −1, −1, −1, 1, −3, −1, 3, 3, −1, −3, 1}, and p=17; orthe sequence {sn} is {−1, 1, 3, −3, 1, −1, 1, −1, −1, −3, 1, −1}, and p=18; orthe sequence {sn} is {−3, 1, 3, 3, −1, −1, −3, 3, 3, −3, 3, −3}, and p=19; orthe sequence {sn} is {−3, 1, −1, −1, 3, 3, −3, −1, −1, −3, −1, −3}, and p=20; orthe sequence {sn} is {−3, −3, −1, 3, 3, 3, −3, 3, −3, 1, −1, −3}, and p=21; orthe sequence {sn} is {−3, 1, −3, −3, −3, 3, −3, −1, 1, 1, 1, −3}, and p=22; orthe sequence {sn} is {−3, 3, 3, 3, −1, −3, −3, −1, −3, 1, 3, −3}, and p=23; orthe sequence {sn} is {3, −1, −3, 3, −3, −1, 3, 3, 3, −3, −1, −3}, and p=24; orthe sequence {sn} is {−3, −1, 1, −3, 1, 3, 3, 3, −1, −3, 3, 3}, and p=25; orthe sequence {sn} is {1, 3, −3, 1, 3, 3, 3, 1, −1, 1, −1, 3}, and p=26; orthe sequence {sn} is {−3, −1, −3, −1, −1, −3, 3, 3, −1, −1, 1, −3}, and p=27; orthe sequence {sn} is {−3, −3, 3, 3, 3, −3, −1, 1, −3, 3, 1, −3}, and p=28; orthe sequence 
{sn} is {1, −1, 3, 1, 1, −1, −1, −1, 1, 3, −3, 1}, and p=29; orthe sequence {sn} is {−3, −1, 3, −3, −3, −1, −3, 1, −1, −3, 3, 3}, and p=30. Alternatively, a combination of the sequence {sn} and p is at least one of the following combinations:the sequence {sn} is {−3, 1, −3, −3, −3, 3, −3, −1, 1, 1, 1, −3}, and p=1; orthe sequence {sn} is {−3, 3, 1, −3, 1, 3, −1, −1, 1, 3, 3, 3}, and p=2; orthe sequence {sn} is {−3, 3, 3, 1, −3, 3, −1, 1, 3, −3, 3, −3}, and p=3; orthe sequence {sn} is {−3, 3, 1, 3, −3, 1, 1, 1, 1, 3, −3, 3}, and p=4; orthe sequence {sn} is {−1, 1, 1, −1, 1, 3, 3, −1, −1, −3, 1, −3}, and p=5; orthe sequence {sn} is {−3, −3, 3, 1, −3, −3, −3, −1, 3, −1, 1, 3}, and p=6; orthe sequence {sn} is {1, −1, 3, −1, −1, −1, −3, −1, 1, 1, 1, −3}, and p=7; orthe sequence {sn} is {−3, −3, −1, 3, 3, 3, −3, 3, −3, 1, −1, −3}, and p=8; orthe sequence {sn} is {−3, 3, −3, 3, 3, −3, −1, −1, 3, 3, 1, −3}, and p=9; orthe sequence {sn} is {−3, −1, −3, −1, −1, −3, 3, 3, −1, −1, 1, −3}, and p=10; orthe sequence {sn} is {1, 3, −3, 1, 3, 3, 3, 1, −1, 1, −1, 3}, and p=11; orthe sequence {sn} is {−1, −3, 3, −1, −3, −3, −3, −1, 1, −1, 1, −3}, and p=12; orthe sequence {sn} is {3, 1, 3, 1, 3, −3, −1, 1, 3, 1, −1, −3}, and p=13; orthe sequence {sn} is {−1, 1, 3, −3, 1, −1, 1, −1, −1, −3, 1, −1}, and p=14; orthe sequence {sn} is {−3, −1, −1, 1, 3, 1, 1, −1, 1, −1, −3, 1}, and p=15; orthe sequence {sn} is {−3, 1, 3, −1, −1, −3, −3, −1, −1, 3, 1, −3}, and p=16; orthe sequence {sn} is {−1, −1, −1, −1, 1, −3, −1, 3, 3, −1, −3, 1}, and p=17; orthe sequence {sn} is {−3, −1, 3, −3, −3, −1, −3, 1, −1, −3, 3, 3}, and p=18; orthe sequence {sn} is {−3, −3, 3, −3, −1, 3, 3, 3, −1, −3, 1, −3}, and p=19; orthe sequence {sn} is {−3, 1, −1, −1, 3, 3, −3, −1, −1, −3, −1, −3}, and p=20; orthe sequence {sn} is {−3, 1, 3, 3, −1, −1, −3, 3, 3, −3, 3, −3}, and p=21; orthe sequence {sn} is {−3, −1, −1, −3, −3, −1, −3, 3, 1, 3, −1, −3}, and p=22; orthe sequence {sn} is {−3, 3, 3, 3, −1, −3, −3, −1, −3, 1, 3, −3}, and p=23; orthe sequence {sn} is {3, −1, −3, 3, −3, −1, 3, 3, 3, −3, −1, −3}, and p=24: orthe sequence {sn} is {−3, −1, 1, −3, 1, 3, 3, 3, −1, −3, 3, 3}, and p=25; orthe sequence {sn} is {−3, −1, 3, 1, −3, −1, −3, 3, 1, 3, 3, 1}, and p=26; orthe sequence {sn} is {−3, 3, −3, 3, −3, −3, 3, −1, −1, 1, 3, −3}, and p=27: orthe sequence {sn} is {−3, −3, 3, 3, 3, −3, −1, 1, −3, 3, 1, −3}, and p=28; orthe sequence {sn} is {1, −1, 3, 1, 1, −1, −1, −1, 1, 3, −3, 1}, and p=29; orthe sequence {sn} is {−3, 3, 1, −1, 3, 3, −3, 1, −1, 1, −1, 1}, and p=30. 
Alternatively, a combination of the sequence {sn} and p is at least one of the following combinations:the sequence {sn} is {−3, 3, 1, −1, 3, 3, −3, 1, −1, 1, −1, 1}, and p=1; orthe sequence {sn} is {−3, 3, 1, −3, 1, 3, −1, −1, 1, 3, 3, 3}, and p=2; orthe sequence {sn} is {−3, 3, 3, 1, −3, 3, −1, 1, 3, −3, 3, −3}, and p=3; orthe sequence {sn} is {−3, −1, −3, −1, −1, −3, 3, 3, −1, −1, 1, −3}, and p=4; orthe sequence {sn} is {−1, −3, 3, −1, −3, −3, −3, −1, 1, −1, 1, −3}, and p=5; orthe sequence {sn} is {−3, −3, 3, 1, −3, −3, −3, −1, 3, −1, 1, 3}, and p=6; orthe sequence {sn} is {1, −1, 3, −1, −1, −1, −3, −1, 1, 1, 1, −3}, and p=7; orthe sequence {sn} is {−3, −1, −1, 1, 3, 1, 1, −1, 1, −1, −3, 1}, and p=8; orthe sequence {sn} is {−3, −1, −1, −3, −3, −1, −3, 3, 1, 3, −1, −3}, and p=9; orthe sequence {sn} is {−1, 1, 1, −1, 1, 3, 3, −1, −1, −3, 1, −3}, and p=10; orthe sequence {sn} is {−3, 3, −3, 3, 3, −3, −1, −1, 3, 3, 1, −3}, and p=11; orthe sequence {sn} is {−3, −3, 3, −3, −1, 3, 3, 3, −1, −3, 1, −3}, and p=12; orthe sequence {sn} is {3, 1, 3, 1, 3, −3, −1, 1, 3, 1, −1, −3}, and p=13; orthe sequence {sn} is {−3, −1, 3, −3, −3, −1, −3, 1, −1, −3, 3, 3}, and p=14; orthe sequence {sn} is {−3, 3, 1, 3, −3, 1, 1, 1, 1, 3, −3, 3}, and p=15; orthe sequence {sn} is {−3, 1, −3, −3, −3, 3, −3, −1, 1, 1, 1, −3}, and p=16; orthe sequence {sn} is {−1, −1, −1, −1, 1, −3, −1, 3, 3, −1, −3, 1}, and p=17; orthe sequence {sn} is {−1, 1, 3, −3, 1, −1, 1, −1, −1, −3, 1, −1}, and p=18; orthe sequence {sn} is {−3, 1, 3, 3, −1, −1, −3, 3, 3, −3, 3, −3}, and p=19; orthe sequence {sn} is {−3, 1, −1, −1, 3, 3, −3, −1, −1, −3, −1, −3}, and p=20; orthe sequence {sn} is {−3, −3, −1, 3, 3, 3, −3, 3, −3, 1, −1, −3}, and p=21; orthe sequence {sn} is {−3, −1, 3, 1, −3, −1, −3, 3, 1, 3, 3, 1}, and p=22; orthe sequence {sn} is {−3, 3, 3, 3, −1, −3, −3, −1, −3, 1, 3, −3}, and p=23; orthe sequence {sn} is {3, −1, −3, 3, −3, −1, 3, 3, 3, −3, −1, −3}, and p=24; orthe sequence {sn} is {−3, −1, 1, −3, 1, 3, 3, 3, −1, −3, 3, 3}, and p=25; orthe sequence {sn} is {1, 3, −3, 1, 3, 3, 3, 1, −1, 1, −1, 3}, and p=26; orthe sequence {sn} is {−3, 1, 3, −1, −1, −3, −3, −1, −1, 3, 1, −3}, and p=27; orthe sequence {sn} is {−3, −3, 3, 3, 3, −3, −1, 1, −3, 3, 1, −3}, and p=28; orthe sequence {sn} is {1, −1, 3, 1, 1, −1, −1, −1, 1, 3, −3, 1}, and p=29; orthe sequence {sn} is {−3, 3, −3, 3, −3, −3, 3, −1, −1, 1, 3, −3}, and p=30. The sequence {hm} is mapped to M subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M subcarriers is 2t times a subcarrier spacing, where t is a positive integer. It should be noted that, in the formula hm=A·zmej·α·m, A may be 1 and/or α may be 0. Further, the first sequence and the second sequence may also be a same sequence, that is, A may be 1 and α may be 0. Optionally, when M=36, an element ymin the sequence {ym} satisfies ym=kq(m mod 31), kq(i)=e-j⁢π·q·i·(i+1)31, i is an integer, and 0≤i≤30. The sequence {gm} is mapped to 36 subcarriers, a center-frequency spacing of any two adjacent subcarriers in the 36 subcarriers is 2t times a subcarrier spacing, and t is a positive integer. Based on this implementation, it can be ensured that there is relatively high cross-correlation between sending signals when a sending signal generated based on {sn} and a sending signal generated based on p use a same mapping manner, thereby reducing interference between neighboring cells. 
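To make the formulas above concrete, the following Python sketch generates the length-36 base sequence {ym} from a group parameter q using ym=kq(m mod 31) with kq(i)=e^{−j·π·q·i·(i+1)/31}, applies the phase rotation gm=A·ym·e^{j·α·m}, and places the result on equally spaced subcarriers. It is only an illustrative sketch: q=9 is borrowed from the first sequence group listed earlier, A=1 and α=0 are used for simplicity, and the grid size, subcarrier offset, and spacing value are arbitrary assumptions rather than values defined by this embodiment.

```python
import numpy as np

def k_q(i, q):
    # k_q(i) = exp(-j*pi*q*i*(i+1)/31), 0 <= i <= 30
    return np.exp(-1j * np.pi * q * i * (i + 1) / 31)

def base_sequence_y(q, M=36):
    # y_m = k_q(m mod 31) for m = 0, ..., M-1
    m = np.arange(M)
    return k_q(m % 31, q)

def second_sequence_g(y, A=1.0, alpha=0.0):
    # g_m = A * y_m * exp(j*alpha*m); A = 1 and alpha = 0 leave y unchanged
    m = np.arange(len(y))
    return A * y * np.exp(1j * alpha * m)

def map_to_subcarriers(g, spacing, grid_size, offset=0):
    # Equally spaced mapping: adjacent occupied subcarriers are `spacing`
    # subcarrier spacings apart (a comb structure)
    grid = np.zeros(grid_size, dtype=complex)
    grid[offset + spacing * np.arange(len(g))] = g
    return grid

if __name__ == "__main__":
    y = base_sequence_y(q=9)                                # q = 9 from the first sequence group above
    g = second_sequence_g(y)                                # A = 1, alpha = 0
    grid = map_to_subcarriers(g, spacing=2, grid_size=96)   # spacing and grid size assumed
    print(np.count_nonzero(grid))                           # -> 36 occupied subcarriers
```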
It should be noted that A and α in the formula fn=A·xn·e^{j·α·n} that the element fn satisfies, the formula gm=A·ym·e^{j·α·m} that the element gm satisfies, and the formula hm=A·zm·e^{j·α·m} that the element hm satisfies may be the same or different, or A and α in two of the formulas are the same. For example, the formula that the element fn satisfies may be represented as fn=A·xn·e^{j·α·n}, and the formula that the element gm satisfies may be represented as gm=B·ym·e^{j·β·m}. The formula that the element hm satisfies may be represented as hm=C·zm·e^{j·γ·m}. For B, β, C, and γ herein, refer to the definitions of A and α above. For brevity, A and α are used for expression in all three formulas in the specification. That is, a combination of the sequence {sn} and p is at least one of the following combinations: a sequence whose {sn} index is 29, and p=1; or a sequence whose {sn} index is 27, and p=2; or a sequence whose {sn} index is 24, and p=3; or a sequence whose {sn} index is 3, and p=4; or a sequence whose {sn} index is 11, and p=5; or a sequence whose {sn} index is 15, and p=6; or a sequence whose {sn} index is 26, and p=7; or a sequence whose {sn} index is 23, and p=8; or a sequence whose {sn} index is 28, and p=9; or a sequence whose {sn} index is 5, and p=10; or a sequence whose {sn} index is 7, and p=11; or a sequence whose {sn} index is 19, and p=12; or a sequence whose {sn} index is 22, and p=13; or a sequence whose {sn} index is 17, and p=14; or a sequence whose {sn} index is 12, and p=15; or a sequence whose {sn} index is 4, and p=16; or a sequence whose {sn} index is 1, and p=17; or a sequence whose {sn} index is 16, and p=18; or a sequence whose {sn} index is 21, and p=19; or a sequence whose {sn} index is 20, and p=20; or a sequence whose {sn} index is 6, and p=21; or a sequence whose {sn} index is 2, and p=22; or a sequence whose {sn} index is 9, and p=23; or a sequence whose {sn} index is 25, and p=24; or a sequence whose {sn} index is 14, and p=25; or a sequence whose {sn} index is 10, and p=26; or a sequence whose {sn} index is 8, and p=27; or a sequence whose {sn} index is 13, and p=28; or a sequence whose {sn} index is 0, and p=29; or a sequence whose {sn} index is 18, and p=30. 
Alternatively, a combination of the sequence {sn} and p is at least one of the following combinations:a sequence whose {sn} index is 2, and p=1; ora sequence whose {sn} index is 27, and p=2; ora sequence whose {sn} index is 24, and p=3; ora sequence whose {sn} index is 3, and p=4; ora sequence whose {sn} index is 5, and p=5; ora sequence whose {sn} index is 15, and p=6; ora sequence whose {sn} index is 26, and p=7; ora sequence whose {sn} index is 6, and p=8; ora sequence whose {sn} index is 7, and p=9; ora sequence whose {sn} index is 8, and p=10; ora sequence whose {sn} index is 10, and p=11; ora sequence whose {sn} index is 11, and p=12; ora sequence whose {sn} index is 12, and p=13; ora sequence whose {sn} index is 16, and p=14; ora sequence whose {sn} index is 17, and p=15; ora sequence whose {sn} index is 4, and p=16; ora sequence whose {sn} index is 1, and p=17; ora sequence whose {sn} index is 18, and p=18; ora sequence whose {sn} index is 19, and p=19; ora sequence whose {sn} index is 20, and p=20; ora sequence whose {sn} index is 21, and p=21; ora sequence whose {sn} index is 22, and p=22; ora sequence whose {sn} index is 9, and p=23; ora sequence whose {sn} index is 25, and p=24; ora sequence whose {sn} index is 14, and p=25; ora sequence whose {sn} index is 23, and p=26; ora sequence whose {sn} index is 28, and p=27; ora sequence whose {sn} index is 13, and p=28; ora sequence whose {sn} index is 0, and p=29; ora sequence whose {sn} index is 29, and p=30. Alternatively, a combination of the sequence {sn} and p is at least one of the following combinations:a sequence whose {sn} index is 29, and p=1; ora sequence whose {sn} index is 27, and p=2; ora sequence whose {sn} index is 24, and p=3; ora sequence whose {sn} index is 8, and p=4; ora sequence whose {sn} index is 11, and p=5; ora sequence whose {sn} index is 15, and p=6; ora sequence whose {sn} index is 26, and p=7; ora sequence whose {sn} index is 17, and p=8; ora sequence whose {sn} index is 22, and p=9; ora sequence whose {sn} index is 5, and p=10; ora sequence whose {sn} index is 7, and p=11; ora sequence whose {sn} index is 19, and p=12; ora sequence whose {sn} index is 12, and p=13; ora sequence whose {sn} index is 18, and p=14; ora sequence whose {sn} index is 3, and p=15; ora sequence whose {sn} index is 2, and p=16; ora sequence whose {sn} index is 1, and p=17; ora sequence whose {sn} index is 16, and p=18; ora sequence whose {sn} index is 21, and p=19; ora sequence whose {sn} index is 20, and p=20; ora sequence whose {sn} index is 6, and p=21; ora sequence whose {sn} index is 23, and p=22; ora sequence whose {sn} index is 9, and p=23; ora sequence whose {sn} index is 25, and p=24; ora sequence whose {sn} index is 14, and p=25; ora sequence whose {sn} index is 10, and p=26; ora sequence whose {sn} index is 4, and p=27; ora sequence whose {sn} index is 13, and p=28; ora sequence whose {sn} index is 0, and p=29; ora sequence whose {sn} index is 28, and p=30. The sequence {gm} is mapped to M subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M subcarriers is 2t times a subcarrier spacing, where t is a positive integer. When M=36, an element ymin the sequence {ym} satisfies ym=kq(m mod 31), kq⁢(i)=e-j⁢π·q·i·(i+1)31, i is an integer, and 0≤i≤30. The sequence {gm} is mapped to 36 subcarriers, a center-frequency spacing of any two adjacent subcarriers in the 36 subcarriers is 2t times a subcarrier spacing, and t is a positive integer. 
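Similarly, the sequence {zm} introduced earlier can be generated from a parameter p. The sketch below follows the stated definition zm=lp(m mod Mprime) with lp(i)=e^{−j·π·p·i·(i+1)/Mprime}, where Mprime is the largest prime number smaller than M, and then forms hm=A·zm·e^{j·α·m}. The choice M=36 (so that Mprime=31), the value p=1 (paired with the {sn} sequence of index 29 in the first index list above), and A=1, α=0 are used purely for illustration.

```python
import numpy as np

def largest_prime_below(M):
    # M_prime is the largest prime number smaller than M
    def is_prime(n):
        return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    for n in range(M - 1, 1, -1):
        if is_prime(n):
            return n
    raise ValueError("no prime below M")

def sequence_z(p, M):
    # z_m = l_p(m mod M_prime), l_p(i) = exp(-j*pi*p*i*(i+1)/M_prime)
    M_prime = largest_prime_below(M)
    i = np.arange(M) % M_prime
    return np.exp(-1j * np.pi * p * i * (i + 1) / M_prime)

def sequence_h(z, A=1.0, alpha=0.0):
    # h_m = A * z_m * exp(j*alpha*m)
    m = np.arange(len(z))
    return A * z * np.exp(1j * alpha * m)

if __name__ == "__main__":
    z = sequence_z(p=1, M=36)            # M = 36 gives M_prime = 31
    h = sequence_h(z)
    print(len(h), round(abs(h[0]), 6))   # -> 36 elements, all of unit magnitude
```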
Optionally, the second sequence in this embodiment of the present invention may be used to send uplink control information or a reference signal. The following further provides how the foregoing sequence group is applied to the embodiments of the present invention with reference to the foregoing embodiment. This embodiment provides a sequence determining method.FIG.4is a schematic signaling diagram of the method according to this embodiment of the present invention. It should be noted that some steps inFIG.4and in the following are optional, and there is no limitation that all steps need to be included in this embodiment of the present invention. In addition, sequence numbers of steps are merely used for description and do not represent a sequence. Step410: An access network device sends an identity to a terminal device. The terminal device receives the identity. This step is optional. The access network device may send the identity to the terminal device by using higher layer signaling. For example, the higher layer signaling may be a radio resource control (RRC) message. For the identity, refer to the foregoing description, and details are not described herein again. When the identity is a cell identity, this step may be that the access network device indicates the identity to the terminal device by using a synchronization signal, and the terminal device may obtain the cell identity by detecting the synchronization signal. The action in this step may be implemented by the transceiver301in the terminal device104described above. Certainly, the action in this step may be implemented by the modem processor304and the transceiver301in the terminal device104described above. The action in this step may be implemented by the transceiver202in the access network device102described above. Certainly, the action in this step may be implemented by the processor201and the transceiver202in the access network device102described above. Step420: Determine a first sequence in a sequence group based on an index of the sequence group. It should be noted that, in this step, determining the first sequence in the sequence group based on the index of the sequence group may be:determining an index of the first sequence in the sequence group based on the index of the sequence group. Alternatively, in this step, determining the first sequence in the sequence group based on the index of the sequence group may be:determining an element in the first sequence in the sequence group based on the index of the sequence group. It should be noted that the first sequence and a second sequence have a same index. Therefore, the index of the first sequence and an index of the second sequence may be the same in this embodiment. Therefore, step420may also be determining an index of a sequence based on the index of the sequence group, and the index may be the index of the second sequence. Further, if the index of the sequence group is determined based on an identity, this step may include: determining the index of the sequence group based on the identity and the foregoing method; and determining the index of the first sequence in the sequence group or determining the element in the first sequence in the sequence group based on the index of the sequence group and a length of a to-be-sent signal. For example, if a quantity of elements included in the to-be-sent signal is N, the first sequence is the foregoing sequence {xn}. 
If the quantity of elements included in the to-be-sent signal is M, the first sequence is the foregoing sequence {ym}. Optionally, step420may include: determining the first sequence in the sequence group based on the index of the sequence group and a mapping manner. Therefore, in this optional implementation, same sequence group numbers may be corresponding to different first sequences, and these different first sequences are corresponding to different mapping manners. Therefore, when determining the first sequence, the access network device and the terminal device may determine the first sequence based on the sequence group number and the mapping manner. The mapping manner may refer to a center-frequency spacing of any two adjacent subcarriers that is used to map a to-be-sent signal. In this embodiment of the present invention, the mapping manner is also referred to as a comb structure. Different mapping manners are used to distinguish between different grouping manners. This can ensure that there is relatively high cross-correlation between sequences in a same group in a mapping manner of continuous mapping or equally-spaced mapping, thereby reducing interference between neighboring cells. Further, the mapping manner in this embodiment of the present invention may be one time a subcarrier spacing. This is also referred to as continuous mapping or a 1-comb structure. Alternatively, the mapping manner in this embodiment of the present invention may be two times a subcarrier spacing. This is also referred to as a 2-comb structure. Optionally, the comb structure may include a 1-comb structure, a 2-comb structure, a 4-comb structure, and the like. This may be understood as follows: Subcarriers required for mapping are sorted in ascending or descending order, and for a given subcarrier spacing (for example, a subcarrier spacing of 15 kHz or a subcarrier spacing of a 30 kHz), a center-frequency difference of any two adjacent subcarriers is one time the subcarrier spacing, that is, the subcarriers are equally-spaced and have a spacing of one time the subcarrier spacing, which is a 1-comb structure; a center-frequency difference of any two adjacent subcarriers is two times the subcarrier spacing, that is, the subcarriers are equally-spaced and have a spacing of two times the subcarrier spacing, which is a 2-comb structure; a center-frequency difference of any two adjacent subcarriers is four times the subcarrier spacing, that is, the subcarriers are equally-spaced and have a spacing of four times the subcarrier spacing, which is a 4-comb structure. Optionally, in an implementation, the terminal device and the access network device determine sequences with a length N from 30 sequence groups based on a sequence length N that needs to be used, further determine 30 sequences from the sequences with a length N according to a comb structure of a to-be-sent signal, and then determine one sequence from the 30 sequences based on the index of the sequence group. Optionally, in another implementation, the terminal device and the access network device directly determine one sequence through table lookup based on a sequence length N that needs to be used, the index of the sequence group, and a comb structure of a to-be-sent signal. This step may be performed by both the access network device and the terminal device. Specifically, the action in this step may be implemented by the modem processor304of the terminal device104, and may be implemented by the processor201of the access network device102. 
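As a rough illustration of the table-lookup variant of step 420 described above, the sketch below selects one candidate sequence from a table keyed by the sequence length and the comb structure, using the index of the sequence group. The table contents here are hypothetical placeholders; in the embodiment they would be the {sn}/q/p combinations enumerated earlier.

```python
# Hypothetical lookup tables: one list of 30 candidate sequence identifiers per
# (sequence length, comb spacing) pair. The identifiers are placeholders only.
SEQUENCE_TABLES = {
    (12, 1): [f"len12_comb1_group{u}" for u in range(30)],
    (12, 2): [f"len12_comb2_group{u}" for u in range(30)],
    (36, 2): [f"len36_comb2_group{u}" for u in range(30)],
}

def select_first_sequence(group_index: int, length: int, comb_spacing: int) -> str:
    # Step 420 as a direct table lookup: the length of the to-be-sent signal,
    # the comb structure, and the sequence group index identify one sequence.
    candidates = SEQUENCE_TABLES[(length, comb_spacing)]
    return candidates[group_index % len(candidates)]

# Example: sequence group index 7, a length-12 signal, 2-comb mapping
print(select_first_sequence(group_index=7, length=12, comb_spacing=2))
```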
Step430: Generate a second sequence based on the first sequence or an identity of the first sequence. When step420is determining the identity of the first sequence in the sequence group based on the index of the sequence group, step430may be generating the second sequence based on the identity of the first sequence, or step430may include determining the first sequence based on the identity of the first sequence and generating the second sequence based on the first sequence. Therefore, in this embodiment of the present invention, the second sequence may be directly generated based on the identity of the first sequence. In this embodiment, the first sequence may also be first generated based on the identity of the first sequence, and then the second sequence is generated based on the first sequence. When step420is determining the element in the first sequence in the sequence group based on the index of the sequence group, step430may be generating the second sequence based on the first sequence. This step may be performed by both the access network device and the terminal device. Specifically, the action in this step may be implemented by the modem processor304of the terminal device104, and may be implemented by the processor201of the access network device102. Step440: The terminal device maps the second sequence to subcarriers. In this step, a mapping manner used by the terminal device to map the second sequence to subcarriers is the same as the mapping manner described above. The second sequence is the sequence {fn}, the sequence {fn} is mapped to N subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the N subcarriers is 2t times a subcarrier spacing or t times a subcarrier spacing; or, the second sequence is the sequence {gm}, the sequence {gm} is mapped to M subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the M subcarriers is t times a subcarrier spacing. Alternatively, the second sequence is the sequence {fn}, the sequence {fn} is mapped to N subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the N subcarriers is 2t times a subcarrier spacing or t times a subcarrier spacing; or, the second sequence is the sequence {gm}, the sequence {gm} is mapped to M subcarriers, and the M subcarriers are consecutive subcarriers. t is a positive integer. When N=12 and M=36,the sequence {fn} is mapped to 12 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 12 subcarriers is 2t times a subcarrier spacing or t times a subcarrier spacing; or, the second sequence is the sequence {gm}, the sequence {gm} is mapped to 36 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 36 subcarriers is t times a subcarrier spacing. Alternatively, the second sequence is the sequence {fn}, the sequence {fn} is mapped to 12 subcarriers, and a center-frequency spacing of any two adjacent subcarriers in the 12 subcarriers is 2t times a subcarrier spacing or t times a subcarrier spacing; or, the second sequence is the sequence {gm}, the sequence {gm} is mapped to 36 subcarriers, and the 36 subcarriers are consecutive subcarriers. t is a positive integer. Step440may include the following: The terminal device maps a sequence {fn} with a length N to N subcarriers to generate an N-point frequency-domain signal. 
Optionally, in an implementation, the terminal device determines a length-12 sequence {xn}, and generates a sequence {fn} based on {xn} after determining a cyclic shift c corresponding to the sequence {xn} based on an implicit association manner configured by a network device or a predefined implicit association manner. A generation formula is as follows: fn=A·xn·e^{j·α·n}. The factor e^{j·α·n} means performing cyclic shifting on the sequence {xn}, and α=2·π·c/12, c=0, 1, …, 11. Optionally, N or M elements in the second sequence may be respectively mapped to equally-spaced N or M subcarriers. Subcarriers required for mapping are sorted in ascending or descending order, and for a given subcarrier spacing (for example, a subcarrier spacing of 15 kHz or a subcarrier spacing of 30 kHz), a center-frequency difference of any two adjacent subcarriers is t times the subcarrier spacing. When t=1, it may be understood that a signal is mapped to consecutive frequency-domain subcarriers, as shown inFIG.5. Optionally, when t=2, it may be understood that a signal is mapped to non-consecutive frequency-domain subcarriers, and an index difference of occupied frequency-domain subcarriers is 2, as shown inFIG.6. It should be noted that, in this embodiment of the present invention, the mapping manner for the second sequence is not limited to the foregoing manners. The action in this step may be implemented by the modem processor304in the terminal device104described above. Step450: The terminal device sends a signal generated based on the second sequence. This step may include the following: The terminal device transforms an N- or M-point frequency-domain signal into a time-domain signal by using inverse fast Fourier transform (IFFT), and adds a cyclic prefix to the time-domain signal, so as to generate a first signal, and the terminal device sends the first signal by using radio frequency. The action in this step may be implemented by the transceiver301in the terminal device104described above. Certainly, the action in this step may be implemented by the modem processor304and the transceiver301in the terminal device104described above. Step460: The access network device processes a received first signal based on the second sequence. Specifically, this step may include the following: The access network device receives the first signal carried on N subcarriers and obtains N elements in the sequence {fn}; or, the access network device receives the first signal carried on M subcarriers and obtains M elements in the sequence {gm}. Optionally, a process in which the access network device receives the first signal carried on the N subcarriers is that the access network device obtains a time-domain signal and removes a cyclic prefix. Then, the access network device performs K-point fast Fourier transform (FFT) on the signal from which the cyclic prefix is removed, so as to obtain an N-point frequency-domain signal, where K is greater than or equal to N. Then, the access network device receives the first signal carried on the N subcarriers, where the first signal is a sequence that includes N elements. For example, the receiving device receives the signal on the N subcarriers according to locations, among the subcarriers of a communication system, of the N subcarriers, where the locations are predefined or configured by a base station. Further, the access network device processes the first signal according to the N elements in the sequence {fn}. 
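The following Python sketch strings together steps 440 to 460 for the sequence {fn}: the cyclic shift e^{j·α·n} with α=2·π·c/12, the equally spaced (comb) mapping, the IFFT plus cyclic prefix at the terminal device, and the cyclic-prefix removal, K-point FFT, and subcarrier extraction at the access network device. The placeholder {xn}, the cyclic shift index, the FFT size, the comb spacing, the subcarrier offset, and the cyclic prefix length are all assumptions made for illustration, not values fixed by this embodiment.

```python
import numpy as np

def apply_cyclic_shift(x, c):
    # f_n = x_n * exp(j*alpha*n), alpha = 2*pi*c/12, c = 0..11 (A = 1 here)
    n = np.arange(len(x))
    return x * np.exp(1j * 2 * np.pi * c / 12 * n)

def map_comb(f, spacing, fft_size, offset=0):
    # Equally spaced mapping: occupied subcarriers are `spacing` apart
    grid = np.zeros(fft_size, dtype=complex)
    grid[offset + spacing * np.arange(len(f))] = f
    return grid

def transmit(grid, cp_len):
    # IFFT plus cyclic prefix -> the first signal
    td = np.fft.ifft(grid)
    return np.concatenate([td[-cp_len:], td])

def receive(rx, cp_len, fft_size, sc_indices):
    # Remove the cyclic prefix, perform a K-point FFT (K >= N),
    # and take the N elements on the known subcarrier locations
    freq = np.fft.fft(rx[cp_len:cp_len + fft_size])
    return freq[sc_indices]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.exp(1j * rng.uniform(0, 2 * np.pi, 12))  # placeholder length-12 {xn}
    f = apply_cyclic_shift(x, c=3)                  # cyclic shift index assumed
    sc = 4 + 2 * np.arange(12)                      # assumed 2-comb subcarrier locations
    grid = map_comb(f, spacing=2, fft_size=64, offset=4)
    first_signal = transmit(grid, cp_len=5)
    received = receive(first_signal, cp_len=5, fft_size=64, sc_indices=sc)
    print(np.allclose(received, f))                 # -> True on an ideal channel
```

On an ideal channel the received elements simply reproduce {fn}; in practice the access network device compares the received elements with the known {fn}, for example to derive channel state information as described next.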
Optionally, the access network device determines channel state information based on a relationship between the sequence {fn} and the first signal, or the network device determines information about a modulated symbol or the like carried on the sequence based on a relationship between the sequence {fn} and the first signal. The case of the sequence {gm} is similar to that of the sequence {fn}, and details are not described again in the specification. The action in this step may be implemented by the transceiver202in the access network device102described above. Certainly, the action in this step may be implemented by the processor201and the transceiver202in the access network device102described above. It should be noted that, in this embodiment of the present invention, there is no required execution order between steps410to430performed by the access network device and the corresponding steps performed by the terminal device in the method. Steps410to430may be performed before the terminal device sends the first signal, or may be performed after the terminal device sends the first signal. This is not limited in this embodiment of the present invention, provided that steps410to430are performed before the access network device uses the second sequence to process the first signal. An embodiment of the present invention further provides an apparatus (for example, an integrated circuit, a wireless device, or a circuit module) configured to implement the foregoing method. The apparatus implementing the method described in this specification may be an independent device, or may be a part of a larger device. The device may be (i) an independent IC; (ii) a set that has one or more ICs and that can include a memory IC configured to store data and/or an instruction; (iii) an RFIC, for example, an RF receiver or an RF transmitter; (iv) an ASIC, for example, a mobile station modem; (v) a module that can be built into another device; (vi) a receiver, a cellular phone, a wireless device, a handheld machine, or a mobile unit; or (vii) the like. The method and apparatus that are provided in the embodiments of the present invention may be applied to the terminal device or the access network device (both the terminal device and the access network device may be referred to as a wireless device). The terminal device, the access network device, or the wireless device may include a hardware layer, an operating system layer that is running on the hardware layer, and an application layer that is running on the operating system layer. The hardware layer includes hardware such as a central processing unit (CPU), a memory management unit (MMU), and a memory (also referred to as a main memory). Operating systems may be any one or more computer operating systems that process a service by using a process, for example, a Linux operating system, a Unix operating system, an Android operating system, an iOS operating system, or a Windows operating system. The application layer includes applications such as a browser, an address book, word processing software, and instant messaging software. In addition, a specific structure of an entity for performing the method is not limited in the embodiments of the present invention, provided that the entity can perform communication according to the wireless communication method in the embodiments of the present invention by running a program in which code of the method in the embodiments of the present invention is recorded. 
For example, the wireless communication method in the embodiments of the present invention may be performed by the terminal device, the access network device, or a function module that is in the terminal device or the access network device and that can invoke a program and execute the program. A person of ordinary skill in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on particular applications and design constraint conditions of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of the embodiments of the present invention. In addition, aspects or features in the embodiments of the present invention may be implemented as a method, an apparatus or a product that uses standard programming and/or engineering technologies. The term “product” used in this application covers computer programs that can be accessed from any computer readable component, carrier or medium. For example, the computer-readable medium may include but is not limited to: magnetic storage components (for example, a hard disk, a floppy disk, or a magnetic tape), optical discs (for example, compact discs (CD), digital versatile discs (DVD), smart cards and flash memory components (for example, erasable programmable read-only memory (EPROM), cards, sticks, or key drives). In addition, various storage media described in this specification may indicate one or more devices and/or other machine-readable media that is used to store information. The term “machine readable medium” may include but is not limited to radio channels, and various other media that can store, contain and/or carry instructions and/or data. All or some of the foregoing embodiments may be implemented by using software, hardware, firmware, or any combination thereof. When software is used to implement the embodiments, the embodiments may be implemented completely or partially in a form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the procedure or functions according to the embodiments of the present invention are all or partially generated. The computer may be general-purpose computers, dedicated computers, computer networks, or other programmable apparatuses. The computer instructions may be stored in a computer-readable storage medium or may be transmitted from a computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. 
The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, DVD), a semiconductor medium (for example, a Solid State Disk (SSD)), or the like. It should be understood that sequence numbers of the foregoing processes do not mean execution sequences in various embodiments of the present invention. The execution sequences of the processes should be determined according to functions and internal logic of the processes, and should not be construed as any limitation on the implementation processes of the embodiments of the present invention. It may be clearly understood by a person skilled in the art that, for the purpose of convenient and brief description, for a detailed working process of the foregoing system, apparatus, and unit, reference may be made to a corresponding process in the foregoing method embodiments, and details are not described herein again. In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiment is merely an example. For example, the unit division is merely logical function division and may be other division in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented by using some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of the embodiments. When the functions are implemented in the form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present invention essentially, or the part contributing to the prior art, or some of the technical solutions may be implemented in a form of a software product. The software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, an access network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes: any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc. The foregoing descriptions are merely specific implementations of the present invention, but are not intended to limit the protection scope of the present invention. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention.
209,864
11863487
DETAILED DESCRIPTION The following detailed description refers to the accompanying drawings. The same reference numbers may be used in different drawings to identify the same or similar elements. In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular structures, architectures, interfaces, techniques, etc. in order to provide a thorough understanding of the various aspects of various embodiments. However, it will be apparent to those skilled in the art having the benefit of the present disclosure that the various aspects of the various embodiments may be practiced in other examples that depart from these specific details. In certain instances, descriptions of well-known devices, circuits, and methods are omitted so as not to obscure the description of the various embodiments with unnecessary detail. For the purposes of the present document, the phrase “A or B” means (A), (B), or (A and B). 5G New Radio (NR) radio resource management (RRM) measurement is based on layer 3 (L3, e.g., network layer) filtered reference signal received power (RSRP), which can be measured based on synchronization signal block (SSB, e.g., synchronization signal (SS)/physical broadcast channel (PBCH) block) or channel state information (CSI)-reference signal (RS). In 3GPP Release 15, the signal-to-interference-plus-noise ratio (SINR) side conditions as well as the measurement accuracy requirements for SSB based L3-RSRP measurement, which is used for RRM purposes, have already been defined by 3GPP RAN4. However, the SINR side conditions as well as the measurement accuracy requirements for CSI-RS based L3-RSRP measurement are missing in Release 15. For Release 16, for RRM purposes, there is one RAN4 work item to define the SINR side conditions as well as the CSI-RS configurations for CSI-RS based L3-RSRP accuracy requirements. Compared to SSB, RRM CSI-RS supports more flexible configuration for repetition period, bandwidth (BW), and resource element (RE) density. Defining the CSI-RS configuration for CSI-RS based L3-RSRP measurement accuracy requirements has been a trade-off among UE complexity, spectrum overhead, and RRM measurement performance. This disclosure describes how to optimize the CSI-RS configuration for defining CSI-RS based L3-RSRP accuracy requirements, e.g., to optimize the above-described trade-off. The disclosure provides simulation results for CSI-RS based L3-RSRP measurement for RRM purposes. Additionally, aspects of various embodiments, as supported by the simulation results, include: (1) The minimal combination of a CSI-RS resource for defining the L3-RSRP accuracy requirements shall satisfy the following condition: D≥1 or numRB≥24. D is the CSI-RS RE density, i.e., the number of CSI-RS REs within a CSI-RS resource block (RB), while numRB is the number of CSI-RS resource blocks within a CSI-RS symbol. (2) The number of measurement samples for CSI-RS based RSRP measurement, which is required to apply L3 filtering, may be determined based on the configuration of the CSI-RS, including ref. RB number and ref. RE density. (3) The RSRP error threshold, which is used to evaluate the accuracy of CSI-RS based L3-RSRP measurement, may be further adapted to the configuration of the RRM CSI-RS and/or the frequency range in which a RRM CSI-RS is transmitted. Simulation results are shown below, which support various embodiments herein. 
In particular, Table 1 and Table 2 show L3-RSRP errors for Frequency Range 1 (FR1) for the extended pedestrian A (EPA) and extended typical urban (ETU) channels for 24 RB and 96 RB, based on 1, 3, 5, and 10 measurement samples, respectively.

TABLE 1
RSRP delta for EPA5 channel (dB)
                1 sample    3 samples    5 samples    10 samples
24RB, D = 1     5.91        4.13         3.25         2.81
24RB, D = 3     4.18        2.8723       2.41         1.86
96RB, D = 1     3.86        2.5795       2.12         1.36
96RB, D = 3     2.59        2.10         1.35         0.95

TABLE 2
RSRP delta for ETU30 channel (dB)
                1 sample    3 samples    5 samples    10 samples
24RB, D = 1     6.34        4.16         3.49         2.98
24RB, D = 3     4.46        2.88         2.31         1.99
96RB, D = 1     5.21        3.54         2.55         2.13
96RB, D = 3     2.92        2.15         1.56         1.55

Various embodiments are described further below, as supported by the simulation results. For example, the simulation results show that, for the 24RB, D=1 case, the accuracy deltas are 3.49 dB and 2.98 dB for 5 samples and 10 samples, respectively. When radio frequency (RF) margin is considered, the total accuracy deltas are 5.49 dB and 4.98 dB, respectively, which are worse than those of SSB. Therefore, in accordance with various embodiments, no measurement requirement may be defined for the case in which numRB<24 RB and D=1. Accordingly, in some embodiments, the minimal configuration of a CSI-RS resource for defining the L3-RSRP accuracy requirements shall satisfy at least the following condition: D≥1 or numRB≥24, where D is the CSI-RS ref. RE density, which is the number of CSI-RS REs within a CSI-RS RB, while numRB is the number of CSI-RS resource blocks within a CSI-RS symbol. Additionally, the simulation results also show that, to reach the same L3-RSRP measurement accuracy, the number of measurement samples required to apply L3 filtering may differ if the CSI-RS configuration differs. Hence, in accordance with various embodiments, the measurement and report latency, which are required to apply L3 filtering of the CSI-RS based RSRP measurements over multiple measurement samples, may be determined based on the configuration of a CSI-RS resource. Note that the configuration of a RRM CSI-RS resource may include at least the following items: (1) the reference RE density of the CSI-RS resource; (2) the reference RB number of a CSI-RS resource; and/or (3) the repetition period of a CSI-RS resource. FIG.1illustrates an example process100for determining the L3-RSRP measurement latency based on the CSI-RS configuration, in accordance with various embodiments. At102, the process100may include determining the configuration of a RRM CSI-RS resource for L3-RSRP measurement. The configuration may include, for example, a CSI-RS RB number, a RE density (D), and/or a repetition period. At104, the process100may include determining the minimal number of measurement samples required for L3 filtering based on the CSI-RS configuration. At106, the process100may include determining the minimal latency for CSI-RS based RRM measurement based on the minimal number of measurement samples and the repetition period of the CSI-RS resource. The process100may be performed by a UE (e.g., UE301a-b), a gNB (e.g., radio access network node311a-b), and/or another network entity (or a portion thereof). In particular, it is also proposed that, when the reference RB number of a CSI-RS is below 25, at least 10 measurement samples are required to apply L3 filtering of instantaneous CSI-RS RSRPs, so as to satisfy the L3-RSRP accuracy requirements. 
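A minimal sketch of process 100 is given below: the minimal sample count is derived from the CSI-RS configuration, and the minimal measurement latency is that sample count multiplied by the repetition period. The rule that fewer than 25 reference RBs requires at least 10 samples comes from the text above; the sample count used for wider configurations (5 here) and the example repetition periods are placeholders only.

```python
from dataclasses import dataclass

@dataclass
class CsiRsConfig:
    num_rb: int        # reference RB number of the CSI-RS resource
    re_density: int    # reference RE density D (CSI-RS REs per RB)
    period_ms: float   # repetition period of the CSI-RS resource

def min_measurement_samples(cfg: CsiRsConfig) -> int:
    # At least 10 samples when the reference RB number is below 25 (per the text);
    # the value used otherwise is only an illustrative placeholder.
    return 10 if cfg.num_rb < 25 else 5

def min_measurement_latency_ms(cfg: CsiRsConfig) -> float:
    # Block 106: minimal latency = minimal sample count * repetition period
    return min_measurement_samples(cfg) * cfg.period_ms

print(min_measurement_latency_ms(CsiRsConfig(num_rb=24, re_density=1, period_ms=20.0)))  # 200.0
print(min_measurement_latency_ms(CsiRsConfig(num_rb=96, re_density=3, period_ms=20.0)))  # 100.0
```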
Additionally, or alternatively, in various embodiments, the RSRP error threshold, which is used to evaluate the accuracy of CSI-RS based L3-RSRP measurement, may be further adapted to the configuration of the RRM CSI-RS, and/or the frequency range in which a RRM CSI-RS is transmitted. The RSRP error threshold may be adapted to the frequency range since, compared with FR1, a higher RSRP error margin may need to be reserved for Frequency Range 2 (FR2), e.g., due to higher millimeter wave (mmWave) RF uncertainties, such as a higher phase noise, and/or the ambiguities of the RSRP reference point for mmWave. Those RF uncertainties are usually not easily reduced by L3 filtering. FIG.2illustrates an example process200for determining the L3-RSRP error threshold based on the CSI-RS configuration and the frequency range, in accordance with various embodiments. At202, the process200may include determining the configuration of a RRM CSI-RS resource for L3-RSRP measurement. The configuration may include, for example, a CSI-RS RB number, a RE density (D), and/or a repetition period. At204, the process200may further include determining the frequency range in which the RRM CSI-RS resource is transmitted (e.g., FR1 or FR2). At206, the process200may further include determining the RSRP error threshold for evaluating the L3-RSRP accuracy of the RRM CSI-RS resource. The process200may be performed by a UE (e.g., UE301a-b), a gNB (e.g., radio access network node311a-b), and/or another network entity (or a portion thereof). Systems and Implementations FIG.3illustrates an example architecture of a system300of a network, in accordance with various embodiments. The following description is provided for an example system300that operates in conjunction with the LTE system standards and 5G or NR system standards as provided by 3GPP technical specifications. However, the example embodiments are not limited in this regard and the described embodiments may apply to other networks that benefit from the principles described herein, such as future 3GPP systems (e.g., Sixth Generation (6G)) systems, IEEE 802.16 protocols (e.g., WMAN, WiMAX, etc.), or the like. As shown byFIG.3, the system300includes UE301aand UE301b(collectively referred to as “UEs301” or “UE301”). In this example, UEs301are illustrated as smartphones (e.g., handheld touchscreen mobile computing devices connectable to one or more cellular networks), but may also comprise any mobile or non-mobile computing device, such as consumer electronics devices, cellular phones, smartphones, feature phones, tablet computers, wearable computer devices, personal digital assistants (PDAs), pagers, wireless handsets, desktop computers, laptop computers, in-vehicle infotainment (IVI), in-car entertainment (ICE) devices, an Instrument Cluster (IC), head-up display (HUD) devices, onboard diagnostic (OBD) devices, dashtop mobile equipment (DME), mobile data terminals (MDTs), Electronic Engine Management System (EEMS), electronic/engine control units (ECUs), electronic/engine control modules (ECMs), embedded systems, microcontrollers, control modules, engine management systems (EMS), networked or “smart” appliances, MTC devices, M2M, IoT devices, and/or the like. In some embodiments, any of the UEs301may be IoT UEs, which may comprise a network access layer designed for low-power IoT applications utilizing short-lived UE connections. 
An IoT UE can utilize technologies such as M2M or MTC for exchanging data with an MTC server or device via a PLMN, ProSe or D2D communication, sensor networks, or IoT networks. The M2M or MTC exchange of data may be a machine-initiated exchange of data. An IoT network describes interconnecting IoT UEs, which may include uniquely identifiable embedded computing devices (within the Internet infrastructure), with short-lived connections. The IoT UEs may execute background applications (e.g., keep-alive messages, status updates, etc.) to facilitate the connections of the IoT network. The UEs301may be configured to connect, for example, communicatively couple, with an or RAN310. In embodiments, the RAN310may be an NG RAN or a 5G RAN, an E-UTRAN, or a legacy RAN, such as a UTRAN or GERAN. As used herein, the term “NG RAN” or the like may refer to a RAN310that operates in an NR or 5G system300, and the term “E-UTRAN” or the like may refer to a RAN310that operates in an LTE or 4G system300. The UEs301utilize connections (or channels)303and304, respectively, each of which comprises a physical communications interface or layer (discussed in further detail below). In this example, the connections303and304are illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols, such as a GSM protocol, a CDMA network protocol, a PTT protocol, a POC protocol, a UMTS protocol, a 3GPP LTE protocol, a 5G protocol, a NR protocol, and/or any of the other communications protocols discussed herein. In embodiments, the UEs301may directly exchange communication data via a ProSe interface305. The ProSe interface305may alternatively be referred to as a SL interface305and may comprise one or more logical channels, including but not limited to a PSCCH, a PSSCH, a PSDCH, and a PSBCH. The UE301bis shown to be configured to access an AP306(also referred to as “WLAN node306,” “WLAN306,” “WLAN Termination306,” “WT306” or the like) via connection307. The connection307can comprise a local wireless connection, such as a connection consistent with any IEEE 802.11 protocol, wherein the AP306would comprise a wireless fidelity (Wi-Fi®) router. In this example, the AP306is shown to be connected to the Internet without connecting to the core network of the wireless system (described in further detail below). In various embodiments, the UE301b, RAN310, and AP306may be configured to utilize LWA operation and/or LWIP operation. The LWA operation may involve the UE301bin RRC_CONNECTED being configured by a RAN node311a-bto utilize radio resources of LTE and WLAN. LWIP operation may involve the UE301busing WLAN radio resources (e.g., connection307) via IPsec protocol tunneling to authenticate and encrypt packets (e.g., IP packets) sent over the connection307. IPsec tunneling may include encapsulating the entirety of original IP packets and adding a new packet header, thereby protecting the original header of the IP packets. The RAN310can include one or more AN nodes or RAN nodes311aand311b(collectively referred to as “RAN nodes311” or “RAN node311”) that enable the connections303and304. As used herein, the terms “access node,” “access point,” or the like may describe equipment that provides the radio baseband functions for data and/or voice connectivity between a network and one or more users. 
These access nodes can be referred to as BS, gNBs, RAN nodes, eNBs, NodeBs, RSUs, TRxPs or TRPs, and so forth, and can comprise ground stations (e.g., terrestrial access points) or satellite stations providing coverage within a geographic area (e.g., a cell). As used herein, the term “NG RAN node” or the like may refer to a RAN node311that operates in an NR or 5G system300(for example, a gNB), and the term “E-UTRAN node” or the like may refer to a RAN node311that operates in an LTE or 4G system300(e.g., an eNB). According to various embodiments, the RAN nodes311may be implemented as one or more of a dedicated physical device such as a macrocell base station, and/or a low power (LP) base station for providing femtocells, picocells or other like cells having smaller coverage areas, smaller user capacity, or higher bandwidth compared to macrocells. In some embodiments, all or parts of the RAN nodes311may be implemented as one or more software entities running on server computers as part of a virtual network, which may be referred to as a CRAN and/or a virtual baseband unit pool (vBBUP). In these embodiments, the CRAN or vBBUP may implement a RAN function split, such as a PDCP split wherein RRC and PDCP layers are operated by the CRAN/vBBUP and other L2 protocol entities are operated by individual RAN nodes311; a MAC/PHY split wherein RRC, PDCP, RLC, and MAC layers are operated by the CRAN/vBBUP and the PHY layer is operated by individual RAN nodes311; or a “lower PHY” split wherein RRC, PDCP, RLC, MAC layers and upper portions of the PHY layer are operated by the CRAN/vBBUP and lower portions of the PHY layer are operated by individual RAN nodes311. This virtualized framework allows the freed-up processor cores of the RAN nodes311to perform other virtualized applications. In some implementations, an individual RAN node311may represent individual gNB-DUs that are connected to a gNB-CU via individual F1 interfaces (not shown byFIG.3). In these implementations, the gNB-DUs may include one or more remote radio heads or RFEMs (see, e.g.,FIG.4), and the gNB-CU may be operated by a server that is located in the RAN310(not shown) or by a server pool in a similar manner as the CRAN/vBBUP. Additionally or alternatively, one or more of the RAN nodes311may be next generation eNBs (ng-eNBs), which are RAN nodes that provide E-UTRA user plane and control plane protocol terminations toward the UEs301, and are connected to a 5GC via an NG interface (discussed infra). In V2X scenarios one or more of the RAN nodes311may be or act as RSUs. The term “Road Side Unit” or “RSU” may refer to any transportation infrastructure entity used for V2X communications. An RSU may be implemented in or by a suitable RAN node or a stationary (or relatively stationary) UE, where an RSU implemented in or by a UE may be referred to as a “UE-type RSU,” an RSU implemented in or by an eNB may be referred to as an “eNB-type RSU,” an RSU implemented in or by a gNB may be referred to as a “gNB-type RSU,” and the like. In one example, an RSU is a computing device coupled with radio frequency circuitry located on a roadside that provides connectivity support to passing vehicle UEs301(vUEs301). The RSU may also include internal data storage circuitry to store intersection map geometry, traffic statistics, media, as well as applications/software to sense and control ongoing vehicular and pedestrian traffic. 
The RSU may operate on the 5.9 GHz Dedicated Short Range Communications (DSRC) band to provide very low latency communications required for high speed events, such as crash avoidance, traffic warnings, and the like. Additionally or alternatively, the RSU may operate on the cellular V2X band to provide the aforementioned low latency communications, as well as other cellular communications services. Additionally or alternatively, the RSU may operate as a Wi-Fi hotspot (2.4 GHz band) and/or provide connectivity to one or more cellular networks to provide uplink and downlink communications. The computing device(s) and some or all of the radiofrequency circuitry of the RSU may be packaged in a weatherproof enclosure suitable for outdoor installation, and may include a network interface controller to provide a wired connection (e.g., Ethernet) to a traffic signal controller and/or a backhaul network. Any of the RAN nodes311can terminate the air interface protocol and can be the first point of contact for the UEs301. In some embodiments, any of the RAN nodes311can fulfill various logical functions for the RAN310including, but not limited to, radio network controller (RNC) functions such as radio bearer management, uplink and downlink dynamic radio resource management and data packet scheduling, and mobility management. In embodiments, the UEs301can be configured to communicate using OFDM communication signals with each other or with any of the RAN nodes311over a multicarrier communication channel in accordance with various communication techniques, such as, but not limited to, an OFDMA communication technique (e.g., for downlink communications) or a SC-FDMA communication technique (e.g., for uplink and ProSe or sidelink communications), although the scope of the embodiments is not limited in this respect. The OFDM signals can comprise a plurality of orthogonal subcarriers. In some embodiments, a downlink resource grid can be used for downlink transmissions from any of the RAN nodes311to the UEs301, while uplink transmissions can utilize similar techniques. The grid can be a time-frequency grid, called a resource grid or time-frequency resource grid, which is the physical resource in the downlink in each slot. Such a time-frequency plane representation is a common practice for OFDM systems, which makes it intuitive for radio resource allocation. Each column and each row of the resource grid corresponds to one OFDM symbol and one OFDM subcarrier, respectively. The duration of the resource grid in the time domain corresponds to one slot in a radio frame. The smallest time-frequency unit in a resource grid is denoted as a resource element. Each resource grid comprises a number of resource blocks, which describe the mapping of certain physical channels to resource elements. Each resource block comprises a collection of resource elements; in the frequency domain, this may represent the smallest quantity of resources that currently can be allocated. There are several different physical downlink channels that are conveyed using such resource blocks. According to various embodiments, the UEs301and the RAN nodes311communicate (for example, transmit and receive) data over a licensed medium (also referred to as the “licensed spectrum” and/or the “licensed band”) and an unlicensed shared medium (also referred to as the “unlicensed spectrum” and/or the “unlicensed band”). 
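As a concrete illustration of the resource grid structure described above, the short sketch below counts resource elements per resource block and per slot. The numerology used (12 subcarriers per resource block and 7 OFDM symbols per slot, i.e., a normal cyclic prefix) is an assumption chosen only to make the arithmetic tangible; it is not mandated by the description.

```python
# Illustrative resource grid arithmetic (assumed LTE-like numerology, normal cyclic prefix).
SUBCARRIERS_PER_RB = 12      # assumption: subcarriers spanned by one resource block
OFDM_SYMBOLS_PER_SLOT = 7    # assumption: OFDM symbols in one slot

def resource_elements_per_rb() -> int:
    """A resource element is one subcarrier for one OFDM symbol; a resource block groups them."""
    return SUBCARRIERS_PER_RB * OFDM_SYMBOLS_PER_SLOT

def resource_elements_per_slot(num_rbs: int) -> int:
    """Total resource elements available in one slot for a carrier spanning num_rbs resource blocks."""
    return num_rbs * resource_elements_per_rb()

if __name__ == "__main__":
    print(resource_elements_per_rb())        # 84 resource elements per RB per slot
    print(resource_elements_per_slot(100))   # e.g. 8400 resource elements for a 100-RB carrier
```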
The licensed spectrum may include channels that operate in the frequency range of approximately 400 MHz to approximately 3.8 GHz, whereas the unlicensed spectrum may include the 5 GHz band. To operate in the unlicensed spectrum, the UEs301and the RAN nodes311may operate using LAA, eLAA, and/or feLAA mechanisms. In these implementations, the UEs301and the RAN nodes311may perform one or more known medium-sensing operations and/or carrier-sensing operations in order to determine whether one or more channels in the unlicensed spectrum is unavailable or otherwise occupied prior to transmitting in the unlicensed spectrum. The medium/carrier sensing operations may be performed according to a listen-before-talk (LBT) protocol. LBT is a mechanism whereby equipment (for example, UEs301RAN nodes311, etc.) senses a medium (for example, a channel or carrier frequency) and transmits when the medium is sensed to be idle (or when a specific channel in the medium is sensed to be unoccupied). The medium sensing operation may include CCA, which utilizes at least ED to determine the presence or absence of other signals on a channel in order to determine if a channel is occupied or clear. This LBT mechanism allows cellular/LAA networks to coexist with incumbent systems in the unlicensed spectrum and with other LAA networks. ED may include sensing RF energy across an intended transmission band for a period of time and comparing the sensed RF energy to a predefined or configured threshold. Typically, the incumbent systems in the 5 GHz band are WLANs based on IEEE 802.11 technologies. WLAN employs a contention-based channel access mechanism, called CSMA/CA. Here, when a WLAN node (e.g., a mobile station (MS) such as UE301, AP306, or the like) intends to transmit, the WLAN node may first perform CCA before transmission. Additionally, a backoff mechanism is used to avoid collisions in situations where more than one WLAN node senses the channel as idle and transmits at the same time. The backoff mechanism may be a counter that is drawn randomly within the CWS, which is increased exponentially upon the occurrence of collision and reset to a minimum value when the transmission succeeds. The LBT mechanism designed for LAA is somewhat similar to the CSMA/CA of WLAN. In some implementations, the LBT procedure for DL or UL transmission bursts including PDSCH or PUSCH transmissions, respectively, may have an LAA contention window that is variable in length between X and Y ECCA slots, where X and Y are minimum and maximum values for the CWSs for LAA. In one example, the minimum CWS for an LAA transmission may be 9 microseconds (μs); however, the size of the CWS and a MCOT (for example, a transmission burst) may be based on governmental regulatory requirements. The LAA mechanisms are built upon CA technologies of LTE-Advanced systems. In CA, each aggregated carrier is referred to as a CC. A CC may have a bandwidth of 1.4, 3, 5, 10, 15 or 20 MHz and a maximum of five CCs can be aggregated, and therefore, a maximum aggregated bandwidth is 100 MHz. In FDD systems, the number of aggregated carriers can be different for DL and UL, where the number of UL CCs is equal to or lower than the number of DL component carriers. In some cases, individual CCs can have a different bandwidth than other CCs. In TDD systems, the number of CCs as well as the bandwidths of each CC is usually the same for DL and UL. CA also comprises individual serving cells to provide individual CCs. 
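The sketch below illustrates the listen-before-talk behavior described above: energy detection against a threshold (CCA), a random backoff counter drawn within the contention window, exponential growth of the CWS on collision, and a reset to the minimum on success. The threshold value, window bounds, and callable hooks (sense, transmit, collided) are assumptions used only to make the example self-contained; they do not reflect any particular regulatory configuration.

```python
import random

# Illustrative LBT/CCA sketch: energy detection plus exponential contention-window backoff.
ED_THRESHOLD_DBM = -72.0      # assumed energy-detection threshold
CWS_MIN, CWS_MAX = 15, 1023   # assumed minimum/maximum contention window sizes

def channel_is_idle(sensed_energy_dbm: float) -> bool:
    """CCA: treat the channel as clear when sensed RF energy is below the ED threshold."""
    return sensed_energy_dbm < ED_THRESHOLD_DBM

def lbt_transmit(sense, transmit, collided) -> None:
    """Draw a random backoff within the CWS, count it down during idle observations,
    transmit, then grow the CWS on collision or reset it on success."""
    cws = CWS_MIN
    while True:
        backoff = random.randint(0, cws)
        while backoff > 0:
            if channel_is_idle(sense()):
                backoff -= 1                    # counter decrements only while the medium is idle
        transmit()
        if collided():
            cws = min(2 * cws + 1, CWS_MAX)     # exponential increase upon collision
        else:
            cws = CWS_MIN                       # reset to minimum when the transmission succeeds
            return
```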
The coverage of the serving cells may differ, for example, because CCs on different frequency bands will experience different pathloss. A primary service cell or PCell may provide a PCC for both UL and DL, and may handle RRC and NAS related activities. The other serving cells are referred to as SCells, and each SCell may provide an individual SCC for both UL and DL. The SCCs may be added and removed as required, while changing the PCC may require the UE301to undergo a handover. In LAA, eLAA, and feLAA, some or all of the SCells may operate in the unlicensed spectrum (referred to as “LAA SCells”), and the LAA SCells are assisted by a PCell operating in the licensed spectrum. When a UE is configured with more than one LAA SCell, the UE may receive UL grants on the configured LAA SCells indicating different PUSCH starting positions within a same subframe. The PDSCH carries user data and higher-layer signaling to the UEs301. The PDCCH carries information about the transport format and resource allocations related to the PDSCH channel, among other things. It may also inform the UEs301about the transport format, resource allocation, and HARQ information related to the uplink shared channel. Typically, downlink scheduling (assigning control and shared channel resource blocks to the UE301bwithin a cell) may be performed at any of the RAN nodes311based on channel quality information fed back from any of the UEs301. The downlink resource assignment information may be sent on the PDCCH used for (e.g., assigned to) each of the UEs301. The PDCCH uses CCEs to convey the control information. Before being mapped to resource elements, the PDCCH complex-valued symbols may first be organized into quadruplets, which may then be permuted using a sub-block interleaver for rate matching. Each PDCCH may be transmitted using one or more of these CCEs, where each CCE may correspond to nine sets of four physical resource elements known as REGs. Four Quadrature Phase Shift Keying (QPSK) symbols may be mapped to each REG. The PDCCH can be transmitted using one or more CCEs, depending on the size of the DCI and the channel condition. There can be four or more different PDCCH formats defined in LTE with different numbers of CCEs (e.g., aggregation level, L=1, 2, 4, or 8). Some embodiments may use concepts for resource allocation for control channel information that are an extension of the above-described concepts. For example, some embodiments may utilize an EPDCCH that uses PDSCH resources for control information transmission. The EPDCCH may be transmitted using one or more ECCEs. Similar to above, each ECCE may correspond to nine sets of four physical resource elements known as an EREGs. An ECCE may have other numbers of EREGs in some situations. The RAN nodes311may be configured to communicate with one another via interface312. In embodiments where the system300is an LTE system (e.g., when CN320is an EPC), the interface312may be an X2 interface312. The X2 interface may be defined between two or more RAN nodes311(e.g., two or more eNBs and the like) that connect to EPC320, and/or between two eNBs connecting to EPC320. In some implementations, the X2 interface may include an X2 user plane interface (X2-U) and an X2 control plane interface (X2-C). The X2-U may provide flow control mechanisms for user data packets transferred over the X2 interface, and may be used to communicate information about the delivery of user data between eNBs. 
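To make the PDCCH resource arithmetic described above concrete: with each CCE corresponding to nine REGs of four physical resource elements, and one QPSK symbol (two bits) per resource element, the sketch below computes the resource elements and raw channel bits available at each aggregation level. The helper function is illustrative only and is not part of any embodiment.

```python
# Illustrative PDCCH capacity arithmetic based on the CCE/REG structure described above.
REGS_PER_CCE = 9          # each CCE corresponds to nine REGs
RES_PER_REG = 4           # each REG is four physical resource elements
BITS_PER_RE_QPSK = 2      # one QPSK symbol per resource element carries two bits

def pdcch_capacity(aggregation_level: int) -> tuple[int, int]:
    """Return (resource elements, raw QPSK bits) for a PDCCH occupying L CCEs."""
    res = aggregation_level * REGS_PER_CCE * RES_PER_REG
    return res, res * BITS_PER_RE_QPSK

if __name__ == "__main__":
    for level in (1, 2, 4, 8):   # the LTE aggregation levels mentioned above
        res, bits = pdcch_capacity(level)
        print(f"L={level}: {res} resource elements, {bits} raw bits")
```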
For example, the X2-U may provide specific sequence number information for user data transferred from a MeNB to an SeNB; information about successful in sequence delivery of PDCP PDUs to a UE301from an SeNB for user data; information of PDCP PDUs that were not delivered to a UE301; information about a current minimum desired buffer size at the SeNB for transmitting to the UE user data; and the like. The X2-C may provide intra-LTE access mobility functionality, including context transfers from source to target eNBs, user plane transport control, etc.; load management functionality; as well as inter-cell interference coordination functionality. In embodiments where the system300is a 5G or NR system (e.g., when CN320is an 5GC), the interface312may be an Xn interface312. The Xn interface is defined between two or more RAN nodes311(e.g., two or more gNBs and the like) that connect to 5GC320, between a RAN node311(e.g., a gNB) connecting to 5GC320and an eNB, and/or between two eNBs connecting to 5GC320. In some implementations, the Xn interface may include an Xn user plane (Xn-U) interface and an Xn control plane (Xn-C) interface. The Xn-U may provide non-guaranteed delivery of user plane PDUs and support/provide data forwarding and flow control functionality. The Xn-C may provide management and error handling functionality, functionality to manage the Xn-C interface; mobility support for UE301in a connected mode (e.g., CM-CONNECTED) including functionality to manage the UE mobility for connected mode between one or more RAN nodes311. The mobility support may include context transfer from an old (source) serving RAN node311to new (target) serving RAN node311; and control of user plane tunnels between old (source) serving RAN node311to new (target) serving RAN node311. A protocol stack of the Xn-U may include a transport network layer built on Internet Protocol (IP) transport layer, and a GTP-U layer on top of a UDP and/or IP layer(s) to carry user plane PDUs. The Xn-C protocol stack may include an application layer signaling protocol (referred to as Xn Application Protocol (Xn-AP)) and a transport network layer that is built on SCTP. The SCTP may be on top of an IP layer, and may provide the guaranteed delivery of application layer messages. In the transport IP layer, point-to-point transmission is used to deliver the signaling PDUs. In other implementations, the Xn-U protocol stack and/or the Xn-C protocol stack may be same or similar to the user plane and/or control plane protocol stack(s) shown and described herein. The RAN310is shown to be communicatively coupled to a core network—in this embodiment, core network (CN)320. The CN320may comprise a plurality of network elements322, which are configured to offer various data and telecommunications services to customers/subscribers (e.g., users of UEs301) who are connected to the CN320via the RAN310. The components of the CN320may be implemented in one physical node or separate physical nodes including components to read and execute instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium). In some embodiments, NFV may be utilized to virtualize any or all of the above-described network node functions via executable instructions stored in one or more computer-readable storage mediums (described in further detail below). 
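As a compact summary of the Xn protocol stacks described above, the sketch below lists the layering of the Xn-U and Xn-C interfaces from the transport layer upward. Representing the stacks as simple tuples is an illustrative convention only.

```python
# Illustrative layering of the Xn interface protocol stacks described above,
# listed from the lowest transport layer to the highest application layer.
XN_U_STACK = ("IP", "UDP", "GTP-U")     # user plane: GTP-U over UDP/IP
XN_C_STACK = ("IP", "SCTP", "Xn-AP")    # control plane: Xn-AP over SCTP/IP

def describe(stack: tuple[str, ...]) -> str:
    """Render a protocol stack as a '/'-separated string, highest layer first."""
    return "/".join(reversed(stack))

if __name__ == "__main__":
    print("Xn-U:", describe(XN_U_STACK))   # GTP-U/UDP/IP
    print("Xn-C:", describe(XN_C_STACK))   # Xn-AP/SCTP/IP
```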
A logical instantiation of the CN320may be referred to as a network slice, and a logical instantiation of a portion of the CN320may be referred to as a network sub-slice. NFV architectures and infrastructures may be used to virtualize one or more network functions, alternatively performed by proprietary hardware, onto physical resources comprising a combination of industry-standard server hardware, storage hardware, or switches. In other words, NFV systems can be used to execute virtual or reconfigurable implementations of one or more EPC components/functions. Generally, the application server330may be an element offering applications that use IP bearer resources with the core network (e.g., UMTS PS domain, LTE PS data services, etc.). The application server330can also be configured to support one or more communication services (e.g., VoIP sessions, PTT sessions, group communication sessions, social networking services, etc.) for the UEs301via the EPC320. In embodiments, the CN320may be a 5GC (referred to as “5GC320” or the like), and the RAN310may be connected with the CN320via an NG interface313. In embodiments, the NG interface313may be split into two parts, an NG user plane (NG-U) interface314, which carries traffic data between the RAN nodes311and a UPF, and the NG control plane (NG-C) interface315, which is a signaling interface between the RAN nodes311and AMFs. In embodiments, the CN320may be a 5G CN (referred to as “5GC320” or the like), while in other embodiments, the CN320may be an EPC. Where CN320is an EPC (referred to as “EPC320” or the like), the RAN310may be connected with the CN320via an S1 interface313. In embodiments, the S1 interface313may be split into two parts, an S1 user plane (S1-U) interface314, which carries traffic data between the RAN nodes311and the S-GW, and the S1-MME interface315, which is a signaling interface between the RAN nodes311and MMEs. FIG.4illustrates an example of infrastructure equipment400in accordance with various embodiments. The infrastructure equipment400(or “system400”) may be implemented as a base station, radio head, RAN node such as the RAN nodes311and/or AP306shown and described previously, application server(s)330, and/or any other element/device discussed herein. In other examples, the system400could be implemented in or by a UE. The system400includes application circuitry405, baseband circuitry410, one or more radio front end modules (RFEMs)415, memory circuitry420, power management integrated circuitry (PMIC)425, power tee circuitry430, network controller circuitry435, network interface connector440, satellite positioning circuitry445, and user interface450. In some embodiments, the device400may include additional elements such as, for example, memory/storage, display, camera, sensor, or input/output (I/O) interface. In other embodiments, the components described below may be included in more than one device. For example, said circuitries may be separately included in more than one device for CRAN, vBBU, or other like implementations. 
Application circuitry405includes circuitry such as, but not limited to one or more processors (or processor cores), cache memory, and one or more of low drop-out voltage regulators (LDOs), interrupt controllers, serial interfaces such as SPI, I2C or universal programmable serial interface module, real time clock (RTC), timer-counters including interval and watchdog timers, general purpose input/output (I/O or IO), memory card controllers such as Secure Digital (SD) MultiMediaCard (MMC) or similar, Universal Serial Bus (USB) interfaces, Mobile Industry Processor Interface (MIPI) interfaces and Joint Test Access Group (JTAG) test access ports. The processors (or cores) of the application circuitry405may be coupled with or may include memory/storage elements and may be configured to execute instructions stored in the memory/storage to enable various applications or operating systems to run on the system400. In some implementations, the memory/storage elements may be on-chip memory circuitry, which may include any suitable volatile and/or non-volatile memory, such as DRAM, SRAM, EPROM, EEPROM, Flash memory, solid-state memory, and/or any other type of memory device technology, such as those discussed herein. The processor(s) of application circuitry405may include, for example, one or more processor cores (CPUs), one or more application processors, one or more graphics processing units (GPUs), one or more reduced instruction set computing (RISC) processors, one or more Acorn RISC Machine (ARM) processors, one or more complex instruction set computing (CISC) processors, one or more digital signal processors (DSP), one or more FPGAs, one or more PLDs, one or more ASICs, one or more microprocessors or controllers, or any suitable combination thereof. In some embodiments, the application circuitry405may comprise, or may be, a special-purpose processor/controller to operate according to the various embodiments herein. As examples, the processor(s) of application circuitry405may include one or more Intel Pentium®, Core®, or Xeon® processor(s); Advanced Micro Devices (AMD) Ryzen® processor(s), Accelerated Processing Units (APUs), or Epyc® processors; ARM-based processor(s) licensed from ARM Holdings, Ltd. such as the ARM Cortex-A family of processors and the ThunderX2® provided by Cavium™, Inc.; a MIPS-based design from MIPS Technologies, Inc. such as MIPS Warrior P-class processors; and/or the like. In some embodiments, the system400may not utilize application circuitry405, and instead may include a special-purpose processor/controller to process IP data received from an EPC or 5GC, for example. In some implementations, the application circuitry405may include one or more hardware accelerators, which may be microprocessors, programmable processing devices, or the like. The one or more hardware accelerators may include, for example, computer vision (CV) and/or deep learning (DL) accelerators. As examples, the programmable processing devices may be one or more a field-programmable devices (FPDs) such as field-programmable gate arrays (FPGAs) and the like; programmable logic devices (PLDs) such as complex PLDs (CPLDs), high-capacity PLDs (HCPLDs), and the like; ASICs such as structured ASICs and the like; programmable SoCs (PSoCs); and the like. In such implementations, the circuitry of application circuitry405may comprise logic blocks or logic fabric, and other interconnected resources that may be programmed to perform various functions, such as the procedures, methods, functions, etc. 
of the various embodiments discussed herein. In such embodiments, the circuitry of application circuitry405may include memory cells (e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, static memory (e.g., static random access memory (SRAM), anti-fuses, etc.)) used to store logic blocks, logic fabric, data, etc. in look-up-tables (LUTs) and the like. The baseband circuitry410may be implemented, for example, as a solder-down substrate including one or more integrated circuits, a single packaged integrated circuit soldered to a main circuit board or a multi-chip module containing two or more integrated circuits. The various hardware electronic elements of baseband circuitry410are discussed infra with regard toFIG.6. User interface circuitry450may include one or more user interfaces designed to enable user interaction with the system400or peripheral component interfaces designed to enable peripheral component interaction with the system400. User interfaces may include, but are not limited to, one or more physical or virtual buttons (e.g., a reset button), one or more indicators (e.g., light emitting diodes (LEDs)), a physical keyboard or keypad, a mouse, a touchpad, a touchscreen, speakers or other audio emitting devices, microphones, a printer, a scanner, a headset, a display screen or display device, etc. Peripheral component interfaces may include, but are not limited to, a nonvolatile memory port, a universal serial bus (USB) port, an audio jack, a power supply interface, etc. The radio front end modules (RFEMs)415may comprise a millimeter wave (mmWave) RFEM and one or more sub-mmWave radio frequency integrated circuits (RFICs). In some implementations, the one or more sub-mmWave RFICs may be physically separated from the mmWave RFEM. The RFICs may include connections to one or more antennas or antenna arrays (see e.g., antenna array611ofFIG.6infra), and the RFEM may be connected to multiple antennas. In alternative implementations, both mmWave and sub-mmWave radio functions may be implemented in the same physical RFEM415, which incorporates both mmWave antennas and sub-mmWave. The memory circuitry420may include one or more of volatile memory including dynamic random access memory (DRAM) and/or synchronous dynamic random access memory (SDRAM), and nonvolatile memory (NVM) including high-speed electrically erasable memory (commonly referred to as Flash memory), phase change random access memory (PRAM), magnetoresistive random access memory (MRAM), etc., and may incorporate the three-dimensional (3D) cross-point (XPOINT) memories from Intel® and Micron®. Memory circuitry420may be implemented as one or more of solder down packaged integrated circuits, socketed memory modules and plug-in memory cards. The PMIC425may include voltage regulators, surge protectors, power alarm detection circuitry, and one or more backup power sources such as a battery or capacitor. The power alarm detection circuitry may detect one or more of brown out (under-voltage) and surge (over-voltage) conditions. The power tee circuitry430may provide for electrical power drawn from a network cable to provide both power supply and data connectivity to the infrastructure equipment400using a single cable. The network controller circuitry435may provide connectivity to a network using a standard network interface protocol such as Ethernet, Ethernet over GRE Tunnels, Ethernet over Multiprotocol Label Switching (MPLS), or some other suitable protocol. 
Network connectivity may be provided to/from the infrastructure equipment400via network interface connector440using a physical connection, which may be electrical (commonly referred to as a “copper interconnect”), optical, or wireless. The network controller circuitry435may include one or more dedicated processors and/or FPGAs to communicate using one or more of the aforementioned protocols. In some implementations, the network controller circuitry435may include multiple controllers to provide connectivity to other networks using the same or different protocols. The positioning circuitry445includes circuitry to receive and decode signals transmitted/broadcasted by a positioning network of a global navigation satellite system (GNSS). Examples of navigation satellite constellations (or GNSS) include United States' Global Positioning System (GPS), Russia's Global Navigation System (GLONASS), the European Union's Galileo system, China's BeiDou Navigation Satellite System, a regional navigation system or GNSS augmentation system (e.g., Navigation with Indian Constellation (NAVIC), Japan's Quasi-Zenith Satellite System (QZSS), France's Doppler Orbitography and Radio-positioning Integrated by Satellite (DORIS), etc.), or the like. The positioning circuitry445comprises various hardware elements (e.g., including hardware devices such as switches, filters, amplifiers, antenna elements, and the like to facilitate OTA communications) to communicate with components of a positioning network, such as navigation satellite constellation nodes. In some embodiments, the positioning circuitry445may include a Micro-Technology for Positioning, Navigation, and Timing (Micro-PNT) IC that uses a master timing clock to perform position tracking/estimation without GNSS assistance. The positioning circuitry445may also be part of, or interact with, the baseband circuitry410and/or RFEMs415to communicate with the nodes and components of the positioning network. The positioning circuitry445may also provide position data and/or time data to the application circuitry405, which may use the data to synchronize operations with various infrastructure (e.g., RAN nodes311, etc.), or the like. The components shown byFIG.4may communicate with one another using interface circuitry, which may include any number of bus and/or interconnect (IX) technologies such as industry standard architecture (ISA), extended ISA (EISA), peripheral component interconnect (PCI), peripheral component interconnect extended (PCIx), PCI express (PCIe), or any number of other technologies. The bus/IX may be a proprietary bus, for example, used in a SoC based system. Other bus/IX systems may be included, such as an I2C interface, an SPI interface, point to point interfaces, and a power bus, among others. FIG.5illustrates an example of a platform500(or “device500”) in accordance with various embodiments. In embodiments, the computer platform500may be suitable for use as UEs301, application servers330, and/or any other element/device discussed herein. The platform500may include any combinations of the components shown in the example. The components of platform500may be implemented as integrated circuits (ICs), portions thereof, discrete electronic devices, or other modules, logic, hardware, software, firmware, or a combination thereof adapted in the computer platform500, or as components otherwise incorporated within a chassis of a larger system. The block diagram ofFIG.5is intended to show a high level view of components of the computer platform500. 
However, some of the components shown may be omitted, additional components may be present, and a different arrangement of the components shown may occur in other implementations. Application circuitry505includes circuitry such as, but not limited to one or more processors (or processor cores), cache memory, and one or more of LDOs, interrupt controllers, serial interfaces such as SPI, I2C or universal programmable serial interface module, RTC, timer-counters including interval and watchdog timers, general purpose I/O, memory card controllers such as SD MMC or similar, USB interfaces, MIPI interfaces, and JTAG test access ports. The processors (or cores) of the application circuitry505may be coupled with or may include memory/storage elements and may be configured to execute instructions stored in the memory/storage to enable various applications or operating systems to run on the system500. In some implementations, the memory/storage elements may be on-chip memory circuitry, which may include any suitable volatile and/or non-volatile memory, such as DRAM, SRAM, EPROM, EEPROM, Flash memory, solid-state memory, and/or any other type of memory device technology, such as those discussed herein. The processor(s) of application circuitry505may include, for example, one or more processor cores, one or more application processors, one or more GPUs, one or more RISC processors, one or more ARM processors, one or more CISC processors, one or more DSP, one or more FPGAs, one or more PLDs, one or more ASICs, one or more microprocessors or controllers, a multithreaded processor, an ultra-low voltage processor, an embedded processor, some other known processing element, or any suitable combination thereof. In some embodiments, the application circuitry505may comprise, or may be, a special-purpose processor/controller to operate according to the various embodiments herein. As examples, the processor(s) of application circuitry505may include an Intel® Architecture Core™ based processor, such as a Quark™, an Atom™, an i3, an i5, an i7, or an MCU-class processor, or another such processor available from Intel® Corporation, Santa Clara, CA. The processors of the application circuitry505may also be one or more of Advanced Micro Devices (AMD) Ryzen® processor(s) or Accelerated Processing Units (APUs); A5-A9 processor(s) from Apple® Inc., Snapdragon™ processor(s) from Qualcomm® Technologies, Inc., Texas Instruments, Inc.® Open Multimedia Applications Platform (OMAP)™ processor(s); a MIPS-based design from MIPS Technologies, Inc. such as MIPS Warrior M-class, Warrior I-class, and Warrior P-class processors; an ARM-based design licensed from ARM Holdings, Ltd., such as the ARM Cortex-A, Cortex-R, and Cortex-M family of processors; or the like. In some implementations, the application circuitry505may be a part of a system on a chip (SoC) in which the application circuitry505and other components are formed into a single integrated circuit, or a single package, such as the Edison™ or Galileo™ SoC boards from Intel® Corporation. Additionally or alternatively, application circuitry505may include circuitry such as, but not limited to, one or more field-programmable devices (FPDs) such as FPGAs and the like; programmable logic devices (PLDs) such as complex PLDs (CPLDs), high-capacity PLDs (HCPLDs), and the like; ASICs such as structured ASICs and the like; programmable SoCs (PSoCs); and the like. 
In such embodiments, the circuitry of application circuitry505may comprise logic blocks or logic fabric, and other interconnected resources that may be programmed to perform various functions, such as the procedures, methods, functions, etc. of the various embodiments discussed herein. In such embodiments, the circuitry of application circuitry505may include memory cells (e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, static memory (e.g., static random access memory (SRAM), anti-fuses, etc.)) used to store logic blocks, logic fabric, data, etc. in look-up tables (LUTs) and the like. The baseband circuitry510may be implemented, for example, as a solder-down substrate including one or more integrated circuits, a single packaged integrated circuit soldered to a main circuit board or a multi-chip module containing two or more integrated circuits. The various hardware electronic elements of baseband circuitry510are discussed infra with regard toFIG.6. The RFEMs515may comprise a millimeter wave (mmWave) RFEM and one or more sub-mmWave radio frequency integrated circuits (RFICs). In some implementations, the one or more sub-mmWave RFICs may be physically separated from the mmWave RFEM. The RFICs may include connections to one or more antennas or antenna arrays (see e.g., antenna array611ofFIG.6infra), and the RFEM may be connected to multiple antennas. In alternative implementations, both mmWave and sub-mmWave radio functions may be implemented in the same physical RFEM515, which incorporates both mmWave antennas and sub-mmWave. The memory circuitry520may include any number and type of memory devices used to provide for a given amount of system memory. As examples, the memory circuitry520may include one or more of volatile memory including random access memory (RAM), dynamic RAM (DRAM) and/or synchronous dynamic RAM (SDRAM), and nonvolatile memory (NVM) including high-speed electrically erasable memory (commonly referred to as Flash memory), phase change random access memory (PRAM), magnetoresistive random access memory (MRAM), etc. The memory circuitry520may be developed in accordance with a Joint Electron Devices Engineering Council (JEDEC) low power double data rate (LPDDR)-based design, such as LPDDR2, LPDDR3, LPDDR4, or the like. Memory circuitry520may be implemented as one or more of solder down packaged integrated circuits, single die package (SDP), dual die package (DDP) or quad die package (Q17P), socketed memory modules, dual inline memory modules (DIMMs) including microDIMMs or MiniDIMMs, and/or soldered onto a motherboard via a ball grid array (BGA). In low power implementations, the memory circuitry520may be on-die memory or registers associated with the application circuitry505. To provide for persistent storage of information such as data, applications, operating systems and so forth, memory circuitry520may include one or more mass storage devices, which may include, inter alia, a solid state disk drive (SSDD), hard disk drive (HDD), a micro HDD, resistance change memories, phase change memories, holographic memories, or chemical memories, among others. For example, the computer platform500may incorporate the three-dimensional (3D) cross-point (XPOINT) memories from Intel® and Micron®. Removable memory circuitry523may include devices, circuitry, enclosures/housings, ports or receptacles, etc. used to couple portable data storage devices with the platform500. 
These portable data storage devices may be used for mass storage purposes, and may include, for example, flash memory cards (e.g., Secure Digital (SD) cards, microSD cards, xD picture cards, and the like), and USB flash drives, optical discs, external HDDs, and the like. The platform500may also include interface circuitry (not shown) that is used to connect external devices with the platform500. The external devices connected to the platform500via the interface circuitry include sensor circuitry521and electro-mechanical components (EMCs)522, as well as removable memory devices coupled to removable memory circuitry523. The sensor circuitry521includes devices, modules, or subsystems whose purpose is to detect events or changes in its environment and send the information (sensor data) about the detected events to some other device, module, subsystem, etc. Examples of such sensors include, inter alia, inertial measurement units (IMUs) comprising accelerometers, gyroscopes, and/or magnetometers; microelectromechanical systems (MEMS) or nanoelectromechanical systems (NEMS) comprising 3-axis accelerometers, 3-axis gyroscopes, and/or magnetometers; level sensors; flow sensors; temperature sensors (e.g., thermistors); pressure sensors; barometric pressure sensors; gravimeters; altimeters; image capture devices (e.g., cameras or lensless apertures); light detection and ranging (LiDAR) sensors; proximity sensors (e.g., infrared radiation detector and the like), depth sensors, ambient light sensors, ultrasonic transceivers; microphones or other like audio capture devices; etc. EMCs522include devices, modules, or subsystems whose purpose is to enable platform500to change its state, position, and/or orientation, or move or control a mechanism or (sub)system. Additionally, EMCs522may be configured to generate and send messages/signalling to other components of the platform500to indicate a current state of the EMCs522. Examples of the EMCs522include one or more power switches, relays including electromechanical relays (EMRs) and/or solid state relays (SSRs), actuators (e.g., valve actuators, etc.), an audible sound generator, a visual warning device, motors (e.g., DC motors, stepper motors, etc.), wheels, thrusters, propellers, claws, clamps, hooks, and/or other like electro-mechanical components. In embodiments, platform500is configured to operate one or more EMCs522based on one or more captured events and/or instructions or control signals received from a service provider and/or various clients. In some implementations, the interface circuitry may connect the platform500with positioning circuitry545. The positioning circuitry545includes circuitry to receive and decode signals transmitted/broadcasted by a positioning network of a GNSS. Examples of navigation satellite constellations (or GNSS) include United States' GPS, Russia's GLONASS, the European Union's Galileo system, China's BeiDou Navigation Satellite System, a regional navigation system or GNSS augmentation system (e.g., NAVIC, Japan's QZSS, France's DORIS, etc.), or the like. The positioning circuitry545comprises various hardware elements (e.g., including hardware devices such as switches, filters, amplifiers, antenna elements, and the like to facilitate OTA communications) to communicate with components of a positioning network, such as navigation satellite constellation nodes. 
In some embodiments, the positioning circuitry545may include a Micro-PNT IC that uses a master timing clock to perform position tracking/estimation without GNSS assistance. The positioning circuitry545may also be part of, or interact with, the baseband circuitry510and/or RFEMs515to communicate with the nodes and components of the positioning network. The positioning circuitry545may also provide position data and/or time data to the application circuitry505, which may use the data to synchronize operations with various infrastructure (e.g., radio base stations), for turn-by-turn navigation applications, or the like. In some implementations, the interface circuitry may connect the platform500with Near-Field Communication (NFC) circuitry540. NFC circuitry540is configured to provide contactless, short-range communications based on radio frequency identification (RFID) standards, wherein magnetic field induction is used to enable communication between NFC circuitry540and NFC-enabled devices external to the platform500(e.g., an “NFC touchpoint”). NFC circuitry540comprises an NFC controller coupled with an antenna element and a processor coupled with the NFC controller. The NFC controller may be a chip/IC providing NFC functionalities to the NFC circuitry540by executing NFC controller firmware and an NFC stack. The NFC stack may be executed by the processor to control the NFC controller, and the NFC controller firmware may be executed by the NFC controller to control the antenna element to emit short-range RF signals. The RF signals may power a passive NFC tag (e.g., a microchip embedded in a sticker or wristband) to transmit stored data to the NFC circuitry540, or initiate data transfer between the NFC circuitry540and another active NFC device (e.g., a smartphone or an NFC-enabled POS terminal) that is proximate to the platform500. The driver circuitry546may include software and hardware elements that operate to control particular devices that are embedded in the platform500, attached to the platform500, or otherwise communicatively coupled with the platform500. The driver circuitry546may include individual drivers allowing other components of the platform500to interact with or control various input/output (I/O) devices that may be present within, or connected to, the platform500. For example, driver circuitry546may include a display driver to control and allow access to a display device, a touchscreen driver to control and allow access to a touchscreen interface of the platform500, sensor drivers to obtain sensor readings of sensor circuitry521and control and allow access to sensor circuitry521, EMC drivers to obtain actuator positions of the EMCs522and/or control and allow access to the EMCs522, a camera driver to control and allow access to an embedded image capture device, audio drivers to control and allow access to one or more audio devices. The power management integrated circuitry (PMIC)525(also referred to as “power management circuitry525”) may manage power provided to various components of the platform500. In particular, with respect to the baseband circuitry510, the PMIC525may control power-source selection, voltage scaling, battery charging, or DC-to-DC conversion. The PMIC525may often be included when the platform500is capable of being powered by a battery530, for example, when the device is included in a UE301. In some embodiments, the PMIC525may control, or otherwise be part of, various power saving mechanisms of the platform500. 
For example, if the platform500is in an RRC_Connected state, where it is still connected to the RAN node as it expects to receive traffic shortly, then it may enter a state known as Discontinuous Reception Mode (DRX) after a period of inactivity. During this state, the platform500may power down for brief intervals of time and thus save power. If there is no data traffic activity for an extended period of time, then the platform500may transition off to an RRC_Idle state, where it disconnects from the network and does not perform operations such as channel quality feedback, handover, etc. The platform500goes into a very low power state and performs paging, where it periodically wakes up to listen to the network and then powers down again. The platform500may not receive data in this state; in order to receive data, it must transition back to the RRC_Connected state. An additional power saving mode may allow a device to be unavailable to the network for periods longer than a paging interval (ranging from seconds to a few hours). During this time, the device is totally unreachable to the network and may power down completely. Any data sent during this time incurs a large delay and it is assumed the delay is acceptable. A battery530may power the platform500, although in some examples the platform500may be deployed in a fixed location, and may have a power supply coupled to an electrical grid. The battery530may be a lithium ion battery, a metal-air battery, such as a zinc-air battery, an aluminum-air battery, a lithium-air battery, and the like. In some implementations, such as in V2X applications, the battery530may be a typical lead-acid automotive battery. In some implementations, the battery530may be a “smart battery,” which includes or is coupled with a Battery Management System (BMS) or battery monitoring integrated circuitry. The BMS may be included in the platform500to track the state of charge (SoCh) of the battery530. The BMS may be used to monitor other parameters of the battery530to provide failure predictions, such as the state of health (SoH) and the state of function (SoF) of the battery530. The BMS may communicate the information of the battery530to the application circuitry505or other components of the platform500. The BMS may also include an analog-to-digital (ADC) converter that allows the application circuitry505to directly monitor the voltage of the battery530or the current flow from the battery530. The battery parameters may be used to determine actions that the platform500may perform, such as transmission frequency, network operation, sensing frequency, and the like. A power block, or other power supply coupled to an electrical grid may be coupled with the BMS to charge the battery530. In some examples, the power block XS30 may be replaced with a wireless power receiver to obtain the power wirelessly, for example, through a loop antenna in the computer platform500. In these examples, a wireless battery charging circuit may be included in the BMS. The specific charging circuits chosen may depend on the size of the battery530, and thus, the current required. The charging may be performed using the Airfuel standard promulgated by the Airfuel Alliance, the Qi wireless charging standard promulgated by the Wireless Power Consortium, or the Rezence charging standard promulgated by the Alliance for Wireless Power, among others. 
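Referring back to the power-saving behavior described at the start of this passage, the sketch below models the coarse transitions between RRC_Connected (with DRX), RRC_Idle (with paging), and a deeper power-saving mode. The state names follow the text; the numeric inactivity thresholds and the single transition rule are simplified assumptions introduced only for illustration.

```python
from enum import Enum, auto

class PowerState(Enum):
    RRC_CONNECTED = auto()   # connected; may use DRX to sleep between scheduled receptions
    RRC_IDLE = auto()        # disconnected; wakes periodically to listen for paging
    DEEP_SLEEP = auto()      # unreachable for longer than a paging interval (extra power saving)

def next_state(state: PowerState, idle_time_s: float) -> PowerState:
    """Simplified rule: longer inactivity pushes the platform into deeper sleep.
    The 10 s and 3600 s thresholds are illustrative assumptions, not specified values."""
    if state is PowerState.RRC_CONNECTED and idle_time_s > 10:
        return PowerState.RRC_IDLE
    if state is PowerState.RRC_IDLE and idle_time_s > 3600:
        return PowerState.DEEP_SLEEP
    return state

if __name__ == "__main__":
    print(next_state(PowerState.RRC_CONNECTED, idle_time_s=60))   # PowerState.RRC_IDLE
```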
User interface circuitry550includes various input/output (I/O) devices present within, or connected to, the platform500, and includes one or more user interfaces designed to enable user interaction with the platform500and/or peripheral component interfaces designed to enable peripheral component interaction with the platform500. The user interface circuitry550includes input device circuitry and output device circuitry. Input device circuitry includes any physical or virtual means for accepting an input including, inter alia, one or more physical or virtual buttons (e.g., a reset button), a physical keyboard, keypad, mouse, touchpad, touchscreen, microphones, scanner, headset, and/or the like. The output device circuitry includes any physical or virtual means for showing information or otherwise conveying information, such as sensor readings, actuator position(s), or other like information. Output device circuitry may include any number and/or combinations of audio or visual displays, including, inter alia, one or more simple visual outputs/indicators (e.g., binary status indicators such as light emitting diodes (LEDs)) and multi-character visual outputs, or more complex outputs such as display devices or touchscreens (e.g., Liquid Crystal Displays (LCDs), LED displays, quantum dot displays, projectors, etc.), with the output of characters, graphics, multimedia objects, and the like being generated or produced from the operation of the platform500. The output device circuitry may also include speakers or other audio emitting devices, printer(s), and/or the like. In some embodiments, the sensor circuitry521may be used as the input device circuitry (e.g., an image capture device, motion capture device, or the like) and one or more EMCs may be used as the output device circuitry (e.g., an actuator to provide haptic feedback or the like). In another example, NFC circuitry comprising an NFC controller coupled with an antenna element and a processing device may be included to read electronic tags and/or connect with another NFC-enabled device. Peripheral component interfaces may include, but are not limited to, a non-volatile memory port, a USB port, an audio jack, a power supply interface, etc. Although not shown, the components of platform500may communicate with one another using a suitable bus or interconnect (IX) technology, which may include any number of technologies, including ISA, EISA, PCI, PCIx, PCIe, a Time-Trigger Protocol (TTP) system, a FlexRay system, or any number of other technologies. The bus/IX may be a proprietary bus/IX, for example, used in a SoC based system. Other bus/IX systems may be included, such as an I2C interface, an SPI interface, point-to-point interfaces, and a power bus, among others. FIG.6illustrates example components of baseband circuitry610and radio front end modules (RFEM)615in accordance with various embodiments. The baseband circuitry610corresponds to the baseband circuitry410and510ofFIGS.4and5, respectively. The RFEM615corresponds to the RFEM415and515ofFIGS.4and5, respectively. As shown, the RFEMs615may include Radio Frequency (RF) circuitry606, front-end module (FEM) circuitry608, antenna array611coupled together at least as shown. The baseband circuitry610includes circuitry and/or control logic configured to carry out various radio/network protocol and radio control functions that enable communication with one or more radio networks via the RF circuitry606. 
The radio control functions may include, but are not limited to, signal modulation/demodulation, encoding/decoding, radio frequency shifting, etc. In some embodiments, modulation/demodulation circuitry of the baseband circuitry610may include Fast-Fourier Transform (FFT), precoding, or constellation mapping/demapping functionality. In some embodiments, encoding/decoding circuitry of the baseband circuitry610may include convolution, tail-biting convolution, turbo, Viterbi, or Low Density Parity Check (LDPC) encoder/decoder functionality. Embodiments of modulation/demodulation and encoder/decoder functionality are not limited to these examples and may include other suitable functionality in other embodiments. The baseband circuitry610is configured to process baseband signals received from a receive signal path of the RF circuitry606and to generate baseband signals for a transmit signal path of the RF circuitry606. The baseband circuitry610is configured to interface with application circuitry405/505(seeFIGS.4and5) for generation and processing of the baseband signals and for controlling operations of the RF circuitry606. The baseband circuitry610may handle various radio control functions. The aforementioned circuitry and/or control logic of the baseband circuitry610may include one or more single or multi-core processors. For example, the one or more processors may include a 3G baseband processor604A, a 4G/LTE baseband processor604B, a 5G/NR baseband processor604C, or some other baseband processor(s)604D for other existing generations, generations in development or to be developed in the future (e.g., sixth generation (6G), etc.). In other embodiments, some or all of the functionality of baseband processors604A-D may be included in modules stored in the memory604G and executed via a Central Processing Unit (CPU)604E. In other embodiments, some or all of the functionality of baseband processors604A-D may be provided as hardware accelerators (e.g., FPGAs, ASICs, etc.) loaded with the appropriate bit streams or logic blocks stored in respective memory cells. In various embodiments, the memory604G may store program code of a real-time OS (RTOS), which when executed by the CPU604E (or other baseband processor), is to cause the CPU604E (or other baseband processor) to manage resources of the baseband circuitry610, schedule tasks, etc. Examples of the RTOS may include Operating System Embedded (OSE)™ provided by Enea®, Nucleus RTOS™ provided by Mentor Graphics®, Versatile Real-Time Executive (VRTX) provided by Mentor Graphics®, ThreadX™ provided by Express Logic®, FreeRTOS, REX OS provided by Qualcomm®, OKL4 provided by Open Kernel (OK) Labs®, or any other suitable RTOS, such as those discussed herein. In addition, the baseband circuitry610includes one or more audio digital signal processor(s) (DSP)604F. The audio DSP(s)604F include elements for compression/decompression and echo cancellation and may include other suitable processing elements in other embodiments. In some embodiments, each of the processors604A-604E include respective memory interfaces to send/receive data to/from the memory604G. 
The baseband circuitry610may further include one or more interfaces to communicatively couple to other circuitries/devices, such as an interface to send/receive data to/from memory external to the baseband circuitry610; an application circuitry interface to send/receive data to/from the application circuitry405/505ofFIGS.4and5; an RF circuitry interface to send/receive data to/from RF circuitry606ofFIG.6; a wireless hardware connectivity interface to send/receive data to/from one or more wireless hardware elements (e.g., Near Field Communication (NFC) components, Bluetooth®/Bluetooth® Low Energy components, Wi-Fi® components, and/or the like); and a power management interface to send/receive power or control signals to/from the PMIC525. In alternate embodiments (which may be combined with the above described embodiments), baseband circuitry610comprises one or more digital baseband systems, which are coupled with one another via an interconnect subsystem and to a CPU subsystem, an audio subsystem, and an interface subsystem. The digital baseband subsystems may also be coupled to a digital baseband interface and a mixed-signal baseband subsystem via another interconnect subsystem. Each of the interconnect subsystems may include a bus system, point-to-point connections, network-on-chip (NOC) structures, and/or some other suitable bus or interconnect technology, such as those discussed herein. The audio subsystem may include DSP circuitry, buffer memory, program memory, speech processing accelerator circuitry, data converter circuitry such as analog-to-digital and digital-to-analog converter circuitry, analog circuitry including one or more of amplifiers and filters, and/or other like components. In an aspect of the present disclosure, baseband circuitry610may include protocol processing circuitry with one or more instances of control circuitry (not shown) to provide control functions for the digital baseband circuitry and/or radio frequency circuitry (e.g., the radio front end modules615). Although not shown byFIG.6, in some embodiments, the baseband circuitry610includes individual processing device(s) to operate one or more wireless communication protocols (e.g., a “multi-protocol baseband processor” or “protocol processing circuitry”) and individual processing device(s) to implement PHY layer functions. In these embodiments, the PHY layer functions include the aforementioned radio control functions. In these embodiments, the protocol processing circuitry operates or implements various protocol layers/entities of one or more wireless communication protocols. In a first example, the protocol processing circuitry may operate LTE protocol entities and/or 5G/NR protocol entities when the baseband circuitry610and/or RF circuitry606are part of mmWave communication circuitry or some other suitable cellular communication circuitry. In the first example, the protocol processing circuitry would operate MAC, RLC, PDCP, SDAP, RRC, and NAS functions. In a second example, the protocol processing circuitry may operate one or more IEEE-based protocols when the baseband circuitry610and/or RF circuitry606are part of a Wi-Fi communication system. In the second example, the protocol processing circuitry would operate Wi-Fi MAC and logical link control (LLC) functions. 
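To summarize the two protocol processing examples above, the sketch below maps each radio technology to the protocol layers/entities the protocol processing circuitry would operate. The dictionary form is an illustrative convenience only.

```python
# Illustrative mapping of radio technologies to the protocol layers operated by
# the protocol processing circuitry, following the two examples described above.
PROTOCOL_LAYERS = {
    "LTE / 5G NR": ["MAC", "RLC", "PDCP", "SDAP", "RRC", "NAS"],
    "Wi-Fi (IEEE 802.11)": ["MAC", "LLC"],
}

def layers_for(technology: str) -> list[str]:
    """Return the protocol layers/entities operated for a given radio technology."""
    return PROTOCOL_LAYERS[technology]

if __name__ == "__main__":
    for tech, layers in PROTOCOL_LAYERS.items():
        print(f"{tech}: {', '.join(layers)}")
```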
The protocol processing circuitry may include one or more memory structures (e.g.,604G) to store program code and data for operating the protocol functions, as well as one or more processing cores to execute the program code and perform various operations using the data. The baseband circuitry610may also support radio communications for more than one wireless protocol. The various hardware elements of the baseband circuitry610discussed herein may be implemented, for example, as a solder-down substrate including one or more integrated circuits (ICs), a single packaged IC soldered to a main circuit board or a multi-chip module containing two or more ICs. In one example, the components of the baseband circuitry610may be suitably combined in a single chip or chipset, or disposed on a same circuit board. In another example, some or all of the constituent components of the baseband circuitry610and RF circuitry606may be implemented together such as, for example, a system on a chip (SoC) or System-in-Package (SiP). In another example, some or all of the constituent components of the baseband circuitry610may be implemented as a separate SoC that is communicatively coupled with RF circuitry606(or multiple instances of RF circuitry606). In yet another example, some or all of the constituent components of the baseband circuitry610and the application circuitry405/505may be implemented together as individual SoCs mounted to a same circuit board (e.g., a “multi-chip package”). In some embodiments, the baseband circuitry610may provide for communication compatible with one or more radio technologies. For example, in some embodiments, the baseband circuitry610may support communication with an E-UTRAN or other WMAN, a WLAN, or a WPAN. Embodiments in which the baseband circuitry610is configured to support radio communications of more than one wireless protocol may be referred to as multi-mode baseband circuitry. RF circuitry606may enable communication with wireless networks using modulated electromagnetic radiation through a non-solid medium. In various embodiments, the RF circuitry606may include switches, filters, amplifiers, etc. to facilitate the communication with the wireless network. RF circuitry606may include a receive signal path, which may include circuitry to down-convert RF signals received from the FEM circuitry608and provide baseband signals to the baseband circuitry610. RF circuitry606may also include a transmit signal path, which may include circuitry to up-convert baseband signals provided by the baseband circuitry610and provide RF output signals to the FEM circuitry608for transmission. In some embodiments, the receive signal path of the RF circuitry606may include mixer circuitry606a, amplifier circuitry606band filter circuitry606c. In some embodiments, the transmit signal path of the RF circuitry606may include filter circuitry606cand mixer circuitry606a. RF circuitry606may also include synthesizer circuitry606dfor synthesizing a frequency for use by the mixer circuitry606aof the receive signal path and the transmit signal path. In some embodiments, the mixer circuitry606aof the receive signal path may be configured to down-convert RF signals received from the FEM circuitry608based on the synthesized frequency provided by synthesizer circuitry606d. 
The amplifier circuitry606bmay be configured to amplify the down-converted signals and the filter circuitry606cmay be a low-pass filter (LPF) or band-pass filter (BPF) configured to remove unwanted signals from the down-converted signals to generate output baseband signals. Output baseband signals may be provided to the baseband circuitry610for further processing. In some embodiments, the output baseband signals may be zero-frequency baseband signals, although this is not a requirement. In some embodiments, mixer circuitry606aof the receive signal path may comprise passive mixers, although the scope of the embodiments is not limited in this respect. In some embodiments, the mixer circuitry606aof the transmit signal path may be configured to up-convert input baseband signals based on the synthesized frequency provided by the synthesizer circuitry606dto generate RF output signals for the FEM circuitry608. The baseband signals may be provided by the baseband circuitry610and may be filtered by filter circuitry606c. In some embodiments, the mixer circuitry606aof the receive signal path and the mixer circuitry606aof the transmit signal path may include two or more mixers and may be arranged for quadrature downconversion and upconversion, respectively. In some embodiments, the mixer circuitry606aof the receive signal path and the mixer circuitry606aof the transmit signal path may include two or more mixers and may be arranged for image rejection (e.g., Hartley image rejection). In some embodiments, the mixer circuitry606aof the receive signal path and the mixer circuitry606aof the transmit signal path may be arranged for direct downconversion and direct upconversion, respectively. In some embodiments, the mixer circuitry606aof the receive signal path and the mixer circuitry606aof the transmit signal path may be configured for super-heterodyne operation. In some embodiments, the output baseband signals and the input baseband signals may be analog baseband signals, although the scope of the embodiments is not limited in this respect. In some alternate embodiments, the output baseband signals and the input baseband signals may be digital baseband signals. In these alternate embodiments, the RF circuitry606may include analog-to-digital converter (ADC) and digital-to-analog converter (DAC) circuitry and the baseband circuitry610may include a digital baseband interface to communicate with the RF circuitry606. In some dual-mode embodiments, a separate radio IC circuitry may be provided for processing signals for each spectrum, although the scope of the embodiments is not limited in this respect. In some embodiments, the synthesizer circuitry606dmay be a fractional-N synthesizer or a fractional N/N+1 synthesizer, although the scope of the embodiments is not limited in this respect as other types of frequency synthesizers may be suitable. For example, synthesizer circuitry606dmay be a delta-sigma synthesizer, a frequency multiplier, or a synthesizer comprising a phase-locked loop with a frequency divider. The synthesizer circuitry606dmay be configured to synthesize an output frequency for use by the mixer circuitry606aof the RF circuitry606based on a frequency input and a divider control input. In some embodiments, the synthesizer circuitry606dmay be a fractional N/N+1 synthesizer. In some embodiments, frequency input may be provided by a voltage controlled oscillator (VCO), although that is not a requirement. 
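As context for the quadrature down-conversion described above, the short Python sketch below models zero-IF mixing in discrete time: the real RF samples are multiplied by the cosine and negative-sine phases of the synthesized LO frequency, and the products are low-pass filtered to leave complex baseband I/Q samples. The sample rate, test tone, and filter parameters are illustrative assumptions, not values taken from the RF circuitry606.

```python
import numpy as np

def quadrature_downconvert(rf, fs, f_lo, num_taps=129):
    """Down-convert a real RF sample stream to complex (I/Q) baseband.

    rf       : real-valued RF samples
    fs       : sample rate in Hz
    f_lo     : synthesized local-oscillator frequency in Hz
    num_taps : length of the low-pass filter that removes the mixing image
    """
    n = np.arange(len(rf))
    # Mix with the quadrature LO phases (cos for I, -sin for Q), as a direct-conversion mixer would.
    i_mixed = rf * np.cos(2 * np.pi * f_lo * n / fs)
    q_mixed = rf * -np.sin(2 * np.pi * f_lo * n / fs)
    # Simple windowed-sinc low-pass filter standing in for the baseband filtering stage.
    cutoff = 0.1  # normalized cutoff (fraction of fs); illustrative only
    taps = np.sinc(2 * cutoff * (np.arange(num_taps) - (num_taps - 1) / 2))
    taps *= np.hamming(num_taps)
    taps /= taps.sum()
    i_bb = np.convolve(i_mixed, taps, mode="same")
    q_bb = np.convolve(q_mixed, taps, mode="same")
    return i_bb + 1j * q_bb

# Example: a 2 kHz tone carried at a 20 kHz "RF" frequency, sampled at 100 kHz.
fs, f_lo = 100e3, 20e3
t = np.arange(0, 0.02, 1 / fs)
rf = np.cos(2 * np.pi * (f_lo + 2e3) * t)
baseband = quadrature_downconvert(rf, fs, f_lo)
```

The transmit signal path runs the mirror image of these steps: the baseband I/Q samples are filtered, multiplied by the quadrature LO phases, and summed to form the real up-converted output.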
Divider control input may be provided by either the baseband circuitry610or the application circuitry405/505depending on the desired output frequency. In some embodiments, a divider control input (e.g., N) may be determined from a look-up table based on a channel indicated by the application circuitry405/505. Synthesizer circuitry606dof the RF circuitry606may include a divider, a delay-locked loop (DLL), a multiplexer and a phase accumulator. In some embodiments, the divider may be a dual modulus divider (DMD) and the phase accumulator may be a digital phase accumulator (DPA). In some embodiments, the DMD may be configured to divide the input signal by either N or N+1 (e.g., based on a carry out) to provide a fractional division ratio. In some example embodiments, the DLL may include a set of cascaded, tunable, delay elements, a phase detector, a charge pump and a D-type flip-flop. In these embodiments, the delay elements may be configured to break a VCO period up into Nd equal packets of phase, where Nd is the number of delay elements in the delay line. In this way, the DLL provides negative feedback to help ensure that the total delay through the delay line is one VCO cycle. In some embodiments, synthesizer circuitry606dmay be configured to generate a carrier frequency as the output frequency, while in other embodiments, the output frequency may be a multiple of the carrier frequency (e.g., twice the carrier frequency, four times the carrier frequency) and used in conjunction with quadrature generator and divider circuitry to generate multiple signals at the carrier frequency with multiple different phases with respect to each other. In some embodiments, the output frequency may be a LO frequency (fLO). In some embodiments, the RF circuitry606may include an IQ/polar converter. FEM circuitry608may include a receive signal path, which may include circuitry configured to operate on RF signals received from antenna array611, amplify the received signals and provide the amplified versions of the received signals to the RF circuitry606for further processing. FEM circuitry608may also include a transmit signal path, which may include circuitry configured to amplify signals for transmission provided by the RF circuitry606for transmission by one or more of antenna elements of antenna array611. In various embodiments, the amplification through the transmit or receive signal paths may be done solely in the RF circuitry606, solely in the FEM circuitry608, or in both the RF circuitry606and the FEM circuitry608. In some embodiments, the FEM circuitry608may include a TX/RX switch to switch between transmit mode and receive mode operation. The FEM circuitry608may include a receive signal path and a transmit signal path. The receive signal path of the FEM circuitry608may include an LNA to amplify received RF signals and provide the amplified received RF signals as an output (e.g., to the RF circuitry606). The transmit signal path of the FEM circuitry608may include a power amplifier (PA) to amplify input RF signals (e.g., provided by RF circuitry606), and one or more filters to generate RF signals for subsequent transmission by one or more antenna elements of the antenna array611. The antenna array611comprises one or more antenna elements, each of which is configured convert electrical signals into radio waves to travel through the air and to convert received radio waves into electrical signals. 
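The fractional division ratio obtained by toggling the dual modulus divider between N and N+1 on the accumulator carry-out, as described above, can be checked numerically. The sketch below is a behavioral model only; the reference frequency and the N, K, F values are arbitrary assumptions used to show that the long-run average ratio approaches N + K/F.

```python
def fractional_n_average(f_ref, n, k, f, cycles=10000):
    """Model a dual modulus divider (DMD) toggled between n and n+1.

    f_ref   : reference (input) frequency in Hz
    n, k, f : target ratio is n + k/f, with 0 <= k < f
    cycles  : number of divider cycles to simulate
    """
    acc = 0
    total_divide = 0
    for _ in range(cycles):
        acc += k
        if acc >= f:            # carry out -> divide by n+1 on this cycle
            acc -= f
            total_divide += n + 1
        else:                   # no carry -> divide by n
            total_divide += n
    avg_ratio = total_divide / cycles
    # In a locked PLL the synthesized output is approximately f_ref * (N + K/F).
    return avg_ratio, f_ref * avg_ratio

ratio, f_out = fractional_n_average(f_ref=19.2e6, n=100, k=1, f=4)
# ratio -> 100.25, f_out -> 1.9248e9
```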
For example, digital baseband signals provided by the baseband circuitry610are converted into analog RF signals (e.g., modulated waveform) that will be amplified and transmitted via the antenna elements of the antenna array611including one or more antenna elements (not shown). The antenna elements may be omnidirectional, directional, or a combination thereof. The antenna elements may be formed in a multitude of arrangements as are known and/or discussed herein. The antenna array611may comprise microstrip antennas or printed antennas that are fabricated on the surface of one or more printed circuit boards. The antenna array611may be formed as a patch of metal foil (e.g., a patch antenna) in a variety of shapes, and may be coupled with the RF circuitry606and/or FEM circuitry608using metal transmission lines or the like. Processors of the application circuitry405/505and processors of the baseband circuitry610may be used to execute elements of one or more instances of a protocol stack. For example, processors of the baseband circuitry610, alone or in combination, may be used to execute Layer 3, Layer 2, or Layer 1 functionality, while processors of the application circuitry405/505may utilize data (e.g., packet data) received from these layers and further execute Layer 4 functionality (e.g., TCP and UDP layers). As referred to herein, Layer 3 may comprise an RRC layer, described in further detail below. As referred to herein, Layer 2 may comprise a MAC layer, an RLC layer, and a PDCP layer, described in further detail below. As referred to herein, Layer 1 may comprise a PHY layer of a UE/RAN node, described in further detail below. FIG.7is a block diagram illustrating components, according to some example embodiments, able to read instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically,FIG.7shows a diagrammatic representation of hardware resources700including one or more processors (or processor cores)710, one or more memory/storage devices720, and one or more communication resources730, each of which may be communicatively coupled via a bus740. For embodiments where node virtualization (e.g., NFV) is utilized, a hypervisor702may be executed to provide an execution environment for one or more network slices/sub-slices to utilize the hardware resources700. The processors710may include, for example, a processor712and a processor714. The processor(s)710may be, for example, a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a DSP such as a baseband processor, an ASIC, an FPGA, a radio-frequency integrated circuit (RFIC), another processor (including those discussed herein), or any suitable combination thereof. The memory/storage devices720may include main memory, disk storage, or any suitable combination thereof. The memory/storage devices720may include, but are not limited to, any type of volatile or nonvolatile memory such as dynamic random access memory (DRAM), static random access memory (SRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), Flash memory, solid-state storage, etc. 
The communication resources730may include interconnection or network interface components or other suitable devices to communicate with one or more peripheral devices704or one or more databases706via a network708. For example, the communication resources730may include wired communication components (e.g., for coupling via USB), cellular communication components, NFC components, Bluetooth® (or Bluetooth® Low Energy) components, Wi-Fi® components, and other communication components. Instructions750may comprise software, a program, an application, an applet, an app, or other executable code for causing at least one of the processors710to perform any one or more of the methodologies discussed herein. The instructions750may reside, completely or partially, within at least one of the processors710(e.g., within the processor's cache memory), the memory/storage devices720, or any suitable combination thereof. Furthermore, any portion of the instructions750may be transferred to the hardware resources700from any combination of the peripheral devices704or the databases706. Accordingly, the memory of processors710, the memory/storage devices720, the peripheral devices704, and the databases706are examples of computer-readable and machine-readable media. For one or more embodiments, at least one of the components set forth in one or more of the preceding figures may be configured to perform one or more operations, techniques, processes, and/or methods as set forth in the example section below. For example, the baseband circuitry as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below. For another example, circuitry associated with a UE, base station, network element, etc. as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below in the example section. EXAMPLES Example 1 may include a method, which defines a measurement and reporting latency requirement for RRM CSI-RS based L3-RSRP measurement, based on a resource configuration of the RRM CSI-RS resource, wherein the RRM CSI-RS resource configuration includes at least a reference resource block (RB) number within a CSI-RS OFDM symbol, a reference resource element (RE) density, and a repetition period of a same CSI-RS resource. Example 2 may include the method of example 1 or some other example herein, wherein the CSI-RS reference RE density of a CSI-RS resource is the number of CSI-RS REs within a CSI-RS RB, which can be 1 or 3. Example 3 may include the method of example 1 or some other example herein, wherein the measurement and reporting latency requirement for CSI-RS based L3-RSRP measurement can be increased if the CSI-RS RB number is reduced. Example 4 may include the methods of examples 1 and 2 or some other example herein, wherein the measurement and reporting latency requirement for CSI-RS based L3-RSRP measurement can be increased if the CSI-RS reference RE density is reduced. Example 5 may include the method of examples 1-4 or some other example herein, wherein, when the CSI-RS RB number is below 24, the number of measurement samples required for applying L3 filtering of the CSI-RS based RSRP measurements is not lower than 10. 
Example 6 may include a method, wherein a base station configures a RRM CSI-RS resource to a UE for L3-RSRP measurement, to meet a L3-RSRP accuracy requirement, based on a minimal CSI-RS configuration, such that either its RB number is higher than 24 or its CSI-RS ref RE density is higher than 1. Example 7 may include a method, which defines RSRP measurement error threshold to evaluate a RRM CSI-RS based L3-RSRP measurement, based on a resource configuration of the RRM CSI-RS resource, and based on the frequency range information in which the RRM CSI-RS is transmitted, wherein a frequency range could be FR1 (sub7 GHz bands) or FR2 (mmWave bands) Example 8 may include the method of example 7 or some other example herein, wherein the RSRP measurement error threshold is relaxed, if the RRM CSI-RS is transmitted within a FR2 band. Example 9 may include a method comprising: receiving configuration information for a radio resource management (RRM) channel state information reference signal (CSI-RS) for Layer 3 (L3) reference signal received power (L3-RSRP) measurements; determining a minimum number of the L3-RSRP measurements required for L3 filtering based on the configuration of the RRM CSI-RS; and performing the L3 filtering of the L3-RSRP measurements based on the determined minimum number. Example 10 may include the method of example 9 or some other example herein, further comprising determining a minimum latency for RRM measurements on the RRM CSI-RS based on the determined minimum number. Example 11 may include the method of example 10 or some other example herein, wherein the minimum latency is determined further based on a repetition period of CSI-RS resources for the RRM CSI-RS. Example 12 may include the method of example 9-11 or some other example herein, wherein the configuration information includes one or more of a reference resource element (RE) density of the RRM CSI-RS, a reference resource block (RB) number of the RRM CSI-RS, and/or a repetition period of the RRM CSI-RS. Example 13 may include the method of example 9-12 or some other example herein, wherein the configuration includes a reference resource block (RB) number of less than 25, and wherein the determined minimum number is 10 or more. Example 14 may include the method of example 9-13 or some other example herein, wherein the method is performed by a UE or a portion thereof. Example 15 may include a method comprising: transmitting or causing to transmit, to a user equipment (UE), configuration information for a radio resource management (RRM) channel state information reference signal (CSI-RS) for L3 reference signal received power (L3-RSRP) measurements; determining a minimum number of the L3-RSRP measurements required for L3 filtering based on the CSI-RS configuration; and determining a minimum latency for radio resource management (RRM) measurements on the CSI-RS based on the determined minimum number. Example 16 may include the method of example 15 or some other example herein, wherein the minimum latency is determined further based on a repetition period of CSI-RS resources for the RRM CSI-RS. Example 17 may include the method of example 15-16 or some other example herein, wherein the configuration information includes one or more of a reference resource element (RE) density of the RRM CSI-RS, a reference resource block (RB) number of the RRM CSI-RS, and/or a repetition period of the RRM CSI-RS. 
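Examples 9-13 above tie the minimum number of L3-RSRP samples, and hence the measurement latency, to the CSI-RS configuration, but do not spell out the filter or exact sample counts. As context, the sketch below applies the standard NR layer 3 filter, F_n = (1 − a)·F_{n−1} + a·M_n with a = 1/2^(k/4), together with a hypothetical sample-count rule: the value of 10 samples for an RB number below 25 follows Example 13, while the other counts and the latency estimate (minimum samples times the repetition period, as suggested by Examples 10-11) are assumptions for illustration only.

```python
def l3_filter(measurements_dbm, k=4):
    """Layer 3 filtering of L3-RSRP samples: F_n = (1 - a)*F_{n-1} + a*M_n, a = 1/2**(k/4)."""
    a = 1.0 / 2 ** (k / 4.0)
    filtered = None
    for m in measurements_dbm:
        filtered = m if filtered is None else (1 - a) * filtered + a * m
    return filtered

def min_l3_samples(rb_number, re_density):
    """Hypothetical rule: Example 13 requires >= 10 samples when the RB number is below 25;
    Examples 3-4 only say the requirement grows as the RB number or RE density shrinks,
    so the values for other configurations are assumed."""
    if rb_number < 25:
        return 10
    if re_density < 3:
        return 8   # assumed: a sparser RE density still needs extra samples
    return 5       # assumed baseline for a wide, dense configuration

def measurement_latency_ms(rb_number, re_density, repetition_period_ms):
    """Latency estimate per Examples 10-11: minimum samples times the CSI-RS repetition period."""
    return min_l3_samples(rb_number, re_density) * repetition_period_ms

# A sparse configuration (16 RBs, RE density 1, 40 ms period) needs 10 samples -> 400 ms.
latency = measurement_latency_ms(rb_number=16, re_density=1, repetition_period_ms=40)
```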
Example 18 may include the method of example 15-17 or some other example herein, wherein the configuration information includes a reference resource block (RB) number of less than 25, and wherein the determined minimum number is 10 or more. Example 19 may include the method of example 15-18 or some other example herein, wherein the method is performed by a next generation node B (gNB) or a portion thereof. Example 20 may include a method comprising: identifying configuration information for a radio resource management (RRM) channel state information reference signal (CSI-RS) for L3 reference signal received power (L3-RSRP) measurements; determining a frequency range in which the RRM CSI-RS is transmitted; determining an error threshold based on the determined frequency range; and evaluating an accuracy of the L3-RSRP measurements based on the error threshold. Example 21 may include the method of example 20 or some other example herein, wherein determining the frequency range includes determining whether the RRM CSI-RS is transmitted in a new radio (NR) frequency range 1 (FR1) or a NR frequency range 2 (FR2). Example 22 may include the method of example 20-21 or some other example herein, wherein the determined error threshold is higher for NR FR2 than for NR FR1 for the same configuration information. Example 23 may include the method of example 20-22 or some other example herein, wherein the configuration information includes one or more of a reference resource element (RE) density of the RRM CSI-RS, a reference resource block (RB) number of the RRM CSI-RS, and/or a repetition period of the RRM CSI-RS. Example 24 may include the method of example 20-23 or some other example herein, wherein the method is performed by a user equipment (UE) or a portion thereof. Example 25 may include the method of example 20-23 or some other example herein, wherein the method is performed by a next generation Node B (gNB) or a portion thereof. Example 26 may include an apparatus comprising means to perform one or more elements of a method described in or related to any of examples 1-25, or any other method or process described herein. Example 27 may include one or more non-transitory computer-readable media comprising instructions to cause an electronic device, upon execution of the instructions by one or more processors of the electronic device, to perform one or more elements of a method described in or related to any of examples 1-25, or any other method or process described herein. Example 28 may include an apparatus comprising logic, modules, or circuitry to perform one or more elements of a method described in or related to any of examples 1-25, or any other method or process described herein. Example 29 may include a method, technique, or process as described in or related to any of examples 1-25, or portions or parts thereof. Example 30 may include an apparatus comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples 1-25, or portions thereof. Example 31 may include a signal as described in or related to any of examples 1-25, or portions or parts thereof. Example 32 may include a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-25, or portions or parts thereof, or otherwise described in the present disclosure. 
Example 33 may include a signal encoded with data as described in or related to any of examples 1-25, or portions or parts thereof, or otherwise described in the present disclosure. Example 34 may include a signal encoded with a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-25, or portions or parts thereof, or otherwise described in the present disclosure. Example 35 may include an electromagnetic signal carrying computer-readable instructions, wherein execution of the computer-readable instructions by one or more processors is to cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples 1-25, or portions thereof. Example 36 may include a computer program comprising instructions, wherein execution of the program by a processing element is to cause the processing element to carry out the method, techniques, or process as described in or related to any of examples 1-25, or portions thereof. Example 37 may include a signal in a wireless network as shown and described herein. Example 38 may include a method of communicating in a wireless network as shown and described herein. Example 39 may include a system for providing wireless communication as shown and described herein. Example 40 may include a device for providing wireless communication as shown and described herein. Any of the above-described examples may be combined with any other example (or combination of examples), unless explicitly stated otherwise. The foregoing description of one or more implementations provides illustration and description, but is not intended to be exhaustive or to limit the scope of embodiments to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments. Terminology For the purposes of the present document, the following terms and definitions are applicable to the examples and embodiments discussed herein. The term “circuitry” as used herein refers to, is part of, or includes hardware components such as an electronic circuit, a logic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group), an Application Specific Integrated Circuit (ASIC), a field-programmable device (FPD) (e.g., a field-programmable gate array (FPGA), a programmable logic device (PLD), a complex PLD (CPLD), a high-capacity PLD (HCPLD), a structured ASIC, or a programmable SoC), digital signal processors (DSPs), etc., that are configured to provide the described functionality. In some embodiments, the circuitry may execute one or more software or firmware programs to provide at least some of the described functionality. The term “circuitry” may also refer to a combination of one or more hardware elements (or a combination of circuits used in an electrical or electronic system) with the program code used to carry out the functionality of that program code. In these embodiments, the combination of hardware elements and program code may be referred to as a particular type of circuitry. The term “processor circuitry” as used herein refers to, is part of, or includes circuitry capable of sequentially and automatically carrying out a sequence of arithmetic or logical operations, or recording, storing, and/or transferring digital data. 
The term “processor circuitry” may refer to one or more application processors, one or more baseband processors, a physical central processing unit (CPU), a single-core processor, a dual-core processor, a triple-core processor, a quad-core processor, and/or any other device capable of executing or otherwise operating computer-executable instructions, such as program code, software modules, and/or functional processes. The terms “application circuitry” and/or “baseband circuitry” may be considered synonymous to, and may be referred to as, “processor circuitry.” The term “interface circuitry” as used herein refers to, is part of, or includes circuitry that enables the exchange of information between two or more components or devices. The term “interface circuitry” may refer to one or more hardware interfaces, for example, buses, I/O interfaces, peripheral component interfaces, network interface cards, and/or the like. The term “user equipment” or “UE” as used herein refers to a device with radio communication capabilities and may describe a remote user of network resources in a communications network. The term “user equipment” or “UE” may be considered synonymous to, and may be referred to as, client, mobile, mobile device, mobile terminal, user terminal, mobile unit, mobile station, mobile user, subscriber, user, remote station, access agent, user agent, receiver, radio equipment, reconfigurable radio equipment, reconfigurable mobile device, etc. Furthermore, the term “user equipment” or “UE” may include any type of wireless/wired device or any computing device including a wireless communications interface. The term “network element” as used herein refers to physical or virtualized equipment and/or infrastructure used to provide wired or wireless communication network services. The term “network element” may be considered synonymous to and/or referred to as a networked computer, networking hardware, network equipment, network node, router, switch, hub, bridge, radio network controller, RAN device, RAN node, gateway, server, virtualized VNF, NFVI, and/or the like. The term “computer system” as used herein refers to any type interconnected electronic devices, computer devices, or components thereof. Additionally, the term “computer system” and/or “system” may refer to various components of a computer that are communicatively coupled with one another. Furthermore, the term “computer system” and/or “system” may refer to multiple computer devices and/or multiple computing systems that are communicatively coupled with one another and configured to share computing and/or networking resources. The term “appliance,” “computer appliance,” or the like, as used herein refers to a computer device or computer system with program code (e.g., software or firmware) that is specifically designed to provide a specific computing resource. A “virtual appliance” is a virtual machine image to be implemented by a hypervisor-equipped device that virtualizes or emulates a computer appliance or otherwise is dedicated to provide a specific computing resource. 
The term "resource" as used herein refers to a physical or virtual device, a physical or virtual component within a computing environment, and/or a physical or virtual component within a particular device, such as computer devices, mechanical devices, memory space, processor/CPU time, processor/CPU usage, processor and accelerator loads, hardware time or usage, electrical power, input/output operations, ports or network sockets, channel/link allocation, throughput, memory usage, storage, network, database and applications, workload units, and/or the like. A "hardware resource" may refer to compute, storage, and/or network resources provided by physical hardware element(s). A "virtualized resource" may refer to compute, storage, and/or network resources provided by virtualization infrastructure to an application, device, system, etc. The term "network resource" or "communication resource" may refer to resources that are accessible by computer devices/systems via a communications network. The term "system resources" may refer to any kind of shared entities to provide services, and may include computing and/or network resources. System resources may be considered as a set of coherent functions, network data objects or services, accessible through a server where such system resources reside on a single host or multiple hosts and are clearly identifiable. The term "channel" as used herein refers to any transmission medium, either tangible or intangible, which is used to communicate data or a data stream. The term "channel" may be synonymous with and/or equivalent to "communications channel," "data communications channel," "transmission channel," "data transmission channel," "access channel," "data access channel," "link," "data link," "carrier," "radiofrequency carrier," and/or any other like term denoting a pathway or medium through which data is communicated. Additionally, the term "link" as used herein refers to a connection between two devices through a RAT for the purpose of transmitting and receiving information. The terms "instantiate," "instantiation," and the like as used herein refer to the creation of an instance. An "instance" also refers to a concrete occurrence of an object, which may occur, for example, during execution of program code. The terms "coupled," "communicatively coupled," along with derivatives thereof are used herein. The term "coupled" may mean two or more elements are in direct physical or electrical contact with one another, may mean that two or more elements indirectly contact each other but still cooperate or interact with each other, and/or may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other. The term "directly coupled" may mean that two or more elements are in direct contact with one another. The term "communicatively coupled" may mean that two or more elements may be in contact with one another by a means of communication including through a wire or other interconnect connection, through a wireless communication channel or link, and/or the like. The term "information element" refers to a structural element containing one or more fields. The term "field" refers to individual contents of an information element, or a data element that contains content. The term "SMTC" refers to an SSB-based measurement timing configuration configured by SSB-MeasurementTimingConfiguration. The term "SSB" refers to an SS/PBCH block. 
The term "Primary Cell" refers to the MCG cell, operating on the primary frequency, in which the UE either performs the initial connection establishment procedure or initiates the connection re-establishment procedure. The term "Primary SCG Cell" refers to the SCG cell in which the UE performs random access when performing the Reconfiguration with Sync procedure for DC operation. The term "Secondary Cell" refers to a cell providing additional radio resources on top of a Special Cell for a UE configured with CA. The term "Secondary Cell Group" refers to the subset of serving cells comprising the PSCell and zero or more secondary cells for a UE configured with DC. The term "Serving Cell" refers to the primary cell for a UE in RRC_CONNECTED not configured with CA/DC; for such a UE there is only one serving cell, comprising the primary cell. The term "serving cell" or "serving cells" refers to the set of cells comprising the Special Cell(s) and all secondary cells for a UE in RRC_CONNECTED configured with CA/DC. The term "Special Cell" refers to the PCell of the MCG or the PSCell of the SCG for DC operation; otherwise, the term "Special Cell" refers to the PCell.
106,837
11863488
DETAILED DESCRIPTION Various aspects of the disclosure are described more fully hereinafter with reference to the accompanying drawings. This disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Based on the teachings herein, one skilled in the art should appreciate that the scope of the disclosure is intended to cover any aspect of the disclosure disclosed herein, whether implemented independently of or combined with any other aspect of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim. Several aspects of telecommunication systems will now be presented with reference to various apparatuses and techniques. These apparatuses and techniques will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, modules, components, circuits, steps, processes, algorithms, or the like (collectively referred to as “elements”). These elements may be implemented using hardware, software, or combinations thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. It should be noted that while aspects may be described herein using terminology commonly associated with a 5G or NR radio access technology (RAT), aspects of the present disclosure can be applied to other RATs, such as a 3G RAT, a 4G RAT, and/or a RAT subsequent to 5G (e.g., 6G). FIG.1is a diagram illustrating an example of a wireless network100, in accordance with the present disclosure. The wireless network100may be or may include elements of a 5G (NR) network and/or an LTE network, among other examples. The wireless network100may include a number of base stations110(shown as BS110a, BS110b, BS110c, and BS110d) and other network entities. A base station (BS) is an entity that communicates with user equipment (UEs) and may also be referred to as an NR BS, a Node B, a gNB, a 5G node B (NB), an access point, a transmit receive point (TRP), or the like. Each BS may provide communication coverage for a particular geographic area. In 3GPP, the term “cell” can refer to a coverage area of a BS and/or a BS subsystem serving this coverage area, depending on the context in which the term is used. A BS may provide communication coverage for a macro cell, a pico cell, a femto cell, and/or another type of cell. A macro cell may cover a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by UEs with service subscription. A pico cell may cover a relatively small geographic area and may allow unrestricted access by UEs with service subscription. 
A femto cell may cover a relatively small geographic area (e.g., a home) and may allow restricted access by UEs having association with the femto cell (e.g., UEs in a closed subscriber group (CSG)). A BS for a macro cell may be referred to as a macro BS. A BS for a pico cell may be referred to as a pico BS. A BS for a femto cell may be referred to as a femto BS or a home BS. In the example shown inFIG.1, a BS110amay be a macro BS for a macro cell102a, a BS110bmay be a pico BS for a pico cell102b, and a BS110cmay be a femto BS for a femto cell102c. A BS may support one or multiple (e.g., three) cells. The terms "eNB", "base station", "NR BS", "gNB", "TRP", "AP", "node B", "5G NB", and "cell" may be used interchangeably herein. In some aspects, a cell may not necessarily be stationary, and the geographic area of the cell may move according to the location of a mobile BS. In some aspects, the BSs may be interconnected to one another and/or to one or more other BSs or network nodes (not shown) in the wireless network100through various types of backhaul interfaces, such as a direct physical connection or a virtual network, using any suitable transport network. Wireless network100may also include relay stations. A relay station is an entity that can receive a transmission of data from an upstream station (e.g., a BS or a UE) and send a transmission of the data to a downstream station (e.g., a UE or a BS). A relay station may also be a UE that can relay transmissions for other UEs. In the example shown inFIG.1, a relay BS110dmay communicate with macro BS110aand a UE120din order to facilitate communication between BS110aand UE120d. A relay BS may also be referred to as a relay station, a relay base station, a relay, or the like. Wireless network100may be a heterogeneous network that includes BSs of different types, such as macro BSs, pico BSs, femto BSs, relay BSs, or the like. These different types of BSs may have different transmit power levels, different coverage areas, and different impacts on interference in wireless network100. For example, macro BSs may have a high transmit power level (e.g., 5 to 40 watts) whereas pico BSs, femto BSs, and relay BSs may have lower transmit power levels (e.g., 0.1 to 2 watts). A network controller130may couple to a set of BSs and may provide coordination and control for these BSs. Network controller130may communicate with the BSs via a backhaul. The BSs may also communicate with one another, directly or indirectly, via a wireless or wireline backhaul. UEs120(e.g.,120a,120b,120c) may be dispersed throughout wireless network100, and each UE may be stationary or mobile. A UE may also be referred to as an access terminal, a terminal, a mobile station, a subscriber unit, a station, or the like. 
A UE may be a cellular phone (e.g., a smart phone), a personal digital assistant (PDA), a wireless modem, a wireless communication device, a handheld device, a laptop computer, a cordless phone, a wireless local loop (WLL) station, a tablet, a camera, a gaming device, a netbook, a smartbook, an ultrabook, a medical device or equipment, biometric sensors/devices, wearable devices (smart watches, smart clothing, smart glasses, smart wrist bands, smart jewelry (e.g., smart ring, smart bracelet)), an entertainment device (e.g., a music or video device, or a satellite radio), a vehicular component or sensor, smart meters/sensors, industrial manufacturing equipment, a global positioning system device, or any other suitable device that is configured to communicate via a wireless or wired medium. Some UEs may be considered machine-type communication (MTC) or evolved or enhanced machine-type communication (eMTC) UEs. MTC and eMTC UEs include, for example, robots, drones, remote devices, sensors, meters, monitors, and/or location tags, that may communicate with a base station, another device (e.g., remote device), or some other entity. A wireless node may provide, for example, connectivity for or to a network (e.g., a wide area network such as Internet or a cellular network) via a wired or wireless communication link. Some UEs may be considered Internet-of-Things (IoT) devices, and/or may be implemented as NB-IoT (narrowband internet of things) devices. Some UEs may be considered a Customer Premises Equipment (CPE). UE120may be included inside a housing that houses components of UE120, such as processor components and/or memory components. In some aspects, the processor components and the memory components may be coupled together. For example, the processor components (e.g., one or more processors) and the memory components (e.g., a memory) may be operatively coupled, communicatively coupled, electronically coupled, and/or electrically coupled. In general, any number of wireless networks may be deployed in a given geographic area. Each wireless network may support a particular RAT and may operate on one or more frequencies. A RAT may also be referred to as a radio technology, an air interface, or the like. A frequency may also be referred to as a carrier, a frequency channel, or the like. Each frequency may support a single RAT in a given geographic area in order to avoid interference between wireless networks of different RATs. In some cases, NR or 5G RAT networks may be deployed. In some aspects, two or more UEs120(e.g., shown as UE120aand UE120e) may communicate directly using one or more sidelink channels (e.g., without using a base station110as an intermediary to communicate with one another). For example, the UEs120may communicate using peer-to-peer (P2P) communications, device-to-device (D2D) communications, a vehicle-to-everything (V2X) protocol (e.g., which may include a vehicle-to-vehicle (V2V) protocol or a vehicle-to-infrastructure (V2I) protocol), and/or a mesh network. In this case, the UE120may perform scheduling operations, resource selection operations, and/or other operations described elsewhere herein as being performed by the base station110. Devices of wireless network100may communicate using the electromagnetic spectrum, which may be subdivided based on frequency or wavelength into various classes, bands, channels, or the like. 
For example, devices of wireless network100may communicate using an operating band having a first frequency range (FR1), which may span from 410 MHz to 7.125 GHz, and/or may communicate using an operating band having a second frequency range (FR2), which may span from 24.25 GHz to 52.6 GHz. The frequencies between FR1 and FR2 are sometimes referred to as mid-band frequencies. Although a portion of FR1 is greater than 6 GHz, FR1 is often referred to as a “sub-6 GHz” band. Similarly, FR2 is often referred to as a “millimeter wave” band despite being different from the extremely high frequency (EHF) band (30 GHz-300 GHz) which is identified by the International Telecommunications Union (ITU) as a “millimeter wave” band. Thus, unless specifically stated otherwise, it should be understood that the term “sub-6 GHz” or the like, if used herein, may broadly represent frequencies less than 6 GHz, frequencies within FR1, and/or mid-band frequencies (e.g., greater than 7.125 GHz). Similarly, unless specifically stated otherwise, it should be understood that the term “millimeter wave” or the like, if used herein, may broadly represent frequencies within the EHF band, frequencies within FR2, and/or mid-band frequencies (e.g., less than 24.25 GHz). It is contemplated that the frequencies included in FR1 and FR2 may be modified, and techniques described herein are applicable to those modified frequency ranges. As indicated above,FIG.1is provided as an example. Other examples may differ from what is described with regard toFIG.1. FIG.2is a diagram illustrating an example200of a base station110in communication with a UE120in a wireless network100, in accordance with the present disclosure. Base station110may be equipped with T antennas234athrough234t, and UE120may be equipped with R antennas252athrough252r, where in general T≥1 and R≥1. At base station110, a transmit processor220may receive data from a data source212for one or more UEs, select one or more modulation and coding schemes (MCS) for each UE based at least in part on channel quality indicators (CQIs) received from the UE, process (e.g., encode and modulate) the data for each UE based at least in part on the MCS(s) selected for the UE, and provide data symbols for all UEs. Transmit processor220may also process system information (e.g., for semi-static resource partitioning information (SRPI)) and control information (e.g., CQI requests, grants, and/or upper layer signaling) and provide overhead symbols and control symbols. Transmit processor220may also generate reference symbols for reference signals (e.g., a cell-specific reference signal (CRS) or a demodulation reference signal (DMRS)) and synchronization signals (e.g., a primary synchronization signal (PSS) or a secondary synchronization signal (SSS)). A transmit (TX) multiple-input multiple-output (MIMO) processor230may perform spatial processing (e.g., precoding) on the data symbols, the control symbols, the overhead symbols, and/or the reference symbols, if applicable, and may provide T output symbol streams to T modulators (MODs)232athrough232t. Each modulator232may process a respective output symbol stream (e.g., for OFDM) to obtain an output sample stream. Each modulator232may further process (e.g., convert to analog, amplify, filter, and upconvert) the output sample stream to obtain a downlink signal. T downlink signals from modulators232athrough232tmay be transmitted via T antennas234athrough234t, respectively. 
At UE120, antennas252athrough252rmay receive the downlink signals from base station110and/or other base stations and may provide received signals to demodulators (DEMODs)254athrough254r, respectively. Each demodulator254may condition (e.g., filter, amplify, downconvert, and digitize) a received signal to obtain input samples. Each demodulator254may further process the input samples (e.g., for OFDM) to obtain received symbols. A MIMO detector256may obtain received symbols from all R demodulators254athrough254r, perform MIMO detection on the received symbols if applicable, and provide detected symbols. A receive processor258may process (e.g., demodulate and decode) the detected symbols, provide decoded data for UE120to a data sink260, and provide decoded control information and system information to a controller/processor280. The term “controller/processor” may refer to one or more controllers, one or more processors, or a combination thereof. A channel processor may determine a reference signal received power (RSRP) parameter, a received signal strength indicator (RSSI) parameter, a reference signal received quality (RSRQ) parameter, and/or a channel quality indicator (CQI) parameter, among other examples. In some aspects, one or more components of UE120may be included in a housing284. Network controller130may include communication unit294, controller/processor290, and memory292. Network controller130may include, for example, one or more devices in a core network. Network controller130may communicate with base station110via communication unit294. Antennas (e.g., antennas234athrough234tand/or antennas252athrough252r) may include, or may be included within, one or more antenna panels, antenna groups, sets of antenna elements, and/or antenna arrays, among other examples. An antenna panel, an antenna group, a set of antenna elements, and/or an antenna array may include one or more antenna elements. An antenna panel, an antenna group, a set of antenna elements, and/or an antenna array may include a set of coplanar antenna elements and/or a set of non-coplanar antenna elements. An antenna panel, an antenna group, a set of antenna elements, and/or an antenna array may include antenna elements within a single housing and/or antenna elements within multiple housings. An antenna panel, an antenna group, a set of antenna elements, and/or an antenna array may include one or more antenna elements coupled to one or more transmission and/or reception components, such as one or more components ofFIG.2. On the uplink, at UE120, a transmit processor264may receive and process data from a data source262and control information (e.g., for reports that include RSRP, RSSI, RSRQ, and/or CQI) from controller/processor280. Transmit processor264may also generate reference symbols for one or more reference signals. The symbols from transmit processor264may be precoded by a TX MIMO processor266if applicable, further processed by modulators254athrough254r(e.g., for DFT-s-OFDM or CP-OFDM), and transmitted to base station110. In some aspects, a modulator and a demodulator (e.g., MOD/DEMOD254) of the UE120may be included in a modem of the UE120. In some aspects, the UE120includes a transceiver. The transceiver may include any combination of antenna(s)252, modulators and/or demodulators254, MIMO detector256, receive processor258, transmit processor264, and/or TX MIMO processor266. 
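The channel-processor quantities named above have simple definitional relationships: RSRP is the average power of the reference-signal resource elements, RSSI is the total received power over the measurement bandwidth, and RSRQ is N·RSRP/RSSI, where N is the number of resource blocks in the RSSI measurement bandwidth. The sketch below computes them from a hypothetical resource grid; it is a definitional illustration, not the implementation of the channel processor ofFIG.2.

```python
import numpy as np

def rsrp_rssi_rsrq(rx_grid, rs_mask, num_rbs):
    """Compute RSRP/RSSI/RSRQ in dB from one symbol's received resource elements.

    rx_grid : complex received samples, one per resource element (RE)
    rs_mask : boolean mask marking which REs carry the reference signal
    num_rbs : number of resource blocks spanned by the RSSI measurement bandwidth
    """
    power = np.abs(rx_grid) ** 2
    rsrp = power[rs_mask].mean()   # average power of the reference-signal REs
    rssi = power.sum()             # total power over the measured bandwidth
    rsrq = num_rbs * rsrp / rssi
    return 10 * np.log10([rsrp, rssi, rsrq])

# Hypothetical grid: 4 RBs (48 REs), reference signal on every 12th RE.
rng = np.random.default_rng(0)
grid = (rng.normal(size=48) + 1j * rng.normal(size=48)) / np.sqrt(2)
mask = np.zeros(48, dtype=bool)
mask[::12] = True
rsrp_db, rssi_db, rsrq_db = rsrp_rssi_rsrq(grid, mask, num_rbs=4)
```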
The transceiver may be used by a processor (e.g., controller/processor280) and memory282to perform aspects of any of the methods described herein (for example, as described with reference toFIGS.5-7). At base station110, the uplink signals from UE120and other UEs may be received by antennas234, processed by demodulators232, detected by a MIMO detector236if applicable, and further processed by a receive processor238to obtain decoded data and control information sent by UE120. Receive processor238may provide the decoded data to a data sink239and the decoded control information to controller/processor240. Base station110may include communication unit244and communicate to network controller130via communication unit244. Base station110may include a scheduler246to schedule UEs120for downlink and/or uplink communications. In some aspects, a modulator and a demodulator (e.g., MOD/DEMOD232) of the base station110may be included in a modem of the base station110. In some aspects, the base station110includes a transceiver. The transceiver may include any combination of antenna(s)234, modulators and/or demodulators232, MIMO detector236, receive processor238, transmit processor220, and/or TX MIMO processor230. The transceiver may be used by a processor (e.g., controller/processor240) and memory242to perform aspects of any of the methods described herein (for example, as described with reference toFIGS.5-7). Controller/processor240of base station110, controller/processor280of UE120, and/or any other component(s) ofFIG.2may perform one or more techniques associated with using timing information from a single reference signal for measurements of multiple reference signals of multiple cells, as described in more detail elsewhere herein. For example, controller/processor240of base station110, controller/processor280of UE120, and/or any other component(s) ofFIG.2may perform or direct operations of, for example, process600ofFIG.6, process700ofFIG.7, and/or other processes as described herein. Memories242and282may store data and program codes for base station110and UE120, respectively. In some aspects, memory242and/or memory282may include a non-transitory computer-readable medium storing one or more instructions (e.g., code and/or program code) for wireless communication. For example, the one or more instructions, when executed (e.g., directly, or after compiling, converting, and/or interpreting) by one or more processors of the base station110and/or the UE120, may cause the one or more processors, the UE120, and/or the base station110to perform or direct operations of, for example, process600ofFIG.6, process700ofFIG.7, and/or other processes as described herein. In some aspects, executing instructions may include running the instructions, converting the instructions, compiling the instructions, and/or interpreting the instructions, among other examples. In some aspects, UE120may include means for receiving an indication of a single reference signal to use for determining timing information for measuring multiple reference signals from multiple cells on a common frequency layer, means for measuring the multiple reference signals based at least in part on the timing information, and/or the like. In some aspects, such means may include one or more components of UE120described in connection withFIG.2, such as controller/processor280, transmit processor264, TX MIMO processor266, MOD254, antenna252, DEMOD254, MIMO detector256, receive processor258, and/or the like. 
In some aspects, base station110may include means for determining a single reference signal for a UE to use for determining timing information for measuring multiple reference signals from multiple cells on a common frequency layer; means for transmitting, to the UE, an indication of the single reference signal; and/or the like. In some aspects, such means may include one or more components of base station110described in connection withFIG.2, such as antenna234, DEMOD232, MIMO detector236, receive processor238, controller/processor240, transmit processor220, TX MIMO processor230, MOD232, antenna234, and/or the like. While blocks inFIG.2are illustrated as distinct components, the functions described above with respect to the blocks may be implemented in a single hardware, software, or combination component or in various combinations of components. For example, the functions described with respect to the transmit processor264, the receive processor258, and/or the TX MIMO processor266may be performed by or under the control of controller/processor280. As indicated above,FIG.2is provided as an example. Other examples may differ from what is described with regard toFIG.2. FIG.3is a diagram illustrating an example300of a UE receiving synchronization signal blocks (SSBs) from multiple cells, in accordance with the present disclosure. As shown inFIG.3, a UE may communicate with a serving base station. The UE and the base station may be part of a wireless network that includes multiple neighbor base stations that provide neighbor cells of the wireless network. As shown inFIG.3, and by reference number305, the UE may communicate (e.g., transmit uplink transmissions and/or receive downlink transmissions) with the serving base station via a wireless link. Based at least in part on measurements of reference signals from the serving base station and/or neighbor base stations, the UE may determine to measure reference signals from one or more neighbor base stations and/or perform a cell reselection process. For example, the UE may determine to measure reference signals from one or more neighbor base stations and/or perform a cell reselection process based at least in part on a determination that a reference signal receive power (RSRP) associated with the wireless link satisfies a threshold (e.g., is below the threshold). As shown by reference number310, the UE may receive a first SSB from a first neighbor base station (e.g., via a first neighbor cell). As shown by reference number315, the UE may receive a second SSB from a second neighbor base station (e.g., via a second neighbor cell). As shown by reference number320, the UE may determine timing information of each SSB. The UE may use the first SSB to determine first timing information for receiving one or more channel state information reference signals (CSI-RSs) from the first base station. The UE may use the second SSB to determine second timing information for receiving one or more CSI-RSs from the second base station. The UE may receive additional SSBs from additional neighbor base stations (e.g., via additional neighbor cells) to determine additional timing information for receiving one or more CSI-RSs from the additional base stations. As indicated above,FIG.3is provided as an example. Other examples may differ from what is described with respect toFIG.3. FIG.4is a diagram illustrating an example400of a UE using different timing information to measure multiple CSI-RSs from multiple cells, in accordance with the present disclosure. 
Multiple neighbor base stations may provide the multiple cells.FIG.4may illustrate one or more procedures that may occur after one or more processes illustrated inFIG.3. As shown inFIG.4, and by reference number405, the UE may communicate (e.g., transmit uplink transmissions and/or receive downlink transmissions) with the serving base station via a wireless link. Based at least in part on measurements of reference signals from the serving base station and/or neighbor base stations, the UE may determine to measure reference signals from one or more neighbor base stations and/or perform a cell reselection process. For example, the UE may determine to measure CSI-RSs from one or more neighbor base stations and/or perform a cell reselection process based at least in part on a determination that an RSRP associated with the wireless link satisfies a threshold. As shown by reference number410, the UE may receive a first CSI-RS from a first neighbor base station via a first neighbor cell. As shown by reference number415, the UE may receive a second CSI-RS from a second neighbor base station via a second neighbor cell. The UE may receive additional CSI-RSs from additional neighbor base stations (e.g., via additional neighbor cells). As shown by reference number420, the UE may use timing information of associated SSBs to measure the CSI-RSs. For example, the UE may use first timing information from a first SSB from the first neighbor base station to measure the first CSI-RS, may use second timing information from a second SSB from the second neighbor base station to measure the second CSI-RS, may use additional timing information from one or more additional SSBs from additional neighbor base stations to measure additional CSI-RSs, and/or the like. As shown by reference number425, the UE may perform cell reselection using measurements of the CSI-RSs. For example, the UE may measure a respective RSRP for each of the CSI-RSs, using unique timing information for each of the CSI-RSs, and determine to reselect to a neighbor cell having a highest measured RSRP. The UE may receive a plurality of SSBs and associated CSI-RSs, including from neighbor cells that are unlikely to be selected, and may consume computing, power, and communication resources to determine the unique timing information. Additionally, based at least in part on receiving many (e.g., more than 2) SSBs and associated CSI-RSs, the UE may be unable to determine timing information for one or more CSI-RSs in time to receive the CSI-RSs. This may cause the UE to reselect to a neighbor cell with incomplete information. As indicated above,FIG.4is provided as an example. Other examples may differ from what is described with respect toFIG.4. In some aspects described herein, a UE may use an indication received from a network (e.g., via a signaling extension in a measurement object configuration) to identify timing information to use for measuring multiple reference signals (e.g., used for cell reselection). In some aspects, the UE may receive an indication of a single reference signal to use for determining the timing information for measuring the multiple reference signals from multiple cells on a common frequency layer. The UE may measure the multiple reference signals based at least in part on the timing information (e.g., a single set of timing to use for each of the multiple reference signals) instead of using unique timing information for each of the multiple reference signals. 
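To summarize the contrast drawn in this passage, the sketch below places the per-cell procedure ofFIGS.3-4 next to the shared-timing variant just described: in the first, every neighbor's CSI-RS is measured with timing derived from that neighbor's own SSB; in the second, one timing value is reused for all of them, and the UE then reselects to the cell with the highest measured L3-RSRP. The data structures and the measure callback are hypothetical stand-ins for the UE's measurement machinery, not an interface defined by this disclosure.

```python
from typing import Callable, Dict, Iterable

# Hypothetical callback: measure the L3-RSRP (dBm) of one cell's CSI-RS
# using the supplied timing reference.
MeasureFn = Callable[[int, float], float]

def reselect_per_cell_timing(ssb_timing: Dict[int, float], measure: MeasureFn) -> int:
    """Baseline of FIGS. 3-4: derive and apply a separate SSB timing for each
    neighbor cell, then reselect to the cell with the highest measured RSRP."""
    rsrp = {cell: measure(cell, timing) for cell, timing in ssb_timing.items()}
    return max(rsrp, key=rsrp.get)

def reselect_shared_timing(cells: Iterable[int], shared_timing: float,
                           measure: MeasureFn) -> int:
    """Shared-timing variant: a single timing value, derived from the indicated
    reference signal, is reused for every CSI-RS measurement."""
    rsrp = {cell: measure(cell, shared_timing) for cell in cells}
    return max(rsrp, key=rsrp.get)
```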
Based at least in part on the UE measuring the multiple reference signals using the same timing information, the UE may conserve computing, power, and communication resources. FIG.5is a diagram illustrating an example500of a user equipment using timing information from a single reference signal for measurements of multiple reference signals of multiple cells, in accordance with the present disclosure. As shown inFIG.5, a UE (e.g., UE120) may communicate with a serving base station (e.g., base station110). The UE and the serving base station may be part of a wireless network (e.g., wireless network100). The wireless network may also include one or more additional base stations (e.g., additional base stations110) that provide one or more neighbor cells. As shown by reference number505, the serving base station may transmit, and the UE may receive, configuration information. In some aspects, the UE may receive configuration information from another device (e.g., from another base station, another UE, a network controller, and/or the like). In some aspects, the UE may receive the configuration information via one or more of radio resource control (RRC) signaling, medium access control (MAC) signaling (e.g., MAC control elements (MAC CEs)), and/or the like. In some aspects, the configuration information may include an indication of one or more configuration parameters (e.g., already known to the UE) for selection by the UE, explicit configuration information for the UE to use to configure the UE, and/or the like. In some aspects, the configuration information may indicate that the serving base station is to transmit an indication of a single reference signal (e.g., an SSB) for the UE to use for determining timing information for measuring multiple additional reference signals (e.g., CSI-RSs). In some aspects, the configuration information may indicate that the UE is to store information (e.g., timing information, information that the UE can use to derive timing information, and/or the like) associated with one or more reference signals (e.g., SSBs). In some aspects, the configuration information may indicate that the UE is to select, based at least in part on the indication of the single reference signal, a reference signal (e.g., SSB) of the one or more reference signals (e.g., SSBs) to use as a basis for determining timing information for measuring the multiple additional reference signals. As shown by reference number510, the UE may configure the UE to communicate with the serving base station, receive reference signals (e.g., SSBs) from neighbor cells, determine timing information to use to measure multiple additional reference signals (e.g., CSI-RSs), and/or the like. In some aspects, the UE may configure the UE based at least in part on the configuration information. In some aspects, the UE may be configured to perform one or more operations described herein. As shown by reference number515, the serving base station may determine a single reference signal for the UE to use for determining timing information for measuring multiple additional reference signals from multiple cells. In some aspects, the multiple cells may be on a common frequency layer (e.g., FR1, FR2, frequency bands within FR1 or FR2, and/or the like). 
In some aspects, the base station may determine the single reference signal for the UE based at least in part on a request from the UE, a determination that the UE is likely to perform cell reselection (e.g., based at least in part on an RSRP, a number of radio link failures, and/or the like), and/or the like. In some aspects, the base station may determine which single reference signal the UE is to use for determining the timing information based at least in part on a location of the UE (e.g., a geolocation of the UE, a location based at least in part on a beam used to communicate with the UE, and/or the like), a trajectory of the UE (e.g., based at least in part on tracking a geolocation of the UE, tracking beam selection for communicating with the UE, tracking reference signals of the UE, and/or the like), and/or the like. In some aspects, the base station may determine which single reference signal the UE is to use for determining the timing information based at least in part on a capability of the UE to support receiving the indication of the single reference signal to use for determining timing information for measuring multiple additional reference signals from multiple cells on a common frequency layer, a deployment configuration of neighbor base stations, a neighbor base station that is expected to have a highest RSRP, and/or the like. As shown by reference number520, the UE may receive one or more reference signals having timing information. For example, the UE may receive one or more SSBs from the additional base stations that provide neighbor cells. In some aspects, the UE may derive timing information from each of the one or more reference signals upon receipt of the one or more reference signals and store the timing information until receiving an indication of which timing information to use for measuring multiple additional reference signals. In some aspects, the UE may store information for the one or more reference signals that the UE can use to derive timing information. The UE may wait to derive the timing information until receipt of an indication of one of the reference signals to use for determining the timing information for measuring the multiple additional reference signals. As shown by reference number525, the UE may receive an indication of the single reference signal to use for determining timing information for measuring multiple additional reference signals from multiple cells (e.g., on a common frequency layer). In some aspects, the serving base station may transmit, and the UE may receive, the indication via RRC signaling, a MAC CE, and/or the like. In some aspects, the indication of the single reference signal includes an identification of the single reference signal from a set of previously received reference signals. The UE may select the timing information that is associated with the single reference signal, may derive the timing information from stored information that is associated with the single reference signal, and/or the like. In some aspects, the UE may determine to receive the single reference signal (e.g., based at least in part on the UE not having timing information associated with the single reference signal). In some aspects, the indication of the single reference signal may include a prioritized list of reference signals from which the UE is to select the single reference signal (e.g., a highest priority reference signal) of the prioritized list of reference signals. 
For example, the UE may select a highest priority reference signal of the prioritized list for which the UE has timing information, or information from which timing information may be derived. As shown by reference number530, the UE may receive reference signals having timing information. For example, the UE may receive the reference signals having timing information based at least in part on determining that the UE does not yet have timing information, or information from which timing may be derived, associated with the single reference signal to use for timing information. In some aspects, the UE may receive the reference signals having timing information to update previously determined timing information associated with a previous occasion of the single reference signal. As shown by reference number535, the UE may receive an indication of reception beam parameters to measure the multiple additional reference signals. In some aspects, the reception beam parameters may indicate whether the UE is to use, for measuring the multiple additional reference signals, a default reception beam width or a narrow reception beam that is narrower than the default reception beam. In some aspects, a narrow reception beam may increase a measured RSRP for a reference signal, so long as the reference signal arrives at the UE within the narrow reception beam. In some aspects, the reception beam parameters may indicate one or more beams to use to measure the multiple additional reference signals. In some aspects, the indication indicates the one or more reception beams based at least in part on a direction relative to a reception beam used to communicate with the serving base station (e.g., a serving cell provided by the serving base station). As shown by reference number540, the UE may obtain and/or store the timing information from the single reference signal. In some aspects, obtaining and/or storing the timing information from the single reference signal may include receiving the single reference signal, deriving or identifying the timing information from the single reference signal, storing the timing information for use in measuring the multiple reference signals, and/or the like. As shown by reference number545, the UE may receive and measure the multiple additional reference signals based at least in part on the timing information. In some aspects, multiple base stations may transmit the multiple additional reference signals. In some aspects, a single base station, of the multiple base stations, may transmit one or more additional reference signals of multiple additional reference signals. In some aspects, the multiple additional reference signals may include multiple CSI-RSs. In some aspects, when measuring the multiple additional reference signals, the UE may measure the multiple reference signals based at least in part on application of the timing information to the multiple additional reference signals, determining RSRPs and/or noise levels for the multiple additional reference signals, and/or the like. As shown by reference number550, the UE may perform a cell reselection process. For example, the UE may choose a neighbor cell for cell reselection based at least in part on measurements (e.g., RSRPs) of the multiple reference signals. Based at least in part on the UE measuring the multiple reference signals from the one or more additional base stations using the same timing information, the UE may conserve computing, power, and communication resources. 
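As an illustrative aside, the selection behavior described around reference numbers 525 through 540 (choose the highest-priority reference signal for which timing information, or information from which timing information may be derived, is available; otherwise receive the reference signal first) might be sketched as follows. The function and variable names here are hypothetical, not identifiers defined by the disclosure.

```python
# Hypothetical sketch of selecting the single timing source from a prioritized list.
from typing import Dict, List, Optional

def derive_timing(raw_info: bytes) -> float:
    # Placeholder derivation; a real UE would compute symbol/slot boundaries here.
    return float(len(raw_info))

def select_timing_source(prioritized_ssb_ids: List[int],
                         stored_timing: Dict[int, float],
                         stored_raw_info: Dict[int, bytes]) -> Optional[int]:
    """Return the SSB ID to use as the single timing source, or None if the UE
    must first receive one of the listed SSBs (compare reference number 530)."""
    for ssb_id in prioritized_ssb_ids:          # list is ordered by priority
        if ssb_id in stored_timing:             # timing already derived and stored
            return ssb_id
        if ssb_id in stored_raw_info:           # timing can still be derived
            stored_timing[ssb_id] = derive_timing(stored_raw_info[ssb_id])
            return ssb_id
    return None                                 # UE must receive the reference signal first

stored = {7: 12.5}                              # timing already held for SSB 7
raw = {3: b"\x01\x02"}                          # derivable information held for SSB 3
print(select_timing_source([3, 7, 11], stored, raw))   # -> 3 (highest-priority available)
print(select_timing_source([11], stored, raw))         # -> None (SSB 11 must be received)
```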
Additionally, or alternatively, based at least in part on the base station providing the indication of which reference signal to use for timing information, the UE may use timing information that is likely (e.g., based at least in part on information known to the base station) to improve an ability of the UE to receive a reference signal from a neighbor base station that is expected to have a highest RSRP. As indicated above,FIG.5is provided as an example. Other examples may differ from what is described with respect toFIG.5. FIG.6is a diagram illustrating an example process600performed, for example, by a UE, in accordance with the present disclosure. Example process600is an example where the UE (e.g., UE120and/or the like) performs operations associated with using timing information from a single reference signal for measurements of multiple reference signals of multiple cells. As shown inFIG.6, in some aspects, process600may include receiving an indication of a single reference signal to use for determining timing information for measuring multiple reference signals from multiple cells on a common frequency layer (block610). For example, the UE (e.g., using receive processor258, controller/processor280, memory282, and/or the like) may receive an indication of a single reference signal to use for determining timing information for measuring multiple reference signals from multiple cells on a common frequency layer, as described above. As further shown inFIG.6, in some aspects, process600may include measuring the multiple reference signals based at least in part on the timing information (block620). For example, the UE (e.g., using receive processor258, transmit processor264, controller/processor280, memory282, and/or the like) may measure the multiple reference signals based at least in part on the timing information, as described above. Process600may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, the single reference signal is a previously received reference signal, and the indication of the single reference signal includes an identification of the single reference signal from a set of previously received reference signals. In a second aspect, alone or in combination with the first aspect, determining timing information includes receiving the single reference signal, deriving or identifying the timing information from the single reference signal, and storing the timing information for use in measuring the multiple reference signals. In a third aspect, alone or in combination with one or more of the first and second aspects, process600includes receiving an additional indication of whether the UE is to use, for measuring the multiple reference signals, a default reception beam or a narrow reception beam that is narrower than the default reception beam, wherein measuring the multiple reference signals includes measuring the multiple reference signals using, based at least in part on the additional indication, the default reception beam or the narrow reception beam. In a fourth aspect, alone or in combination with one or more of the first through third aspects, process600includes receiving an additional indication of one or more reception beams to use for measuring the multiple reference signals. 
In a fifth aspect, alone or in combination with one or more of the first through fourth aspects, the additional indication indicates the one or more reception beams based at least in part on a direction relative to a reception beam used to communicate with a serving cell. In a sixth aspect, alone or in combination with one or more of the first through fifth aspects, the single reference signal includes a single SSB. In a seventh aspect, alone or in combination with one or more of the first through sixth aspects, the multiple reference signals include multiple CSI-RSs transmitted by multiple base stations. In an eighth aspect, alone or in combination with one or more of the first through seventh aspects, receiving the indication of the single reference signal includes receiving a prioritized list of reference signals, the single reference signal is a highest priority reference signal, of the prioritized list of reference signals, for which timing information is available to the UE. In a ninth aspect, alone or in combination with one or more of the first through eighth aspects, measuring the multiple reference signals includes measuring the multiple reference signals based at least in part on application of the timing information to the multiple reference signals, and determining one or more of RSRPs or noise levels for the multiple reference signals. In a tenth aspect, alone or in combination with one or more of the first through ninth aspects, process600includes performing a cell reselection process based at least in part on measurements of the multiple reference signals. In an eleventh aspect, alone or in combination with one or more of the first through tenth aspects, receiving the indication of the single reference signal includes receiving the indication via one or more of RRC signaling or a MAC CE. AlthoughFIG.6shows example blocks of process600, in some aspects, process600may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.6. Additionally, or alternatively, two or more of the blocks of process600may be performed in parallel. FIG.7is a diagram illustrating an example process700performed, for example, by a base station, in accordance with the present disclosure. Example process700is an example where the base station (e.g., base station110and/or the like) performs operations associated with using timing information from a single reference signal for measurements of multiple reference signals of multiple cells. As shown inFIG.7, in some aspects, process700may include determining a single reference signal for a UE to use for determining timing information for measuring multiple reference signals from multiple cells on a common frequency layer (block710). For example, the base station (e.g., using transmit processor220, receive processor238, controller/processor240, memory242, and/or the like) may determine a single reference signal for a UE to use for determining timing information for measuring multiple reference signals from multiple cells on a common frequency layer, as described above. As further shown inFIG.7, in some aspects, process700may include transmitting, to the UE, an indication of the single reference signal (block720). For example, the base station (e.g., using transmit processor220, controller/processor240, memory242, and/or the like) may transmit, to the UE, an indication of the single reference signal, as described above. 
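Under the assumptions named here, the determination in block710might reduce to choosing the neighbor expected to have the highest RSRP, subject to the UE reporting the corresponding capability. The following Python sketch is illustrative only; the inputs expected_rsrp_by_neighbor and ue_supports_indication are assumptions, not parameters defined by the disclosure.

```python
# Hypothetical sketch of the base-station-side determination (block 710 / reference number 515).
from typing import Dict, Optional

def determine_single_reference_signal(expected_rsrp_by_neighbor: Dict[int, float],
                                      ue_supports_indication: bool) -> Optional[int]:
    """Return the neighbor whose SSB should be indicated to the UE, or None if not applicable."""
    if not ue_supports_indication or not expected_rsrp_by_neighbor:
        return None
    # Pick the neighbor expected to have the highest RSRP, e.g., as inferred from the
    # UE's location or trajectory.
    return max(expected_rsrp_by_neighbor, key=expected_rsrp_by_neighbor.get)

chosen = determine_single_reference_signal({101: -92.0, 102: -85.5, 103: -99.0}, True)
print(f"indicate the SSB of neighbor {chosen}")   # the indication could then be sent via RRC or a MAC CE
```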
Process700may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, determining the single reference signal is based at least in part on one or more of a location of the UE, a trajectory of the UE, a capability of the UE to support receiving the indication of the single reference signal to use for determining timing information for measuring multiple reference signals from multiple cells on a common frequency layer, a deployment configuration of neighbor base stations, a neighbor base station that is expected to have a highest RSRP. In a second aspect, alone or in combination with the first aspect, the indication of the single reference signal includes an identification of the single reference signal from a set of reference signals transmitted from one or more base stations. In a third aspect, alone or in combination with one or more of the first and second aspects, process700includes transmitting an additional indication of whether the UE is to use, for measuring the multiple reference signals, a default reception beam or a narrow reception beam that is narrower than the default reception beam. In a fourth aspect, alone or in combination with one or more of the first through third aspects, process700includes transmitting an additional indication of one or more reception beams to use for measuring the multiple reference signals. In a fifth aspect, alone or in combination with one or more of the first through fourth aspects, the additional indication indicates the one or more reception beams based at least in part on a direction relative to a reception beam used to communicate with the UE. In a sixth aspect, alone or in combination with one or more of the first through fifth aspects, the single reference signal includes a single SSB. In a seventh aspect, alone or in combination with one or more of the first through sixth aspects, the multiple reference signals include multiple CSI-RSs transmitted by multiple base stations. In an eighth aspect, alone or in combination with one or more of the first through seventh aspects, transmitting the indication of the single reference signal includes transmitting a prioritized list of reference signals, the single reference signal is a highest priority reference signal, of the prioritized list of reference signals, for which timing information is available to the UE. In a ninth aspect, alone or in combination with one or more of the first through eighth aspects, transmitting the indication of the single reference signal includes transmitting the indication via one or more of RRC signaling or a MAC CE. AlthoughFIG.7shows example blocks of process700, in some aspects, process700may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.7. Additionally, or alternatively, two or more of the blocks of process700may be performed in parallel. FIG.8is a block diagram of an example apparatus800for wireless communication. The apparatus800may be a UE, or a UE may include the apparatus800. In some aspects, the apparatus800includes a reception component802and a transmission component804, which may be in communication with one another (for example, via one or more buses and/or one or more other components). 
As shown, the apparatus800may communicate with another apparatus806(such as a UE, a base station, or another wireless communication device) using the reception component802and the transmission component804. As further shown, the apparatus800may include a communication manager808. In some aspects, the apparatus800may be configured to perform one or more operations described herein in connection withFIG.5. Additionally, or alternatively, the apparatus800may be configured to perform one or more processes described herein, such as process600ofFIG.6. In some aspects, the apparatus800and/or one or more components shown inFIG.8may include one or more components of the UE described above in connection withFIG.2. Additionally, or alternatively, one or more components shown inFIG.8may be implemented within one or more components described above in connection withFIG.2. Additionally, or alternatively, one or more components of the set of components may be implemented at least in part as software stored in a memory. For example, a component (or a portion of a component) may be implemented as instructions or code stored in a non-transitory computer-readable medium and executable by a controller or a processor to perform the functions or operations of the component. The reception component802may receive communications, such as reference signals, control information, data communications, or a combination thereof, from the apparatus806. The reception component802may provide received communications to one or more other components of the apparatus800. In some aspects, the reception component802may perform signal processing on the received communications (such as filtering, amplification, demodulation, analog-to-digital conversion, demultiplexing, deinterleaving, de-mapping, equalization, interference cancellation, or decoding, among other examples), and may provide the processed signals to the one or more other components of the apparatus806. In some aspects, the reception component802may include one or more antennas, a demodulator, a MIMO detector, a receive processor, a controller/processor, a memory, or a combination thereof, of the UE described above in connection withFIG.2. The transmission component804may transmit communications, such as reference signals, control information, data communications, or a combination thereof, to the apparatus806. In some aspects, one or more other components of the apparatus806may generate communications and may provide the generated communications to the transmission component804for transmission to the apparatus806. In some aspects, the transmission component804may perform signal processing on the generated communications (such as filtering, amplification, modulation, digital-to-analog conversion, multiplexing, interleaving, mapping, or encoding, among other examples), and may transmit the processed signals to the apparatus806. In some aspects, the transmission component804may include one or more antennas, a modulator, a transmit MIMO processor, a transmit processor, a controller/processor, a memory, or a combination thereof, of the UE described above in connection withFIG.2. In some aspects, the transmission component804may be co-located with the reception component802in a transceiver. The reception component802may receive an indication of a single reference signal to use for determining timing information for measuring multiple reference signals from multiple cells on a common frequency layer. 
The communication manager808may measure the multiple reference signals based at least in part on the timing information. The reception component802may receive an additional indication of whether the UE is to use, for measuring the multiple reference signals, a default reception beam or a narrow reception beam that is narrower than the default reception beam wherein measuring the multiple reference signals comprises: measuring the multiple reference signals using, based at least in part on the additional indication, the default reception beam or the narrow reception beam. The reception component802may receive an additional indication of one or more reception beams to use for measuring the multiple reference signals. The communication manager808may perform a cell reselection process based at least in part on measurements of the multiple reference signals. The number and arrangement of components shown inFIG.8are provided as an example. In practice, there may be additional components, fewer components, different components, or differently arranged components than those shown inFIG.8. Furthermore, two or more components shown inFIG.8may be implemented within a single component, or a single component shown inFIG.8may be implemented as multiple, distributed components. Additionally, or alternatively, a set of (one or more) components shown inFIG.8may perform one or more functions described as being performed by another set of components shown inFIG.8. FIG.9is a block diagram of an example apparatus900for wireless communication. The apparatus900may be a base station, or a base station may include the apparatus900. In some aspects, the apparatus900includes a reception component902and a transmission component904, which may be in communication with one another (for example, via one or more buses and/or one or more other components). As shown, the apparatus900may communicate with another apparatus906(such as a UE, a base station, or another wireless communication device) using the reception component902and the transmission component904. As further shown, the apparatus900may include a communication manager908. In some aspects, the apparatus900may be configured to perform one or more operations described herein in connection withFIG.5. Additionally, or alternatively, the apparatus900may be configured to perform one or more processes described herein, such as process700ofFIG.7. In some aspects, the apparatus900and/or one or more components shown inFIG.9may include one or more components of the base station described above in connection withFIG.2. Additionally, or alternatively, one or more components shown inFIG.9may be implemented within one or more components described above in connection withFIG.2. Additionally, or alternatively, one or more components of the set of components may be implemented at least in part as software stored in a memory. For example, a component (or a portion of a component) may be implemented as instructions or code stored in a non-transitory computer-readable medium and executable by a controller or a processor to perform the functions or operations of the component. The reception component902may receive communications, such as reference signals, control information, data communications, or a combination thereof, from the apparatus906. The reception component902may provide received communications to one or more other components of the apparatus900. 
In some aspects, the reception component902may perform signal processing on the received communications (such as filtering, amplification, demodulation, analog-to-digital conversion, demultiplexing, deinterleaving, de-mapping, equalization, interference cancellation, or decoding, among other examples), and may provide the processed signals to the one or more other components of the apparatus906. In some aspects, the reception component902may include one or more antennas, a demodulator, a MIMO detector, a receive processor, a controller/processor, a memory, or a combination thereof, of the base station described above in connection withFIG.2. The transmission component904may transmit communications, such as reference signals, control information, data communications, or a combination thereof, to the apparatus906. In some aspects, one or more other components of the apparatus906may generate communications and may provide the generated communications to the transmission component904for transmission to the apparatus906. In some aspects, the transmission component904may perform signal processing on the generated communications (such as filtering, amplification, modulation, digital-to-analog conversion, multiplexing, interleaving, mapping, or encoding, among other examples), and may transmit the processed signals to the apparatus906. In some aspects, the transmission component904may include one or more antennas, a modulator, a transmit MIMO processor, a transmit processor, a controller/processor, a memory, or a combination thereof, of the base station described above in connection withFIG.2. In some aspects, the transmission component904may be co-located with the reception component902in a transceiver. The communication manager908may determine a single reference signal for a user equipment (UE) to use for determining timing information for measuring multiple reference signals from multiple cells on a common frequency layer. The transmission component904may transmit, to the UE, an indication of the single reference signal. The transmission component904may transmit an additional indication of whether the UE is to use, for measuring the multiple reference signals, a default reception beam or a narrow reception beam that is narrower than the default reception beam. The transmission component904may transmit an additional indication of one or more reception beams to use for measuring the multiple reference signals. The number and arrangement of components shown inFIG.9are provided as an example. In practice, there may be additional components, fewer components, different components, or differently arranged components than those shown inFIG.9. Furthermore, two or more components shown inFIG.9may be implemented within a single component, or a single component shown inFIG.9may be implemented as multiple, distributed components. Additionally, or alternatively, a set of (one or more) components shown inFIG.9may perform one or more functions described as being performed by another set of components shown inFIG.9. The following provides an overview of some Aspects of the present disclosure: Aspect 1: A method of wireless communication performed by a user equipment (UE), comprising: receiving an indication of a single reference signal to use for determining timing information for measuring multiple reference signals from multiple cells on a common frequency layer; and measuring the multiple reference signals based at least in part on the timing information. 
Aspect 2: The method of Aspect 1, wherein the single reference signal is a previously received reference signal, and wherein the indication of the single reference signal comprises: an identification of the single reference signal from a set of previously received reference signals. Aspect 3: The method of any of Aspects 1-2, wherein determining timing information comprises: receiving the single reference signal, deriving or identifying the timing information from the single reference signal, and storing the timing information for use in measuring the multiple reference signals. Aspect 4: The method of any of Aspects 1-3, further comprising: receiving an additional indication of whether the UE is to use, for measuring the multiple reference signals, a default reception beam or a narrow reception beam that is narrower than the default reception beam, wherein measuring the multiple reference signals comprises: measuring the multiple reference signals using, based at least in part on the additional indication, the default reception beam or the narrow reception beam. Aspect 5: The method of any of Aspects 1-4, further comprising: receiving an additional indication of one or more reception beams to use for measuring the multiple reference signals. Aspect 6: The method of Aspect 5, wherein the additional indication indicates the one or more reception beams based at least in part on a direction relative to a reception beam used to communicate with a serving cell. Aspect 7: The method of any of Aspects 1-6, wherein the single reference signal comprises: a single synchronization signal block. Aspect 8: The method of any of Aspects 1-6, wherein the multiple reference signals comprise: multiple channel state information reference signals transmitted by multiple base stations. Aspect 9: The method of any of Aspects 1-8, wherein receiving the indication of the single reference signal comprises: receiving a prioritized list of reference signals, wherein the single reference signal is a highest priority reference signal, of the prioritized list of reference signals, for which timing information is available to the UE. Aspect 10: The method of any of Aspects 1-9, wherein measuring the multiple reference signals comprises: measuring the multiple reference signals based at least in part on application of the timing information to the multiple reference signals; and determining one or more of reference signal receive powers or noise levels for the multiple reference signals. Aspect 11: The method of any of Aspects 1-10, further comprising: performing a cell reselection process based at least in part on measurements of the multiple reference signals. Aspect 12: The method of any of Aspects 1-11, wherein receiving the indication of the single reference signal comprises: receiving the indication via one or more of radio resource control signaling or a medium access control element. Aspect 13: A method of wireless communication performed by a base station, comprising: determining a single reference signal for a user equipment (UE) to use for determining timing information for measuring multiple reference signals from multiple cells on a common frequency layer; and transmitting, to the UE, an indication of the single reference signal. 
Aspect 14: The method of Aspect 13, wherein determining the single reference signal is based at least in part on one or more of: a location of the UE, a trajectory of the UE, a capability of the UE to support receiving the indication of the single reference signal to use for determining timing information for measuring multiple reference signals from multiple cells on a common frequency layer, a deployment configuration of neighbor base stations, or a neighbor base station that is expected to have a highest reference signal receive power. Aspect 15: The method of any of Aspects 13-14, wherein the indication of the single reference signal comprises: an identification of the single reference signal from a set of reference signals transmitted from one or more base stations. Aspect 16: The method of any of Aspects 13-15, further comprising: transmitting an additional indication of whether the UE is to use, for measuring the multiple reference signals, a default reception beam or a narrow reception beam that is narrower than the default reception beam. Aspect 17: The method of any of Aspects 13-16, further comprising: transmitting an additional indication of one or more reception beams to use for measuring the multiple reference signals. Aspect 18: The method of Aspect 17, wherein the additional indication indicates the one or more reception beams based at least in part on a direction relative to a reception beam used to communicate with the UE. Aspect 19: The method of any of Aspects 13-18, wherein the single reference signal comprises: a single synchronization signal block. Aspect 20: The method of any of Aspects 13-18, wherein the multiple reference signals comprise: multiple channel state information reference signals transmitted by multiple base stations. Aspect 21: The method of Aspect 13, wherein transmitting the indication of the single reference signal comprises: transmitting a prioritized list of reference signals, wherein the single reference signal is a highest priority reference signal, of the prioritized list of reference signals, for which timing information is available to the UE. Aspect 22: The method of any of Aspects 13-21, wherein transmitting the indication of the single reference signal comprises: transmitting the indication via one or more of radio resource control signaling or a medium access control control element. Aspect 23: An apparatus for wireless communication at a device, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform the method of one or more Aspects of Aspects 1-22. Aspect 24: A device for wireless communication, comprising a memory and one or more processors coupled to the memory, the memory and the one or more processors configured to perform the method of one or more Aspects of Aspects 1-22. Aspect 25: An apparatus for wireless communication, comprising at least one means for performing the method of one or more Aspects of Aspects 1-22. Aspect 26: A non-transitory computer-readable medium storing code for wireless communication, the code comprising instructions executable by a processor to perform the method of one or more Aspects of Aspects 1-22. 
Aspect 27: A non-transitory computer-readable medium storing a set of instructions for wireless communication, the set of instructions comprising one or more instructions that, when executed by one or more processors of a device, cause the device to perform the method of one or more Aspects of Aspects 1-22. The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the aspects to the precise forms disclosed. Modifications and variations may be made in light of the above disclosure or may be acquired from practice of the aspects. As used herein, the term “component” is intended to be broadly construed as hardware and/or a combination of hardware and software. “Software” shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, and/or functions, among other examples, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. As used herein, a processor is implemented in hardware and/or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware and/or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the aspects. Thus, the operation and behavior of the systems and/or methods were described herein without reference to specific software code—it being understood that software and hardware can be designed to implement the systems and/or methods based, at least in part, on the description herein. As used herein, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, not equal to the threshold, or the like. Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various aspects. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various aspects includes each dependent claim in combination with every other claim in the claim set. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c). No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. 
Also, as used herein, the articles “a” and “an” are intended to include one or more items and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the terms “set” and “group” are intended to include one or more items (e.g., related items, unrelated items, or a combination of related and unrelated items), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”).
Throughout the drawings, it should be noted that like reference numbers are used to depict the same or similar elements, features, and structures. DETAILED DESCRIPTION The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of various embodiments of the disclosure as defined by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the various embodiments described herein can be made without departing from the scope and spirit of the disclosure. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness. The terms and words used in the following description and claims are not limited to the bibliographical meanings, but, are merely used by the inventor to enable a clear and consistent understanding of the disclosure. Accordingly, it should be apparent to those skilled in the art that the following description of various embodiments of the disclosure is provided for illustration purpose only and not for the purpose of limiting the disclosure as defined by the appended claims and their equivalents. It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces. In describing embodiments of the disclosure, descriptions related to technical contents well-known in the art and not associated directly with the disclosure will be omitted. Such an omission of unnecessary descriptions is intended to prevent obscuring of the main idea of the disclosure and more clearly transfer the main idea. For the same reason, in the accompanying drawings, some elements may be exaggerated, omitted, or schematically illustrated. Further, the size of each element does not completely reflect the actual size. In the drawings, identical or corresponding elements are provided with identical reference numerals. The advantages and features of the disclosure and ways to achieve them will be apparent by making reference to embodiments as described below in detail in conjunction with the accompanying drawings. However, the disclosure is not limited to the embodiments set forth below, but may be implemented in various different forms. The following embodiments are provided only to completely disclose the disclosure and inform those skilled in the art of the scope of the disclosure, and the disclosure is defined only by the scope of the appended claims. Throughout the specification, the same or like reference numerals designate the same or like elements. Here, it will be understood that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart block or blocks. 
These computer program instructions may also be stored in a computer usable or computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer usable or computer-readable memory produce an article of manufacture including instruction means that implement the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks. Further, each block of the flowchart illustrations may represent a module, segment, or portion of code, which includes one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of the order shown. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. As used herein, the “unit” refers to a software element or a hardware element, such as a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC), which performs a predetermined function. However, the “unit” does not always have a meaning limited to software or hardware. The “unit” may be constructed either to be stored in an addressable storage medium or to execute one or more processors. Therefore, the “unit” includes, for example, software elements, object-oriented software elements, class elements or task elements, processes, functions, properties, procedures, sub-routines, segments of a program code, drivers, firmware, micro-codes, circuits, data, database, data structures, tables, arrays, and parameters. The elements and functions provided by the “unit” may be either combined into a smaller number of elements, or a “unit”, or divided into a larger number of elements, or a “unit”. Moreover, the elements and “units” may be implemented to reproduce one or more central processing units (CPUs) within a device or a security multimedia card. Further, the “unit” in the embodiments may include one or more processors. Hereinafter, the operation principle of the disclosure will be described in detail in conjunction with the accompanying drawings. In the following description of the disclosure, a detailed description of known functions or configurations incorporated herein will be omitted when it may make the subject matter of the disclosure rather unclear. The terms which will be described below are terms defined in consideration of the functions in the disclosure, and may be different according to users, intentions of the users, or customs. Therefore, the definitions of the terms should be made based on the contents throughout this specification. Hereinafter, the base station is an entity for performing resource allocation to the terminal, and may be at least one of a gNode B, an eNode B, a Node B, a base station (BS), a wireless access unit, a base station controller, or a node in a network. 
The terminal may include a user equipment (UE), a mobile station (MS), a cellular phone, a smartphone, a computer, or a multimedia system capable of performing communication functions. The base station and the terminal are not limited to the above examples. Hereinafter, the disclosure describes a technique for a terminal to receive broadcast information from a base station in a wireless communication system. The disclosure relates to a communication technique for merging, with IoT (Internet of Things) technology, a 5G (5thgeneration) communication system for supporting a higher data rate, subsequent to a 4G (4thgeneration) system, and a system thereof. The disclosure may be applied to an intelligent service (for example, smart home, smart building, smart city, smart car or connected car, healthcare, digital education, retail, security and safety-related services, and the like), based on a 5G communication technology and an IoT-related technology. In the following description, terms referring to broadcast information, terms referring to control information, terms referring to communication coverage, terms referring to a change in the state (e.g., event), terms referring to network entities, terms referring to messages, terms referring to elements of a device, and the like are illustrative words for the convenience of explanation. Accordingly, the disclosure is not limited to the terms described below, and other terms having equivalent technical meanings may be used. Hereinafter, terms and names defined in the 3rd-generation partnership project long-term evolution (3GPP LTE) standard will be used for the convenience of explanation. However, the disclosure is not limited to the above terms and names, and may be applied to systems conforming to other standards in the same manner. The wireless communication system is advancing to a broadband wireless communication system for providing high-speed and high-quality packet data services using communication standards, such as high-speed packet access (HSPA) of 3GPP, LTE {long-term evolution or evolved universal terrestrial radio access (E-UTRA)}, LTE-Advanced (LTE-A), LTE-Pro, high-rate packet data (HRPD) of 3GPP2, ultra-mobile broadband (UMB), IEEE 802.16e, and the like, as well as typical voice-based services. As a typical example of the broadband wireless communication system, an LTE system employs an orthogonal frequency division multiplexing (OFDM) scheme in a downlink (DL) and employs a single carrier frequency division multiple access (SC-FDMA) scheme in an uplink (UL). The uplink refers to a radio link through which the terminal {user equipment (UE) or mobile station (MS)} transmits data or control signals to the base station (BS) (eNode B), and downlink refers to a radio link through which the base station transmits data or control signals to the terminal. The above multiple access scheme separates data or control information of the respective users by allocating and operating time-frequency resources to transmit the data or control information for each user so as to avoid overlapping each other (that is, so as to establish orthogonality). Since a 5G communication system, which is a communication system subsequent to LTE, must freely reflect various requirements of users, service providers, and the like, services satisfying various requirements must be supported. 
The services considered for the 5G communication system include enhanced mobile broadband (eMBB) communication, massive machine-type communication (mMTC), ultra-reliability low-latency communication (URLLC), and the like. According to some embodiments, the eMBB aims at providing a data rate higher than that supported by existing LTE, LTE-A, or LTE-Pro. For example, in the 5G communication system, the eMBB must provide a peak data rate of 20 Gbps in the downlink and a peak data rate of 10 Gbps in the uplink for a single base station. Furthermore, the eMBB must provide an increased user perceived data rate to the terminal. In order to satisfy such requirements, transmission/reception technologies including a further enhanced multi-input multi-output (MIMO) transmission technique are required to be improved. In addition, the data rate required for the 5G communication system may be obtained using a frequency bandwidth of more than 20 MHz in a frequency band of 3 to 6 GHz or 6 GHz or more, instead of the band of 2 GHz used in the current LTE. In addition, mMTC is being considered to support application services such as the Internet of Things (IoT) in the 5G system. The mMTC has requirements, such as support of connection of large numbers of terminals in the cell, enhancement of the terminal coverage, improved battery time, and a reduction in the cost of a terminal, in order to effectively provide the Internet of Things. Since the Internet of Things provides communication functions to various sensors and various devices, it must support a large number of terminals (e.g., 1,000,000 terminals/km²) in the cell. In addition, the terminals supporting the mMTC may require wider coverage than that of other services provided by the 5G communication system because the terminals are likely to be located in a shadow area, such as a basement of a building, which is not covered by a cell due to the nature of the service. The terminals supporting the mMTC are required to be inexpensive, and may require a very long battery lifetime because it is difficult to frequently replace the battery of the terminal. Lastly, the URLLC, which is a cellular-based mission-critical wireless communication service, is used for remote control for robots or machines, industrial automation, unmanned aerial vehicles, remote health care, emergency alert, or the like, and must provide communication with ultra-low latency and ultra-reliability. For example, a service supporting the URLLC must satisfy an air interface latency of less than 0.5 ms, and also requires a packet error rate of 10⁻⁵ or less. Therefore, for the services supporting the URLLC, the 5G system must provide a transmit time interval (TTI) shorter than those of other services, and also requires a design for allocating a large amount of resources in the frequency band. However, the above-mentioned mMTC, URLLC, and eMBB are only examples of different types of services, and the disclosure is not limited to the types of services described above. The above-mentioned services considered in the 5G communication system must converge to a single framework to then be provided. That is, the respective services are preferably integrated into a single system to then be controlled and transmitted, instead of operating the services independently, for efficient resource management and control. 
In addition, although the embodiments of the disclosure will be described below by way of example as LTE, LTE-A, LTE-Pro, or NR systems, the embodiments of the disclosure are able to be applied to other communication systems having similar technical backgrounds or channel forms. Further, the embodiments of the disclosure are able to be applied to other communication systems through some modifications thereof without departing from the scope of the disclosure according to judgment by those skilled in the art. The disclosure relates to a method and an apparatus for repeatedly transmitting data and control signals between a plurality of transmission nodes and terminals performing cooperative communication to improve communication reliability. According to the disclosure, in the case where network cooperative communication is used in a wireless communication system, the reliability of data/control signals received by the terminal is able to be improved. Hereinafter, a frame structure of a 5G system will be described in more detail with reference to the accompanying drawings. FIG.1is a diagram illustrating the time-frequency domain transmission structure of a subframe1-10of LTE, LTE-A, NR, or wireless communication systems similar thereto according to an embodiment of the disclosure. Referring toFIG.1,FIG.1shows a basic structure of a time-frequency domain that is a wireless resource domain for transmitting data or control channels in a 5G system. Referring toFIG.1, the horizontal axis represents a time domain, and the vertical axis represents a frequency domain. The basic unit of a resource in the time-frequency domain is a resource element (RE)1-01, which may be defined as one orthogonal frequency division multiplexing (OFDM) symbol1-02on the time axis and one subcarrier1-03on the frequency axis. Consecutive N_sc^RB (e.g., 12) REs may constitute one resource block (RB)1-04in the frequency domain. FIG.2is a diagram illustrating a slot structure considered in a 5G system according to an embodiment of the disclosure. Referring toFIG.2,FIG.2illustrates an example of the structure of a frame2-00, a subframe2-01, and a slot2-02. One frame2-00may be defined as 10 ms. One subframe2-01may be defined as 1 ms, and thus, one frame2-00may include a total of 10 subframes2-01. One slot2-02or2-03may be defined as 14 OFDM symbols {that is, the number of symbols per slot (N_symb^slot)=14}. One subframe2-01may include one or more slots2-02and2-03, and the number of slots2-02and2-03for each subframe2-01may vary depending on a configuration value μ of subcarrier spacing2-04or2-05. The example inFIG.2shows the case of μ=0 (2-04) and the case of μ=1 (2-05) as a configuration value of subcarrier spacing. In the case of μ=0 (2-04), one subframe2-01may include one slot2-02, and in the case of μ=1 (2-05), one subframe2-01may include two slots2-03. That is, the number of slots for each subframe (N_slot^subframe,μ) may vary depending on the configuration value μ of subcarrier spacing, and the number of slots for each frame (N_slot^frame,μ) may vary according thereto. N_slot^subframe,μ and N_slot^frame,μ according to each configuration value μ of subcarrier spacing may be defined as shown in Table 1 below.

TABLE 1
μ    N_symb^slot    N_slot^frame,μ    N_slot^subframe,μ
0    14             10                1
1    14             20                2
2    14             40                4
3    14             80                8
4    14             160               16
5    14             320               32

In NR, one component carrier (CC) or serving cell may include up to 250 RBs. 
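The relationships captured in Table 1 can be summarized compactly: a slot carries 14 OFDM symbols, a 1 ms subframe carries 2^μ slots, and a 10 ms frame carries 10·2^μ slots (the subcarrier spacing for configuration μ in NR is commonly 15 kHz·2^μ). The short Python sketch below merely reproduces the table values for illustration; it is not part of the disclosure.

```python
# Illustrative computation of the Table 1 values for each subcarrier spacing configuration mu.
N_SYMB_PER_SLOT = 14        # symbols per slot (normal cyclic prefix)
SUBFRAMES_PER_FRAME = 10    # a 10 ms frame holds ten 1 ms subframes

def slots_per_subframe(mu: int) -> int:
    # One subframe holds 2**mu slots for subcarrier spacing configuration mu.
    return 2 ** mu

def slots_per_frame(mu: int) -> int:
    return SUBFRAMES_PER_FRAME * slots_per_subframe(mu)

for mu in range(6):
    print(f"mu={mu}: subcarrier spacing={15 * 2 ** mu} kHz, "
          f"N_symb^slot={N_SYMB_PER_SLOT}, "
          f"N_slot^frame={slots_per_frame(mu)}, "
          f"N_slot^subframe={slots_per_subframe(mu)}")
```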
In NR, one component carrier (CC) or serving cell may include up to 250 RBs. Therefore, in the case where a terminal always receives signals through the overall bandwidth of a serving cell, as in LTE, the power consumption of the terminal may be large. In order to solve this problem, the base station may configure one or more bandwidth parts (BWPs) for the terminal such that the terminal may change the reception area in the cell. In NR, the base station may configure an "initial BWP", which is the bandwidth of control resource set (CORESET) #0 {or the common search space (CSS)}, for the terminal through an MIB. Thereafter, the base station may configure the initial BWP (first BWP) of the terminal through radio resource control (RRC) signaling, and may transmit a notification of one or more pieces of BWP configuration information that may be indicated through downlink control information (DCI) later. Afterwards, the base station may transmit a BWP ID through DCI, thereby indicating the band to be used by the terminal. If the terminal does not receive the DCI in the currently allocated BWP for a specific period of time or more, the terminal returns to a "default BWP" and attempts to receive the DCI.

FIG. 3 illustrates an example of configuration of a bandwidth part (BWP) in a wireless communication system according to an embodiment of the disclosure.

FIG. 3 is a diagram illustrating an example of configuration of a bandwidth part in a 5G communication system. Referring to FIG. 3, FIG. 3 illustrates an example in which a UE bandwidth 3-00 is configured to have two bandwidth parts, that is, bandwidth part #1 (3-05) and bandwidth part #2 (3-10). The base station may configure one or more bandwidth parts for the terminal, and may configure information on each bandwidth part as shown in Table 2 below.

TABLE 2
Configuration information 1: Bandwidth of the bandwidth part (number of PRBs constituting the bandwidth part)
Configuration information 2: Frequency location of the bandwidth part (this information may be an offset value relative to a reference point; the reference point may be, for example, the center frequency of the carrier, a synchronization signal, a synchronization signal raster, etc.)
Configuration information 3: Numerology of the bandwidth part (e.g., subcarrier spacing, cyclic prefix length, etc.)
Others

In addition to the configuration information described in Table 2, various parameters related to the bandwidth part may be configured for the terminal. The base station may transmit the above information to the terminal through higher layer signaling, for example, RRC signaling. At least one of the configured bandwidth parts may be activated. Information on whether or not to activate a configured bandwidth part may be transmitted from the base station to the terminal semi-statically through RRC signaling or dynamically through a MAC control element (CE) or DCI.

The configuration of the bandwidth part supported by the above-described 5G communication system may be used for various purposes. For example, in the case where the bandwidth supported by the terminal is smaller than the system bandwidth, the bandwidth supported by the terminal may be supported by configuring the bandwidth part. For example, in Table 2, the frequency location (configuration information 2) of the bandwidth part may be configured for the terminal, so that the terminal may transmit and receive data at a specific frequency location within the system bandwidth.
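As a rough illustration of the per-BWP configuration items of Table 2 and of the activation signaling options mentioned above, the following Python sketch models a bandwidth-part record. All class and field names are invented for illustration and do not correspond to any signaling structure.

```python
# Illustrative sketch of the per-BWP configuration described in Table 2.
from dataclasses import dataclass
from enum import Enum

class ActivationSignaling(Enum):
    RRC = "semi-static (RRC signaling)"
    MAC_CE = "MAC control element"
    DCI = "downlink control information"

@dataclass
class BandwidthPartConfig:
    bwp_id: int
    num_prbs: int                 # Configuration information 1: bandwidth in PRBs
    frequency_offset_prbs: int    # Configuration information 2: offset from a reference point
    subcarrier_spacing_khz: int   # Configuration information 3: numerology
    cyclic_prefix: str = "normal"

# A base station could configure several BWPs and activate at most one of them
# for the terminal, e.g. via RRC, a MAC CE, or DCI, as discussed above.
configured = [
    BandwidthPartConfig(bwp_id=1, num_prbs=51, frequency_offset_prbs=0, subcarrier_spacing_khz=15),
    BandwidthPartConfig(bwp_id=2, num_prbs=24, frequency_offset_prbs=100, subcarrier_spacing_khz=30),
]
active = configured[0]  # activation indicated, for example, through ActivationSignaling.DCI
```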
As another example, the base station may configure a plurality of bandwidth parts for the terminal for the purpose of supporting different numerologies. For example, in order to support a certain terminal in transmitting and receiving data using both a subcarrier spacing of 15 kHz and a subcarrier spacing of 30 kHz, two bandwidth parts may be configured so as to use subcarrier spacings of 15 kHz and 30 kHz, respectively. Different bandwidth parts may be frequency-division-multiplexed (FDM), and in the case where data is to be transmitted and received with a specific subcarrier spacing, the bandwidth part configured with the corresponding subcarrier spacing may be activated.

As another example, the base station may configure bandwidth parts having different bandwidths for the terminal for the purpose of reducing power consumption by the terminal. For example, if the terminal supports a very large bandwidth, e.g., a bandwidth of 100 MHz, and transmits and receives data only in the corresponding bandwidth, it may cause a large amount of power consumption. In particular, in terms of power consumption, it is very inefficient for the terminal to monitor an unnecessary downlink control channel over a large bandwidth of 100 MHz in the absence of traffic. Therefore, in order to reduce power consumption by the terminal, the base station may configure a bandwidth part having a relatively small bandwidth, for example, a 20 MHz bandwidth part, for the terminal. The terminal may perform a monitoring operation in the 20 MHz bandwidth part in the absence of traffic, and if data is produced, the terminal may transmit and receive data using the 100 MHz bandwidth part according to the indication from the base station.

FIG. 4 is a diagram illustrating an example of indication and switching of a bandwidth part in a wireless communication system according to an embodiment.

FIG. 4 is a diagram illustrating a method of dynamically changing the configuration of a bandwidth part according to an embodiment of the disclosure. Referring to FIG. 4, as described in Table 2 above, the base station may configure one or more bandwidth parts for the terminal, and may transmit, to the terminal, information on the bandwidth of the bandwidth part, the frequency location of the bandwidth part, the numerology of the bandwidth part, or the like, as the configuration for each bandwidth part. FIG. 4 illustrates an example in which two bandwidth parts, i.e., bandwidth part #1 (BWP #1) 4-05 and bandwidth part #2 (BWP #2) 4-10, are configured within a terminal bandwidth 4-00 for a terminal. One or more bandwidth parts may be activated in the configured bandwidth, and an example in which one bandwidth part is activated is considered in FIG. 4. In FIG. 4, bandwidth part #1 (4-05) is in an active state among the bandwidth parts configured in slot #0 (4-25), and the terminal may monitor a physical downlink control channel (PDCCH) in control resource set #1 (4-45) configured in bandwidth part #1 (4-05), and may transmit and receive data 4-55 in bandwidth part #1 (4-05). The control resource set in which the terminal monitors the PDCCH may differ according to the bandwidth part that is activated among the configured bandwidth parts, and the bandwidth in which the terminal monitors the PDCCH may vary according thereto.

The base station may further transmit, to the terminal, an indicator for switching the configuration of the bandwidth part. "Switching" the configuration of the bandwidth part may be regarded as the operation of activating a specific bandwidth part (for example, switching the activation from bandwidth part A to bandwidth part B).
The base station may transmit a configuration switching indicator to the terminal in a specific slot, and the terminal may receive the configuration switching indicator from the base station, may then apply the configuration changed according to the configuration switching indicator at a specific time to determine the bandwidth part to be activated, and may monitor the PDCCH in the control resource set configured in the activated bandwidth part.

In FIG. 4, the base station may transmit, to the terminal, a configuration switching indicator 4-15 for instructing the terminal to switch the activated bandwidth part from the existing bandwidth part #1 (4-05) to bandwidth part #2 (4-10) in slot #1 (4-30). Upon receiving the indicator, the terminal may activate bandwidth part #2 (4-10) according to the content of the indicator. In this case, a transition time 4-20 for switching the bandwidth part may be required, and the time for switching and applying the bandwidth part to be activated may be determined according thereto. FIG. 4 illustrates the case in which a transition time 4-20 of one slot elapses after receiving the configuration switching indicator 4-15. Data may not be transmitted and received during the transition time 4-20 (4-60). Accordingly, bandwidth part #2 (4-10) is activated in slot #2 (4-35), so that control channels and data may be transmitted and received in the corresponding bandwidth part.

The base station may preconfigure one or more bandwidth parts for the terminal using higher layer signaling (e.g., RRC signaling), and may indicate activation in such a manner that the configuration switching indicator 4-15 is mapped to one of the bandwidth parts preconfigured by the base station. For example, an indicator of log2 N bits may be used to select one of N preconfigured bandwidth parts. An example of indicating configuration information on a bandwidth part using a 2-bit indicator is shown in Table 3 below.

TABLE 3
Indicator value    Bandwidth part configuration
00                 Bandwidth configuration A configured through higher layer signaling
01                 Bandwidth configuration B configured through higher layer signaling
10                 Bandwidth configuration C configured through higher layer signaling
11                 Bandwidth configuration D configured through higher layer signaling

The configuration switching indicator 4-15 for the bandwidth part described in FIG. 4 may be transmitted from the base station to the terminal using medium access control (MAC) control element (CE) signaling or L1 signaling (e.g., common DCI, group-common DCI, or terminal-specific DCI).

According to the configuration switching indicator 4-15 for the bandwidth part described in FIG. 4, the time at which activation of the bandwidth part is applied may be determined as follows. The time at which the switching of the configuration is applied may follow a predefined value (e.g., applying the switching of the configuration N (=1) slots after receiving the configuration switching indicator), may be configured by the base station for the terminal using higher layer signaling (e.g., RRC signaling), or may be included, in part, in the content of the configuration switching indicator 4-15 to then be transmitted. Alternatively, the time at which the switching of the configuration is applied may be determined by a combination of the above-described methods. After receiving the configuration switching indicator 4-15 for the bandwidth part, the terminal may apply the switched configuration from the time obtained by the above-described methods.
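The selection of a preconfigured bandwidth part by the configuration switching indicator and the delayed application of the switch can be sketched as follows. This is a simplified illustration; the function names, and the modeling of the application time as a fixed number of transition slots, are assumptions of the sketch.

```python
# Illustrative sketch of bandwidth-part switching via the configuration switching
# indicator (Table 3) with a transition time as in FIG. 4.
import math

def indicator_bit_width(num_configured_bwps: int) -> int:
    # An indicator of log2(N) bits can select one of N preconfigured bandwidth parts.
    return max(1, math.ceil(math.log2(num_configured_bwps)))

def apply_bwp_switch(current_slot: int, indicator_value: int, configured_bwps,
                     transition_slots: int = 1):
    """Return (slot in which the newly indicated BWP becomes active, that BWP)."""
    new_bwp = configured_bwps[indicator_value]
    activation_slot = current_slot + transition_slots
    return activation_slot, new_bwp

# Example mirroring FIG. 4: four BWPs A-D are preconfigured (2-bit indicator per
# Table 3); indicator value 01 received in slot #1 with a one-slot transition
# makes BWP B active from slot #2, and no data is transferred during the transition.
slot, bwp = apply_bwp_switch(current_slot=1, indicator_value=0b01,
                             configured_bwps=["A", "B", "C", "D"])
print(indicator_bit_width(4), slot, bwp)   # 2 2 B
```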
Hereinafter, a downlink control channel in a 5G communication system will be described in more detail with reference to the accompanying drawings.

FIG. 5 is a diagram illustrating an example of a control resource set configuration in a downlink control channel in a wireless communication system according to an embodiment.

FIG. 5 is a diagram illustrating an example of a control resource set (CORESET) in which a downlink control channel is transmitted in a 5G wireless communication system according to an embodiment of the disclosure. Referring to FIG. 5, FIG. 5 illustrates an example in which two control resource sets {control resource set #1 (5-01) and control resource set #2 (5-02)} are configured in a UE bandwidth part 5-10 on the frequency axis and one slot 5-20 on the time axis. The control resource set 5-01 or 5-02 may be configured in a specific frequency resource 5-03 within the entire UE bandwidth part 5-10 on the frequency axis. The control resource set 5-01 or 5-02 may be configured using one or more OFDM symbols on the time axis, and this may be defined as the control resource set duration 5-04. In the example shown in FIG. 5, control resource set #1 (5-01) is configured to have a control resource set duration of two symbols, and control resource set #2 (5-02) is configured to have a control resource set duration of one symbol.

The control resource sets in 5G described above may be configured for the terminal by the base station through higher layer signaling {e.g., system information, a master information block (MIB), or radio resource control (RRC) signaling}. Configuring a control resource set for the terminal means providing the terminal with information such as a control resource set identity, the frequency location of the control resource set, the symbol duration of the control resource set, and the like. For example, the information may include the items shown in Table 4.

TABLE 4
ControlResourceSet ::= SEQUENCE {
    -- Corresponds to L1 parameter 'CORESET-ID'
    controlResourceSetId        ControlResourceSetId,
        (control region identity)
    frequencyDomainResources    BIT STRING (SIZE (45)),
        (frequency domain resource allocation information)
    duration                    INTEGER (1..maxCoReSetDuration),
        (time domain resource allocation information)
    cce-REG-MappingType         CHOICE {
        (CCE-to-REG mapping type)
        interleaved                 SEQUENCE {
            reg-BundleSize              ENUMERATED {n2, n3, n6},
                (REG bundle size)
            precoderGranularity         ENUMERATED {sameAsREG-bundle, allContiguousRBs},
            interleaverSize             ENUMERATED {n2, n3, n6},
                (interleaver size)
            shiftIndex                  INTEGER (0..MaxNrofPhysicalResourceBlocks-1)    OPTIONAL
                (interleaver shift)
        },
        nonInterleaved              NULL
    },
    tci-StatesPDCCH             SEQUENCE (SIZE (1..maxNrofTCI-StatesPDCCH)) OF TCI-StateId    OPTIONAL,
        (QCL configuration information)
    tci-PresentInDCI            ENUMERATED {enabled}    OPTIONAL    -- Need S
}

In Table 4, the "tci-StatesPDCCH" (simply referred to as a "transmission configuration indication (TCI) state") configuration information includes information on one or more synchronization signal (SS)/physical broadcast channel (PBCH) block indexes or channel state information reference signal (CSI-RS) indexes having a QCL (quasi co-located) relationship with the demodulation reference signal (DMRS) transmitted in the corresponding control resource set.
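For illustration only, the CORESET configuration items of Table 4 can be modeled as a simple record, as in the following sketch. This is not the ASN.1 structure itself; the Python names merely mirror the fields listed above, and the simplifications (e.g., representing the non-interleaved mapping by an absent interleaving record) are assumptions of the sketch.

```python
# Illustrative sketch mirroring the CORESET configuration fields of Table 4.
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class InterleavedMapping:
    reg_bundle_size: int          # one of {2, 3, 6}
    interleaver_size: int         # one of {2, 3, 6}
    shift_index: Optional[int] = None

@dataclass
class ControlResourceSet:
    control_resource_set_id: int
    frequency_domain_resources: str      # 45-bit string (see Table 4)
    duration: int                        # 1..maxCoReSetDuration OFDM symbols
    interleaved: Optional[InterleavedMapping] = None   # None => non-interleaved CCE-to-REG mapping
    precoder_granularity: str = "sameAsREG-bundle"
    tci_states_pdcch: List[int] = field(default_factory=list)  # QCL configuration information
    tci_present_in_dci: bool = False

coreset1 = ControlResourceSet(control_resource_set_id=1,
                              frequency_domain_resources="1" * 6 + "0" * 39,
                              duration=2,
                              tci_states_pdcch=[0])
```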
Now, methods for allocating time and frequency resources for the transmission of data in NR will be described. NR provides the following detailed frequency domain resource allocation (FD-RA) methods, in addition to frequency domain resource candidate allocation through BWP indication.

FIG. 6 is a diagram illustrating an example of PDSCH frequency domain resource allocation in a wireless communication system according to an embodiment of the disclosure.

FIG. 6 illustrates three frequency domain resource allocation methods, namely type 0 (6-00), type 1 (6-05), and dynamic switch 6-10, which may be configured through a higher layer in NR.

Referring to FIG. 6, in the case where a terminal is configured to use only resource type 0 through higher layer signaling (6-00), some downlink control information (DCI) for allocating PDSCHs to the terminal has a bitmap of N_RBG bits. The conditions for this will be described later. In this case, N_RBG indicates the number of resource block groups (RBGs) determined, as shown in Table 5 below, according to the size of the BWP allocated by the BWP indicator and the higher layer parameter "rbg-Size", and data is transmitted in the RBGs indicated as "1" in the bitmap.

TABLE 5
Bandwidth Part Size    Configuration 1    Configuration 2
1-36                   2                  4
37-72                  4                  8
73-144                 8                  16
145-275                16                 16

In the case where the terminal is configured to use only resource type 1 through higher layer signaling (6-05), some DCI for allocating PDSCHs to the terminal has frequency domain resource allocation information including ⌈log2(N_RB^DL,BWP(N_RB^DL,BWP+1)/2)⌉ bits. The conditions for this will be described again later. The base station may thereby configure the starting VRB 6-20 and the length 6-25 of the frequency domain resource subsequent thereto.

If the terminal is configured to use both resource type 0 and resource type 1 through higher layer signaling (6-10), some DCI for allocating PDSCHs to the corresponding terminal has frequency domain resource allocation information whose size is the larger value 6-35 between the payload 6-15 for configuring resource type 0 and the payloads 6-20 and 6-25 for configuring resource type 1. The conditions for this will be described again later. In this case, one bit may be added to the foremost part (MSB) of the frequency domain resource allocation information in the DCI; a bit value of "0" indicates that resource type 0 is used, and a bit value of "1" indicates that resource type 1 is used.

FIG. 7 is a diagram illustrating an example of PDSCH time domain resource allocation in a wireless communication system according to an embodiment of the disclosure.

Referring to FIG. 7, the base station may indicate the time domain location of a PDSCH resource according to the subcarrier spacings (SCS) (μ_PDSCH and μ_PDCCH) of the data channel and the control channel configured using a higher layer, a scheduling offset value (K0), the starting location 7-00 of the OFDM symbols in one slot dynamically indicated through DCI, and the length 7-05 thereof.

FIG. 8 is a diagram illustrating an example of time domain resource allocation according to the subcarrier spacing of a data channel and a control channel in a wireless communication system according to an embodiment of the disclosure.

Referring to FIG. 8, if the subcarrier spacing of the data channel is the same as the subcarrier spacing of the control channel (μ_PDSCH = μ_PDCCH) (8-00), the slot numbers for the data and the control are the same. Accordingly, the base station and the terminal may recognize the occurrence of a scheduling offset according to a predetermined slot offset (K0). On the other hand, if the subcarrier spacing of the data channel is different from the subcarrier spacing of the control channel (μ_PDSCH ≠ μ_PDCCH) (8-05), the slot numbers for the data and the control are different from each other.
Accordingly, the base station and the terminal may recognize the occurrence of a scheduling offset according to a predetermined slot offset (K0), based on the subcarrier spacing of the PDCCH.

NR provides various types of DCI formats, as shown in Table 6 below, in order for the terminal to efficiently receive the control channel.

TABLE 6
DCI format    Usage
0_0           Scheduling of PUSCH in one cell
0_1           Scheduling of PUSCH in one cell
1_0           Scheduling of PDSCH in one cell
1_1           Scheduling of PDSCH in one cell
2_0           Notifying a group of UEs of the slot format
2_1           Notifying a group of UEs of the PRB(s) and OFDM symbol(s) where the UE may assume no transmission is intended for the UE
2_2           Transmission of TPC commands for PUCCH and PUSCH
2_3           Transmission of a group of TPC commands for SRS transmissions by one or more UEs

For example, the base station may use DCI format 1_0 or DCI format 1_1 in order to schedule a PDSCH for a single cell. DCI format 1_0 includes at least the following information in the case where DCI format 1_0 is transmitted together with a CRC scrambled by a cell radio network temporary identifier (C-RNTI), a configured scheduling RNTI (CS-RNTI), or a new-RNTI.
- Identifier for DCI formats (1 bit): This is a DCI format indicator, which is always configured as "1".
- Frequency domain resource assignment (N_RBG bits or ⌈log2(N_RB^DL,BWP(N_RB^DL,BWP+1)/2)⌉ bits): This indicates frequency domain resource allocation, and if DCI format 1_0 is monitored in the UE-specific search space, N_RB^DL,BWP indicates the size of the active DL BWP. Otherwise, N_RB^DL,BWP indicates the size of the initial DL BWP. N_RBG is the number of resource block groups. Refer to the frequency domain resource allocation described above for details thereof.
- Time domain resource assignment (0 to 4 bits): This indicates time domain resource allocation according to the above description.
- VRB-to-PRB mapping (1 bit): "0" indicates non-interleaved VRB-to-PRB mapping, and "1" indicates interleaved VRB-to-PRB mapping.
- Modulation and coding scheme (5 bits): This indicates the modulation order and the coding rate used in the transmission of a PDSCH.
- New data indicator (1 bit): This indicates whether the PDSCH corresponds to initial transmission or retransmission, according to toggling.
- Redundancy version (2 bits): This indicates the redundancy version used in the transmission of a PDSCH.
- HARQ process number (4 bits): This indicates the HARQ process number used in the transmission of a PDSCH.
- Downlink assignment index (2 bits): This is a DAI indicator.
- TPC command for scheduled PUCCH (2 bits): This is a PUCCH power control indicator.
- PUCCH resource indicator (3 bits): This is a PUCCH resource indicator and indicates one of eight resources configured using a higher layer.
- PDSCH-to-HARQ_feedback timing indicator (3 bits): This is a HARQ feedback timing indicator and indicates one of eight feedback timing offsets configured using a higher layer.
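Summing the field sizes listed above gives the approximate DCI format 1_0 payload, as in the following sketch. This is illustrative only: padding and alignment are ignored, and the use of a ceiling division to obtain the type-0 bitmap width from the nominal RBG size of Table 5 is an assumption of the sketch.

```python
# Illustrative sketch adding up the DCI format 1_0 field sizes listed above for a
# given DL BWP size. For resource allocation type 0, the frequency-domain field
# would instead be an N_RBG-bit bitmap (Table 5).
import math

def type1_fdra_bits(n_rb_bwp: int) -> int:
    # ceil(log2(N_RB(N_RB + 1)/2)) bits, as given above for resource type 1.
    return math.ceil(math.log2(n_rb_bwp * (n_rb_bwp + 1) / 2))

def n_rbg(bwp_size_rbs: int, rbg_size_config: int) -> int:
    # Type-0 bitmap width: nominal RBG size per Table 5, then a ceiling division.
    nominal = {1: [(36, 2), (72, 4), (144, 8), (275, 16)],
               2: [(36, 4), (72, 8), (144, 16), (275, 16)]}
    for max_size, p in nominal[rbg_size_config]:
        if bwp_size_rbs <= max_size:
            return math.ceil(bwp_size_rbs / p)
    raise ValueError("BWP size out of range")

def dci_1_0_size(n_rb_bwp: int, time_domain_ra_bits: int = 4) -> int:
    fields = [1,                          # identifier for DCI formats
              type1_fdra_bits(n_rb_bwp),  # frequency domain resource assignment
              time_domain_ra_bits,        # time domain resource assignment (0 to 4 bits)
              1,                          # VRB-to-PRB mapping
              5, 1, 2,                    # MCS, new data indicator, redundancy version
              4, 2, 2, 3, 3]              # HARQ process number, DAI, TPC, PUCCH RI, HARQ timing
    return sum(fields)

# Example: for an active DL BWP of 51 RBs, the listed fields amount to 39 bits,
# and a type-0 bitmap with rbg-Size Configuration 1 would span 13 RBGs.
print(dci_1_0_size(51), n_rbg(51, 1))   # 39 13
```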
DCI format 1_1 includes at least the following information in the case where DCI format 1_1 is transmitted together with a CRC scrambled by a cell radio network temporary identifier (C-RNTI), a configured scheduling RNTI (CS-RNTI), or a new-RNTI.
- Identifier for DCI formats (1 bit): This is a DCI format indicator, which is always configured as "1".
- Carrier indicator (0 or 3 bits): This indicates the CC (or cell) in which the PDSCH allocated by the corresponding DCI is transmitted.
- Bandwidth part indicator (0, 1, or 2 bits): This indicates the BWP through which the PDSCH allocated by the corresponding DCI is transmitted.
- Frequency domain resource assignment (the payload is determined according to the frequency domain resource allocation): This indicates frequency domain resource allocation, and N_RB^DL,BWP indicates the size of the active DL BWP. Refer to the frequency domain resource allocation described above for details thereof.
- Time domain resource assignment (0 to 4 bits): This indicates time domain resource allocation according to the above description.
- VRB-to-PRB mapping (0 or 1 bit): "0" indicates non-interleaved VRB-to-PRB mapping, and "1" indicates interleaved VRB-to-PRB mapping. This is 0 bits in the case where the frequency domain resource allocation is configured as resource type 0.
- PRB bundling size indicator (0 or 1 bit): This is 0 bits if the higher layer parameter "prb-BundlingType" is not configured or is configured as "static", and is 1 bit if the higher layer parameter "prb-BundlingType" is configured as "dynamic".
- Rate matching indicator (0, 1, or 2 bits): This indicates a rate matching pattern.
- ZP CSI-RS trigger (0, 1, or 2 bits): This is an indicator for triggering an aperiodic ZP CSI-RS.
- For transport block 1:
  - Modulation and coding scheme (5 bits): This indicates the modulation order and the coding rate used for the transmission of a PDSCH.
  - New data indicator (1 bit): This indicates whether the PDSCH corresponds to initial transmission or retransmission, according to toggling.
  - Redundancy version (2 bits): This indicates the redundancy version used in the transmission of a PDSCH.
- For transport block 2:
  - Modulation and coding scheme (5 bits): This indicates the modulation order and the coding rate used for the transmission of a PDSCH.
  - New data indicator (1 bit): This indicates whether the PDSCH corresponds to initial transmission or retransmission, according to toggling.
  - Redundancy version (2 bits): This indicates the redundancy version used in the transmission of a PDSCH.
- HARQ process number (4 bits): This indicates the HARQ process number used in the transmission of a PDSCH.
- Downlink assignment index (0, 2, or 4 bits): This is a DAI indicator.
- TPC command for scheduled PUCCH (2 bits): This is a PUCCH power control indicator.
- PUCCH resource indicator (3 bits): This is a PUCCH resource indicator and indicates one of eight resources configured using a higher layer.
- PDSCH-to-HARQ_feedback timing indicator (3 bits): This is a HARQ feedback timing indicator and indicates one of eight feedback timing offsets configured using a higher layer.
- Antenna port (4, 5, or 6 bits): This indicates the DMRS ports and the code-division-multiplexed (CDM) group(s) without data.
- Transmission configuration indication (0 or 3 bits): This is a TCI indicator.
- SRS request (2 or 3 bits): This is an SRS transmission request indicator.
- CBG transmission information (0, 2, 4, 6, or 8 bits): This is an indicator indicating whether or not to transmit the code block groups in the allocated PDSCH.
"0" indicates that the corresponding CBG is not to be transmitted, and "1" indicates that the corresponding CBG is to be transmitted.
- CBG flushing out information (0 or 1 bit): This is an indicator indicating whether or not the previous CBGs are contaminated. "0" indicates that the previous CBGs may be contaminated, and "1" indicates that the previous CBGs are combinable when receiving a retransmission.
- DMRS sequence initialization (0 or 1 bit): This is a DMRS scrambling ID selection indicator.

The number of DCIs having different sizes that the terminal is capable of receiving for each slot in a corresponding cell is up to 4. The number of DCIs having different sizes, scrambled with a C-RNTI, that can be received by the terminal for each slot in a corresponding cell is up to 3.

Here, the antenna port indication may be provided through the following Tables 7 to 10.

TABLE 7
Antenna port(s) (1000 + DMRS port), dmrs-Type = 1, maxLength = 1
One codeword: Codeword 0 enabled, Codeword 1 disabled
Value    Number of DMRS CDM group(s) without data    DMRS port(s)
0        1           0
1        1           1
2        1           0, 1
3        2           0
4        2           1
5        2           2
6        2           3
7        2           0, 1
8        2           2, 3
9        2           0-2
10       2           0-3
11       2           0, 2
12-15    Reserved    Reserved

TABLE 8
Antenna port(s) (1000 + DMRS port), dmrs-Type = 1, maxLength = 2
One codeword: Codeword 0 enabled, Codeword 1 disabled
Value    Number of DMRS CDM group(s) without data    DMRS port(s)    Number of front-load symbols
0        1           0               1
1        1           1               1
2        1           0, 1            1
3        2           0               1
4        2           1               1
5        2           2               1
6        2           3               1
7        2           0, 1            1
8        2           2, 3            1
9        2           0-2             1
10       2           0-3             1
11       2           0, 2            1
12       2           0               2
13       2           1               2
14       2           2               2
15       2           3               2
16       2           4               2
17       2           5               2
18       2           6               2
19       2           7               2
20       2           0, 1            2
21       2           2, 3            2
22       2           4, 5            2
23       2           6, 7            2
24       2           0, 4            2
25       2           2, 6            2
26       2           0, 1, 4         2
27       2           2, 3, 6         2
28       2           0, 1, 4, 5      2
29       2           2, 3, 6, 7      2
30       2           0, 2, 4, 6      2
31       Reserved    Reserved        Reserved
Two codewords: Codeword 0 enabled, Codeword 1 enabled
Value    Number of DMRS CDM group(s) without data    DMRS port(s)    Number of front-load symbols
0        2           0-4                         1
1        2           0, 1, 2, 3, 4, 6            1
2        2           0, 1, 2, 3, 4, 5, 6         1
3        2           0, 1, 2, 3, 4, 5, 6, 7      1
4-31     Reserved    Reserved                    Reserved

TABLE 9
Antenna port(s) (1000 + DMRS port), dmrs-Type = 2, maxLength = 1
One codeword: Codeword 0 enabled, Codeword 1 disabled
Value    Number of DMRS CDM group(s) without data    DMRS port(s)
0        1           0
1        1           1
2        1           0, 1
3        2           0
4        2           1
5        2           2
6        2           3
7        2           0, 1
8        2           2, 3
9        2           0-2
10       2           0-3
11       3           0
12       3           1
13       3           2
14       3           3
15       3           4
16       3           5
17       3           0, 1
18       3           2, 3
19       3           4, 5
20       3           0-2
21       3           3-5
22       3           0-3
23       2           0, 2
24-31    Reserved    Reserved
Two codewords: Codeword 0 enabled, Codeword 1 enabled
Value    Number of DMRS CDM group(s) without data    DMRS port(s)
0        3           0-4
1        3           0-5
2-31     Reserved    Reserved

TABLE 10
Antenna port(s) (1000 + DMRS port), dmrs-Type = 2, maxLength = 2
One codeword: Codeword 0 enabled, Codeword 1 disabled
Value    Number of DMRS CDM group(s) without data    DMRS port(s)    Number of front-load symbols
0        1           0               1
1        1           1               1
2        1           0, 1            1
3        2           0               1
4        2           1               1
5        2           2               1
6        2           3               1
7        2           0, 1            1
8        2           2, 3            1
9        2           0-2             1
10       2           0-3             1
11       3           0               1
12       3           1               1
13       3           2               1
14       3           3               1
15       3           4               1
16       3           5               1
17       3           0, 1            1
18       3           2, 3            1
19       3           4, 5            1
20       3           0-2             1
21       3           3-5             1
22       3           0-3             1
23       2           0, 2            1
24       3           0               2
25       3           1               2
26       3           2               2
27       3           3               2
28       3           4               2
29       3           5               2
30       3           6               2
31       3           7               2
32       3           8               2
33       3           9               2
34       3           10              2
35       3           11              2
36       3           0, 1            2
37       3           2, 3            2
38       3           4, 5            2
39       3           6, 7            2
40       3           8, 9            2
41       3           10, 11          2
42       3           0, 1, 6         2
43       3           2, 3, 8         2
44       3           4, 5, 10        2
45       3           0, 1, 6, 7      2
46       3           2, 3, 8, 9      2
47       3           4, 5, 10, 11    2
48       1           0               2
49       1           1               2
50       1           6               2
51       1           7               2
52       1           0, 1            2
53       1           6, 7            2
54       2           0, 1            2
55       2           2, 3            2
56       2           6, 7            2
57       2           8, 9            2
58-63    Reserved    Reserved        Reserved
Two codewords: Codeword 0 enabled, Codeword 1 enabled
Value    Number of DMRS CDM group(s) without data    DMRS port(s)    Number of front-load symbols
0        3           0-4                         1
1        3           0-5                         1
2        2           0, 1, 2, 3, 6               2
3        2           0, 1, 2, 3, 6, 8            2
4        2           0, 1, 2, 3, 6, 7, 8         2
5        2           0, 1, 2, 3, 6, 7, 8, 9      2
6-63     Reserved    Reserved                    Reserved

Table 7 is used when "dmrs-type" is indicated as 1 and "maxLength" is indicated as 1, and Table 8 is used when "dmrs-type"=1 and "maxLength"=2. In addition, the DMRS port to be used is indicated using Table 9 when "dmrs-type"=2 and "maxLength"=1, and using Table 10 when "dmrs-type"=2 and "maxLength"=2. The numbers 1, 2, and 3 indicated by "Number of DMRS CDM group(s) without data" in the tables denote CDM groups {0}, {0, 1}, and {0, 1, 2}, respectively.
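As an illustration of how a code point of Table 7 is interpreted, the following sketch looks up the number of DMRS CDM groups without data and the DMRS ports for a given field value, and converts them into the CDM group set and the antenna port numbers "1000 + DMRS port". The function and dictionary names are illustrative; the dictionary simply reproduces Table 7 (values 12-15 are reserved and therefore omitted).

```python
# Illustrative lookup for dmrs-Type = 1, maxLength = 1, one codeword (Table 7).
TABLE_7 = {
    0:  (1, [0]),      1:  (1, [1]),      2:  (1, [0, 1]),
    3:  (2, [0]),      4:  (2, [1]),      5:  (2, [2]),      6:  (2, [3]),
    7:  (2, [0, 1]),   8:  (2, [2, 3]),   9:  (2, [0, 1, 2]),
    10: (2, [0, 1, 2, 3]),                11: (2, [0, 2]),
}

def decode_antenna_port_field(value: int):
    """Return (CDM groups without data, antenna ports 1000 + DMRS port)."""
    num_cdm_groups, dmrs_ports = TABLE_7[value]
    cdm_groups_without_data = list(range(num_cdm_groups))   # 1 -> {0}, 2 -> {0, 1}
    antenna_ports = [1000 + p for p in dmrs_ports]
    return cdm_groups_without_data, antenna_ports

# Example: code point 7 indicates CDM groups {0, 1} without data and ports 1000, 1001.
print(decode_antenna_port_field(7))
```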
"DMRS port(s)" shows the indexes of the used ports arranged in sequence. The antenna port is indicated as "DMRS port + 1000". The CDM group of the DMRS is associated with the method of generating the DMRS sequence and the antenna ports, as shown in Tables 11 and 12. Table 11 shows the parameters used when "dmrs-type"=1, and Table 12 shows the parameters used when "dmrs-type"=2.

TABLE 11
Parameters for PDSCH DM-RS, dmrs-type = 1
p       CDM group λ    Δ    wf(k′) [k′=0, k′=1]    wt(l′) [l′=0, l′=1]
1000    0              0    +1, +1                 +1, +1
1001    0              0    +1, −1                 +1, +1
1002    1              1    +1, +1                 +1, +1
1003    1              1    +1, −1                 +1, +1
1004    0              0    +1, +1                 +1, −1
1005    0              0    +1, −1                 +1, −1
1006    1              1    +1, +1                 +1, −1
1007    1              1    +1, −1                 +1, −1

TABLE 12
Parameters for PDSCH DM-RS, dmrs-type = 2
p       CDM group λ    Δ    wf(k′) [k′=0, k′=1]    wt(l′) [l′=0, l′=1]
1000    0              0    +1, +1                 +1, +1
1001    0              0    +1, −1                 +1, +1
1002    1              2    +1, +1                 +1, +1
1003    1              2    +1, −1                 +1, +1
1004    2              4    +1, +1                 +1, +1
1005    2              4    +1, −1                 +1, +1
1006    0              0    +1, +1                 +1, −1
1007    0              0    +1, −1                 +1, −1
1008    1              2    +1, +1                 +1, −1
1009    1              2    +1, −1                 +1, −1
1010    2              4    +1, +1                 +1, −1
1011    2              4    +1, −1                 +1, −1

The sequence of the DMRS according to the respective parameters is determined using Equation 1 below.

a_(k,l)^(p,μ) = β_PDSCH^DMRS · wf(k′) · wt(l′) · r(2n + k′)
k = 4n + 2k′ + Δ (configuration type 1), or k = 6n + k′ + Δ (configuration type 2)
k′ = 0, 1;  l = l̄ + l′;  n = 0, 1, ...    (Equation 1)

If only one codeword is enabled in Table 7 and Table 8, lines 2, 9, 10, 11, and 30 are used only for single-user MIMO. That is, in this case, the terminal does not assume that another terminal is co-scheduled, and thus may not perform a multi-user MIMO reception operation such as canceling, nulling, or whitening the multi-user interference. If only one codeword is enabled in Table 9 and Table 10, lines 2, 10, and 23 are used only for single-user MIMO. That is, in this case, the terminal does not assume that another terminal is co-scheduled, and thus may not perform a multi-user MIMO reception operation such as canceling, nulling, or whitening the multi-user interference.
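The frequency index k of Equation 1 for a single DMRS port can be evaluated as in the following sketch, using the Δ value of Table 11 or Table 12 for that port. This is illustrative only; the sequence r(·), the time index l, and the amplitude scaling β are omitted, and the names are not from any specification.

```python
# Illustrative sketch of the DMRS subcarrier mapping in Equation 1 for one port:
# only the frequency index k is computed, for n = 0, 1, ...
def dmrs_subcarriers(config_type: int, delta: int, num_n: int = 3):
    ks = []
    for n in range(num_n):
        for k_prime in (0, 1):
            if config_type == 1:
                k = 4 * n + 2 * k_prime + delta
            else:  # configuration type 2
                k = 6 * n + k_prime + delta
            ks.append(k)
    return ks

# Example: port 1000 (delta = 0 per Table 11) with configuration type 1 occupies
# subcarriers 0, 2, 4, 6, 8, 10, i.e., every other subcarrier of CDM group 0.
print(dmrs_subcarriers(config_type=1, delta=0))  # [0, 2, 4, 6, 8, 10]
```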
FIG. 9 is a diagram illustrating the radio protocol structure of a base station and a terminal in the case of a single cell, carrier aggregation, and dual connectivity, respectively, according to an embodiment of the disclosure.

Referring to FIG. 9, the radio protocol of the next-generation mobile communication system includes NR service data adaptation protocol (SDAP) 9-25 or 9-70, NR packet data convergence protocol (PDCP) 9-30 or 9-65, NR radio link control (RLC) 9-35 or 9-60, and NR medium access control (MAC) 9-40 or 9-55 in the terminal and the NR base station, respectively.

The primary functions of the NR SDAP 9-25 or 9-70 may include some of the following functions.
- Transfer of user plane data
- Mapping between a QoS flow and a DRB for DL and UL
- Marking the QoS flow ID in both DL and UL packets
- Mapping a reflective QoS flow to a DRB for UL SDAP PDUs

With regard to the SDAP layer entity, the terminal may receive a configuration indicating whether or not to use the header of the SDAP layer entity, or whether or not to use the functions of the SDAP layer entity, for each PDCP layer entity, for each bearer, or for each logical channel through an RRC message. In the case where the SDAP header is configured, a 1-bit NAS reflective QoS configuration indicator and a 1-bit AS reflective QoS configuration indicator of the SDAP header may instruct the terminal to update or reconfigure the mapping information between the QoS flows and the data bearers in uplink and downlink. The SDAP header may include QoS flow ID information indicating the QoS. The QoS information may be used as data processing priority, scheduling information, or the like in order to support effective services.

The primary functions of the NR PDCP 9-30 or 9-65 may include some of the following functions.
- Header compression and decompression (ROHC only)
- Transfer of user data
- In-sequence delivery of upper layer PDUs
- Out-of-sequence delivery of upper layer PDUs
- Sequence reordering (PDCP PDU reordering for reception)
- Duplicate detection of lower layer SDUs
- Retransmission of PDCP SDUs
- Ciphering and deciphering
- Timer-based SDU discard in uplink

The above reordering function of the NR PDCP entity denotes a function of reordering PDCP PDUs received from a lower layer, based on a PDCP sequence number (SN); it may include a function of transmitting data to a higher layer in the reordered order, may include a function of directly transmitting data without consideration of an order, may include a function of reordering the sequence and recording lost PDCP PDUs, may include a function of sending a status report on the lost PDCP PDUs to the transmitting end, and may include a function of making a request for retransmission of the lost PDCP PDUs.
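As a simplified illustration of the SN-based reordering behavior described above (reordering by sequence number, in-order delivery to the higher layer, and recording of lost PDUs), consider the following sketch. It is not the PDCP procedure itself; it omits the reordering timer and SN wrap-around, and all names are illustrative.

```python
# Illustrative sketch of PDCP PDU reordering based on the sequence number (SN).
class PdcpReordering:
    def __init__(self):
        self.next_sn = 0
        self.buffer = {}          # SN -> PDU payload
        self.lost_report = set()  # SNs currently considered missing

    def receive(self, sn, pdu):
        self.buffer[sn] = pdu
        delivered = []
        while self.next_sn in self.buffer:          # in-order delivery to the higher layer
            delivered.append(self.buffer.pop(self.next_sn))
            self.lost_report.discard(self.next_sn)
            self.next_sn += 1
        if sn > self.next_sn:                        # gap detected -> record lost PDUs
            self.lost_report.update(range(self.next_sn, sn))
        return delivered

rx = PdcpReordering()
rx.receive(0, "pdu0")        # delivered immediately
rx.receive(2, "pdu2")        # buffered, SN 1 recorded as missing
print(rx.receive(1, "pdu1")) # ['pdu1', 'pdu2'] delivered in the reordered order
```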
The primary functions of the NR RLC 9-35 or 9-60 may include some of the following functions.
- Data transfer function (transfer of upper layer PDUs)
- In-sequence delivery of upper layer PDUs
- Out-of-sequence delivery of upper layer PDUs
- ARQ function (error correction through ARQ)
- Concatenation, segmentation, and reassembly of RLC SDUs
- Re-segmentation of RLC data PDUs
- Reordering of RLC data PDUs
- Duplicate detection
- Protocol error detection
- RLC SDU discard
- RLC re-establishment

The above in-sequence delivery function of the NR RLC entity denotes a function of transferring RLC SDUs received from a lower layer to a higher layer in sequence; it may include a function of, if one original RLC SDU is divided into a plurality of RLC SDUs and received, reassembling and transmitting the same, may include a function of reordering the received RLC PDUs based on an RLC sequence number (SN) or a PDCP sequence number (SN), may include a function of reordering the sequence and recording lost RLC PDUs, may include a function of sending a status report on the lost RLC PDUs to the transmitting end, may include a function of making a request for retransmission of the lost RLC PDUs, may include a function of, if there is a lost RLC SDU, transmitting only the RLC SDUs prior to the lost RLC SDU to a higher layer in sequence, may include a function of, if a predetermined timer expires even though there is a lost RLC SDU, transmitting all RLC SDUs received before the timer started to a higher layer in sequence, or may include a function of, if a predetermined timer expires even though there is a lost RLC SDU, transmitting all RLC SDUs received until the present to a higher layer in sequence.

In addition, the RLC PDUs may be processed in the order of reception (in the order of arrival, regardless of the serial number or sequence number thereof), and may be transmitted to the PDCP entity in an out-of-sequence delivery manner. In the case of segments, the segments, which are stored in the buffer or will be received later, may be received and reconfigured into one complete RLC PDU, and the RLC PDU may be processed and transmitted to the PDCP entity. The NR RLC layer may not include a concatenation function, which may be performed in the NR MAC layer or may be replaced with a multiplexing function of the NR MAC layer.

The out-of-sequence delivery function of the NR RLC entity denotes a function of directly transmitting RLC SDUs received from a lower layer to a higher layer regardless of sequence; it may include a function of, if one original RLC SDU is divided into a plurality of RLC SDUs and received, reassembling and transmitting the same, and may include a function of storing and ordering the RLC SNs or PDCP SNs of the received RLC PDUs, thereby recording the lost RLC PDUs.

The NR MAC 9-40 or 9-55 may be connected to a plurality of NR RLC entities configured in a single terminal, and the primary functions of the NR MAC may include some of the following functions.
- Mapping between logical channels and transport channels
- Multiplexing/demultiplexing of MAC SDUs
- Scheduling information reporting
- HARQ function (error correction through HARQ)
- Priority handling between logical channels of one UE
- Priority handling between UEs by means of dynamic scheduling
- MBMS service identification
- Transport format selection
- Padding

The NR PHY layers 9-45 and 9-50 may perform operations of channel-coding and modulating higher layer data into OFDM symbols and transmitting the same through a radio channel, or operations of demodulating and channel-decoding the OFDM symbols received through the radio channel and transmitting the same to the higher layer.

The detailed structures of the radio protocols may be changed in various ways according to the carrier (or cell) operating scheme. For example, in the case where the base station transmits data to the terminal based on a single carrier (or cell), the base station and the terminal use a single protocol structure for the respective layers, as shown in 9-00. On the other hand, in the case where the base station transmits data to the terminal based on carrier aggregation (CA) using multiple carriers in a single TRP, the base station and the terminal use a protocol structure in which a single structure is provided down to the RLC layer and the PHY layers are multiplexed through the MAC layer, as shown in 9-10. As another example, in the case where the base station transmits data to the terminal based on dual connectivity (DC) using multiple carriers in multiple TRPs, the base station and the terminal use a protocol structure in which a single structure is provided down to the RLC layer and the PHY layers are multiplexed through the MAC layer, as shown in 9-20.

In LTE and NR, the terminal has a procedure of reporting the capability supported by the terminal to the corresponding base station while being connected to a serving base station; this will be referred to as "UE capability (reporting)" in the following description. The base station may transmit a UE capability enquiry message requesting capability reporting to a terminal in the connected state. The message may include a request for terminal capability for each RAT type by the base station. The request for each RAT type may include information on the requested frequency bands. In addition, the UE capability enquiry message may be transmitted while requesting a plurality of RAT types through a single RRC message container, or a plurality of UE capability enquiry messages including requests for the respective RAT types may be included and then transmitted to the terminal. That is, the UE capability enquiry may be repeated multiple times, and the terminal may configure a UE capability information message corresponding thereto and may report the same multiple times.
In the next-generation mobile communication system, a request for terminal capability may be performed for MR-DC, as well as for NR, LTE, and EN-DC. For reference, the UE capability enquiry message is generally transmitted in the initial stage after the terminal is connected, but the base station is capable of requesting the UE capability under any condition as necessary.

In the above step, the terminal receiving the request for reporting the UE capability from the base station configures the terminal capability according to the RAT type and band information requested by the base station. A method of configuring UE capability by the terminal in an NR system is summarized below.

1. If the terminal receives a list of LTE and/or NR bands through a UE capability request from the base station, the terminal configures band combinations (BCs) for EN-DC and NR stand-alone (SA). In other words, the terminal configures a candidate list of BCs for EN-DC and NR SA, based on the bands requested by the base station using "FreqBandList". In addition, the bands have priority in the order described in "FreqBandList".

2. If the base station requests UE capability reporting by setting the "eutra-nr-only" flag or the "eutra" flag, the terminal completely removes the NR SA BCs from the configured candidate list of BCs. This operation may be performed only when an LTE base station (eNB) requests "eutra" capability.

3. Thereafter, the terminal removes fallback BCs from the candidate list of BCs configured in the above step. A fallback BC corresponds to the case in which the band corresponding to at least one SCell is removed from a certain superset BC, and the fallback BC may be omitted because the superset BC is capable of covering the fallback BC. This step is also applied to MR-DC, i.e., LTE bands. The BCs remaining after this step constitute the final "candidate BC list".

4. The terminal selects the BCs to be reported, which conform to the requested RAT type, from the final "candidate BC list". In this step, the terminal configures "supportedBandCombinationList" in a predetermined order. In other words, the terminal configures the BCs and UE capability to be reported in the order of the preconfigured RAT types (nr→eutra-nr→eutra). In addition, the terminal configures "featureSetCombination" for the configured "supportedBandCombinationList", and configures a list of "candidate feature set combinations" from the candidate BC list from which the list of fallback BCs (including capabilities of the equal or lower level) has been removed. The "candidate feature set combinations" may include the feature set combinations for both NR and EUTRA-NR BCs, and may be obtained from the feature set combinations of the "UE-NR-Capabilities" and "UE-MRDC-Capabilities" containers.

5. In addition, if the requested RAT type is "eutra-nr" and this is applicable, "featureSetCombinations" is included in both of the containers "UE-MRDC-Capabilities" and "UE-NR-Capabilities". However, the feature set of NR is included only in "UE-NR-Capabilities".

After the terminal capability is configured, the terminal may transmit a UE capability information message including the UE capability to the base station. The base station then performs appropriate scheduling and transmission/reception management for the terminal, based on the UE capability received from the terminal.
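Steps 1 to 4 above can be illustrated with the following sketch. The data model (a BC as a set of band names with a RAT type) and all names are invented for illustration and do not reflect the actual capability structures or containers.

```python
# Illustrative sketch of the band-combination (BC) filtering in steps 1-4 above.
# Each BC is modeled as a dict with a RAT "type" and a set of band names;
# "fallback" means a proper subset of another candidate BC of the same type.
def build_candidate_bc_list(requested_bands, supported_bcs, eutra_only=False):
    # Step 1: keep only BCs whose bands were all requested via "FreqBandList".
    candidates = [bc for bc in supported_bcs if bc["bands"] <= set(requested_bands)]
    # Step 2: if the "eutra"/"eutra-nr-only" flag is set, remove the NR SA BCs.
    if eutra_only:
        candidates = [bc for bc in candidates if bc["type"] != "nr-sa"]
    # Step 3: remove fallback BCs covered by a superset BC of the same type.
    def is_fallback(bc):
        return any(other["type"] == bc["type"] and bc["bands"] < other["bands"]
                   for other in candidates)
    candidates = [bc for bc in candidates if not is_fallback(bc)]
    # Step 4: order by the preconfigured RAT-type priority (nr -> eutra-nr -> eutra).
    priority = {"nr-sa": 0, "en-dc": 1, "eutra": 2}
    return sorted(candidates, key=lambda bc: priority[bc["type"]])

supported = [
    {"type": "nr-sa", "bands": {"n78"}},
    {"type": "en-dc", "bands": {"b3", "n78"}},
    {"type": "en-dc", "bands": {"b3"}},   # fallback BC of the combination above
]
print(build_candidate_bc_list(["b3", "n78"], supported))
```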
For the convenience of description below, Tables 7 to 10 will be referred to as “first antenna port indication (or antenna port indication of the related art)”, and the tables obtained by modifying some or all of the code points in Tables 7 to 10 will be referred to as “second antenna port indication (new antenna port indication)”. In order to support non-coherent joint transmission (NC-JT) for providing data to a terminal at one or more transmission points at the same time, it is necessary to 1) allocate the PDSCHs transmitted at two (or more) different transmission points through a single PDCCH or 2) allocate the PDSCHs transmitted at two or more different transmission points through multiple PDCCHs. The terminal is capable of acquiring a QCL connection relationship between respective reference signals or between channels, based on L1/L2/L3 signaling and efficiently estimating the large-scale parameters of each reference signal or channel according thereto. If the transmission point of a certain reference signal or channel is different, it is difficult for the large-scale parameters to be shared with each other. Therefore, the base station needs to simultaneously inform the terminal of quasi co-location information about two or more transmission points through two or more TCI states when performing cooperative transmission. If the non-coherent cooperative transmission is supported through multiple PDCCHs, that is, if two or more PDCCHs allocate two or more PDSCHs to the same serving cell and the same bandwidth part at the same time, the two or more TCI states may be allocated to the respective PDSCHs or DMRS ports through the respective PDCCHs. On the other hand, if the non-coherent cooperative transmission is supported through a single PDCCH, that is, if one PDCCH allocates two or more PDSCHs to the same serving cell and the same bandwidth part at the same time, the two or more TCI states must be allocated to the respective PDSCHs or DMRS ports through a single PDCCH. If it is assumed that the DMRS ports allocated to the terminal are divided into DMRS port group A transmitted at transmission point A and DMRS port group B transmitted at transmission point B at a specific time, the two or more TCI states may be connected to the respective DMRS port groups to estimate channels, based on different QCL assumptions for the respective groups. Meanwhile, different DMRS ports may be code-division-multiplexed (CDM), frequency-division-multiplexed (FDM), or time-domain-multiplexed (TDM) in order to increase the channel measurement accuracy and reduce transmission burden. If the DMRS ports to be multiplexed using CDM, among the above DMRS ports, are collectively referred to as a “CDM group”, it may be important to ensure that the DMRS ports existing in the same CDM group do not have different TCI states because when the DMRS ports in the CDM group have similar characteristics of channels for the respective ports, the code-based multiplexing is performed well (that is, in the case where the characteristics of channels for the respective ports are similar, distinction using orthogonal cover code (OCC) is easily performed). The disclosure provides a method of indicating, to a terminal, DMRS ports and a CDM group without data in order to satisfy the characteristics described above. Hereinafter, for the convenience of explanation, the allocation of the DMRS ports and the CDM group without data will be referred to as “DMRS allocation”. 
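The DMRS allocation condition described above (DMRS ports lying in the same CDM group must not be associated with different TCI states) can be checked as in the following sketch, which uses the CDM group association of Tables 11 and 12. The function and dictionary names are illustrative only.

```python
# Illustrative check of the NC-JT DMRS allocation condition: DMRS ports in the
# same CDM group must not have different TCI states. CDM groups per Tables 11/12.
CDM_GROUP = {
    1: {0: 0, 1: 0, 2: 1, 3: 1, 4: 0, 5: 0, 6: 1, 7: 1},                              # Table 11
    2: {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2, 6: 0, 7: 0, 8: 1, 9: 1, 10: 2, 11: 2},    # Table 12
}

def valid_for_nc_jt(dmrs_type, ports_per_tci_state):
    """ports_per_tci_state: dict mapping a TCI state to the DMRS ports of its port group."""
    group_owner = {}
    for tci_state, ports in ports_per_tci_state.items():
        for port in ports:
            group = CDM_GROUP[dmrs_type][port]
            if group_owner.setdefault(group, tci_state) != tci_state:
                return False      # two TCI states share one CDM group -> not allowed
    return True

# Ports {0, 1} lie in one CDM group, so splitting them over two TCI states fails,
# whereas ports {0} and {2} fall in different CDM groups and are acceptable.
print(valid_for_nc_jt(1, {"TCI-A": [0], "TCI-B": [1]}))  # False
print(valid_for_nc_jt(1, {"TCI-A": [0], "TCI-B": [2]}))  # True
```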
Referring to the first antenna port indication as shown in Tables 7 to 10 (hereinafter, referred to as the "antenna port indication of the related art"), it can be seen that some of the code points that are able to be used for the NC-JT, that is, the code points allocating two or more DMRS ports, do not satisfy the DMRS allocation condition for the NC-JT (i.e., the condition that the DMRS ports existing in the same CDM group do not have different TCI states from each other). For example, in the case where a single codeword is used in Table 9, it can be seen that lines {2, 7, 8, 17, 18, 19}, which are some of lines {2, 7, 8, 9, 10, 17, 18, 19, 20, 21, 22, 23} that allocate two or more DMRS ports, allocate one of the DMRS port pairs {0, 1}, {2, 3}, and {4, 5}, and that these DMRS port pairs each belong to a single CDM group according to Table 12. This means that lines {2, 7, 8, 17, 18, 19} are not suitable for DMRS allocation for the NC-JT in Table 9. This makes it impossible to use about half of the possible code points, and the antenna port indications of the related art are therefore required to be changed.

In the above description, "allocating" DMRS ports and CDM groups for the NC-JT may be understood as the allocation of the DMRS ports and the CDM groups at the time at which the terminal recognizes that two or more PDSCHs are able to be allocated to the same serving cell and the same bandwidth part using one PDCCH (or that two or more DMRS port groups or allocated TCI code points are associated with two or more TCI states) by various methods, such as the size of the DCI, the payload of a specific field in the DCI, and the type of RNTI used for the CRC scrambling of the PDCCH including the DCI.

Similar to the above description, in the case where a single codeword is used in Table 7, it can be seen that lines {2, 7, 8}, which are some of lines {2, 7, 8, 9, 10, 11} that allocate two or more DMRS ports, allocate one of the DMRS port pairs {0, 1} and {2, 3}, and that these DMRS port pairs each belong to a single CDM group according to Table 11. This means that lines {2, 7, 8} are not suitable for DMRS allocation for the NC-JT in Table 7.

Similar to the above description, in the case where a single codeword is used in Table 8, it can be seen that lines {2, 7, 8, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29}, which are some of lines {2, 7, 8, 9, 10, 11, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30} that allocate two or more DMRS ports, allocate one of the DMRS port sets {0, 1}, {2, 3}, {0, 4}, {2, 6}, {0, 1, 4}, {2, 3, 6}, {0, 1, 4, 5}, and {2, 3, 6, 7}, and that these DMRS port sets each belong to a single CDM group according to Table 11. This means that lines {2, 7, 8, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29} are not suitable for DMRS allocation for the NC-JT in Table 8.

Similar to the above description, in the case where a single codeword is used in Table 10, it can be seen that lines {2, 7, 8, 17 to 19, 36 to 47, 52}, which are some of lines {2, 7, 8, 9, 10, 17 to 23, 36 to 47, 52} that allocate two or more DMRS ports, allocate a subset of one of the DMRS port sets {0, 1, 6, 7}, {2, 3, 8, 9}, and {4, 5, 10, 11}, and that these DMRS port sets each belong to a single CDM group according to Table 12. This means that lines {2, 7, 8, 17 to 19, 36 to 47, 52} are not suitable for DMRS allocation for the NC-JT in Table 10.
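For Table 7, the unsuitable code points identified above can be reproduced mechanically, as in the following sketch (illustrative only): a code point allocating two or more DMRS ports is unsuitable for the NC-JT when all of its ports fall into a single CDM group.

```python
# Illustrative scan of Table 7 (dmrs-Type = 1, maxLength = 1, one codeword) for
# code points whose DMRS ports all lie in one CDM group (CDM groups per Table 11).
TABLE_7_PORTS = {2: [0, 1], 7: [0, 1], 8: [2, 3], 9: [0, 1, 2], 10: [0, 1, 2, 3], 11: [0, 2]}
CDM_GROUP_TYPE1 = {0: 0, 1: 0, 2: 1, 3: 1}

unsuitable = [value for value, ports in TABLE_7_PORTS.items()
              if len(ports) >= 2 and len({CDM_GROUP_TYPE1[p] for p in ports}) == 1]
print(sorted(unsuitable))   # [2, 7, 8], matching the lines identified in the text
```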
The following embodiments provide a method of performing second antenna port indication (hereinafter, referred to as “new antenna port indication”) by modifying some or all of the code points of the antenna port indication of the related art and a method of selecting one of the antenna port indication of the related art and the new antenna port indication, based on the above method. First Embodiment: New Antenna Port Indication Method 1 A first embodiment proposes a method of performing new antenna port indication by correcting the code points having the problems as described above in the antenna port indication of the related art. As one of the methods for solving the problems described above, it is possible to divide the DMRS ports indicated under specific conditions into two or more groups, and to modify the values of the DMRS ports belonging to a second group through a specific operation. The above specific conditions may be at least one of 1) that the number of DMRS CDM groups indicated by the antenna port indication is 2 or more, 2) that the number of DMRS ports indicated by the antenna port indication is 2 or more, and 3) that the number of codewords indicated by the antenna port indication is 2 or more, or may be a combination thereof (for example, the case where both condition 1 and condition 2 are satisfied). Dividing the DMRS ports into two or more groups may be, for example, dividing the DMRS ports into two or more groups having an equal number of DMRS ports in descending or ascending order, based on DMRS port numbers assigned to the respective DMRS ports (in the case where the DMRS ports are unable to be divided into an equal number of DMRS ports, the DMRS ports may be divided such that the last group has a smaller number of DMRS ports or such that any group has a smaller number of DMRS ports). The specific operation may be adding or subtracting a specific value X (e.g., X=1 or 2), which is predetermined or configured through higher layer signaling. Alternatively, the specific operation may be taking a modulo operation such that the value obtained by adding the value X does not exceed a specific range (e.g., the maximum DMRS port number that is able to be indicated by the corresponding antenna port indication). A method of changing the second half of the DMRS port set indicated by the problematic code points, among the antenna port indication code points of the related art, using a rule “(second half of DMRS port set+2) % Max DMRS port” may be considered as a method of performing configuration such that the DMRS ports associated with different TCI states are transmitted in different CDM groups. This may be expressed in detail as shown in Table 13-1 to Table 13-4 below. For example, referring to Table 13-1, if existing DMRS ports are 0 and 1, DMRS port 1 corresponding to the second half may be changed to (1+2) % 4=3. For example, referring to Table 13-2, if existing DMRS ports are 0, 1, 4, and 5, DMRS ports 4 and 5 corresponding to the second half may be changed to 6 and 7 by applying the same equation. The same principle is applied to Tables 13-3 and 13-4, and if three DMRS ports are used, the first half indicates the first two DMRS ports, and the second half indicates the third DMRS port. The port indexes after change may be used while being sorted in the order of a small index or without sorting the same. 
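The rule "(second half of DMRS port set + 2) % Max DMRS port" can be sketched as follows. The Max DMRS port values used in the example calls (4 for dmrs-Type = 1 with maxLength = 1, per the example above, and 8 for dmrs-Type = 1 with maxLength = 2) are assumptions consistent with the examples given for Table 13-1 and Table 13-2, and the function name is illustrative.

```python
# Illustrative sketch of the rule "(second half of DMRS port set + 2) % Max DMRS port".
# With three ports, the first two ports form the first half, as described above.
def modify_ports_rule1(ports, max_dmrs_port):
    half = (len(ports) + 1) // 2
    first, second = ports[:half], ports[half:]
    return first + [(p + 2) % max_dmrs_port for p in second]

# Examples from the text: Table 13-1 (Max DMRS port taken as 4) and
# Table 13-2 (Max DMRS port taken as 8).
print(modify_ports_rule1([0, 1], 4))        # [0, 3], i.e., (1 + 2) % 4 = 3
print(modify_ports_rule1([0, 1, 4, 5], 8))  # [0, 1, 6, 7]
```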
A method of indicating the changed code points may include a method of pre-storing the changed code points in a memory of the terminal and then using the same, a method of updating values for the respective code points through RRC, and a method of indicating a change rule, which is actually used, among one or more change rules using RRC. TABLE 13-1DMRS indication table for antenna port(s) (1000 +DMRS port), dmrs-Type = 1, maxLength = 1One Codeword:Codeword 0 enabled,Codeword 1 disabledNumber ofDMRS CDMgroup(s)DMRSValuewithout dataport(s)010111210, 1320421522623720, 3822, 1920-21020-31120, 212-15ReservedReserved TABLE 13-2DMRS indication table for antenna port(s) (1000 + DMRSport), dmrs-Type = 1, maxLength = 2One Codeword:Codeword 0 enabled,Codeword 1 disabledNumber ofDMRS CDMNumber ofgroup(s)DMRSfront-loadValuewithout dataport(s)symbols01011111210, 113201421152216231720, 31822, 51920-211020-311120, 2112202132121422215232162421725218262192722020, 322122, 522224, 722326, 922420, 622522, 022620, 1, 622722, 3, 022820, 1, 6, 722922, 3, 0, 123020, 2, 4, 6231ReservedReservedReservedTwo Codewords:Codeword 0 enabled,Codeword 1 enabledNumber ofDMRS CDMNumber ofgroup(s)DMRSfront-loadValuewithout dataport(s)symbols020-42120, 1, 2, 3, 4, 62220, 1, 2, 3, 4, 5, 62320, 1, 2, 3, 4, 5, 6, 724-31reservedreservedreserved TABLE 13-3DMRS indication table for antenna port(s) (1000 + DMRSport), dmrs-Type = 2, maxLength = 1One codeword:Two codewords:Codeword 0 enabled,Codeword 0 enabled,Codeword 1 disabledCodeword 1 enabledNumber ofNumber ofDMRS CDMDMRS CDMgroup(s)DMRSgroup(s)DMRSValuewithout dataport(s)Valuewithout dataport(s)010030-4111130-5210, 12-31reservedreserved320421522623720, 3822, 5920-21020-31130123113321433153416351730, 31832, 51934, 72030-22133-52230-32320, 224-31ReservedReserved TABLE 13-4DMRS indication table for antenna port(s) (1000 + DMRSport), dmrs-Type = 2, maxLength = 2One Codeword:Codeword 0 enabled,Codeword 1 disabledNumber ofDMRS CDMNumber ofgroup(s)DMRSfront-loadValuewithout dataport(s)symbols01011111210, 113201421152216231720, 31822, 51920-211020-311130112311133211433115341163511730, 311832, 511934, 712030-212133-512230-312320, 21243012531226322273322834229352303623137232382333923431023531123630, 323732, 523834, 723936, 924038, 11241310, 124230, 1, 824332, 3, 1024434, 5, 024530, 1, 8, 924632, 3, 10, 1124734, 5, 0, 12481024911250162511725210, 125316, 725420, 125522, 325626, 725728, 9258-63ReservedReservedReservedTwo Codewords:Codeword 0 enabled,Codeword 1 enabledNumber ofDMRS CDMNumber ofgroup(s)DMRSfront-loadValuewithout dataport(s)symbols030-41130-51220, 1, 2, 3, 62320, 1, 2, 3, 6, 82420, 1, 2, 3, 6, 7, 82520, 1, 2, 3, 6, 7, 8, 926-63ReservedReservedReserved A method of changing the second half of the DMRS port set indicated by the problematic code points, among the antenna port indication code points of the related art, using a rule of “second half of DMRS port set+2” and, if the value is negative, adding the Max DMRS port, may be considered as another method of performing configuration such that the DMRS ports associated with different TCI states are transmitted in different CDM groups. This may be expressed in detail as shown in Table 14-1 to Table 14-4 below. For example, if existing DMRS ports are 0 and 1 in Table 14-1, DMRS port 1 corresponding to the second half may be changed to (1−2)=−1, and since the value is negative, it may be changed to −1+4=3. 
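The wrap-around computation used in the example just given, in which 2 is subtracted from each port in the second half of the DMRS port set and the Max DMRS port value is added back whenever the result is negative, can be sketched as follows (illustrative only; a further example for Table 14-2 follows below).

```python
# Illustrative sketch of the subtraction-based modification with wrap-around.
def modify_ports_rule2(ports, max_dmrs_port):
    half = (len(ports) + 1) // 2
    first, second = ports[:half], ports[half:]
    return first + [(p - 2) + (max_dmrs_port if p - 2 < 0 else 0) for p in second]

# Example from the text (Table 14-1, Max DMRS port taken as 4): ports 0, 1 become
# 0, 3, since (1 - 2) = -1 is negative and -1 + 4 = 3.
print(modify_ports_rule2([0, 1], 4))   # [0, 3]
```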
For example, in Table 14-2, if existing DMRS ports are 0, 2, 6, and 7 at existing code point 29, DMRS ports 6 and 7 corresponding to the second half may be changed to 4 and 5 by applying the same equation. The same principle is applied to Tables 14-3 and 14-4, and if three DMRS ports are used, the first half indicates the first two DMRS ports, and the second half indicates the third DMRS port. The port indexes according to the change may be used while being sorted in the order of a small index or without sorting the same. A method of indicating the changed code points may include a method of pre-storing the changed code points in a memory of the terminal and then using the same, a method of updating values for the respective code points through RRC, and a method of indicating a change rule, which is actually used, among one or more change rules using RRC. TABLE 14-1DMRS indication table for antenna port(s) (1000 +DMRS port), dmrs-Type = 1, maxLength = 1One Codeword:Codeword 0 enabled,Codeword 1 disabledNumber ofDMRS CDMgroup(s)DMRSValuewithout dataport(s)010111210, 1320421522623720, 3822, 1920-21020-31120, 212-15ReservedReserved TABLE 14-2DMRS indication table for antenna port(s) (1000 + DMRSport), dmrs-Type = 1, maxLength = 2One Codeword:Codeword 0 enabled,Codeword 1 disabledNumber ofDMRS CDMNumber ofgroup(s)DMRSfront-loadValuewithout dataport(s)symbols01011111210, 113201421152216231720, 71822, 11920-211020-311120, 2112202132121422215232162421725218262192722020, 722122, 122224, 322326, 522420, 222522, 422620, 1, 222722, 3, 422820, 1, 2, 322922, 3, 4, 523020, 2, 4, 6231ReservedReservedReservedTwo Codewords:Codeword 0 enabled,Codeword 1 enabledNumber ofDMRS CDMNumber ofgroup(s)DMRSfront-loadValuewithout dataport(s)symbols020-42120, 1, 2, 3, 4, 62220, 1, 2, 3, 4, 5, 62320, 1, 2, 3, 4, 5, 6, 724-31reservedreservedreserved TABLE 14-3DMRS indication table for antenna port(s) (1000 + DMRSport), dmrs-Type = 2, maxLength = 1One codeword:Two codewords:Codeword 0 enabled,Codeword 0 enabled,Codeword 1 disabledCodeword 1 enabledNumber ofNumber ofDMRS CDMDMRS CDMgroup(s)DMRSgroup(s)DMRSValuewithout dataport(s)Valuewithout dataport(s)010030-4111130-5210, 12-31reservedreserved320421522623720, 5822, 1920-21020-31130123113321433153416351730, 51832, 11934, 32030-22133-52230-32320, 224-31ReservedReserved TABLE 14-4DMRS indication table for antenna port(s) (1000 + DMRSport), dmrs-Type = 2, maxLength = 2One Codeword:Codeword 0 enabled,Codeword 1 disabledNumber ofDMRS CDMNumber ofgroup(s)DMRSfront-loadValuewithout dataport(s)symbols01011111210, 113201421152216231720, 111822, 11920-211020-311130112311133211433115341163511730, 1111832, 111934, 312030-212133-512230-312320, 21243022531226322273322834229352303623137232382333923431023531123630, 1123732, 123834, 323936, 524038, 7241310, 924230, 1, 424332, 3, 624434, 5, 824530, 1, 4, 524632, 3, 6, 724734, 5, 8, 92481024911250162511725210, 125316, 725420, 125522, 325626, 725728, 9258-63ReservedReservedReservedTwo Codewords:Codeword 0 enabled,Codeword 1 enabledNumber ofDMRS CDMNumber ofgroup(s)front-loadValuewithout dataDMRS port(s)symbols030-41130-51220, 1, 2, 3, 62320, 1, 2, 3, 6, 82420, 1, 2, 3, 6, 7, 82520, 1, 2, 3, 6, 7, 8, 926-63ReservedReservedReserved Second Embodiment: New Antenna Port Indication Method 2 The second embodiment proposes a method of further supporting the antenna port indication for NC-JT transmission while maintaining the code point indicated by the antenna port indication of the related art in order to maintain compatibility with the terminal in 
a network of the related art. The antenna port indication of the related art does not support some of the DMRS port allocation for the NC-JT. For example, the antenna port indication of the related art does not support the code points for allocating one DMRS port to CDM group 0 and allocating two DMRS ports to CDM group 1. 1) If two or more TCI states are indicated, 2) if reordering the indicated TCI states is not supported (for example, {TCI state A, TCI state B} is supported, but {TCI state B, TCI state A} is not supported), and 3) if a connection relationship between the TCI state and the CDM group is configured to be static/semi-static (for example, connection is performed as TCI state A→CDM group 0 and TCI state B→CDM group 1), allocating one DMRS port to TRP A corresponding to TCI state A and allocating two DMRS ports to TRP B corresponding to TCI state B are not supported. As a first method for supporting the DMRS port allocation according to the above-described second embodiment, a code point indicated as a reserved one in the antenna port indication of the related art, which is not used previously, may be used as an additional code point for NC-JT. The port allocation indicated by the additional code point for NC-JT may be a combination of numbers of antenna ports for respective TRPs that are not supported by the antenna port indication of the related art. This may be expressed in detail as shown in Table 15-1 to Table 15-4-2 below. For example, referring to Table 15-1, code point 12 previously indicated as a reserved one may switch to a code point that allocates one DMRS port to CDM group 0 and allocates two DMRS ports to CDM group 1. A similar principle may be used in Table 15-2. TABLE 15-1DMRS indication table for antenna port(s) (1000 +DMRS port), dmrs-Type = 1, maxLength = 1One Codeword:Codeword 0 enabled,Codeword 1 disabledNumber ofDMRS CDMgroup(s)DMRSValuewithout dataport(s)010111210, 1320421522623720, 1822, 3920-21020-31120, 21220, 2, 313-15ReservedReserved TABLE 15-2DMRS indication table for antenna port(s) (1000 + DMRS port), dmrs-Type = 1, maxLength = 2One Codeword:Two Codewords:Codeword 0 enabled,Codeword 0 enabled,Codeword 1 disabledCodeword 1 enabledNumberNumberofofDMRSNumberDMRSNumberCDMofCDMofgroup(s)front-group(s)front-withoutDMRSloadwithoutDMRSloadValuedataport(s)symbolsValuedataport(s)symbols0101020-421111120, 1, 2, 3, 4, 62210, 11220, 1, 2, 3, 4, 5, 623201320, 1, 2, 3, 4, 5, 6, 7242114-31reservedreservedreserved52216231720, 11822, 31920-211020-311120, 2112202132121422215232162421725218262192722020, 122122, 322224, 522326, 722420, 422522, 622620, 1, 422722, 3, 622820, 1, 4, 522922, 3, 6, 723020, 2, 4, 623120, 2, 31 Meanwhile, the following two cases are considered in Table 15-3-1 and Table 15-3-2. 1) Table 15-3-1: In the case where TRP A is connected to CDM group 0 and TRP B is connected to CDM groups 1 and 2, (a) a code point that allocates one DMRS port to TRP A and allocates two DMRS ports to TRP B and (b) a code point that allocates one DMRS port to TRP A and allocates three DMRS ports to TRP B are supported. Some of the code points may be omitted depending on the channel characteristics between the TRP-terminals or the like. For example, the average channel gains from the respective TRPs to a terminal may have similar characteristics, and thus the channel ranks from the respective TRPs to the terminal may be similar. In this case, the code point of (b), which has a relatively large difference in the number of DMRS ports between two TRPs, may be omitted. 
2) Table 15-3-2: In the case where TRP A is connected to CDM groups 0 and 1, and TRP B is connected to CDM group 2, (a) a code point that allocates one DMRS port to TRP A and two DMRS ports to TRP B, (b) a code point that allocates two DMRS ports to TRP A and TRP B, respectively, and (c) a code point that allocates one DMRS port to TRP A and three DMRS ports to TRP B are all supported. Some of the code points may be omitted depending on the channel characteristics between the TRP-terminals or the like. For example, the average channel gains from the respective TRPs to a terminal may have similar characteristics, and thus the channel ranks from the respective TRPs to the terminal may be similar. In this case, the code point for (c), which has a relatively large difference in the number of DMRS ports between two TRPs, may be omitted. Table 15-4-1 and Table 15-4-2 also use the principles similar to Table 15-3-1 and Table 15-3-2, respectively. TABLE 15-3-1DMRS indication table for antenna port(s) (1000 + DMRS port), dmrs-Type = 1One Codeword:Two Codewords:Codeword 0 enabled,Codeword 0 enabled,Codeword 1 disabledCodeword 1 enabledNumberNumberofofDMRSNumberDMRSNumberCDMofCDMofgroup(s)front-group(s)front-withoutDMRSloadwithoutDMRSloadValuedataport(s)symbolsValuedataport(s)symbols0101020-421111120, 1, 2, 3, 4, 62210, 11220, 1, 2, 3, 4, 5, 623201320, 1, 2, 3, 4, 5, 6, 7242114-31reservedreservedreserved52216231720, 11822, 31920-211020-311120, 2112202132121422215232162421725218262192722020, 122122, 322224, 522326, 722420, 422522, 622620, 1, 422722, 3, 622820, 1, 4, 522922, 3, 6, 723020, 2, 4, 623120, 2, 31 TABLE 15-3-2DMRS indication table for antenna port(s) (1000 + DMRSport), dmrs-Type = 2, maxLength = 1One codeword:Two codewords:Codeword 0 enabled,Codeword 0 enabled,Codeword 1 disabledCodeword 1 enabledNumber ofNumber ofDMRS CDMDMRS CDMgroup(s)DMRSgroup(s)DMRSValuewithout dataport(s)Valuewithout dataport(s)010030-4111130-5210, 12-31reservedreserved320421522623720, 1822, 3920-21020-31130123113321433153416351730, 11832, 31934, 52030-22133-52230-32320, 22420, 1, 42530, 1, 42620, 4, 52730, 4, 52820, 1, 4, 52930, 1, 4, 53030, 1, 2, 431ReservedReserved TABLE 15-4-1DMRS indication table for antenna port(s) (1000 + DMRS port), dmrs-Type = 2, maxLength = 2One codeword:Two Codewords:Codeword 0 enabled,Codeword 0 enabled,Codeword 1 disabledCodeword 1 enabledNumberNumberofofDMRSNumberDMRSNumberCDMofCDMofgroup(s)front-group(s)front-withoutDMRSloadwithoutDMRSloadValuedataport(s)symbolsValuedataport(s)symbols0101030-411111130-51210, 11220, 1, 2, 3, 623201320, 1, 2, 3, 6, 824211420, 1, 2, 3, 6, 7, 825221520, 1, 2, 3, 6, 7, 8, 9262316-63ReservedReservedReserved720, 11822, 31920-211020-311130112311133211433115341163511730, 111832, 311934, 512030-212133-512230-312320, 21243022531226322273322834229352303623137232382333923431023531123630, 123732, 323834, 523936, 724038, 9241310, 1124230, 1, 624332, 3, 824434, 5, 1024530, 1, 6, 724632, 3, 8, 924734, 5, 10, 112481024911250162511725210, 125316, 725420, 125522, 325626, 725728, 925820, 2, 315930, 2, 316030, 2, 3, 416120, 2, 326230, 2, 326330, 2, 3, 42 TABLE 15-4-2DMRS indication table for antenna port(s) (1000 + DMRS port), dmrs-Type = 2, maxLength = 2One codeword:Two Codewords:Codeword 0 enabled,Codeword 0 enabled,Codeword 1 disabledCodeword 1 enabledNumberNumberofofDMRSNumberDMRSNumberCDMofCDMofgroup(s)front-group(s)front-withoutDMRSloadwithoutDMRSloadValuedataport(s)symbolsValuedataport(s)symbols0101030-411111130-51210, 11220, 1, 2, 3, 623201320, 1, 2, 3, 6, 824211420, 
1, 2, 3, 6, 7, 825221520, 1, 2, 3, 6, 7, 8, 9262316-63ReservedReservedReserved720, 11822, 31920-211020-311130112311133211433115341163511730, 111832, 311934, 512030-212133-512230-312320, 21243022531226322273322834229352303623137232382333923431023531123630, 123732, 323834, 523936, 724038, 9241310, 1124230, 1, 624332, 3, 824434, 5, 1024530, 1, 6, 724632, 3, 8, 924734, 5, 10, 112481024911250162511725210, 125316, 725420, 125522, 325626, 725728, 925820, 1, 415930, 1, 416020, 4, 516130, 4, 516220, 1, 4, 516330, 1, 4, 51 As a second method for supporting DMRS port allocation according to the above-described second embodiment, a base station may indicate the order of TCI states to be activated. For example, the base station may indicate {TCI state A, TCI state B} and {TCI state B, TCI state A} in order for a terminal to distinguish therebetween. To this end, two methods may be considered as will be described inFIGS.15A and15B. FIG.15Ashows a method for indicating an order between TCI states according to method 1 according to an embodiment of the disclosure. Referring toFIG.15A, according to the method 1, the order of the indicated TCI states can be distinguished on the DCI code point. FIG.15Bshows a method for indicating an order between TCI states according to method 2 according to an embodiment of the disclosure. Referring toFIG.15B, according to the method 2, the order of the indicated TCI states can be distinguished on the MAC-CE. In the case of method 1, the number of DCI code points may be larger than the number of sets of TCI states activated using MAC-CE, and in the case of method 2, the number of sets of TCI states activated using MAC-CE may be the same as the number of DCI code points. Third Embodiment: New Antenna Port Indication Method 3 A third embodiment proposes a method of performing new antenna port indication by designing new code points so as to exclude the problem described in the first embodiment, based on a series of rules. The terminal may recognize whether or not NC-JT is performed through a method other than the DMRS port indication, for example, through one of the methods listed below or a combination thereof.The number of indicated TCI states: If the number of TCI states configured as a DCI code point is two or more, NC-JT is performed, and if the number of TCI states configured as a DCI code point is one, single-TRP transmission is performed.RNTI value: The case in which an RNTI for NC-JT and an RNTI for single-TRP transmission are distinguished. If the terminal determines that the current transmission is NC-JT according to the above method, the tables listed below may be used as tables indicating the antenna ports on the DCI. The tables listed below may be designed by one of the rules, which will be proposed below, or a combination thereof. Rule A) The proposed rule A is a method of always using fixed CDM groups 0 and 1, regardless of the type of DMRS, and the respective CDM groups are mapped to different TRPs. Rule A-1) In Table 17-1<DMRS type1, maxlength=1>, a total of two to four DMRS ports are allocated, and at least one DMRS port is allocated to each CDM group. The allocated DMRS ports are ranged from 0 to 3. If a total of two DMRS ports are used, the respective DMRS ports become DMRS ports having the same frequency domain cyclic shift or frequency domain OCC in different CDM groups. For example, two DMRS ports may be ports 0 and 2 that have the same frequency domain OCC w_f=[1,1] in CDM groups 0 and 1, respectively, and may be ports 1 and 3 that use w_f=[1, −1]. 
On the other hand, since the DMRS ports 0 and 3 have different frequency domain OCCs w_f=[1,1] and w_f′=[1, −1] in CDM groups 0 and 1, the corresponding ports are unable to be combined. If a total of three DMRS ports are used, the case in which two DMRS ports are used in CDM group 0 and one DMRS port is used in CDM group 1 (code point 2), and the reverse case thereof (code point 3) will be considered. If four DMRS ports are used, two DMRS ports are used both in CDM Group 0 and in CDM group 1. TABLE 17-1DMRS indication table for antenna port(s) (1000 +DMRS port), dmrs-Type = 1, maxLength = 1One Codeword:Codeword 0 enabled,Codeword 1 disabledNumber ofDMRS CDMgroup(s)DMRSValuewithout dataport(s)020, 2121, 3220, 1, 2320, 2, 3420-35-15ReservedReserved Rule A-2) In Table 17-2<DMRS type1, maxlength=2, one codeword>, a total of 2 to 4 DMRS ports are allocated, and at least one DMRS port is allocated to each CDM group. According to the number of front-loaded symbols, in the case of one front-loaded symbol, allocation is performed on DMRS ports 0 to 3, and in the case of two front-loaded symbols, allocation is performed on DMRS ports 0 to 7. If a total of two DMRS ports are used, like Rule A-1, the frequency domain OCCs in the respective CDM groups must be the same. Meanwhile, the time domain OCCs of the respective CDM groups may be the same or different. For example, both DMRS port 0 and DMRS port 2, which use the same time domain OCC, may be used in CDM groups {0, 1}, respectively, and both DMRS port 0 and DMRS port 6, which use different time domain OCCs, may also be used therein. Rule A-1 is applied to the case where three or more DMRS ports are used in total. In this case, the time domain OCCs applied to the respective ones of CDM groups {0,1} may be the same or different.Rule A-3) A total of 5 to 8 DMRS ports are allocated in Table 17-2<DMRS type1, maxlength=2, two codewords>, and the DMRS ports to be used are limited to a union of the DMRS ports corresponding to two or more code points in Table <xx-a2, DMRS type1, maxlength=2, one codeword>. For example, since the DMRS ports {0, 3, 4, 5, 6} are not a union of the DMRS ports corresponding to the code points in Table <xx-a2, DMRS type1, maxlength=2, one codeword>, the corresponding ports are unable to be used. TABLE 17-2DMRS indication table for antenna port(s) (1000 + DMRS port), dmrs-Type = 1, maxLength = 2One Codeword:Two Codewords:Codeword 0 enabled,Codeword 0 enabled,Codeword 1 disabledCodeword 1 enabledNumberNumberofofDMRSNumberDMRSNumberCDMofCDMofgroup(s)front-group(s)front-withoutDMRSloadwithoutDMRSloadValuedataport(s)symbolsValuedataport(s)symbols020, 21020, 1, 2, 3, 42121, 31120, 1, 2, 3, 4, 62220, 1, 21220, 1, 2, 3, 4, 5, 62320, 2, 31320, 1, 2, 3, 4, 5, 6, 72420-314-31reservedreservedreserved520, 22621, 32724, 62825, 72920, 621021, 721122, 421223, 521320, 1, 221420, 2, 321520, 1, 621620, 6, 721722, 4, 521822, 3, 421924, 5, 622024, 6, 722120-322224-722320, 1, 6, 722422, 3, 4, 5225-31ReservedReservedReservedRule A-4) In Table 17-3<DMRS type2, maxlength=1>, a total of two to four DMRS ports are allocated when one codeword is used, and at least one DMRS port is allocated to CDM groups 0 and 1. In addition, the frequency domain OCC condition mentioned in Rule A-1 is applied thereto. If two codewords are used, the DMRS port union condition mentioned in Rule A-3 is applied thereto. 
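For illustration only, the port-combination conditions of Rule A-1 and Rule A-2 referenced above may be expressed as the following short sketch in Python. The mapping of a DMRS type 1 port index to its CDM group, frequency domain OCC and time domain OCC used here is an assumption of the sketch (it follows the usual DMRS type 1 port structure), and the function names are hypothetical; the sketch is not part of the proposed tables.

def dmrs_type1_attrs(p):
    # Assumed DMRS type 1 structure: port p in 0..7 maps to
    # (CDM group, frequency domain OCC index, time domain OCC index).
    return ((p // 2) % 2, p % 2, p // 4)

def pair_allowed(p1, p2, max_length=1):
    # Rule A-1: the two ports must lie in different CDM groups (0 and 1)
    # and use the same frequency domain OCC.
    # Rule A-2: with two front-load symbols (max_length = 2) the time domain
    # OCCs of the two ports may be the same or different.
    g1, f1, t1 = dmrs_type1_attrs(p1)
    g2, f2, t2 = dmrs_type1_attrs(p2)
    if max_length == 1 and (t1 != 0 or t2 != 0):
        return False  # ports 4..7 require two front-load symbols
    return g1 != g2 and f1 == f2

# Examples taken from the description of Rule A-1 and Rule A-2 above:
assert pair_allowed(0, 2)                    # same w_f = [1, 1] in CDM groups 0 and 1
assert pair_allowed(1, 3)                    # same w_f = [1, -1]
assert not pair_allowed(0, 3)                # different frequency domain OCCs, not combinable
assert pair_allowed(0, 6, max_length=2)      # different time domain OCCs are allowed by Rule A-2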
TABLE 17-3DMRS indication table for antenna port(s) (1000 + DMRSport), dmrs-Type = 2, maxLength = 1One codeword:Two codewords:Codeword 0 enabledCodeword 0 enabled,Codeword 1 disabledCodeword 1 enabledNumber ofNumber ofDMRS CDMDMRS CDMgroup(s)DMRSgroup(s)DMRSValuewithout dataport(s)Valuewithout dataport(s)020, 2030-4121, 3130-5220, 1, 22-31reservedreserved320, 2, 3420-3530, 2631, 3730, 1, 2830, 2, 3930-310-31ReservedReservedRule A-5) In Table 17-4<DMRS type2, maxlength=2>, a total of 2 to 4 DMRS ports are allocated, and at least one DMRS port is allocated to CDM groups 0 and 1. If one codeword is used, the frequency domain OCC condition mentioned in Rule A-1 and the time domain OCC condition mentioned in Rule A-2 are applied thereto. If two codewords are used, the DMRS port union condition mentioned in Rule A-3 is applied thereto. TABLE 17-4DMRS indication table for antenna port(s) (1000 + DMRS port), dmrs-Type = 2, maxLength = 2One codeword:Two Codewords:Codeword 0 enabled,Codeword 0 enabled,Codeword 1 disabledCodeword 1 enabledNumberNumberofofDMRSNumberDMRSNumberCDMofCDMofgroup(s)front-group(s)front-withoutDMRSloadwithoutDMRSloadValuedataport(s)symbolsValuedataport(s)symbols020, 21030-41121, 31130-51220, 1, 21220, 1, 2, 3, 62320, 2, 31320, 1, 2, 3, 6, 82420-31420, 1, 2, 3, 6, 7, 82530, 21520, 1, 2, 3, 6, 7, 8, 92631, 316-63ReservedReservedReserved7301.21830, 2, 31930-311020, 221121, 321220, 1, 221320, 2, 321420-321530, 221631, 321730, 1, 221830, 2, 321930-322026. 822127, 922226, 7, 822326, 8, 922426-922536, 822537, 922736, 7, 822836, 8, 922936-923030, 823131, 923232, 623333, 723430, 1, 823530, 6, 923632, 3, 623732, 6, 723830, 1, 8, 923932, 3, 6, 7240-63ReservedReservedReserved Rule B) The proposed Rule B is a method of always using two CDM groups, using CDM groups 0 and 1 in the case of DMRS type 1, and dynamically selecting and using two CDM groups from among CDM groups 0, 1, and 2 in the case of DMRS type 2. In this case, the respective selected CDM groups may be mapped to different TRPs.Rule A-1) described above is applied to Table 18-1<DMRS type1, maxlength=1>. TABLE 18-1DMRS indication table for antenna port(s) (1000 +DMRS port), dmrs-Type = 1, maxLength = 1One Codeword:Codeword 0 enabled,Codeword 1 disabledNumber ofDMRS CDMgroup(s)DMRSValuewithout dataport(s)020, 2121, 3220, 1, 2320, 2, 3420-35-15ReservedReservedRule A-2) is applied to Table 18-2<DMRS type1, maxlength=2> in the case of one codeword, and Rule A-3) is applied to the same in the case of two codewords. TABLE 18-2DMRS indication table for antenna port(s) (1000 + DMRS port), dmrs-Type = 1, maxLength = 2One Codeword:Two Codewords:Codeword 0 enabled,Codeword 0 enabled,Codeword 1 disabledCodeword 1 enabledNumberNumberofofDMRSNumberDMRSNumberCDMofCDMofgroup(s)front-group(s)front-withoutDMRSloadwithoutDMRSloadValuedataport(s)symbolsValuedataport(s)symbols020, 21020, 1, 2, 3, 42121, 31120, 1, 2, 3, 4, 62220, 1, 21220, 1, 2, 3, 4, 5, 62320, 2, 31320, 1, 2, 3, 4, 5, 6, 72420-314-31reservedreservedreserved520, 22621, 32724, 62825, 72920, 621021, 721122, 421223, 521320, 1, 221420, 2, 321520, 1, 621620, 6, 721722, 4, 521822, 3, 421924, 5, 622024, 6, 722120-322224-722320, 1, 6, 722422, 3, 4, 5225-31ReservedReservedReservedRule B-1) In Table 18-3<DMRS type2, maxlength=1, one codeword>, if the number of DMRS CDM groups without data is 2, CDM group set {0, 1} is used, and if the number of DMRS CDM groups without data is 3, one of CDM group sets {0, 1}, {0, 2}, and {1, 2} is selected and used. 
After the CDM group set is selected, Rule A-4) is applied. Rule A-4) is applied to Table 18-3<DMRS type2, maxlength=1, two codewords>. TABLE 18-3DMRS indication table for antenna port(s) (1000 + DMRSport), dmrs-Type = 2, maxLength = 1One codeword:Two codewords:Codeword 0 enabledCodeword 0 enabled,Codeword 1 disabledCodeword 1 enabledNumber ofNumber ofDMRS CDMDMRS CDMgroup(s)DMRSgroup(s)DMRSValuewithout dataport(s)Valuewithout dataport(s)020, 2030-4121, 3130-5220, 1, 22-31reservedreserved320, 2, 3420-3530, 2631, 3730, 1, 2830, 2, 3930-31030, 41131, 51230, 1, 41330, 4, 51430, 1, 4, 51532, 41633, 51732, 3, 41832, 4, 51932, 3, 4, 520-31ReservedReservedRule B-2) In Table 18-4<DMRS type2, maxlength=2, one codeword>, like Rule B-1) above, a CDM group set is selected. After the CDM group set is selected, the respective CDM groups are restricted so as to use only the same time-domain OCC in order to prevent an increase in the DCI payload required to support the different time domain OCCs for the respective CDM groups. Otherwise, Rule A-5) is applied. Rule A-5 described above is applied to Table 18-4<DMRS type2, maxlength=2, two codewords>. TABLE 18-4DMRS indication table for antenna port(s) (1000 + DMRS port), dmrs-Type = 2, maxLength = 2One codeword:Two Codewords:Codeword 0 enabled,Codeword 0 enabled,Codeword 1 disabledCodeword 1 enabledNumberNumberofofDMRSNumberDMRSNumberCDMofCDMofgroup(s)front-group(s)front-withoutDMRSloadwithoutDMRSloadValuedataport(s)symbolsValuedataport(s)symbols020, 21030-41121, 31130-51220, 1, 21220, 1, 2, 3, 62320, 2, 31320, 1, 2, 3, 6, 82420-31420, 1, 2, 3, 6, 7, 82530, 21520, 1, 2, 3, 6, 7, 8, 92631, 316-63ReservedReservedReserved730, 1, 21830, 2, 31930-311030, 411131, 511230, 1, 411330, 4, 511430, 1, 4, 511532, 411633, 511732, 3, 411832, 4, 511932, 3, 4, 512020, 222121, 322220, 1, 222320, 2, 322420-322530, 222631, 322730, 1, 222830, 2, 322930-323030, 423131, 523230, 1, 423330, 4, 523430, 1, 4, 523532, 423633, 523732, 3, 423832, 4, 523932, 3, 4, 524026, 824127, 924226, 7, 824326, 8, 924426-924536, 824637, 924736, 7, 824836, 8, 924936-925036, 1025137, 1125236, 7, 1025336, 10, 1125436, 7, 10, 1125538, 1025639, 1125738, 9, 1025838, 10, 1125938, 9, 10, 11260-63ReservedReservedReserved Some DCI code points in Rule A or Rule B may be used to support multi-user MIMO transmission between NC-JT terminals or multi-user MIMO transmission between an NC-JT terminal and a single-TRP terminal. For example, code points 0 and 1 in Table 18-1 may be indicated to different terminals A and B, respectively, and the base station may provide services to terminals A and B by an NC-JT multi-user MIMO transmission method. Alternatively, code point 5 in Table 18-3 may be indicated to terminal C that receives data by an NC-JT method, and code point 15 in Table 15-3-1 may be indicated to terminal D that receives data by a single-TRP method, and the base station may provide services to terminals C and D by a multi-user MIMO transmission method. The code points supporting the multi-user MIMO transmission between the NC-JT terminals, among the DCI code points according to Rule A or Rule B, may be all the code points in which one or more CDM groups overlap each other. Meanwhile, the code points supporting the multi-user MIMO transmission between the NC-JT terminal and the single-TRP terminal, among the code points, may be all the code points in which a value of the field “CDM group(s) without data” is larger than the number of CDM groups that are actually used. 
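Purely as an illustration of the two criteria just described, the following Python sketch classifies antenna port table entries, each represented as a pair (number of DMRS CDM groups without data, list of DMRS ports). The example entries correspond to code points 0, 1 and 4 of Table 18-1 as read above; the DMRS type 1 port-to-CDM-group mapping and all function names are assumptions of the sketch, not part of the disclosure.

def cdm_groups_used(ports):
    # DMRS type 1 assumption: ports 0 and 1 lie in CDM group 0, ports 2 and 3 in CDM group 1.
    return {(p // 2) % 2 for p in ports}

def supports_mu_mimo_with_single_trp(cdm_groups_without_data, ports):
    # A code point can be shared with a single-TRP terminal if the value of
    # "CDM group(s) without data" is larger than the number of CDM groups actually used.
    return cdm_groups_without_data > len(cdm_groups_used(ports))

def can_pair_nc_jt_terminals(entry_a, entry_b):
    # Two code points indicated to two NC-JT terminals support multi-user MIMO
    # if one or more of their CDM groups overlap.
    return bool(cdm_groups_used(entry_a[1]) & cdm_groups_used(entry_b[1]))

# Code points 0, 1 and 4 of Table 18-1: (CDM groups without data, DMRS ports)
code_points = {0: (2, [0, 2]), 1: (2, [1, 3]), 4: (2, [0, 1, 2, 3])}
print(can_pair_nc_jt_terminals(code_points[0], code_points[1]))   # True: the CDM groups overlap
print(supports_mu_mimo_with_single_trp(*code_points[0]))          # False: both CDM groups are used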
Meanwhile, the NC-JT may be used in the case where a plurality of TRPs serves a single UE because the traffic load is relatively low, and the multi-user MIMO may be used in the case where a single TRP serves a plurality of UEs because the traffic load is relatively high. Accordingly, in the case where the NC-JT is used, it is possible to not consider the multi-user MIMO transmission between the NC-JT terminals or between the NC-JT terminal and the single-TRP terminal, and in this case, the DCI code points for the multi-user MIMO transmission may be omitted. According to an embodiment, only some of the DCI code points may be omitted. That is, i) only the DCI code points for the multi-user MIMO between the NC-JT terminals may be omitted, ii) only the DCI code points for the multi-user MIMO between the NC-JT terminal and the single-TRP terminal may be omitted, or iii) all of the DCI code points for the multi-user MIMO may be omitted. One of the methods for case i) is omitting all code points having one or more CDM groups overlapping each other and having the same total number of DMRS ports, except for one thereof. One of the methods for case ii) is omitting all code points having a value of the field “CDM group(s) without data” larger than the number of CDM groups actually used. Fourth Embodiment: Method of Selecting One of Antenna Port Indication of the Related Art and New Antenna Port Indication A fourth embodiment provides methods for the terminal to determine whether to use antenna port indication of the related art or new antenna port indication according to the situation. New antenna port indication according to some of the embodiments is to convert the content indicated by some or all of the antenna port indication code points of the related art (i.e., the number of DMRS CDM groups without data or DMRS port numbers) to new content in order to efficiently support NC-JT. If the new antenna port indication is used, some or all of the functions of the antenna port indication of the related art are unable to be used. This means that the degree of freedom of multi-user MIMO transmission or single-user MIMO transmission may deteriorate at a specific time for supporting NC-JT, compared to the existing transmission, and thus it is necessary to modify the antenna port indication method according to the situation and apply the same. Specifically, the base station and the terminal may agree with each other such that when a certain PDCCH allocates NC-JT PDSCHs (that is, when a single PDCCH allocates two or more PDSCHs to the same serving cell and the same bandwidth part at the same time), the antenna port indication method is determined according to values of some fields in the DCI included in the corresponding PDCCH. The agreement between the base station and the terminal may be performed on the terminal that reports to the base station that single-PDCCH-based NC-JT reception is possible. FIG.10is a diagram illustrating a method of determining antenna port indication according to an embodiment of the disclosure. Referring toFIG.10, the terminal attempts to detect a PDCCH (10-00) and determines whether there are two or more TCI states related to the TCI code point indicated by detected DCI (10-05). If there is only one TCI state related to the indicated TCI code point, the terminal assumes that antenna port indication of the related art is used (10-10). 
On the other hand, if there are two or more TCI states related to the indicated TCI code point, the terminal assumes that new antenna port indication is used (10-15). This may be understood to mean that if a TCI code point related to a single TCI state is indicated, the terminal does not expect the use of the new antenna port indication or expects to use only the code point indicating the same content as the antenna port indication of the related art in the new antenna port indication. According to this, even in the case of using the NC-JT DCI that is distinguished by RNTI allocation, DCI formats, the payload of a specific field in DCI, content thereof, or the like (e.g., the NC-JT DCI capable of indicating TCI code points related to two or more TCI states at a time), it is possible to use all functions of the antenna port indication of the related art. As another example, the base station and the terminal may agree with each other so as to determine the antenna port indication method according to a higher layer signaling configuration value such as RRC or MAC CE. The agreement between the base station and the terminal may be performed on the terminal that reports to the base station that the single-PDCCH-based NC-JT reception is possible. FIG.11is a diagram illustrating a method of determining antenna port indication according to an embodiment of the disclosure. Referring toFIG.11, the terminal attempts to detect a PDCCH (11-00) and determines whether or not there is a case in which two or more TCI states activated through MAC CE are associated with one TCI code point (11-05). If there is no case in which two or more activated TCI states are associated with one TCI code point (i.e., the case in which all the TCI code points have only one associated TCI state), the terminal assumes that antenna port indication of the related art is used (11-10). On the other hand, if there is a case in which two or more activated TCI states are associated with one TCI code point (i.e., the case in which at least one TCI code point has two or more associated TCI states), the terminal assumes that new antenna port indication is used (11-15). Accordingly, even if new antenna port indication is indicated through RRC configuration, the terminal is capable of using all of the functions of the antenna port indication of the related art, based on MAC CE signaling, thereby performing more flexible scheduling. As another example, the base station and the terminal may agree with each other so as to determine the antenna port indication method according to the usage of NC-JT DCI distinguished by RNTI allocation, DCI formats, the payload of a specific field in DCI, content thereof, or the like (e.g., the NC-JT DCI capable of indicating TCI code points related to two or more TCI states at a time). The agreement between the base station and the terminal may be performed on the terminal that reports to the base station that the single-PDCCH-based NC-JT reception is possible. FIG.12is a diagram illustrating a method of determining antenna port indication according to an embodiment of the disclosure. Referring toFIG.12, the terminal attempts to detect a PDCCH (12-00) and determines whether or not the detected PDCCH includes the DCI for NC-JT (12-05). If the detected DCI is not intended for the single-PDCCH-based NC-JT, the terminal assumes that antenna port indication of the related art is used (12-10).
On the other hand, if the detected DCI is intended for the single-PDCCH-based NC-JT, the terminal assumes that new antenna port indication is used (12-15). Accordingly, the terminal is capable of dynamically selecting the antenna port indication method depending on the type of DCI, thereby performing more flexible scheduling. FIG.13is a block diagram illustrating the structure of a terminal in a wireless communication system according to an embodiment of the disclosure. Referring toFIG.13, a terminal may be configured to include a transceiver13-00and13-10, and a processing unit13-05including a memory and a processor. The transceiver13-00and13-10and the processing unit13-05of the terminal may operate according to the communication method of the terminal as described above. However, the elements of the terminal are not limited to the above-described examples. For example, the terminal may include more elements or fewer elements than the aforementioned elements. In addition, the transceiver13-00and13-10, and the processing unit13-05may be implemented in a single chip. The transceiver13-00and13-10may transmit and receive signals to and from a base station. The signal may include control information and data. To this end, the transceiver13-00and13-10may be configured to include an RF transmitter for up-converting and amplifying the frequency of a transmitted signal, and an RF receiver for low-noise-amplifying a received signal and down-converting the frequency thereof. However, this is only an example of the transceiver13-00and13-10, and the elements of the transceiver13-00and13-10are not limited to the RF transmitter and the RF receiver. In addition, the transceiver13-00and13-10may receive a signal through a wireless channel, may output the signal to the processing unit13-05, and may transmit a signal output from the processing unit13-05through a wireless channel. The processing unit13-05may store programs and data necessary for the operation of the terminal. In addition, the processing unit13-05may store control information or data included in the signal obtained from the terminal. The processing unit13-05may include a memory configured as a storage medium, such as ROM, RAM, a hard disk, CD-ROM, and a DVD, or a combination thereof. In addition, the processing unit13-05may control a series of processes such that the terminal may operate according to the above-described embodiment. According to some embodiments, the processing unit13-05may determine whether or not to apply a new antenna port indication method, and may control the elements of the terminal so as to apply the new antenna port indication according thereto. FIG.14is a block diagram illustrating the structure of a base station in a wireless communication system according to an embodiment of the disclosure. Referring toFIG.14, a base station may be configured to include a transceiver14-00and14-10, and a processing unit14-05including a memory and a processor. The transceiver14-00and14-10and the processing unit14-05of the base station may operate according to the communication method of the base station as described above. However, the elements of the base station are not limited to the above-described examples. For example, the base station may include more elements or fewer elements than the aforementioned elements. In addition, the transceiver14-00and14-10and the processing unit14-05may be implemented in a single chip. The transceiver14-00and14-10may transmit and receive signals to and from a terminal. 
The signal may include control information and data. To this end, the transceiver14-00and14-10may be configured to include an RF transmitter for up-converting and amplifying the frequency of a transmitted signal, and an RF receiver for low-noise-amplifying a received signal and down-converting the frequency thereof. However, this is only an example of the transceiver14-00and14-10, and the elements of the transceiver14-00and14-10are not limited to the RF transmitter and the RF receiver. In addition, the transceiver14-00and14-10may receive a signal through a wireless channel, may output the signal to the processing unit14-05, and may transmit a signal output from the processing unit14-05through a wireless channel. The processing unit14-05may store programs and data necessary for the operation of the base station. In addition, the processing unit14-05may store control information or data included in the signal obtained from the base station. The processing unit14-05may include a memory configured as a storage medium, such as ROM, RAM, a hard disk, CD-ROM, and a DVD, or a combination thereof. In addition, the processing unit14-05may control a series of processes such that the base station may operate according to the above-described embodiment. According to some embodiments, the processing unit14-05may determine whether or not to apply a new antenna port indication method, and may control the respective elements of the base station so as to apply the new antenna port indication according thereto. Methods disclosed in the claims and/or methods according to various embodiments described in the specification of the disclosure may be implemented by hardware, software, or a combination of hardware and software. When the methods are implemented by software, a computer-readable storage medium for storing one or more programs (software modules) may be provided. The one or more programs stored in the computer-readable storage medium may be configured for execution by one or more processors within the electronic device. The at least one program may include instructions that cause the electronic device to perform the methods according to various embodiments of the disclosure as defined by the appended claims and/or disclosed herein. The programs (software modules or software) may be stored in non-volatile memories including a random access memory and a flash memory, a read only memory (ROM), an electrically erasable programmable read only memory (EEPROM), a magnetic disc storage device, a compact disc-ROM (CD-ROM), digital versatile discs (DVDs), or other type optical storage devices, or a magnetic cassette. Alternatively, any combination of some or all of them may form a memory in which the program is stored. Further, a plurality of such memories may be included in the electronic device. In addition, the programs may be stored in an attachable storage device which may access the electronic device through communication networks such as the Internet, Intranet, Local Area Network (LAN), Wide LAN (WLAN), and Storage Area Network (SAN) or a combination thereof. Such a storage device may access the electronic device via an external port. Further, a separate storage device on the communication network may access a portable electronic device. In the above-described detailed embodiments of the disclosure, an element included in the disclosure is expressed in the singular or the plural according to presented detailed embodiments. 
However, the singular form or plural form is selected appropriately to the presented situation for the convenience of description, and the disclosure is not limited by elements expressed in the singular or the plural. Therefore, either an element expressed in the plural may also include a single element or an element expressed in the singular may also include multiple elements. In the drawings in which methods of the disclosure are described, the order of the description does not always correspond to the order in which steps of each method are performed, and the order relationship between the steps may be changed or the steps may be performed in parallel. Alternatively, in the drawings in which methods of the disclosure are described, some elements may be omitted and only some elements may be included therein without departing from the essential spirit and scope of the disclosure. Further, in methods of the disclosure, some or all of the contents of each embodiment may be combined without departing from the essential spirit and scope of the disclosure. While the disclosure has been shown and described with reference to various embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents.
111,440
11863490
DETAILED DESCRIPTION The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of various embodiments of the present disclosure as defined by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the various embodiments described herein can be made without departing from the scope and spirit of the present disclosure. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness. The terms used, in the following description, for indicating access nodes, network entities, messages, interfaces between network entities, and diverse identity information is provided for convenience of explanation. Accordingly, the terms used in the following description are not limited to specific meanings but may be replaced by other terms equivalent in technical meanings. In the following descriptions, the terms and definitions given in the 3GPP standards are used for convenience of explanation. However, the present disclosure is not limited by use of these terms and definitions and other arbitrary terms and definitions may be employed instead. Table 1 lists the acronyms used throughout the present disclosure. TABLE 1AcronymFull nameAcronymFull name5GC5G Core NetworkRACHRandom AccessChannelACKAcknowledgementRANRadio Access NetworkAMAcknowledged ModeRA-RNTIRandom Access RNTIAMFAccess and MobilityRATRadio Access Tech-Management FunctionnologyARQAutomatic RepeatRBRadio BearerRequestASAccess StratumRLCRadio Link ControlASN.1Abstract Syntax Nota-RNARAN-based Notificationtion OneAreaBSRBuffer Status ReportRNAURAN-based NotificationArea UpdateBWPBandwidth PartRNTIRadio NetworkTemporary IdentifierCACarrier AggregationRRCRadio Resource ControlCAGClosed Access GroupRRMRadio Resource Man-agementCGCell GroupRSRPReference SignalReceived PowerC-RNTICell RNTIRSRQReference SignalReceived QualityCSIChannel State Informa-RSSIReceived SignaltionStrength IndicatorDCIDownlink ControlSCellSecondary CellInformationDRB(user) Data Radio BearerSCSSubcarrier SpacingDRXDiscontinuous ReceptionSDAPService Data AdaptationProtocolHARQHybrid AutomaticSDUService Data UnitRepeat RequestIEInformation elementSFNSystem Frame NumberLCGLogical Channel GroupS-GWServing GatewayMACMedium Access ControlSISystem InformationMIBMaster InformationSIBSystem InformationBlockBlockNASNon-Access StratumSpCellSpecial CellNG-RANNG Radio AccessSRBSignalling Radio BearerNetworkNRNR Radio AccessSRSSounding ReferenceSignalPBRPrioritised Bit RateSSBSS/PBCH blockPCellPrimary CellSSSSecondary Synchroni-sation SignalPCIPhysical Cell IdentifierSULSupplementary UplinkPDCCHPhysical DownlinkTMTransparent ModeControl ChannelPDCPPacket Data Conver-UCIUplink Controlgence ProtocolInformationPDSCHPhysical DownlinkUEUser EquipmentShared ChannelPDUProtocol Data UnitUMUnacknowledged ModePHRPower Headroom ReportCRPCell Reselection PriorityPLMNPublic Land MobileLPPLTE positioningNetworkprotocolPRACHPhysical Random AccessposSIBpositioning SIBChannelPRBPhysical Resource BlockposSIpositioning SystemInformationPSSPrimary SynchronisationTRPTransmission-ReceptionSignalPointPUCCHPhysical Uplink ControlDL-Downlink Time Differ-ChannelTDOAence Of ArrivalPUSCHPhysical Uplink SharedChannel Table 2 lists the terminologies and their definition used throughout the present disclosure. 
TABLE 2TerminologyDefinitionallowedCG-ListList of configured grants for the corresponding logical channel.This restriction applies only when the UL grant is a configuredgrant. If present, UL MAC SDUs from this logical channel canonly be mapped to the indicated configured grant configuration.If the size of the sequence is zero, then UL MAC SDUs from thislogical channel cannot be mapped to any configured grantconfigurations. If the field is not present, UL MAC SDUs fromthis logical channel can be mapped to any configured grantconfigurations.allowedSCS-ListList of allowed sub-carrier spacings for the corresponding logicalchannel. If present, UL MAC SDUs from this logical channel canonly be mapped to the indicated numerology. Otherwise, ULMAC SDUs from this logical channel can be mapped to anyconfigured numerology.allowedServingCellsList of allowed serving cells for the corresponding logicalchannel. If present, UL MAC SDUs from this logical channel canonly be mapped to the serving cells indicated in this list.Otherwise, UL MAC SDUs from this logical channel can bemapped to any configured serving cell of this cell group.Carrier frequencycenter frequency of the cell.Cellcombination of downlink and optionally uplink resources. Thelinking between the carrier frequency of the downlink resourcesand the carrier frequency of the uplink resources is indicated inthe system information transmitted on the downlink resources.Cell Groupin dual connectivity, a group of serving cells associated witheither the MeNB or the SeNB.Cell reselectionA process to find a better suitable cell than the current servingcell based on the system information received in the currentserving cellCell selectionA process to find a suitable cell either blindly or based on thestored informationDedicated signallingSignalling sent on DCCH logical channel between the networkand a single UE.discardTimerTimer to control the discard of a PDCP SDU. Starting when theSDU arrives. Upon expiry, the SDU is discarded.FThe Format field in MAC subheader indicates the size of theLength field.FieldThe individual contents of an information element are referred toas fields.Frequency layerset of cells with the same carrier frequency.Global cell identityAn identity to uniquely identifying an NR cell. It is consisted ofcellIdentity and plmn-Identity of the first PLMN-Identity inplmn-IdentityList in SIB 1.gNBnode providing NR user plane and control plane protocolterminations towards the UE, and connected via the NG interfaceto the 5GC.Handoverprocedure that changes the serving cell of a UE inRRC_CONNECTED.Information elementA structural element containing single or multiple fields isreferred as information element.LThe Length field in MAC subheader indicates the length of thecorresponding MAC SDU or of the corresponding MAC CELCID6 bit logical channel identity in MAC subheader to denote whichlogical channel traffic or which MAC CE is included in the MACsubPDUMAC-1Message Authentication Code-Integrity. 16 bit or 32 bit bitstring calculated by NR Integrity Algorithm based on the securitykey and various fresh inputsLogical channela logical path between a RLC entity and a MAC entity. There aremultiple logical channel types depending on what type ofinformation is transferred e.g. CCCH (Common ControlChannel), DCCH (Dedicate Control Channel), DTCH (DedicateTraffic Channel), PCCH (Paging Control Channel)LogicalChannelConfigThe IE LogicalChannelConfig is used to configure the logicalchannel parameters. 
It includes priority, prioritisedBitRate,allowedServingCells, allowedSCS-List, maxPUSCH-Duration,logicalChannelGroup, allowedCG-List etclogicalChannelGroupID of the logical channel group, as specified in TS 38.321, whichthe logical channel belongs toMAC CEControl Element generated by a MAC entity. Multiple types ofMAC CEs are defined, each of which is indicated bycorresponding LCID. A MAC CE and a corresponding MACsub-header comprises MAC subPDUMaster Cell Groupin MR-DC, a group of serving cells associated with the MasterNode, comprising of the SpCell (PCell) and optionally one ormore SCells.maxPUSCH-Restriction on PUSCH-duration for the corresponding logicalDurationchannel. If present, UL MAC SDUs from this logical channel canonly be transmitted using uplink grants that result in a PUSCHduration shorter than or equal to the duration indicated by thisfield. Otherwise, UL MAC SDUs from this logical channel canbe transmitted using an uplink grant resulting in any PUSCHduration.NRNR radio accessPCellSpCell of a master cell group.PDCP entityThe process triggered upon upper layer request. It includes thereestablishmentinitialization of state variables, reset of header compression andmanipulating of stored PDCP SDUs and PDCP PDUs. Thedetails can be found in 5.1.2 of 38.323PDCP suspendThe process triggered upon upper layer request. When triggered,transmitting PDCP entity set TX_NEXT to the initial value anddiscard all stored PDCP PDUs. The receiving entity stop andreset t-Reordering, deliver all stored PDCP SDUs to the upperlayer and set RX_NEXT and RX_DELIV to the initial valuePDCP-configThe IE PDCP-Config is used to set the configurable PDCPparameters for signalling and data radio bearers. For a data radiobearer, discardTimer, pdcp-SN-Size, header compressionparameters, t-Reordering and whether integrity protection isenabled are configured. For a signaling radio bearer, t-Reorderingcan be configuredPLMN ID Checkthe process that checks whether a PLMN ID is the RPLMNidentity or an EPLMN identity of the UE.Primary CellThe MCG cell, operating on the primary frequency, in which theUE either performs the initial connection establishment procedureor initiates the connection re-establishment procedure.Primary SCG CellFor dual connectivity operation, the SCG cell in which the UEperforms random access when performing the Reconfigurationwith Sync procedure.priorityLogical channel priority, as specified in TS 38.321. an integerbetween 0 and 7. 0 means the highest priority and 7 means thelowest priorityPUCCH SCella Secondary Cell configured with PUCCH.Radio BearerLogical path between a PDCP entity and upper layer (i.e. SDAPentity or RRC)RLC bearerRLC and MAC logical channel configuration of a radio bearer inone cell group.RLC bearerThe lower layer part of the radio bearer configuration comprisingconfigurationthe RLC and logical channel configurations.RX_DELIVThis state variable indicates the COUNT value of the first PDCPSDU not delivered to the upper layers, but still waited for.RX_NEXTThis state variable indicates the COUNT value of the next PDCPSDU expected to be received.RX_REORDThis state variable indicates the COUNT value following theCOUNT value associated with the PDCP Data PDU whichtriggered t-Reordering.Serving CellFor a UE in RRC_CONNECTED not configured with CA/DCthere is only one serving cell comprising of the primary cell. 
Fora UE in RRC_CONNECTED configured with CA/ DC the term′serving cells′ is used to denote the set of cells comprising of theSpecial Cell(s) and all secondary cells.SpCellprimary cell of a master or secondary cell group.Special CellFor Dual Connectivity operation the term Special Cell refers tothe PCell of the MCG or the PSCell of the SCG, otherwise theterm Special Cell refers to the PCell.SRBSignalling Radio Bearers″ (SRBs) are defined as Radio Bearers(RBs) that are used only for the transmission of RRC and NASmessages.SRB0SRB0 is for RRC messages using the CCCH logical channelSRB1SRB1 is for RRC messages (which may include a piggybackedNAS message) as well as for NAS messages prior to theestablishment of SRB2, all using DCCH logical channel;SRB2SRB2 is for NAS messages and for RRC messages which includelogged measurement information, all using DCCH logicalchannel. SRB2 has a lower priority than SRBI and may beconfigured by the network after AS security activation;SRB3SRB3 is for specific RRC messages when UE is in (NG)EN-DCor NR-DC, all using DCCH logical channelSRB4SRB4 is for RRC messages which include application layermeasurement reporting information, all using DCCH logicalchannel.Suitable cellA cell on which a UE may camp. Following criteria applyThe cell is part of either the selected PLMN or the registeredPLMN or PLMN of the Equivalent PLMN listThe cell is not barredThe cell is part of at least one TA that is not part of the list of″Forbidden Tracking Areas for Roaming″ (TS 22.011 [18]),which belongs to a PLMN that fulfils the first bullet above.The cell selection criterion S is fulfilled (i.e. RSRP and RSRQare better than specific values In the present invention, “trigger” or “triggered” and “initiate” or “initiated” may be used in the same meaning. In the present invention, “radio bearers allowed for the second resume procedure”, “radio bearers for which the second resume procedure is set”, and “radio bearers for which the second resume procedure is enabled” may all have the same meaning. FIG.1Ais a diagram illustrating the architecture of an 5G system and a NG-RAN to which the disclosure may be applied. 5G system consists of NG-RAN1a-01and 5GC1a-02. An NG-RAN node is either:A gNB, providing NR user plane and control plane protocol terminations towards the UE; orAn ng-eNB, providing E-UTRA user plane and control plane protocol terminations towards the UE. The gNBs1a-05or1a-06and ng-eNBs1a-03or1a-04are interconnected with each other by means of the Xn interface. The gNBs and ng-eNBs are also connected by means of the NG interfaces to the 5GC, more specifically to the AMF (Access and Mobility Management Function) and to the UPF (User Plane Function). AMF1a-07and UPF1a-08may be realized as a physical node or as separate physical nodes. A gNB1a-05or1a-06or an ng-eNBs1a-03or1a-04hosts the functions listed below. 
Functions for Radio Resource Management such as Radio Bearer Control, Radio Admission Control, Connection Mobility Control, Dynamic allocation of resources to UEs in uplink, downlink and sidelink (scheduling); and
IP and Ethernet header compression, uplink data decompression and encryption of user data stream; and
Selection of an AMF at UE attachment when no routing to an AMF can be determined from the information provided by the UE; and
Routing of User Plane data towards UPF; and
Scheduling and transmission of paging messages; and
Scheduling and transmission of broadcast information (originated from the AMF or O&M); and
Measurement and measurement reporting configuration for mobility and scheduling; and
Session Management; and
QoS Flow management and mapping to data radio bearers; and
Support of UEs in RRC_INACTIVE state; and
Radio access network sharing; and
Tight interworking between NR and E-UTRA; and
Support of Network Slicing.
The AMF 1a-07 hosts functions such as NAS signaling, NAS signaling security, AS security control, SMF selection, Authentication, Mobility management and positioning management. The UPF 1a-08 hosts functions such as packet routing and forwarding, transport level packet marking in the uplink and the downlink, QoS handling, mobility anchoring for mobility, etc. FIG.1B is a diagram illustrating a wireless protocol architecture in a 5G system to which the disclosure may be applied. The user plane protocol stack consists of SDAP 1b-01 or 1b-02, PDCP 1b-03 or 1b-04, RLC 1b-05 or 1b-06, MAC 1b-07 or 1b-08 and PHY 1b-09 or 1b-10. The control plane protocol stack consists of NAS 1b-11 or 1b-12, RRC 1b-13 or 1b-14, PDCP, RLC, MAC and PHY. Each protocol sublayer performs functions related to the operations listed in Table 3.
TABLE 3
Sublayer: Functions
NAS: authentication, mobility management, security control, etc.
RRC: System Information, Paging, Establishment, maintenance and release of an RRC connection, Security functions, Establishment, configuration, maintenance and release of Signalling Radio Bearers (SRBs) and Data Radio Bearers (DRBs), Mobility, QoS management, Detection of and recovery from radio link failure, NAS message transfer, etc.
SDAP: Mapping between a QoS flow and a data radio bearer, Marking QoS flow ID (QFI) in both DL and UL packets.
PDCP: Transfer of data, Header compression and decompression, Ciphering and deciphering, Integrity protection and integrity verification, Duplication, Reordering and in-order delivery, Out-of-order delivery, etc.
RLC: Transfer of upper layer PDUs, Error Correction through ARQ, Segmentation and re-segmentation of RLC SDUs, Reassembly of SDU, RLC re-establishment, etc.
MAC: Mapping between logical channels and transport channels, Multiplexing/demultiplexing of MAC SDUs belonging to one or different logical channels into/from transport blocks (TB) delivered to/from the physical layer on transport channels, Scheduling information reporting, Priority handling between UEs, Priority handling between logical channels of one UE, etc.
PHY: Channel coding, Physical-layer hybrid-ARQ processing, Rate matching, Scrambling, Modulation, Layer mapping, Downlink Control Information, Uplink Control Information, etc.
FIG.1C is a diagram illustrating a structure of a positioning system according to an embodiment of the present disclosure. The terminal 1c-03 is connected to the LMF 1c-33 through the gNB 1c-13 and the AMF 1c-23. Hereinafter, gNB is also referred to as a base station, AMF as an access mobility function, and LMF as a location management function. The base station provides the TRP function.
AMF stores the capability of the terminal related to location confirmation and relays the signaling between the location management function and the terminal. AMF may be connected to several base stations. One AMF can be connected to several LMFs. The AMF may initially select the LMF for any terminal. The AMF may select another LMF when the terminal moves to a new cell. The LMF manages the support of different location services for target UEs, including positioning of UEs and delivery of assistance data to UEs. The LMF may interact with a target UE in order to deliver assistance data if requested for a particular location service, or to obtain a location estimate if that was requested. For positioning of a target UE, the LMF decides on the position methods to be used The positioning methods may yield a location estimate for UE-based position methods and/or positioning measurements for UE-assisted and network-based position methods. The LMF may combine all the received results and determine a single location estimate for the target UE (hybrid positioning). Additional information like accuracy of the location estimate and velocity may also be determined. FIG.1Dis a diagram illustrating a protocol hierarchical structure for signaling between a location management function and a terminal according to an embodiment of the present disclosure. The terminal and LMF exchange signaling through LPP1d-03. LPP defines various control messages related to positioning. The LPP control message is included in the NAS1d-13message and delivered to the AMF, and the AMF delivers the LPP control message included in the NAS message to the LMF. LPP is a protocol applied to both LTE and NR. Hereinafter, LPP is also called positioning protocol. FIG.2Ashows the types of positioning method. The positioning methods are GNSS positioning2a-01, OTDOA positioning2a-05, Barometric pressure sensor positioning2a-03, DL-AoD positioning2a-07, DL-TDOA positioning2a-09, UL-TDOA positioning2a-11, etc. GNSS positioning and barometric pressure sensor positioning are positioning methods independent of radio access technology, OTDOA positioning is a positioning method using an LTE downlink signal, and DL-AoD positioning and DL-TDOA positioning are positioning methods using a specific NR downlink signal. The specific NR downlink signal is a positioning reference signal (PRS). UL-TDOA positioning is a positioning method using a specific NR uplink signal. The specific NR uplink signal is a sounding reference signal (SRS). FIG.2Bis a diagram illustrating positioning assistance data. Assistance data may be transmitted to the positioning device so that each positioning can be performed more quickly and accurately. The assistance data may be provided through system information or transmitted through an LPP message. The positioning device may be a terminal or a base station. Assistance data is transmitted while being included in assistanceDataElement (assitanceDataElement). One assitanceDataElement contains specific information related to a specific positioning method. For example, GNSS-ReferenceTime assitanceDataElement includes reference time information of GNSS and is transmitted through the positioning SIB called posSibType1-1 or delivered to the terminal through the LPP control message called ProvideAssistanceData. When provided through the positioning SIB, assitanceDataElement is mapped to a specific positioning SIB type. GNSS-related assitanceDataElements2b-01to2b-03are mapped to positioning SIB type 1 and positioning SIB type 2. 
OTDOA-related assistanceDataElement 2b-05 is mapped to positioning SIB type 3, barometric pressure sensor positioning-related assistanceDataElement 2b-07 is mapped to positioning SIB type 4, and DL-AoD and DL-TDOA-related assistanceDataElement 2b-11 are mapped to positioning SIB type 6. Most of the assistanceDataElements are immediately applicable upon receipt. However, specific information transmitted through the SIB, such as PRS-related assistance data, can be divided into information that is immediately applicable and information that is applicable only when a predetermined condition is met. For example, NR-DL-PRS-AssistanceData 2b-13 includes assistance data that is applied immediately, and NR-DL-PRS-ConditionalAssistanceData 2b-15 includes assistance data that is applied when a predetermined condition is satisfied or is selectively applied. Assistance data that is immediately applicable is called type 1 assistance data, and assistance data that is applicable when predetermined conditions are met is called type 2 assistance data. FIG.2C is a diagram illustrating the structure of NR-DL-PRS-AssistanceData. Definitions of the IEs used in FIG.2C follow specification 37.355, unless otherwise defined. NR-DL-PRS-AssistanceData provides information on the PRS as assistance data for DL-TDOA or DL-AoD. NR-DL-PRS-AssistanceData is provided to the terminal through positioning SIB type 6-1 or through ProvideAssistanceData. One NR-DL-PRS-AssistanceData 2c-01 is composed of one nr-DL-PRS-ReferenceInfo 2c-03 and one nr-DL-PRS-AssistanceDataList 2c-05. The nr-DL-PRS-ReferenceInfo 2c-03 provides information on the identifier and frequency of the TRP that provides the reference for nr-DL-PRS-SFN0-Offset, dl-PRS-ResourceSlotOffset, etc. The nr-DL-PRS-AssistanceDataList 2c-05 is composed of a plurality of NR-DL-PRS-AssistanceDataPerFreq 2c-07. One NR-DL-PRS-AssistanceDataPerFreq 2c-07 provides information on the PRS provided at a specific frequency and is composed of nr-DL-PRS-PositioningFrequencyLayer 2c-09 and nr-DL-PRS-AssistanceDataPerFreq 2c-11. NR-DL-PRS-AssistanceDataPerFreq 2c-07 and nr-DL-PRS-AssistanceDataPerFreq 2c-11 are different IEs. The nr-DL-PRS-AssistanceDataPerFreq 2c-11 is composed of a plurality of NR-DL-PRS-AssistanceDataPerTRP 2c-13. The nr-DL-PRS-PositioningFrequencyLayer 2c-09 is common information applied to the plurality of NR-DL-PRS-AssistanceDataPerTRP 2c-13. It is composed of information such as the subcarrier spacing, the bandwidth of the PRS resource, and the PRB from which the PRS resource starts. One NR-DL-PRS-AssistanceDataPerTRP 2c-13 provides information on the PRS provided by a specific TRP. A TRP may be a cell. NR-DL-PRS-AssistanceDataPerTRP 2c-13 consists of information that is commonly applied to a plurality of nr-DL-PRS-ResourceSets 2c-17 and of the plurality of nr-DL-PRS-ResourceSets 2c-17 themselves. The information commonly applied to the plurality of nr-DL-PRS-ResourceSets 2c-17 includes dl-PRS-ID, a cell identifier corresponding to the TRP, and the time offset of SFN #0 slot #0 for the given TRP with respect to SFN #0 slot #0 of the assistance data reference. One nr-DL-PRS-ResourceSet 2c-17 consists of one dl-PRS-ResourceList 2c-19, and dl-PRS-ResourceList 2c-19 consists of a plurality of dl-PRS-Resources. One dl-PRS-Resource has an identifier, code sequence information applied to the corresponding PRS, the starting slot of the DL-PRS Resource with respect to the corresponding DL-PRS-Resource Set Slot Offset, and QCL information (beam information) of the corresponding PRS.
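The nesting described above can be summarized, purely as an illustrative sketch and not as the ASN.1 definition of TS 37.355, by the following Python data structures. The field selection and the types are simplified assumptions of the sketch; only the names mirror the IEs mentioned in the text.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DlPrsResource:                    # one dl-PRS-Resource
    resource_id: int                    # identifier of the resource
    sequence_info: int                  # code sequence information applied to the PRS
    resource_slot_offset: int           # starting slot relative to the resource set slot offset
    qcl_info: Optional[str] = None      # QCL (beam) information

@dataclass
class DlPrsResourceSet:                 # one nr-DL-PRS-ResourceSet (PRS resources grouped for beam sweeping)
    resources: List[DlPrsResource] = field(default_factory=list)   # dl-PRS-ResourceList

@dataclass
class AssistanceDataPerTrp:             # NR-DL-PRS-AssistanceDataPerTRP
    dl_prs_id: int
    cell_id: Optional[int]              # cell identifier corresponding to the TRP
    sfn0_slot0_offset: int              # time offset w.r.t. SFN #0 slot #0 of the assistance data reference
    resource_sets: List[DlPrsResourceSet] = field(default_factory=list)

@dataclass
class AssistanceDataPerFreq:            # NR-DL-PRS-AssistanceDataPerFreq
    positioning_frequency_layer: dict   # nr-DL-PRS-PositioningFrequencyLayer: SCS, PRS bandwidth, start PRB, ...
    per_trp: List[AssistanceDataPerTrp] = field(default_factory=list)

@dataclass
class NrDlPrsAssistanceData:            # NR-DL-PRS-AssistanceData
    reference_info: dict                # nr-DL-PRS-ReferenceInfo: reference TRP identifier and frequency
    per_freq: List[AssistanceDataPerFreq] = field(default_factory=list)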
The PRS-ResourceSet is composed of a plurality of PRSs using the same frequency resource and is a set of PRS resources grouped for beam sweeping. Consequently, one nr-DL-PRS-AssistanceDataList2c-05includes assistance data for a plurality of frequencies. The assistance data for each frequency includes assistance data for a plurality of TRPs. The assistance data for each TRP may provide information on a plurality of DL-PRS-ResourceSets. One DL-PRS-ResourceSet is composed of a plurality of DL-PRS-Resources. The terminal may perform positioning measurement by measuring the plurality of DL-PRS-Resources indicated in the nr-DL-PRS-AssistanceDataList2c-05. NR-DL-PRS-AssistanceData is assistance data that is applied immediately. DL-PRSs included in NR-DL-PRS-AssistanceData are continuously transmitted from the time point when the terminal receives NR-DL-PRS-AssistanceData until the terminal stops measuring positioning using DL-PRS, and the terminal immediately uses the immediately applicable assistance data when positioning measurement using the assistance data is necessary. FIG.2Dis a diagram illustrating the structure of PRS-ConditionalAssistanceData. The PRS-ConditionalAssistanceDataSet (hereinafter, conditional assistance data set)2d-01is composed of a PRS-ConditionalAssistanceDataList2d-03including a plurality of PRS-ConditionalAssistanceData2d-05(hereinafter, conditional assistance data). Each conditional assistance data2d-05includes PRS-AssistanceData2d-13(hereinafter, assistance data) that is currently being transmitted or that can be started when a terminal requests it. The conditional assistance data set includes type 2 assistance data and is provided to the terminal through positioning SIB type 6-4 or through ProvideAssistanceData. Positioning SIB type 6-1 includes only one type 1 assistance data2c-01, and positioning SIB type 6-4 includes one or more type 2 assistance data2d-13. Conditional assistance data2d-05is composed of PRS-ConditionalAssistanceDataId2d-07(hereinafter, assistance data id), PRS-ConditionalAssistanceDataStatus2d-09(hereinafter, assistance data status), PRS-ConditionalAssistanceDataValidity2d-11(hereinafter, assistance data validity), ReportConfig (hereinafter, Report Configuration), and PRS-AssistanceData2d-13(hereinafter, assistance data). The assistance data id2d-07is an identifier of the related conditional assistance data2d-05or the related assistance data2d-13and is an integer between 0 and 15. The assistance data status2d-09is 1-bit information indicating whether the related assistance data2d-13is being transmitted (or provided). The fact that the assistance data2d-13is being transmitted means that the PRSs specified in the assistance data2d-13are currently being transmitted. If the assistance data status related to the assistance data exists (or the assistance data status is set to the first value), the terminal determines that the PRSs specified in the assistance data are currently being transmitted and performs the necessary operation. If the assistance data status related to the assistance data does not exist (or if the assistance data status is set to the second value), the terminal determines that the PRSs specified in the assistance data are not currently being transmitted. The terminal, if necessary, requests the LMF to start transmission of the PRS. The assistance data validity2d-11indicates under what conditions the relevant conditional assistance data2d-05or the relevant assistance data2d-13are valid. 
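The nesting just described (assistance data list, per-frequency entries, per-TRP entries, resource sets, and resources) can be summarized with the hedged Python sketch below. The field names follow the description, but the selection of fields and their types are illustrative assumptions and do not reproduce the ASN.1 of specification 37.355.

```python
from dataclasses import dataclass, field
from typing import List

# Simplified sketch of the NR-DL-PRS-AssistanceData nesting described above.
@dataclass
class DlPrsResource:
    resource_id: int
    sequence_id: int            # code sequence information of the PRS
    resource_slot_offset: int   # start slot w.r.t. the resource set slot offset
    qcl_info: str               # beam information of the PRS

@dataclass
class DlPrsResourceSet:
    resource_set_id: int
    resources: List[DlPrsResource] = field(default_factory=list)

@dataclass
class AssistanceDataPerTrp:
    dl_prs_id: int
    cell_id: int                    # cell identifier corresponding to the TRP
    sfn0_slot0_time_offset: float   # offset w.r.t. the assistance data reference
    resource_sets: List[DlPrsResourceSet] = field(default_factory=list)

@dataclass
class AssistanceDataPerFreq:
    subcarrier_spacing_khz: int     # from nr-DL-PRS-PositioningFrequencyLayer
    prs_bandwidth_prbs: int
    start_prb: int
    per_trp: List[AssistanceDataPerTrp] = field(default_factory=list)

@dataclass
class NrDlPrsAssistanceData:
    reference_trp_id: int           # from nr-DL-PRS-ReferenceInfo
    per_frequency: List[AssistanceDataPerFreq] = field(default_factory=list)

def all_prs_resources(ad: NrDlPrsAssistanceData) -> List[DlPrsResource]:
    """Flatten the hierarchy into the PRS resources the terminal would measure."""
    return [res
            for freq in ad.per_frequency
            for trp in freq.per_trp
            for rset in trp.resource_sets
            for res in rset.resources]
```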
Alternatively, the assistance data validity indicates which conditions are to be fulfilled for the UE to initiate measurement on the relevant PRS and to report measurement results. The assistance data validity2d-11may include an NR CGI (Cell Global Identifier) List or time interval information. The time interval information is composed of the first time point and the second time point. If the NR CGI of the current cell belongs to the NR CGI List, and the current time expressed in UTC (Coordinated Universal Time) belongs to the time interval defined by the first time point and the second time point, the terminal considers the related conditional assistance data2d-05or related assistance data2d-13valid. If the assistance data status2d-09of the conditional assistance data2d-05determined to be valid is set to ‘available’, ‘transmit’ or ‘broadcast’, the terminal performs positioning measurement for the related PRS and reports measurement results to the LMF. If the assistance data status2d-09of the conditional assistance data2d-05determined to be valid is set to ‘unavailable’, ‘not transmitted’, or ‘non-broadcast’, the terminal requests the LMF to activate the conditional assistance data2d-05. Activation of the conditional assistance data means that the PRSs specified in the conditional assistance data are transmitted. The conditional assistance data set2d-01may be provided through a positioning SIB or may be provided through an LPP control message. The assistance data status2d-09is included only in the conditional assistance data set2d-01provided through the positioning SIB, and the assistance data validity is included only in the conditional assistance data set provided through the LPP control message. Alternatively, the assistance data status is used only for type 2 assistance data provided through the positioning SIB, and the assistance data validity is used only for type 2 assistance data provided through ProvideAssistanceData. ReportConfig2d-12(hereinafter, Report Configuration) consists of parameters related to positioning measurement result reporting, namely maxDL-PRS-RSTD-MeasurementsPerTRPPair and timingReportingGranularityFactor. maxDL-PRS-RSTD-MeasurementsPerTRPPair indicates the maximum number of DL-PRS RSTD (Reference Signal Time Difference) measurements per TRP pair. timingReportingGranularityFactor indicates the recommended reporting granularity for the DL RSTD measurements. The terminal reports the measurement result according to the above ReportConfig when the validity condition of the conditional assistance data is met. The assistance data2d-13of the conditional assistance data2d-05is an IE having the same structure as the PRS-AssistanceData2c-01. The conditional assistance data is classified into conditional assistance data1 received through the positioning SIB and conditional assistance data2 received through the LPP control message. The assistance data status IE is mandatorily present in conditional assistance data1, but the assistance data status IE does not exist in conditional assistance data2. In conditional assistance data2, the assistance data validity exists, but in conditional assistance data1, the assistance data validity does not exist. The purpose of conditional assistance data1 is to inform the terminal of PRSs whose transmission can be activated in the corresponding cell. 
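A minimal sketch of how a terminal might evaluate one conditional assistance data entry is shown below; it only mirrors the two checks described above (validity by NR CGI list and UTC time interval, and the transmission status), and the helper arguments and return values are illustrative assumptions rather than the standardized procedure.

```python
from datetime import datetime, timezone

def handle_conditional_assistance_data(current_cgi: str,
                                       now_utc: datetime,
                                       cgi_list,
                                       valid_from: datetime,
                                       valid_until: datetime,
                                       status_transmitted: bool) -> str:
    """Sketch of the decision described above for one conditional assistance data.

    Returns one of:
      "measure_and_report" - validity met and the PRSs are currently transmitted
      "request_activation" - validity met but the PRSs are not currently transmitted
      "ignore"             - validity condition not met
    """
    valid = (current_cgi in cgi_list) and (valid_from <= now_utc <= valid_until)
    if not valid:
        return "ignore"
    if status_transmitted:
        # status set to 'available' / 'transmit' / 'broadcast'
        return "measure_and_report"
    # status absent or set to 'unavailable' / 'not transmitted' / 'non-broadcast'
    return "request_activation"

# Example with hypothetical values.
decision = handle_conditional_assistance_data(
    current_cgi="310-260-0x00000A1",
    now_utc=datetime(2023, 5, 1, 12, 0, tzinfo=timezone.utc),
    cgi_list={"310-260-0x00000A1"},
    valid_from=datetime(2023, 5, 1, tzinfo=timezone.utc),
    valid_until=datetime(2023, 5, 2, tzinfo=timezone.utc),
    status_transmitted=False,
)
print(decision)  # request_activation
```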
The terminal may determine the PRSs required for its own positioning measurement among the PRSs indicated in conditional assistance data1 and may request the LMF to activate the corresponding conditional assistance data. The purpose of conditional assistance data2 is to inform the terminal of PRSs to be measured when a predetermined condition is met. The terminal may measure the PRSs that satisfy the condition among the PRSs specified in conditional assistance data2 and report the results to the LMF. FIG.2Eis a diagram illustrating a system information acquisition process. System Information Block (hereinafter referred to as SIB) includes general SIB and positioning SIB. Types of general SIB include SIB1, SIB2, SIB3, SIB4, SIB5, SIB6, SIB7, SIB8, and SIB9. SIB1 includes information related to scheduling of other system information and radio resource configuration information commonly applied to all terminals. SIB2 includes cell reselection information. SIB3 includes information about neighboring cells for intra-frequency cell reselection. SIB4 includes information for inter-frequency cell reselection. SIB5 includes E-UTRA frequency information and the like for inter-RAT cell reselection. SIB6 includes the ETWS (Earthquake and Tsunami Warning System) main notification. SIB7 includes the ETWS sub-notification. SIB8 contains CMAS notifications. SIB9 includes information related to GPS time and Coordinated Universal Time (UTC). The assistance data mapped with the type of positioning SIB is as shown inFIG.2B. One or a plurality of SIBs having the same transmission period are included in one system information (System Information, SI) and transmitted. Scheduling information of the SI related to general SIBs is indicated in the SI scheduling Information. The scheduling information of the SI related to the positioning SIB is indicated in the positioning SI scheduling Information. SI scheduling Information and positioning SI scheduling Information are included in SIB1. The SI scheduling Information includes one or more scheduling information and one SI window length. The scheduling information consists of SI broadcast status, SI periodicity, and SIB mapping information. SI broadcast status indicates whether the corresponding SI message is being broadcast. SI periodicity is the period of the corresponding SI message. The SI window length is the length of the SI scheduling window. The SIB mapping information consists of one or a plurality of SIB type information. The SIB type information includes type information indicating one of sibType2, sibType3, sibType4, sibType5, sibType6, sibType7, sibType8, sibType9, sibType10, sibType11, sibType12, sibType13, and sibType14, and a value tag indicating one of integers between 0 and 31. The positioning SI scheduling Information is composed of one or more positioning scheduling information and the like. The positioning scheduling information consists of positioning SI broadcast status, positioning SI periodicity, and positioning SIB mapping information. The positioning SI broadcast status indicates whether the corresponding positioning SI message is being broadcast. The positioning SI periodicity is the period of the positioning SI message. The positioning SIB mapping information consists of one or a plurality of positioning SIB type information. 
Positioning SIB type information consists of type information indicating one of posSibType1-1, posSibType1-2, posSibType1-3, posSibType1-4, posSibType1-5, posSibType1-6, posSibType1-7, posSibType1-8, posSibType2-1, posSibType2-2, posSibType2-3, posSibType2-4, posSibType2-5, posSibType2-6, posSibType2-7, posSibType2-8, posSibType2-9, posSibType2-10, posSibType2-11, posSibType2-12, posSibType2-13, posSibType2-14, posSibType2-15, posSibType2-16, posSibType2-17, posSibType2-18, posSibType2-19, posSibType2-20, posSibType2-21, posSibType2-22, posSibType2-23, posSibType3-1, posSibType4-1, posSibType5-1, posSibType6-1, posSibType6-2, posSibType6-3 and posSibType6-4. In step2e-11, the terminal2e-01receives SIB1 from the base station (2e-03). SI scheduling Information of SIB1 is set as in2e-13. The positioning SI scheduling Information of SIB1 is set as in2e-15. SI with SI broadcast status set to being broadcast and positioning SI with positioning SI broadcast status set to being broadcast are transmitted according to the order included in SI scheduling Information and positioning SI scheduling Information. For example, they are transmitted in the order of the first SI, the second SI, and the first positioning SI. SI and positioning SI are transmitted within the SI scheduling window and the positioning SI scheduling window. The length of the SI scheduling window and the length of the positioning SI scheduling window are determined by the SI window length of SI scheduling Information. In step2e-17, the terminal receives the first SI in the SI scheduling window for the first SI. The first SI contains only SIB2 as shown in2e-13. As shown in2e-19, the first SI includes one IE called sib-TypeAndInfo, and sib-TypeAndInfo includes SIB2. In step2e-21, the terminal receives the second SI in the SI scheduling window for the second SI. The second SI includes SIB3 and SIB4 as shown in2e-13. As shown in2e-23, the second SI includes two sib-TypeAndInfo IEs, the first sib-TypeAndInfo includes SIB3, and the second sib-TypeAndInfo includes SIB4. In step2e-25, the terminal receives the first positioning SI in the positioning SI scheduling window for the first positioning SI. The first positioning SI includes positioning SIB 6-1 and positioning SIB 6-2 as shown in2e-15. As shown in2e-27, the first positioning SI includes two posSIB-TypeAndInfo IEs, the first posSIB-TypeAndInfo includes positioning SIB 6-1, and the second posSIB-TypeAndInfo includes positioning SIB 6-2. As shown in2e-29, one positioning SIB is composed of value tag2, expiration time, and assistanceDataElement. value tag2 indicates one of integers between 0 and 63 and indicates whether broadcast assistance data has been changed. value tag2 is set by the LMF. The expiration time indicates the time point at which the contents of the broadcast assistance data expire in UTC. assistanceDataElement is a field containing actual assistance data. For a general SIB, a change is indicated by the value tag set by the base station, which indicates one of the integers between 0 and 31. For a positioning SIB, a change is indicated by value tag2 set by the LMF, which indicates one of the integers between 0 and 63. The value tag is included in SIB1 and broadcast, and value tag2 is included in the positioning SI and broadcast. As shown in2e-15, the second positioning scheduling information is not broadcast. The terminal performs a system information request procedure to receive a positioning SI that is not broadcast. 
The terminal should always store valid system information. The terminal maintains the validity of the system information by reacquiring the system information when a predetermined event occurs. When the short message included in the DCI addressed to the P-RNTI indicates systemInfoModification, the terminal receives SIB1, determines the first type SIBs whose value tag has changed, and receives and stores those first type SIBs. The terminal receives and stores positioning SIs including the second type SIB again without considering the value tag. The first type SIB is a general SIB, and the second type SIB is a positioning SIB. When 3 hours have elapsed since the terminal successfully received the first type SIB, the terminal discards the first type SIB and initiates a procedure for acquiring the SI including the first type SIB. When the terminal successfully receives the second type SIB, it stores the second type SIB. Then, in a systemInfoModification period starting just before the expiration time of the second type SIB, the terminal starts a procedure for acquiring the SI including the second type SIB. The systemInfoModification periods are time intervals that occur consecutively, one after another. During one systemInfoModification period, system information cannot be changed. When it is necessary to change the system information, the base station transmits new system information from the time point at which the next systemInfoModification period starts. FIG.2Fis a diagram illustrating a system information request procedure. The terminal can request system information that is not broadcast by using an RRC control message. The RRC_IDLE terminal or RRC_INACTIVE terminal transmits positioning system information request1, and the terminal in RRC_CONNECTED state transmits positioning system information request2. In step2f-11, the RRC_IDLE terminal or RRC_INACTIVE terminal transmits positioning system information request1, which is an RRC control message for requesting positioning system information, to the base station. The positioning system information request1 includes the requested positioning SI list. The requested positioning SI list is a list of SI messages requested by the terminal to be provided by the base station. The requested positioning SI list is a 32-bit bitmap. Each bit of the requested positioning SI list corresponds to each entry according to the order of the entries included in the positioning SI scheduling Information. For example, the first bit corresponds to the first positioning SI of the positioning SI scheduling Information. In step2f-13, the RRC_CONNECTED terminal transmits positioning system information request2, which is an RRC control message for requesting positioning system information, to the base station. The positioning system information request2 includes the requested positioning SIB list. The requested positioning SIB list is a list of positioning SIBs requested by the terminal to be provided by the base station, and includes a plurality of positioning SIB type information. The positioning SIB type information indicates the type of positioning SIB requested by the terminal. In step2f-15, the terminal that has transmitted the positioning system information request1 or positioning system information request2 receives SIB1 from the base station. The terminal checks whether the requested positioning SI or the SI including the requested positioning SIB is broadcast. 
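Because the requested positioning SI list of positioning system information request1 is a 32-bit bitmap ordered according to the entries of the positioning SI scheduling Information, a terminal could build it as in the following sketch (the helper name and inputs are illustrative assumptions).

```python
def build_requested_pos_si_bitmap(pos_si_entries, wanted_indices):
    """Build the 32-bit requested positioning SI bitmap (illustrative sketch).

    pos_si_entries : ordered list of positioning SI entries taken from the
                     positioning SI scheduling Information in SIB1
    wanted_indices : 0-based indices of the positioning SIs the terminal requests
    Bit 0 corresponds to the first positioning SI, bit 1 to the second, and so on.
    """
    bits = ["0"] * 32
    for i in wanted_indices:
        if i < len(pos_si_entries) and i < 32:
            bits[i] = "1"
    return "".join(bits)

# Example: request the second positioning SI (which is not broadcast).
print(build_requested_pos_si_bitmap(["posSI-1", "posSI-2"], [1]))
# -> '01000000000000000000000000000000'
```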
In step2f-17, the terminal receives the positioning SI requested by the terminal or the positioning SI including the positioning SIB requested by the terminal. Positioning system information request1 is transmitted via SRB0 and CCCH. The positioning system information request2 is transmitted via SRB1 and DCCH. Since the size of the control message transmitted through the CCCH is limited, positioning system information request1 reduces the size of transmitted information by indicating the requested SI type information in a bitmap format instead of directly indicating it. On the other hand, since a relatively large message can be transmitted through the DCCH, the positioning system information request2 directly indicates the requested positioning SIB. FIG.2Gis a diagram illustrating the structure of an uplink MAC PDU including an inactive positioning measurement result. The uplink MAC PDU including the inactive positioning measurement result consists of three MAC subPDUs. The MAC SDU (the first SDU)2g-15including the ResumeRequest message belonging to SRB0 is located at the front of the MAC PDU (2g-11), and the MAC SDU (the second SDU)2g-19including the LPP segment message belonging to SRB2 is located next. The first BSR2g-27is located at the rearmost part. That is, the first MAC subPDU including SRB0 data, the second MAC subPDU including SRB2 data, and the third MAC subPDU including the first BSR are included in that order. The MAC sub-header of the first MAC subPDU and the third MAC subPDU consists of two reserved bits and an LCID field. The MAC sub-header of the second MAC subPDU consists of one reserved bit, an F field, an LCID field, and an L field. This arrangement is so that the base station receiving the MAC PDU processes the ResumeRequest first and recognizes the MAC PDU as a MAC PDU related to the small data transfer procedure as quickly as possible. The remaining part2g-15excluding the MAC sub-header in the first MAC subPDU and the remaining part2g-27excluding the MAC sub-header in the third MAC subPDU are plain text that is not ciphered. In the second MAC subPDU, the remaining part2g-19except for the MAC sub-header includes data ciphered with a predetermined security key. The MAC sub-header is not ciphered. The MAC subPDUs are located as described above because the first MAC subPDU and the second MAC subPDU include data processed by RRC and the third MAC subPDU includes data processed by MAC; locating the unciphered data first and the ciphered data later facilitates the processing operation of the terminal. FIG.2His a diagram illustrating the structure of a buffer status report MAC CE. The first BSR MAC CE consists of one logical channel group identifier field2h-01and one first buffer size field2h-03. The logical channel group identifier field2h-01has a 3-bit size and indicates one of the logical channel group identifiers between 0 and 7. The first buffer size field2h-03has a size of 5 bits and indicates one of the first buffer size indexes from 0 to 31. The first buffer size index 0 means that there is no data available for transmission in the logical channels belonging to the corresponding logical channel group. The first buffer size index 31 means that the amount of data for transmission of the logical channels belonging to the corresponding logical channel group is greater than the 30th first buffer size. 
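The subPDU ordering described above (SRB0 ResumeRequest first, SRB2 LPP segment next, BSR last) can be illustrated with the sketch below; the structures are simplified placeholders under stated assumptions and do not reproduce the exact MAC encoding.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MacSubPdu:
    lcid: str          # e.g. "CCCH", "SRB2-DCCH", "BSR" (illustrative labels)
    ciphered: bool
    payload: bytes

def build_inactive_positioning_mac_pdu(resume_request: bytes,
                                       lpp_segment_ciphered: bytes,
                                       bsr_ce: bytes) -> List[MacSubPdu]:
    """Assemble the uplink MAC PDU in the order described above (sketch).

    First : SRB0 ResumeRequest (plain text), so the base station recognizes the
            small data transfer procedure as early as possible.
    Second: SRB2 LPP segment (ciphered with a predetermined security key).
    Last  : first-format BSR MAC CE (plain text, generated by MAC).
    """
    return [
        MacSubPdu("CCCH",      ciphered=False, payload=resume_request),
        MacSubPdu("SRB2-DCCH", ciphered=True,  payload=lpp_segment_ciphered),
        MacSubPdu("BSR",       ciphered=False, payload=bsr_ce),
    ]

pdu = build_inactive_positioning_mac_pdu(b"resume", b"\x8a\x31", b"\x41")
print([sub.lcid for sub in pdu])  # ['CCCH', 'SRB2-DCCH', 'BSR']
```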
The first buffer size index 1 means that the amount of data for transmission of logical channels belonging to the corresponding logical channel group is greater than 0 and less than or equal to the first buffer size. The first buffer size index n (2<=n<=30) indicates that the amount of data for transmission of the logical channels belonging to the corresponding logical channel group is greater than the (n−1)th first buffer size and less than or equal to the nth first buffer size. The 30 first buffer sizes are defined in the standard. The second BSR MAC CE consists of 8 LCGi bits2h-11and a plurality of second buffer size fields2h-13. The LCGi bit indicates whether the second buffer size field for logical channel group i exists. For example, LCG1 indicates whether the second buffer size field for logical channel group 1 exists. If this field is 1, the second buffer size field for the corresponding LCG exists. The second buffer size field has an 8-bit size and indicates one of the second buffer size indexes between 0 and 255. The second buffer size index 0 means that there is no data available for transmission in the logical channels belonging to the corresponding logical channel group. The second buffer size index 254 means that the amount of data for transmission of the logical channels belonging to the corresponding logical channel group is greater than the 253rd second buffer size. The second buffer size index 1 means that the amount of data for transmission of the logical channels belonging to the corresponding logical channel group is greater than 0 and less than or equal to the 1st second buffer size. The second buffer size index n (2<=n<=253) indicates that the amount of data for transmission of the logical channels belonging to the corresponding logical channel group is greater than the (n−1)th second buffer size and less than or equal to the nth second buffer size. The second buffer size index 255 is not used. The 253 second buffer sizes are defined in the specification. The first BSR MAC CE is referred to as a BSR to which the first format is applied or the first format BSR. The second BSR MAC CE is referred to as a BSR to which the second format is applied or the second format BSR. A logical channel group is configured when a logical channel is configured. A logical channel and a logical channel group are configured with an RRC control message. In general, a buffer size index reflecting the amount of data available for transmission of the RLC layer and the amount of data available for transmission of the PDCP layer is set in the buffer size field. FIG.3Ais a diagram illustrating the overall operation of a terminal, a base station, and an LMF. In step3a-11, the terminal selects an NR cell and camps on it. The terminal may select an NR cell in which the downlink reference signal received power and downlink reference signal received quality exceed a predetermined threshold. The terminal does not consider neighboring cell information included in the System Information Block in cell selection. In step3a-13, the terminal receives system information from the base station in the selected NR cell. The terminal receives the MIB first and receives SIB1 based on the information of the MIB. The terminal receives the remaining system information by referring to the scheduling information of SIB1. In step3a-15, the terminal establishes an RRC connection with the base station. 
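The index selection rule described above (index 0 for no data, index n for amounts between the (n−1)th and nth buffer sizes, and the top index for anything above the last buffer size) can be expressed as the short sketch below; the threshold table used in the example is made up for illustration and is not the table from the MAC specification.

```python
import bisect

def buffer_size_index(amount_bytes: int, thresholds: list) -> int:
    """Map an amount of data to a BSR buffer size index (illustrative sketch).

    thresholds : ascending list of the defined buffer sizes
                 (30 values for the first format, 253 for the second format)
    Returns 0 if there is no data, len(thresholds) + 1 if the amount exceeds the
    last threshold, and otherwise the smallest n with amount <= thresholds[n-1].
    """
    if amount_bytes <= 0:
        return 0
    if amount_bytes > thresholds[-1]:
        return len(thresholds) + 1
    return bisect.bisect_left(thresholds, amount_bytes) + 1

# Example with a made-up 30-entry threshold table for the first format.
example_thresholds = [10 * (i + 1) for i in range(30)]   # 10, 20, ..., 300 bytes
print(buffer_size_index(0, example_thresholds))    # 0  (no data)
print(buffer_size_index(15, example_thresholds))   # 2  (>10 and <=20)
print(buffer_size_index(500, example_thresholds))  # 31 (greater than the 30th size)
```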
The terminal and the base station exchange RRCRequest messages, RRCSetup messages, and RRCSetupComplete messages through the random access process. When the terminal receives the RRCSetup message from the base station, the RRC connection is established. A terminal that has established an RRC connection may perform a positioning preparation procedure and a positioning execution procedure with a base station or LMF. The positioning preparation procedure consists of a UE capability reporting phase3a-17and an assistance data delivery phase3a-19. The positioning execution procedure3a-21,3a-23consists of a terminal and a base station performing positioning measurement using an uplink signal and a downlink signal and reporting it to the LMF. The UE capability reporting phase is performed only in the RRC connected state, but the assistance data delivery phase and the positioning execution procedure may be performed not only in the RRC connected state but also in the RRC inactive state. When the terminal receives assistance data and report configuration from the base station or the LMF, it measures for positioning based on the assistance data, and reports the measurement result to the LMF based on the report configuration. The terminal may receive the first type assistance data in assistanceDataProvide and may receive Report Configuration in positioningDataRequest. Upon receiving the positioningDataRequest, the terminal performs positioning measurement based on the assistance data of the first type assistance data of assistanceDataProvide and reports the measurement result to the LMF based on the Report Configuration of positioningDataRequest. The terminal can receive the second type assistance data including Report Configuration and assistance data validity in one assistanceDataProvide. When the validity of the assistance data is satisfied, the terminal performs the measurement for positioning based on the second type assistance data of the assistanceDataProvide and reports the measurement result to the LMF based on the Report Configuration of the same assistanceDataProvide. FIG.3Bis a diagram illustrating a terminal capability reporting procedure. In step3b-11, the first base station3a-03instructs capability reporting by transmitting a UECapabilityEnquiry RRC message to the terminal3a-01. In step3b-13, the terminal reports the capability by sending a UECapabilityInformation RRC message to the first base station. UECapabilityInformation includes the first capability information and the third capability information. The base station may determine the positioning measurement configuration for the terminal by referring to the first capability information and the third capability information. In step3b-15, the first base station delivers the first capability information and the third capability information to the AMF3a-04, and in step3b-17, the AMF stores the first capability information and the third capability information for future use. In step3b-21, the first LMF3a-05instructs capability reporting by sending an LMF message called requestCapabilities to the terminal. The message includes information indicating for which positioning method the terminal should report capability. In step3b-23, the terminal reports the capability by sending the LMF message provideCapabilities to the first LMF. provideCapabilities includes the second capability information and the third capability information. 
The first LMF refers to the second capability information and the third capability information to instruct positioning measurement for the terminal and provides assistance data required by the terminal. In step3b-25, the first LMF transfers the second capability information and the third capability information to the AMF, and in step3b-27, the AMF stores the second capability information and the third capability information for future use. Later, the terminal establishes an RRC connection with the second base station3a-07. When the location service for the terminal is started, the AMF provides the first capability information and the third capability information to the second base station in step3b-31, instead of the base station and the LMF directly acquiring the relevant capability information from the terminal, and in step3b-33, the AMF provides the stored second capability information and the third capability information to the second LMF. The first capability information is capability information that the terminal reports to the base station through the RRC control message. It is capability information that the LMF does not require and only the base station requires. The following IEs are applicable. The first capability information is information necessary for the base station to establish positioning measurement and is information about capability closely related to the radio interface. The first capability information1: it indicates whether the UE supports parallel transmission of SRS and PUCCH/PUSCH. The first capability information2: information indicating whether the terminal supports SRS for positioning in the connected state (indicating support of SRS for positioning in RRC_CONNECTED). It is defined for each band of the band combination (or defined within the band combination) and is reported as part of the band combination specific capability information. The terminal reports band specific capability information for each band it supports. For each band combination supported by the terminal, the terminal reports band combination specific capability information that is valid only for that band combination. Whether the connected state positioning SRS is supported is indicated for each band in the band combination. For example, if the terminal supports band A, band B, and band combination [A, B], the terminal reports to the base station band A specific capability information applied to band A, band B specific capability information applied to band B, band A capability information in the band combination [A, B], and band B capability information in the band combination [A, B]. The terminal reports, as band capability information of the band combination, whether the positioning SRS is supported in connected mode. The first capability information3: it indicates the maximum number of configured pathloss reference RSs for PUSCH/PUCCH/SRS for pathloss reference RS update. The first capability information4: it indicates measurement gap pattern(s) optionally supported by the UE for PRS measurement. The first capability information5: it indicates support of small data transfer via SRB2. The second capability information is capability information that the terminal reports to the LMF through the LPP control message. It is the capability information that the LMF needs and the base station does not need. The following IEs are applicable. The second capability information is information required for the LMF to establish positioning measurement and positioning report. 
It is information on capability closely related to the positioning function. The second capability information1: It indicate several positioning modes using a bit map. positioning mode information indicates a mode supported by the UE among UE-assisted and LMF-based mode, LMF-based mode, LMF-assisted and UE based mode, UE based mode and UE standalone mode. The second capability information2: It indicates the target device's LPP message segmentation capabilities. If bit0 is 1, it indicates that the target device can receive the segmented LPP message. If bit1 is 1, it indicates that the target device can transmit a segmented LPP message. The second capability information3: It indicates whether the target device can perform positioning measurement using PRS for a predetermined positioning method in an inactive state. The predetermined positioning method may be, for example, DL-AoD or DL-TDOA. That is, it indicates whether the terminal can measure PRS in the inactive state. The second capability information4: It indicates whether the target device can report the positioning measurement result in the inactive state. The third capability information is capability information that the terminal reports to the LMF through the LPP control message and to the base station through the RRC control message. It is the capability information required by both the LMF and the base station, and the following IEs are applicable. The third capability information1: It indicates support of SRS for positioning in RRC_INACTIVE. It is defined per band and reported as part of band specific capability information. The third capability information2: It is outer loop power control related information. It indicates whether the UE supports OLPC for SRS for positioning. The third capability information3: It indicates whether the UE supports spatial relations for SRS for positioning. The first capability information2 (indicating whether positioning SRS is supported in CONNECTED state) is reported to base station per band combination (or per feature set). The third capability information1 (indicating whether positioning SRS is supported in INACTIVE state) is reported per band to base station and to LMF. The definition of FeatureSet can be referred to 3GPP specification 38.331 and 38.306. Capability information on positioning SRS in INACTIVE state is reported both to base station and to LMF. Capability information on PRS in INACTIVE state is reported to LMF only. FIG.3Cis illustrating assistance data delivery phase. The assistance data is classified into immediate assistance data (first type assistance data) and conditional assistance data (second type assistance data). The base station may provide assistance data using the positioning SIB. The LMF sets the contents of the assistance data included in the positioning SIB. The LMF can provide assistance data to the terminal using the LPP control message. The terminal may acquire assistance data through system information in the idle state as in steps3a-13or may acquire assistance data through system information after RRC connection state transition3a-15. When the location service is started, the terminal may initiate a procedure for obtaining assistance data. The location service may be started regardless of the RRC state of the terminal. In step3c-11, the terminal receives SIB1 from the base station. The terminal stores SI scheduling Information and positioning SI scheduling Information. 
The terminal transitions to the connected state through steps3a-15and3a-17and performs the terminal capability reporting step. If the location service is started, the terminal performs steps3a-19to obtain assistance data. In step3c-13, the terminal receives the SI including the positioning SIB from the base station and determines whether required assistance data is provided in the corresponding cell. Required assistance data means assistance data for a positioning method supported by a terminal or assistance data for a positioning method to be used in a disclosed location service. The terminal determines, through the positioning SI scheduling information of SIB1, the required assistance data directly or indirectly provided from the corresponding cell and the required assistance data not provided from the corresponding cell. The assistance data currently being transmitted from the corresponding cell, that is, the assistance data of the positioning SIB in which the positioning SI broadcast status is set to being broadcast, is assistance data directly provided from the corresponding cell. Assistance data that is not currently transmitted from the corresponding cell but may be transmitted in the future, that is, the assistance data of the positioning SIB in which the positioning SI broadcast status is set to non-broadcast, is assistance data that is indirectly provided from the corresponding cell. The terminal receives the positioning SI including the positioning SIB provided directly in step3c-13as follows.1: Determining the time interval in which the positioning SI/positioning SIB can be transmitted based on the SI window length in the SI scheduling information and positioning SIB mapping information and the order of the SI scheduling information in the positioning SI scheduling information obtained from SIB1.2: Monitoring SI-RNTI in the time interval3: Receive a MAC PDU scheduled through SI-RNTI in the time interval4: Acquire positioning SI included in the MAC PDU In order to obtain the necessary positioning SIB provided indirectly, the terminal generates a positioning system information request2 requesting the positioning SIB to the base station. In step3c-15, the terminal sends positioning system information request2 to the base station. The terminal sets the requested positioning SIB list as follows.1: Identifying the positioning SI mapped with the required positioning SIB2: Identifying the positioning SI in which the positioning SI broadcast status is non-broadcast among the positioning SIs3: Determining the positioning SIB mapped to the positioning SI4: Including the positioning SIB type information in the requested positioning SIB list That is, the terminal includes, in the requested positioning SIB list, a positioning SIB mapped to a positioning SI in which the positioning SI broadcast status is set to non-broadcast among the required positioning SIB s. In step3c-17, the terminal receives the requested indirect positioning SIB/positioning SI from the base station. The indirect positioning SIB includes immediate assistance data1. The immediate assistance data1 may be, for example, GNSS-related assistance data included in positioning SIB1-x or positioning SIB2-x. Alternatively, immediate assistance data1 may be NR-DL-PRS-AssistanceData included in positioning SIB 6-1. In step3c-21, the terminal receives the indirect positioning SIB/positioning SI requested from the base station. The indirect positioning SIB includes conditional assistance data1. 
The conditional assistance data1 may be, for example, a conditional assistance data set included in positioning SIB 6-4. The base station includes the immediate assistance data and the conditional assistance data in different positioning SIBs and maps the positioning SIB corresponding to the immediate assistance data and the positioning SIB corresponding to the conditional assistance data to different positioning SIs. Through this, terminals requiring only immediate assistance data and terminals requiring only conditional assistance data can receive only the required assistance data. In addition, assistance data can be provided more flexibly; for example, the immediate assistance data is transmitted in a direct positioning SIB/direct positioning SI and the conditional assistance data is transmitted in an indirect positioning SIB/indirect positioning SI. In step3c-23, the terminal transmits an LPP message called requestAssistanceData requesting assistance data to the base station. The LPP message is delivered to the LMF through the base station. requestAssistanceData is transmitted to the base station through SRB2/DCCH. RequestAssistanceData contains the fields below.1: PCI of PCell. The LMF identifies the cell in which the terminal is located by referring to the PCI of the PCell and determines the assistance data valid for the cell and the adjacent area.2: Type of required assistance information. It indicates the type of assistance data requested by the terminal. This field indicates the relevant positioning method. For example, if this field indicates GNSS, the LMF determines that the terminal requests to provide GNSS-related assistance data.3: Identifier of conditional assistance data1 requiring activation. It is an identifier of the conditional assistance data1 that the terminal desires to be activated, among the conditional assistance data1 obtained through the positioning SIB or the like. The terminal indicates the assistance data id2d-07of the desired conditional assistance data2d-05among the plurality of conditional assistance data2d-05included in the conditional assistance data set2d-01.4: Information indicating that the required (or requested) assistance data is conditional assistance data. The terminal includes this field if the conditional assistance data1 received from the base station does not include the conditional assistance data for the positioning method it wants. In step3c-25, the LMF transmits an LPP message called ProvideAssistanceData that provides assistance data to the terminal. ProvideAssistanceData contains the fields below.1: immediate assistance data. Among the immediate assistance data requested by the terminal, this is the immediate assistance data that the LMF can provide.2: Activated conditional assistance data id. The activated conditional assistance data is indicated among the conditional assistance data1 for which the terminal has requested activation. It is indicated by the assistance data id.3: conditional assistance data2. Among the conditional assistance data requested by the terminal, this is the conditional assistance data that the LMF can provide. When a predetermined condition is met, the terminal performs positioning measurement by applying conditional assistance data2 and reports the positioning measurement result to the LMF.4: inactive positioning. Information indicating whether the terminal should perform positioning-related operations in the inactive state. 
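The requestAssistanceData fields listed above can be grouped as in the following sketch, which shows how a terminal might fill the message when it wants a particular conditional assistance data1 activated; the structure is an illustrative simplification, and the field and helper names mirror the description rather than the LPP ASN.1.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RequestAssistanceDataSketch:
    pcell_pci: int                                   # field 1: PCI of the PCell (0..1007)
    required_assistance_types: List[str]             # field 2: e.g. ["GNSS", "DL-TDOA"]
    conditional_ad_ids_to_activate: List[int] = field(default_factory=list)  # field 3: ids 0..15
    request_conditional_assistance_data: bool = False  # field 4: wanted method missing

def build_request(pci: int,
                  methods: List[str],
                  activate_ids: Optional[List[int]] = None,
                  missing_conditional: bool = False) -> RequestAssistanceDataSketch:
    """Fill the requestAssistanceData fields described above (illustrative sketch)."""
    return RequestAssistanceDataSketch(
        pcell_pci=pci,
        required_assistance_types=methods,
        conditional_ad_ids_to_activate=list(activate_ids or []),
        request_conditional_assistance_data=missing_conditional,
    )

# Example: ask the LMF to activate the conditional assistance data with id 3.
req = build_request(pci=101, methods=["DL-TDOA"], activate_ids=[3])
print(req)
```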
It may be at least one of the following three pieces of information.4-1: positioning measurement continuation indicator: 1-bit information indicating whether to continue the currently performed positioning measurement operation after transitioning to the inactive state.4-2: conditional assistance data based positioning measurement: 1-bit information instructing to perform positioning measurement by applying available conditional assistance data when transitioning to an inactive state. The available conditional assistance data may be a plurality of conditional assistance data included in conditional assistance data1 and a plurality of conditional assistance data included in conditional assistance data2.4-3: inactive positioning measurement method list: A list of positioning measurement methods to be performed by the terminal when transitioning to inactive state. It may be composed of a bitmap in which each bit is mapped with a predetermined positioning measurement method. The terminal may perform positioning measurement by measuring PRSs indicated in immediate assistance data and PRSs indicated in activated conditional assistance data1. The terminal reports the PCI to the LMF in requestAssistanceData. The LMF may provide conditional assistance data validity information composed of multiple NR CGIs to the terminal in ProvideAssistanceData. Alternatively, the LMF may provide conditional assistance data validity information composed of a plurality of CellIdentity to the terminal in ProvideAssistanceData. Alternatively, the LMF may provide conditional assistance data validity information composed of a plurality of cell identities and a plurality of base station identifier (gNB identifier) length information to the terminal in ProvideAssistanceData. LMF considers PCI and determines which cell's assistance data to provide to the terminal. The terminal determines in which cell the assistance data is valid by considering the cell identifier provided by the LMF. The NR CGI consists of MCC (Mobile Country Code) and MNC (Mobile Network Code), which are information indicating the PLMN, and Cell Identity, which is information indicating the cell. Cell Identity has a size of 36 bits, and the leftmost n bits are the base station indicator (gNB identifier). The n has a variable size between 22 and 32 and may be known to the terminal as separate information called base station identifier length information. PCI is an integer between 0 and 1007. PCI is an indicator that specifies a cell within a relatively narrow area, NR CGI is an indicator that specifies a cell globally, and Cell Identity is an indicator that specifies a cell within one PLMN. FIG.3Dis a diagram illustrating an uplink positioning process of an inactive terminal. In the uplink positioning process, the terminal in the RRC connected state receives the SRS configuration from the base station and transmits the SRS, the base station measures the SRS and reports the measurement result to the LMF, and the LMF calculates the terminal's position based on the measurement result. Although the SRS measurement can be performed by several base stations, only one base station is illustrated inFIG.3dfor convenience. In step3d-01, the terminal receives an RRCReconfiguration message including SRS configuration from the base station. The SRS configuration may be provided for each UL BWP, and the SRS configuration consists of one or more SRS-PosResourceSet (hereinafter, SRS positioning resource set). 
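Since the Cell Identity is 36 bits and its leftmost n bits form the gNB identifier, with n between 22 and 32 signaled as base station identifier length information, a terminal could split a cell identity as in this short sketch (helper name and example values are illustrative).

```python
def split_cell_identity(cell_identity: int, gnb_id_length: int):
    """Split a 36-bit NR Cell Identity into the gNB identifier and the local cell part.

    cell_identity : integer in [0, 2**36)
    gnb_id_length : n, the number of leftmost bits forming the gNB identifier
                    (22 <= n <= 32 per the description above)
    """
    assert 0 <= cell_identity < 2**36
    assert 22 <= gnb_id_length <= 32
    local_bits = 36 - gnb_id_length
    gnb_id = cell_identity >> local_bits
    local_cell_id = cell_identity & ((1 << local_bits) - 1)
    return gnb_id, local_cell_id

# Example: a 36-bit identity with a 24-bit gNB identifier.
gnb, local = split_cell_identity(0x123456789, 24)
print(hex(gnb), hex(local))   # 0x123456 0x789
```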
One SRS positioning resource set consists of one or more SRS-PosResource (hereinafter, SRS positioning resource). The SRS positioning resource is defined by srs-PosResourceId (SRS positioning resource identifier), startPosition, nrofSymbols, freqDomainShift, freqHopping, periodicityAndOffset-sp, spatialRelationInfoPos, and the like. StartPosition and nrofSymbols indicate the start position of a symbol in which SRS is transmitted and the number of symbols in which SRS is transmitted in the positioning SRS slot. FreqDomainShift and freqHopping define the frequency resource through which the SRS is transmitted in relation to the frequency domain of the corresponding BWP. PeriodicityAndOffset-sp indicates the periodicity and the slot at which the positioning SRS slot starts. The positioning SRS slot means a slot in which a positioning SRS resource is configured or a slot in which a positioning SRS is transmitted. SpatialRelationInfoPos defines a spatial domain transmission filter to be applied to positioning SRS transmission and may be set to a downlink reference signal index of a serving cell, an SSB index of a neighboring cell, and the like. The SRS positioning resource set consists of an SRS positioning resource set identifier, an SRS positioning resource identifier list, ResourceType, alpha, p0, and pathlossReferenceRS-Pos. The SRS positioning resource identifier list is the list of SRS positioning resource identifiers composing the SRS positioning resource set. ResourceType indicates one of “periodic”, “semi-persistent”, and “aperiodic”. In the present disclosure, a semi-persistent SRS positioning resource set will be described as an example. For an SRS positioning resource set whose ResourceType is indicated as semi-persistent, SRS transmission of the SRS positioning resource set starts only after a specific control message instructs the transmission. Alpha, p0 and pathlossReferenceRS-Pos are parameters for transmission power control of the positioning SRS. alpha and p0 are power offsets that are added when determining the positioning SRS transmission power, and pathlossReferenceRS-Pos is the reference signal that provides the path loss used when determining the positioning SRS transmission power. In step3d-03, the terminal receives a Positioning SRS Activation/Deactivation MAC CE instructing it to start transmission of a specific SRS positioning resource set from the base station. The Positioning SRS Activation/Deactivation MAC CE consists of an A/D field, a Cell ID field, a BWP ID field, a SUL field, and a Positioning SRS Resource Set ID. The A/D field indicates whether to activate or deactivate the indicated SRS positioning resource set. The Cell ID field indicates the identifier of the serving cell to which the SRS positioning resource set to be activated/deactivated belongs. The BWP ID field indicates the identifier of the BWP to which the SRS positioning resource set to be activated/deactivated belongs. The SUL field indicates whether the MAC CE is applied to a NUL carrier configuration or a SUL carrier configuration. Or it indicates whether the activated or deactivated SRS positioning resource set is an SRS positioning resource set of the SUL or an SRS positioning resource set of the NUL. The Positioning SRS Resource Set ID field is an identifier of the SRS positioning resource set to be activated or deactivated. NUL is normal uplink and SUL is supplementary uplink. One serving cell may have only NUL or may have NUL and SUL. The SUL is configured in a lower frequency band compared to the NUL to increase the uplink coverage of the cell. 
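Because periodicityAndOffset-sp gives the periodicity and the slot at which the positioning SRS slot starts, whether a given slot is a positioning SRS slot can be checked with a simple modulo test, as in the sketch below (the running slot counter is a simplification of SFN/slot numbering and the values are illustrative).

```python
def is_positioning_srs_slot(slot_index: int, periodicity_slots: int, offset_slots: int) -> bool:
    """Return True if the given slot is a positioning SRS slot (illustrative sketch).

    slot_index        : running slot counter (simplified numbering)
    periodicity_slots : periodicity from periodicityAndOffset-sp, in slots
    offset_slots      : slot offset from periodicityAndOffset-sp
    """
    return (slot_index - offset_slots) % periodicity_slots == 0

# Example: periodicity of 20 slots, offset 3 -> slots 3, 23, 43, ... carry the SRS.
print([s for s in range(60) if is_positioning_srs_slot(s, 20, 3)])  # [3, 23, 43]
```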
In step3d-05, the terminal transmits a positioning SRS in the activated SRS positioning resource set. The terminal transmits the positioning SRS from SRS positioning resources belonging to the SRS positioning resource set by applying the transmission power control parameters of the SRS resource set. The SRS positioning resources are periodically generated according to periodicityAndOffset-sp. In step3d-07, the terminal receives the RRCRelease message from the base station. The base station may change the state of the terminal to RRC_INACTIVE or RRC_IDLE in consideration of the terminal's traffic condition, the cell load condition, and the RRM condition of the terminal. If the uplink positioning has not yet been completed, the base station instructs the terminal to transition to the RRC_INACTIVE state while continuing to transmit the positioning SRS. The base station transmits an RRCRelease message including the inactive SRS IE, the stop condition IE, and the SuspendConfig IE to the terminal. The terminal stores the SRS configuration in the Inactive Access Stratum Context. The terminal receiving the message performs cell selection. At this time, the terminal preferentially selects the first cell if it is possible to select the first cell. If the reference signal received quality of the first cell is higher than a predetermined threshold, the terminal preferentially selects the first cell and camps on it. The first cell is one of a serving cell in which the terminal receives the RRCRelease message, a PCell at the time point when the terminal receives the RRCRelease message, or a serving cell in which the SRS positioning resource set is activated. Alternatively, the first cell may be a cell belonging to the first cell list. The first cell list includes a plurality of cell information, and each cell information includes a PCI and an Absolute Radio Frequency Channel Number (ARFCN). The first cell list may be included in the RRCRelease message and transmitted to the terminal. ARFCN is defined in specification 38.101, and each ARFCN corresponds to a specific frequency. In step3d-09, the terminal determines whether to continue to perform positioning SRS transmission and, if so, which SRS positioning resource set to transmit. The terminal determines whether to transmit the positioning SRS in consideration of the inactive SRS IE and whether the newly reselected cell is the first cell. The inactive SRS IE includes one of an inactive SRS transmission continuation indicator, the first SRS resource set IE and the second SRS resource set IE. The inactive SRS IE may also include an SRS transmission stop condition IE. The inactive SRS IE may also include an SRS transmission condition IE. The inactive SRS transmission continuation indicator is an indicator indicating that, among the currently activated SRS positioning resource sets, the SRS positioning resource sets of the NUL continue to be transmitted and the SRS positioning resource sets of the SUL stop being transmitted. The terminal performs the above operation if the indicator is included. The first SRS resource set IE consists of an identifier of an SRS positioning resource set, a cell identifier, a BWP identifier, and the like. After the terminal transitions to the inactive state, it transmits the positioning SRS by activating the SRS positioning resource set specified by the cell identifier, the BWP identifier, and the SRS positioning resource set identifier. The SRS positioning resource set to be activated is limited to the SRS positioning resource set in the BWP of the NUL. 
In other words, when the NUL BWP and the SUL BWP having the same BWP identifier exist, the SRS positioning resource set identifier is an identifier indicating the SRS positioning resource set in the NUL BWP. The identifier of the SRS positioning resource set indicates a specific SRS positioning resource set of a specific BWP of the NUL of a specific serving cell, and the SRS positioning resource set corresponding to the SRS positioning resource set identifier is defined in the SRS configuration provided for the specific BWP. Alternatively, the first SRS resource set IE may include an identifier of an SRS positioning resource set, a cell identifier, a BWP identifier, and a SUL indicator. If the SUL indicator is not included in the first SRS resource set IE, the inactive state terminal transmits a positioning SRS in the NUL, and when the SUL indicator is included in the first SRS resource set IE, the inactive state terminal transmits a positioning SRS in the SUL. The second SRS resource set IE consists of an SRS positioning resource set IE, a cell identifier, a BWP identifier, and the like. After transitioning to the inactive state, the terminal transmits the positioning SRS in the SRS positioning resource specified by the SRS positioning resource set IE in the frequency domain indicated by the cell identifier and the BWP identifier. At this time, if there are two BWPs corresponding to the BWP identifier, a BWP of NUL is selected. Alternatively, the second SRS resource set IE may include an SRS positioning resource set IE, a cell identifier, a BWP identifier, a SUL indicator, and the like. If the SUL indicator is not included in the second SRS resource set IE, the inactive state terminal transmits a positioning SRS in the NUL, and when the SUL indicator is included in the second SRS resource set IE, the inactive state terminal transmits a positioning SRS in the SUL. The SRS transmission stop condition IE defines a condition for stopping the transmission of the positioning SRS, which the terminal was transmitting in the inactive state. The SRS transmission stop condition may be the number of positioning SRS transmissions, a time point to stop positioning SRS transmission, and the like. The SRS transmission condition IE defines the conditions that must be satisfied in order for the terminal to transmit the positioning SRS in the inactive state. The SRS transmission condition may be defined as the first time point and the second time point. The terminal starts transmitting positioning SRS at the first time point in the inactive state and stops transmitting positioning SRS at the second time point. The first time point and the second time point may be indicated by the SFN and subframe number of the first cell. The first time point and the second time point can be expressed in absolute times such as UTC. If the newly selected cell is the first cell and the inactive SRS IE exists, the terminal transmits the positioning SRS as described above even in the inactive state. If the newly selected cell is not the first cell, the terminal removes the SRS configuration from the inactive AS context and does not transmit the positioning SRS in the inactive state. In step3d-11, the terminal periodically transmits the positioning SRS in the inactive state. The terminal continues to transmit the previously activated SRS positioning resource set. 
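The decision of step3d-09can be summarized by the sketch below: whether the positioning SRS continues in the inactive state depends on whether the reselected cell is the first cell and on which variant of the inactive SRS IE is present. The return values and argument names are illustrative assumptions, not signaling defined in the specification.

```python
def decide_inactive_srs(selected_cell_is_first_cell: bool,
                        inactive_srs_ie_present: bool,
                        continuation_indicator: bool,
                        first_srs_resource_set_ie,    # None or a dict with set id / cell / BWP / SUL
                        second_srs_resource_set_ie):  # None or a dict with full SRS config / cell / BWP / SUL
    """Sketch of the terminal decision in step 3d-09 (illustrative, not normative)."""
    if not selected_cell_is_first_cell or not inactive_srs_ie_present:
        # Remove the SRS configuration from the inactive AS context; no SRS in the inactive state.
        return "stop_srs_and_discard_config"
    if continuation_indicator:
        # Keep transmitting the currently activated NUL SRS positioning resource sets, stop the SUL ones.
        return "continue_nul_stop_sul"
    if first_srs_resource_set_ie is not None:
        # Activate the SRS positioning resource set referenced by identifier
        # (NUL unless a SUL indicator is included).
        return "activate_referenced_resource_set"
    if second_srs_resource_set_ie is not None:
        # Use the SRS positioning resource set configuration carried in the IE itself.
        return "activate_embedded_resource_set"
    return "stop_srs_and_discard_config"

print(decide_inactive_srs(True, True, True, None, None))  # continue_nul_stop_sul
```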
Or the terminal deactivates the previously activated SRS positioning resource set, activates the SRS positioning resource set indicated in the first SRS resource set IE and transmits the SRS positioning resource set. Or the terminal deactivates the previously activated SRS positioning resource set, activates the SRS positioning resource set indicated in the second SRS resource set IE and transmits the SRS positioning resource set. The base station collects location-related measurement information by receiving the positioning SRS transmitted by the terminal in the inactive state. In step3d-13, the base station transmits a MEASUREMENT RESPONSE message including the SRS measurement result to the LMF. The LMF calculates the position of the terminal using the measurement result. When positioning of the terminal is completed, the LMF notifies the base station that positioning is complete. In step3d-15, the base station receives the message POSITIONING DEACTIVATION from the LMF and recognizes that the uplink positioning has been completed. In step3d-17, the base station transmits a downlink control message to stop transmitting the positioning SRS of the terminal. The downlink control message may be, for example, a paging message. The base station may include the terminal's I-RNTI (Inactive Radio Network Temporary Identifier) and positioning SRS transmission stop information in the paging message. The I-RNTI is assigned in the RRCRelease message. The RRCRelease message allocates two I-RNTIs: a full I-RNTI and a short I-RNTI. The terminal determines whether an I-RNTI matching its full I-RNTI is included in the paging message. Upon receiving the paging message including its I-RNTI, the terminal determines whether information related to SRS transmission stop, for example, positioning SRS transmission stop information, is included in the paging message. The terminal performs one of the following actions according to its judgment (see the sketch below).1: If the paging message including its I-RNTI does not contain information related to SRS stop and inactive SRS transmission is being performed, the terminal stops SRS transmission and initiates the RRC connection resumption procedure.2: If the paging message including its I-RNTI does not include information related to SRS stop and inactive SRS transmission is not being performed, the terminal initiates the RRC connection resumption procedure.3: If information related to SRS stop is included in the paging message including its I-RNTI and inactive SRS transmission is being performed, the terminal stops SRS transmission and does not initiate the RRC connection resumption procedure.4: If the paging message including its I-RNTI includes information related to SRS stop and inactive SRS transmission is not being performed, the terminal ignores the paging message and does not initiate the RRC connection resumption procedure. The terminal performs random access to perform a resumption procedure and transmits a predetermined uplink RRC control message. In step3d-19, the terminal stops inactive SRS transmission or initiates a resumption procedure with reference to the information included in the paging message. A terminal in the inactive state stops transmitting the positioning SRS in the following cases.1: The cell selected after receiving the RRCRelease message is not the first cell.2: The terminal reselects another cell from the first cell.3: The SRS transmission stop condition is satisfied.4: The resumption procedure is started.5: The terminal receives a paging message indicating to stop inactive SRS transmission. 
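The four paging-handling cases listed above reduce to two inputs, whether SRS-stop information is present and whether inactive SRS transmission is ongoing; the following sketch mirrors that set of cases (the action names are illustrative).

```python
def handle_paging_with_own_i_rnti(srs_stop_info_present: bool,
                                  inactive_srs_ongoing: bool):
    """Return the actions for a paging message that contains the terminal's I-RNTI.

    Mirrors cases 1-4 described above (illustrative sketch).
    """
    actions = []
    if srs_stop_info_present:
        if inactive_srs_ongoing:
            actions.append("stop_srs_transmission")           # case 3
        else:
            actions.append("ignore_paging")                    # case 4
        # In both cases the RRC connection resumption procedure is NOT initiated.
    else:
        if inactive_srs_ongoing:
            actions.append("stop_srs_transmission")            # case 1
        actions.append("initiate_rrc_connection_resumption")   # cases 1 and 2
    return actions

print(handle_paging_with_own_i_rnti(False, True))
# ['stop_srs_transmission', 'initiate_rrc_connection_resumption']
```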
One paging message includes a plurality of pagingRecords, and each pagingRecord among the plurality of pagingRecords includes one terminal identifier field and one second information field. Among the plurality of pagingRecords, in each pagingRecord, the terminal identifier field is mandatorily present and the second information field is optionally present. The terminal identifier field is set to the full I-RNTI, and the second information field is enumerated with a single value indicating an SRS stop. An optionally present IE being enumerated with a single value means that the single value is applied if the IE is present and is not applied if the IE is not present. FIG.3Eis a diagram illustrating a downlink positioning process of an inactive terminal. A terminal that has obtained immediate assistance data, conditional assistance data1, and conditional assistance data2 through steps3c-13to3c-25performs an operation related to downlink positioning by using the assistance data. An operation related to downlink positioning is, for example, measuring the reception time difference of PRSs transmitted from a plurality of TRPs and reporting the result to the LMF, or measuring the received power of PRSs transmitted from a plurality of TRPs and reporting the result to the LMF, and so on. In step3e-01, the terminal generates an RRC control message called UEAssistanceInformation to report to the base station that downlink positioning should be performed even in the RRC_INACTIVE state, and transmits it to the base station. The control message may include an inactive positioning2 IE indicating the type of positioning method that the terminal can perform in the inactive state. The control message may include information requesting configuration of small data transfer via SRB2. The control message may include time pattern information of the PRS for positioning. The terminal performs step3e-03if the inactive positioning IE is included in the ProvideAssistanceData received in step3c-25. In step3e-03, the base station sends an RRCRelease message to the terminal. The base station may change the state of the terminal to RRC_INACTIVE or RRC_IDLE in consideration of the terminal's traffic condition, cell load condition, and RRM condition. If the base station determines that the terminal needs to perform positioning measurement in the inactive state, the base station may provide information related to downlink positioning measurement while instructing the terminal to transition to the RRC_INACTIVE state. Information related to downlink positioning measurement may include, for example, offset information for moving the paging monitoring period of the terminal so that the paging monitoring time interval of the terminal does not overlap with the PRS measurement period. The base station can configure small data transfer through SRB2 for the terminal. The small data transfer configuration may consist of a list of data bearers for which small data transfer is configured, and 1-bit information indicating whether small data transfer can be configured for SRB2. When small data transfer is applied to SRB2, the terminal can transmit the data of SRB2 to the base station through the small data transfer procedure. The small data transfer procedure is a procedure in which the RRC_INACTIVE terminal transmits small data through the RRC connection resumption procedure without transitioning to RRC_CONNECTED. Upon receiving the RRCRelease message including information related to downlink positioning measurement, the terminal performs cell selection.
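As an illustration of the paging handling described above, the following sketch models a pagingRecord with a mandatory terminal identifier and an optionally present, single-valued SRS-stop field, together with the four terminal actions. The structure and names are simplified assumptions for illustration, not the actual RRC ASN.1.

```python
# Illustrative model of the pagingRecord and the four terminal actions.
# Field names are hypothetical; the real definitions live in the RRC ASN.1.

from dataclasses import dataclass
from typing import Optional, List

@dataclass
class PagingRecord:
    ue_identity: int                 # set to the UE's full I-RNTI
    srs_stop: Optional[str] = None   # enumerated single value (e.g. "srs-stop") or absent

def handle_paging(records: List[PagingRecord], my_full_i_rnti: int,
                  inactive_srs_active: bool) -> dict:
    """Apply the four cases described in the text and return the resulting actions."""
    for rec in records:
        if rec.ue_identity != my_full_i_rnti:
            continue  # this record addresses another UE
        stop_requested = rec.srs_stop is not None  # the value applies only if present
        if not stop_requested:
            # Cases 1 and 2: no SRS-stop info -> resume the RRC connection,
            # stopping the inactive SRS first if it was running.
            return {"stop_srs": inactive_srs_active, "resume_rrc": True}
        if inactive_srs_active:
            # Case 3: stop the inactive SRS, do not resume.
            return {"stop_srs": True, "resume_rrc": False}
        # Case 4: nothing to stop, ignore the paging record.
        return {"stop_srs": False, "resume_rrc": False}
    return {"stop_srs": False, "resume_rrc": False}  # the UE was not paged
```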
At this time, if the reference signal received power of the second cell is greater than or equal to a predetermined threshold, the terminal preferentially selects the second cell to camp on. The second cell may be the serving cell in which the RRCRelease message was received or the PCell at the time point at which the RRCRelease message was received. In step3e-05, the terminal that has selected the new cell monitors whether the assistance data validity is met. If the newly selected cell is the second cell, the terminal considers both conditional assistance data1 and conditional assistance data2. The terminal considers only conditional assistance data2 if the newly selected cell is not the second cell. The terminal monitors whether at least one assistance data validity is fulfilled among the assistance data validities whose data status is broadcast, included in either conditional assistance data1 or conditional assistance data2. In step3e-06, when the assistance data of the conditional assistance data for which the assistance data validity is satisfied is determined to be valid, the terminal starts measuring the downlink PRSs specified in the assistance data. The terminal measures the arrival time difference of PRSs transmitted by a plurality of TRPs. When the PRS measurement is completed, the terminal generates an LPP ProvideLocationInformation message including the measurement result. The terminal initiates a small data transfer procedure to transmit the LPP message. If necessary, the ProvideLocationInformation message can be segmented into a plurality of segments and transmitted. The ProvideLocationInformation message includes information on the arrival time difference of the PRSs transmitted by the plurality of TRPs, one assistance data identifier, and a plurality of downlink positioning reference signal identifiers (DL-PRS IDs). The downlink positioning reference signal identifier is an identifier of the measured PRSs, and the assistance data identifier is an identifier of the assistance data providing the configuration of the measured PRSs. If the PRS measurement is made based on the first type assistance data, the ProvideLocationInformation message includes a plurality of measurement results and a plurality of downlink positioning reference signal identifiers. If the PRS measurement is made based on the second type assistance data, the ProvideLocationInformation message includes a plurality of measurement results, a plurality of downlink positioning reference signal identifiers, and one assistance data identifier. In step3e-07, the terminal transmits a MAC PDU including a ResumeRequest, an LPP segment message, and a Buffer Status Report (BSR) to the base station. The LPP segment message includes the first segment of the LPP ProvideLocationInformation message. The BSR includes information on the size of the remaining segments of the LPP ProvideLocationInformation message. The ResumeRequest belongs to SRB0 and the LPP segment message belongs to SRB2. The ResumeRequest of SRB0 is not ciphered, the LPP segment message of SRB2 is ciphered, and the BSR is not ciphered. The ciphering is performed with a new security key calculated from the NCC value received by the terminal in the RRCRelease message and the security key stored by the terminal. In principle, all RRC messages are ciphered, but the RRC message of SRB0 is not ciphered because it is a message that the base station must process without prior information. Since the BSR is information processed by the MAC layer of the base station, it is not ciphered.
As a result, the MAC PDU transmitted to report the positioning measurement result in the inactive state includes three MAC subPDUs, where the first MAC subPDU and the third MAC subPDU include an unciphered payload, and the second MAC subPDU includes a ciphered payload. The terminal reports the amount of data available for transmission through the BSR. The RRC_CONNECTED terminal determines the BSR format in consideration of the number of logical channel groups in which data available for transmission exists. That is, the RRC_CONNECTED terminal uses the first BSR if the number of logical channel groups in which data available for transmission exists is one, and uses the second BSR if it is more than one. The RRC_INACTIVE terminal determines the BSR format without considering the number of logical channel groups in which data available for transmission exists. That is, the RRC_INACTIVE terminal uses the first BSR even if the number of logical channel groups in which data available for transmission exists is more than one. The RRC_INACTIVE terminal sets the identifier of the logical channel group with the highest priority among the logical channel groups in which data available for transmission exists in the logical channel group identifier field2h-01, and sets in the first buffer size field2h-03the first buffer size index corresponding to the amount of data available for transmission across all the logical channels. The RRC_INACTIVE terminal uses the logical channel group identifier predefined in the specification instead of the logical channel group identifier configured in the RRC_CONNECTED state. In the RRC_INACTIVE state, the terminal uses the preconfigured configuration instead of the terminal-specific configuration because the base station does not know the terminal's buffer status reporting configuration. The RRC_CONNECTED terminal determines the buffer size index to be set in the buffer size field of the BSR by considering only the data of the PDCP layer and the data of the RLC layer. If the RRC_INACTIVE terminal operated in the same manner, the remaining LPP segments stored in the LPP layer would not be considered. To overcome this problem, the RRC_INACTIVE terminal determines the buffer size index to be set in the buffer size field by considering the amount of data of the PDCP layer, the data of the RLC layer, and the data of the LPP layer (or the layers above the PDCP layer or above the RRC layer). That is, a buffer size index corresponding to the sum of all the data amounts is selected. In step3e-09, the base station transmits a locationInformation segment to the LMF. In step3e-11, the terminal transmits the MAC PDU including the LPP segment message and information indicating no more data for transmission. The LPP segment message includes the last segment of the LPP ProvideLocationInformation message. The information indicating no more data for transmission may be the first BSR in which buffer size index 0 is set. In step3e-13, the base station transmits a locationInformation segment to the LMF. After receiving the last segment, the LMF assembles the segments into a location information message and determines the location of the terminal by referring to the positioning measurement result of the location information message. In step3e-15, the terminal monitors whether the assistance data validity is met.
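A minimal sketch of the RRC_INACTIVE buffer status reporting rule described above follows. The buffer-size index lookup is left as a placeholder (a hypothetical `size_index` callable) rather than the table of the MAC specification; the point illustrated is only which amounts are summed and that a single first-BSR format is used.

```python
# Sketch of the RRC_INACTIVE buffer status reporting rule described above.
# The index lookup is a placeholder for the buffer-size table of the MAC
# specification; what matters here is what is summed and that the first
# (short) BSR format is used regardless of the number of logical channel groups.

def inactive_bsr(pdcp_bytes: int, rlc_bytes: int, lpp_bytes: int,
                 highest_priority_lcg: int, size_index) -> dict:
    """Build the first-format BSR an RRC_INACTIVE UE reports during small data transfer.

    Unlike the RRC_CONNECTED case, the reported amount also covers data still held
    above PDCP (e.g. remaining LPP segments), so the base station can grant enough
    uplink resources for the whole ProvideLocationInformation message.
    """
    total = pdcp_bytes + rlc_bytes + lpp_bytes      # sum across all layers
    return {
        "format": "first BSR",                      # always the single-field format
        "lcg_id": highest_priority_lcg,             # predefined LCG, not a UE-specific one
        "buffer_size_index": size_index(total),     # index for the summed amount
    }

# Example with a toy index function (placeholder for the specification table).
report = inactive_bsr(pdcp_bytes=300, rlc_bytes=120, lpp_bytes=900,
                      highest_priority_lcg=0, size_index=lambda n: n.bit_length())
```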
In step3e-16, when the assistance data of the conditional assistance data for which the assistance data validity is satisfied is determined to be valid, the terminal starts measuring the downlink PRSs specified in the assistance data. In step3e-17, the terminal transmits the MAC PDU including a ResumeRequest, an LPP segment message and a BSR (Buffer Status Report) to the base station. In step3e-19, the base station transmits a locationInformation segment to the LMF. In step3e-21, the terminal transmits the MAC PDU including the LPP segment message and information indicating no more data for transmission. In step3e-23, the base station transmits an LPP segment message to the LMF. After receiving the last segment, the LMF assembles the segments to generate a location information message and determines the location of the terminal by referring to the positioning measurement result of the locationInformation message. If the terminal transitions to RRC_IDLE or RRC_CONNECTED, or the assistance data validity is not met, the terminal stops measuring the downlink PRS for location measurement and stops reporting the measurement result. FIG.4is a flow diagram illustrating an operation of a terminal. In4a-01, the UE transmits to a base station a UECapabilityInformation including first information indicating support of SRS transmission for positioning in RRC_INACTIVE and second information indicating support of SRS transmission for positioning in RRC_CONNECTED. In4a-03, the UE transmits to an LMF a ProvideCapabilities including first information indicating support of SRS transmission in RRC_INACTIVE. In4a-05, the UE receives from a base station an RRCReconfiguration including a first SRS configuration of a first BWP of a first cell. In4a-07, the UE receives from the base station a Positioning SRS MAC CE activating the first SRS configuration of the first BWP of the first cell. In4a-09, the UE performs SRS transmission for positioning in the first BWP of the first cell according to the first SRS configuration. In4a-11, the UE receives, from the base station, an RRCRelease including a configuration for RRC_INACTIVE and third information for SRS transmission in RRC_INACTIVE. In4a-13, the UE enters RRC_INACTIVE. In4a-15, the UE performs SRS transmission for positioning in RRC_INACTIVE based on the third information. The third information is either the first SRS information or the second SRS information. The first SRS information includes an SRS resource set identifier, a BWP identifier and a serving cell identifier. The second SRS information includes SRS resource set information. The first SRS information instructs the UE to perform SRS transmission, in RRC_INACTIVE, on the SRS resource set indicated by the SRS resource set identifier, and the second SRS information instructs the UE to perform SRS transmission, in RRC_INACTIVE, according to the SRS resource set information. The UE stores the first SRS information in the inactive AS context if the UE receives an RRCRelease including a configuration for RRC_INACTIVE and the third information for SRS transmission in RRC_INACTIVE. FIG.5Ais a block diagram illustrating the internal structure of a UE to which the disclosure is applied. Referring to the diagram, the UE includes a controller5a-01, a storage unit5a-02, a transceiver5a-03, a main processor5a-04and an I/O unit5a-05. The controller5a-01controls the overall operations of the UE in terms of mobile communication. For example, the controller5a-01receives/transmits signals through the transceiver5a-03.
In addition, the controller5a-01records and reads data in the storage unit5a-02. To this end, the controller5a-01includes at least one processor. For example, the controller5a-01may include a communication processor (CP) that performs control for communication and an application processor (AP) that controls the upper layer, such as an application program. The controller controls the storage unit and the transceiver such that the UE operations illustrated inFIG.2A,FIG.2BandFIG.3Aare performed. The storage unit5a-02stores data for the operation of the UE, such as a basic program, an application program, and configuration information. The storage unit5a-02provides stored data at a request of the controller5a-01. The transceiver5a-03consists of an RF processor, a baseband processor and a plurality of antennas. The RF processor performs functions for transmitting/receiving signals through a wireless channel, such as signal band conversion, amplification, and the like. Specifically, the RF processor up-converts a baseband signal provided from the baseband processor into an RF band signal, transmits the same through an antenna, and down-converts an RF band signal received through the antenna into a baseband signal. The RF processor may include a transmission filter, a reception filter, an amplifier, a mixer, an oscillator, a digital-to-analog converter (DAC), an analog-to-digital converter (ADC), and the like. The RF processor may perform MIMO and may receive multiple layers when performing the MIMO operation. The baseband processor performs a function of conversion between a baseband signal and a bit string according to the physical layer specification of the system. For example, during data transmission, the baseband processor encodes and modulates a transmission bit string, thereby generating complex symbols. In addition, during data reception, the baseband processor demodulates and decodes a baseband signal provided from the RF processor, thereby restoring a reception bit string. The main processor5a-04controls the overall operations other than mobile communication operations. The main processor5a-04processes user input received from the I/O unit5a-05, stores data in the storage unit5a-02, controls the controller5a-01for required mobile communication operations, and forwards user data to the I/O unit5a-05. The I/O unit5a-05consists of equipment for inputting user data and for outputting user data, such as a microphone and a screen. The I/O unit5a-05performs inputting and outputting of user data based on the main processor's instruction. FIG.5Bis a block diagram illustrating the configuration of a base station according to the disclosure. As illustrated in the diagram, the base station includes a controller5b-01, a storage unit5b-02, a transceiver5b-03and a backhaul interface unit5b-04. The controller5b-01controls the overall operations of the main base station. For example, the controller5b-01receives/transmits signals through the transceiver5b-03, or through the backhaul interface unit5b-04. In addition, the controller5b-01records and reads data in the storage unit5b-02. To this end, the controller5b-01may include at least one processor. The controller controls the transceiver, the storage unit and the backhaul interface such that the base station operations illustrated inFIG.2AandFIG.2Bare performed. The storage unit5b-02stores data for the operation of the main base station, such as a basic program, an application program, and configuration information.
Particularly, the storage unit5b-02may store information regarding a bearer allocated to an accessed UE, a measurement result reported from the accessed UE, and the like. In addition, the storage unit5b-02may store information serving as a criterion to determine whether to provide the UE with multi-connection or to discontinue the same. In addition, the storage unit5b-02provides stored data at a request of the controller5b-01. The transceiver5b-03consists of an RF processor, a baseband processor and a plurality of antennas. The RF processor performs functions for transmitting/receiving signals through a wireless channel, such as signal band conversion, amplification, and the like. Specifically, the RF processor up-converts a baseband signal provided from the baseband processor into an RF band signal, transmits the same through an antenna, and down-converts an RF band signal received through the antenna into a baseband signal. The RF processor may include a transmission filter, a reception filter, an amplifier, a mixer, an oscillator, a DAC, an ADC, and the like. The RF processor may perform a downlink MIMO operation by transmitting at least one layer. The baseband processor performs a function of conversion between a baseband signal and a bit string according to the physical layer specification of the first radio access technology. For example, during data transmission, the baseband processor encodes and modulates a transmission bit string, thereby generating complex symbols. In addition, during data reception, the baseband processor demodulates and decodes a baseband signal provided from the RF processor, thereby restoring a reception bit string. The backhaul interface unit5b-04provides an interface for communicating with other nodes inside the network. The backhaul interface unit5b-04converts a bit string transmitted from the base station to another node, for example, another base station or a core network, into a physical signal, and converts a physical signal received from the other node into a bit string.
102,312
11863491
DETAILED DESCRIPTION Sparse Superposition Coding (SSC) and Sparse Vector Coding (SVC) are families of transmission schemes that potentially provide increased efficiency for any information message length. The core of any SSC/SVC transmission scheme is a codebook, i.e., a collection of codewords of a same length M. The SSC/SVC transmitter selects a small subset of codewords from the codebook, where the selection is based on the information message. The transmitter then generates the signal for transmission by superposition of the selected codewords. To conveniently represent the SSC/SVC encoding procedure, the codebook is arranged in an SSC projection matrix F where each column of the projection matrix F is a codeword. In the SSC/SVC encoder, a K-bit information message m is first mapped to a set of sparse vectors X obtaining a sparse vector x, then the sparse vector is used to select a subset of the columns of the SSC projection matrix and superpose them in the transmitted signal z as follows: z=Fx  (1) where F has size M×N, with M<N. In other words, equation (1) formulates the multiplication of the selected sparse vector (x) with the projection matrix (F). SSC and SVC differ in the sparse vector set X. SSC has a sparse vector set obtained by Pulse-Position Modulation (PPM), i.e., the message m is divided into L segments of size b bits each. Each segment is mapped to one of L subvectors x1, . . . , xL of the same length B, where the l-th subvector has h=1 non-zero elements. The locations of the non-zero elements are obtained based on the bits in the l-th message segment. For SVC, the l-th subvector has h>1 non-zero elements whose locations are obtained based on the message bits in the l-th message segment. A sparse vector x of length N=LB containing hL<<N non-zero elements is obtained by concatenation of the L sub-vectors x1, . . . , xL. L is the density level of the corresponding SSC/SVC scheme. In other words, this method selects a sparse vector x from a set of sparse vectors X based on the information message m. To keep the description simpler, it is assumed that the columns of the projection matrix F have constant magnitude, i.e. fiHfi=M for any i=1, . . . , N. However, using projection matrices with non-constant column magnitude is not precluded. Embodiments of the disclosure include devices and corresponding methods for reliable and efficient transmission of information messages in a communication system. The signal for transmission is obtained by superposition of selected columns from a quasi-orthogonal SSC projection matrix F, where the columns are selected based on the information message. The QO-SSC projection matrix is in embodiments designed according to a construction based either on sequences obtained from Kerdock codes or on Zadoff-Chu sequence sets. The QO-SSC matrix design simplifies encoding/decoding and, at the same time, provides higher spectral efficiency compared to conventional solutions. Therefore,FIG.1shows a transmitter device100according to an embodiment. In the embodiment shown inFIG.1, the transmitter device100comprises a processor102, a transmitter104and a memory106. The processor102is coupled to the transmitter104and the memory106by communication means108known in the art. The transmitter device100may be configured for both wireless and wired communications in wireless and wired communication systems, respectively.
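To make the encoding of equation (1) concrete, the following NumPy sketch implements the PPM case with h=1 non-zero element per segment for an arbitrary projection matrix. It is a minimal illustration of the column-selection-and-superposition step under those assumptions, not a particular matrix construction.

```python
import numpy as np

def ssc_encode(bits: np.ndarray, F: np.ndarray, L: int) -> np.ndarray:
    """SSC/PPM encoding z = F x with h = 1 non-zero element per segment.

    bits : K = L*b information bits (0/1), with b = log2(B) and B = N/L
    F    : M x N projection matrix (its columns are the codewords)
    """
    M, N = F.shape
    B = N // L                      # columns available per segment
    b = int(np.log2(B))             # bits mapped by one segment
    x = np.zeros(N, dtype=complex)  # sparse selection vector
    for l in range(L):
        seg = bits[l * b:(l + 1) * b]
        pos = int("".join(map(str, seg)), 2)   # segment value selects the column
        x[l * B + pos] = 1.0                   # one non-zero element per sub-vector
    return F @ x                               # superposition of the selected columns

# Toy example: random matrix, 2 segments of 3 bits each (M = 16, N = 16).
rng = np.random.default_rng(0)
F = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
z = ssc_encode(np.array([1, 0, 1, 0, 1, 1]), F, L=2)
```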
The wireless communication capability is provided with an antenna or antenna array110coupled to the transmitter104, while the wired communication capability is provided with a wired communication interface112coupled to the transmitter104. That the transmitter device100is configured to perform certain actions can in this disclosure be understood to mean that the transmitter device100comprises suitable means, such as e.g. the processor102and the transmitter104, configured to perform said actions. According to embodiments, the transmitter device100is configured to obtain an information message m for transmission. The transmitter device100is further configured to select a subset of columns of a projection matrix F based on the information message m, wherein the projection matrix F is a concatenation of a plurality of submatrices F=[F1F2. . . FC], wherein each sub-matrix Fchas M number of rows, and wherein two columns in the same sub-matrix are orthogonal, and wherein two columns belonging to different sub-matrices [F1F2. . . FC] have a correlation that is equal to or less than 1/√{square root over (M)}. The transmitter device100is further configured to superpose the selected subset of columns of the projection matrix F so as to obtain a signal for transmission z comprising M number of transmission symbols. FIG.2shows a flow chart of a corresponding method200which may be executed in a transmitter device100, such as the one shown inFIG.1. The method200comprises obtaining202an information message m for transmission. The method200further comprises selecting204a subset of columns of a projection matrix F based on the information message m, wherein the projection matrix F is a concatenation of a plurality of submatrices F=[F1F2. . . FC], wherein each sub-matrix Fchas M number of rows, and wherein two columns in the same sub-matrix are orthogonal, and wherein two columns belonging to different sub-matrices [F1F2. . . FC] have a correlation that is equal to or less than 1/√{square root over (M)}. The method200further comprises superposing206the selected subset of columns of the projection matrix F so as to obtain a signal for transmission z comprising M number of transmission symbols. FIG.3shows a receiver device300according to an embodiment. In the embodiment shown inFIG.3, the receiver device300comprises a processor302, a receiver304and a memory306. The processor302is coupled to the receiver304and the memory306by communication means308known in the art. The receiver device300further comprises an antenna or antenna array310coupled to the receiver304, which means that the receiver device300is configured for wireless communications in a wireless communication system. That the receiver device300is configured to perform certain actions can in this disclosure be understood to mean that the receiver device300comprises suitable means, such as e.g. the processor302and the receiver304, configured to perform said actions. According to embodiments, the receiver device300is configured to receive a signal r=z+n from a transmitter device100, wherein the received signal r comprises M number of symbols associated with an information message m. Hence, the received signal comprises the signal z transmitted from the transmitter device100plus noise and/or interference denoted n. The receiver device300is further configured to obtain a projection matrix F, wherein the projection matrix F is a concatenation of a plurality of submatrices, i.e. F=[F1F2. . . 
FC], wherein each sub-matrix Fchas M number of rows, and wherein two columns in the same sub-matrix are orthogonal, and wherein two columns belonging to different sub-matrices have a correlation that is equal to or less than 1/√{square root over (M)}. The receiver device300is further configured to perform iterative successive interference cancellation on the received signal r based on the projection matrix F so as to obtain a (selected) subset of the columns of the projection matrix F. The receiver device300is further configured to obtain a recovered information message {circumflex over (m)} based on the (selected) subset of the columns of the projection matrix F. FIG.4shows a flow chart of a corresponding method400which may be executed in a receiver device300, such as the one shown inFIG.3. The method400comprises receiving402a signal r=z+n from a transmitter device100, wherein the received signal r comprises M number of symbols associated with an information message m. The method400further comprises obtaining404a projection matrix F, wherein the projection matrix F is a concatenation of a plurality of submatrices F=[F1F2. . . FC], wherein each sub-matrix Fchas M number of rows, and wherein two columns in the same sub-matrix are orthogonal, and wherein two columns belonging to different sub-matrices have a correlation that is equal to or less than 1/√{square root over (M)}. The method400further comprises performing406iterative successive interference cancellation on the received signal r based on the projection matrix F so as to obtain a (selected) subset of the columns of the projection matrix F. The method400further comprises obtaining408a recovered information message {circumflex over (m)} based on the (selected) subset of the columns of the projection matrix F. FIG.5shows a communication system500according to an embodiment. In the communication system500a network access node800interworks with a client device900. It is illustrated inFIG.5that the network access node800can comprise a transmitter device100and a receiver device300according to embodiments. Likewise, the client device900can also comprise a transmitter device100and a receiver device300according to embodiments. The transmitter device100and/or the receiver device300can be part of another communication device such as the mentioned network access node800and client device900. However, the transmitter device100and/or the receiver device300can also be standalone devices cooperating with another communication device. It is further to be noted fromFIG.5that the communication system500inFIG.5is illustrated as a wireless communication system, but embodiments of the invention are not limited thereto. The communication system500may be a wireless communication system, a wired communication system, or a combined wired and wireless communication system. The communication system500can e.g. be a long term evolution (LTE) system, an LTE Advanced system, or a 3GPP NR system, also denoted 5G. FIG.6shows a block diagram of a transmitter device100and a block diagram of a receiver device300in a communication system500according to an embodiment. At the transmitter device100an information message m for transmission is forwarded to a mapping block152. At the mapping block152the information message m is mapped to a sparse vector set X which results in a sparse vector x that is outputted to an interleaver block154. The sparse vector x is interleaved in the interleaver block154and thereafter forwarded to a superposing block156.
The interleaver block154inFIG.6is optional and is considered transparent herein, i.e. x={tilde over (x)}, for the rest of this disclosure without limiting the scope of the invention. Hence, the interleaver block154is configured to interleave the selected sparse vector x before multiplying the selected sparse vector x with the projection matrix F. It is also noted that the mapping in the mapping block152can be performed according to any method known in the art. At the superposing block156the interleaved sparse vector x is multiplied with a QO-SSC projection matrix according to a conventional mathematical matrix-vector product operation, where the i-th column of the projection matrix is multiplied by the i-th element of the sparse vector so as to obtain an i-th multiplied column, and then all the multiplied columns are summed to obtain a superposed signal for transmission z. Finally, the transmitter device100transmits the signal for transmission z to a receiver device300in the communication system500. In any SSC/SVC scheme, the complex symbols produced by the encoder can be mapped to time-frequency-space resource elements in the same way as the symbols of a conventional modulation. Thus, in SSC/SVC the modulation is considered to be joint and included in the encoding. At the receiver device300, a signal transmitted from the transmitter device100is received. The signal r is received in a reception block352and thereafter forwarded to an iterative successive interference cancellation (ISIC) block355. The ISIC block355has also obtained the projection matrix F. In one example the projection matrix F has been obtained through control signaling. For example, in case the receiver device300is part of a client device900the projection matrix F can be dynamically signaled in a downlink control channel, such as the physical downlink/uplink control channel (PDCCH/PUCCH). In another non-limiting example the projection matrix F can be obtained from a library of predefined projection matrices which is known to both transmitter device(s)100and receiver device(s)300in the communication system500. The index of the matrix used by the transmitter device100is dynamically signaled to the receiver device300in a downlink control channel, such as the physical downlink/uplink control channel (PDCCH/PUCCH). In a further non-limiting example, the projection matrix F can be semi-statically configured in the transmitter device100and the receiver device300by higher-layer signaling, such as radio resource control (RRC) signaling. The ISIC block355performs iterative successive interference cancellation on the received signal r based on the obtained projection matrix F so as to obtain a subset of the columns of the projection matrix F. The iterations continue until a set S of submatrices is empty. Therefore, in an embodiment, the receiver device300is configured to initiate the algorithm by determining a set S of submatrices comprising all the submatrices in the projection matrix F. To start the decoding algorithm it is determined at the initiation that an interference cancelation signal rcis equal to the received signal r.
Thereafter, the iterations proceed by performing the following steps:
a) project the interference cancelation signal rconto each column of the submatrices of the set S of submatrices so as to obtain a set of projections,
b) select the column of the projection matrix F with the largest projection in the set of projections,
c) add the selected column to a subset of the columns of the projection matrix F,
d) cancel or subtract the selected column from the interference cancelation signal rcso as to obtain an updated interference cancelation signal rc,
e) remove the submatrix comprising the selected column from the set S of submatrices.
These steps a) to e) are repeated in the algorithm until the set S of submatrices is empty, and the subset of the columns of the projection matrix F built in c) is output. Generally, the design of the projection matrix F is crucial for providing good SSC transmission efficiency. The QO projection matrix design herein disclosed can be based on the following procedure:
1. Take a set of mutually-orthogonal sequences and place them in the leftmost columns of projection matrix F;
2. Iteratively add new sequence sets into the rightmost columns of projection matrix F, where the sequences in each new set are orthogonal. Moreover, the sequences in each new set are quasi-orthogonal to the sequences already in projection matrix F.
To reflect the above design procedure, the SSC projection matrix F is conveniently represented as the column-wise/horizontal concatenation of C submatrices as F=[F1F2. . . FC]  (2) where each submatrix Fc, c=1, . . . , C corresponds to one set of orthogonal sequences, which means that the correlation between any two different sequences is zero. Each sub-matrix has size M×D and the columns in each submatrix are orthogonal, i.e., fiHfj=0 for any i,j∈{1, . . . , D}, i≠j. Two columns fp, fqthat belong to different sub-matrices are quasi-orthogonal, i.e. their correlation, defined as |fpHfq|/(|fp| |fq|), p≠q  (3) is much smaller than 1. In the context of compressed sensing, matrices with similar properties are being used for other purposes. A key property of any good compressed sensing matrix M is its coherence ρ, conventionally defined as the maximum inner product magnitude between any two of its columns: ρ(M)=max i,j=1, . . . ,N, i≠j |⟨mi,mj⟩|  (4a) where mi, mjare two columns of M. A slightly different definition of coherence will be of interest later on when treating coherent SSC signal reception, i.e. at the receiver device300: ρℛ(M)=max i,j=1, . . . ,N, i≠j ℛ⟨mi,mj⟩  (4b) where ℛ(⋅) denotes the real part of a complex number. Good SSC projection matrices, including the QO SSC matrices, have low coherence, thus they are potentially good compressed sensing measurement matrices. With low coherence, the superposed columns in the received signal r, where r=z+n is the transmitted signal z corrupted by e.g., noise/interference/distortion n, can be easily detected by projecting the received signal r onto each column of the projection matrix F. For example, the projections might be computed as pi=|⟨fi,r⟩|, i=1, . . . ,N  (5) The set S* of the indices that correspond to the L largest projections, formally defined as S*=arg max S Σi∈S pi  (6) is used to recover the transmitted information message m. In equation (6), S is any L-element subset of {1, . . . , N}. The basic SSC receiver in equation (6) highlights the principle of operation of any SSC/SVC decoder.
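The quasi-orthogonality condition can be checked numerically with the coherence definitions (4a) and (4b). The sketch below is a straightforward NumPy implementation; the columns are normalized so the result can be compared directly against 1/√M.

```python
import numpy as np

def coherence(F: np.ndarray, real_part: bool = False) -> float:
    """Coherence of a matrix: the largest correlation between two distinct columns.

    With real_part=True the real-part variant (4b) used for coherent reception is
    returned instead of the magnitude-based definition (4a). Columns are normalized
    so that the quasi-orthogonality bound 1/sqrt(M) can be checked directly.
    """
    Fn = F / np.linalg.norm(F, axis=0, keepdims=True)
    G = Fn.conj().T @ Fn                     # Gram matrix of column correlations
    np.fill_diagonal(G, 0.0)                 # ignore the trivial i == j terms
    vals = G.real if real_part else np.abs(G)
    return float(np.max(vals))

# Example: a 2x2 Hadamard-like matrix has mutually orthogonal columns, coherence 0.
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
assert abs(coherence(H2)) < 1e-12
```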
If the coherence ρ(F) is high, then the received signal projection onto each column incurs high interference from the other superposed columns, thereby making information message recovery error-prone. It is therefore understood that, given any SSC projection matrix F with coherence (4a) or (4b):
1. Any matrix F′ obtained by permuting the columns/rows of the projection matrix F in an arbitrary order has the same coherence as the projection matrix F. Thus, F′ is as good as the projection matrix F when used as an SSC projection matrix;
2. Any matrix F″ obtained by removing any arbitrary subset of the columns of the projection matrix F has the same or lower coherence than the projection matrix F. Thus, F″ is as good as or better than the projection matrix F when used as an SSC projection matrix;
3. Any matrix F′″ obtained by an arbitrary constant phase rotation of the projection matrix F as F′″=e^(jϑ)F has the same coherence as the projection matrix F. Thus, F′″ is as good as the projection matrix F when used as an SSC projection matrix.
In an embodiment, the columns of the projection matrix F are obtained from the set of Kerdock bent sequences of length M=2^m, m even, and are then used for transmission according to an SSC scheme. Kerdock bent sequences of length M are obtained from the coset leaders of a Kerdock code of the same length M. A Kerdock code is the union of 2^(m−1) cosets of a first-order Reed-Muller code RM(1, m) whose codeword length is 2^m. The codewords in each coset are obtained by bit-wise modulo-2 sum of each of the codewords of RM(1, m) and a coset leader. The coset leader is therefore a representative codeword of the corresponding coset. Given the set {λ1, . . . , λ2^(m−1)} of Kerdock coset leaders, a set of modulated coset leaders {μ1, . . . , μ2^(m−1)} is obtained, e.g., by BPSK modulation as μk=1−2λk, k=1, . . . , 2^(m−1). The c-th submatrix Fcis obtained as [Fc]i,j=[HM]i,j[λc]i, c=1, . . . , 2^(m−1), i,j=1, . . . , M  (7a) or equivalently Fc=HM∘Λc, c=1, . . . , 2^(m−1)  (7b) where ∘ indicates the Hadamard (element-wise) product, HMis the Hadamard matrix of size M×M and Λcis an M×M matrix with all its columns equal to λc. It has been shown that the inner product between any two Kerdock bent sequences is upper bounded by √{square root over (M)}. It follows that any Kerdock-based quasi-orthogonal (QO-K) SSC projection matrix fulfills (3) as |fpHfq|/(|fp| |fq|)≤1/√{square root over (M)}. Thus, the corresponding QO-K SSC matrix has low coherence. Kerdock bent sequences have length M=2^m (m being any even positive integer). As an example, the coset leaders of the length-16 Kerdock code are shown in Table 1. Further, Table 2 below contains the coset leaders of the length-64 Kerdock code.
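A small sketch of the QO-K submatrix construction of equations (7a)/(7b) follows. It assumes that the element-wise mask applied to the Hadamard matrix is the BPSK-modulated coset leader μc defined above (0 mapped to +1 and 1 to −1), which is what keeps the columns of each submatrix mutually orthogonal; λ2 is taken from Table 1 below.

```python
import numpy as np
from scipy.linalg import hadamard

def kerdock_submatrix(coset_leader_bits: np.ndarray) -> np.ndarray:
    """Build one QO-K submatrix F_c = H_M o Lambda_c (element-wise product).

    The binary coset leader of length M = 2^m (m even) is BPSK-modulated
    (0 -> +1, 1 -> -1) and applied row-wise, as assumed in the lead-in,
    onto the M x M Hadamard matrix.
    """
    mu = 1 - 2 * coset_leader_bits              # BPSK-modulated coset leader
    M = mu.size
    H = hadamard(M)                             # M x M Hadamard matrix
    return H * mu[:, None]                      # every column gets the same mask

# lambda_2 of the length-16 Kerdock code (Table 1).
lam2 = np.array([0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1])
F2 = kerdock_submatrix(lam2)
# Columns within one submatrix remain mutually orthogonal:
assert np.allclose(F2.T @ F2, 16 * np.eye(16))
```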
TABLE 1Coset leaders of the length-16 Kerdock code.Coset leaderValueλ10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0λ20 0 1 1 1 0 0 1 0 1 0 1 1 1 1 1λ30 0 0 1 1 1 1 0 0 1 1 1 0 1 1 1λ40 1 0 1 0 0 1 1 0 1 1 0 1 1 1 1λ50 0 1 0 0 1 1 1 0 1 1 1 1 1 0 1λ60 1 0 0 1 1 0 1 0 1 1 1 1 1 1 0λ70 1 1 1 0 1 0 0 0 1 1 1 1 0 1 1λ80 1 1 0 1 0 1 0 0 0 1 1 1 1 1 1 TABLE 2Coset leaders of the length-64 Kerdock code.Coset leaderValueλ10000000000000000000000000000000000000000000000000000000000000000λ20101000001011111101011110101111101100011011011000110001110010011λ30110011001010101001100111111111101101001010110100011110011110000λ40111100001111000011101110111011101111000100001110111011110001000λ50111101111010001011110111101000101111011110100011000010000101110λ60111101111011110101101110001001000101110100010111110001001000111λ70111111011011011101100101110100001001101111010001000000111011011λ80111111011101000111001111000111000100100101100101011110111010100λ90110111111111001101011001100010101011100001101011001111100001001λ100110110010101111101011111001001100110110111101010000101000110110λ110110110011110101100111001111101000001010011011001111101001100011λ120011110010100101111111110110011001010101110011000110100111110000λ130101111111110101110010010110001101100011001101101111010110100000λ140010011110111110111010110111001001111101000110110100111000101000λ150100110101111110110110111110100000101011111001110100001010001110λ160011010111110110011011111010110000001001110010101010110001101111λ170100101101110111111011100010110101000100100001111110000111011101λ180110010111001111011010100011111101100101001100001001010100111111λ190110100101100110111100001111111100111100001100110101101001010101λ200111110110000010011111010111110100011011111001000001101100011011λ210111101110110111110100010001110100010010110111101011100001110100λ220011111110011010100110100011111101010110000011001111001110101001λ230001111011101110110100101101110101110111011110000100010010110100λ240001011111101000101111011011110100101011001010110111111010000001λ250101011000111111111111001001010101011001110011110000110010011010λ260111011100101101000111101011101100100010100001111011010011101110λ270011101001101111010111001111011001011100000010011100010101101111λ280001101110001101011111011110101100100111101100010100000111010111λ290101001110011111110001011111011000001001001110101001111101010011λ300010011100011011110101111110101100010100110101110001101111011000λ310001110101111011010001111101111000010010011101001011011100101110λ320000001111110011001111111100111101010110010110010110101001100101 In an embodiment, the columns of the SSC projection matrix F are Zadoff-Chu (ZC) sequences of a quasi-orthogonal set of ZC sequences of prime length M. The obtained matrix is then used for transmission according to a SSC scheme. The columns of the cthsubmatrix Fcare circular shifts of the same ZC sequence, where a ZC sequence with root index u is defined as zu⁡(k)=e-j⁢πM⁢uk(k+M⁢⁢mod2),k=0,…⁢,M-1(8) ZC sequences have the convenient property that any circular shift of a given sequence zu=(zu(k))k=1Mis orthogonal to the original sequence: zu,zu(f)=0,f≠0  (9) where zu(f)denotes the circular shift of sequence zuby f positions, defined as zu(f)(k)=zu(f+k)M. Here, M is the sequence length and (a)M=1+(a−1)mod M. Up to M−1 different sub-matrices can be obtained as M−1 distinct root indices u∈{1, . . . , M−1} are available. The cthsubmatrix Fcis thus obtained as [Fc]i,j=zc(j)(i),c=1, . . . ,2m-1, i,j=1, . . . 
, M  (10) The cross-correlation of any two ZC sequences of the same length M and different root indices is upper bounded by √{square root over (M)}. Therefore, any ZC-based quasi-orthogonal (QO) SSC projection matrix fulfills (3) as |fpHfq|/(|fp| |fq|)≤1/√{square root over (M)}. In an embodiment, the SSC projection matrix F is obtained by concatenation of multiple (Q) smaller SSC projection matrices obtained by phase rotation of the same SSC projection matrix F0as follows: F=[F0 F0e^(jφ1) . . . F0e^(jφQ−1)]  (11) In other words, at least one submatrix Faof the projection matrix F is a phase-rotated version of another submatrix Fbof the projection matrix F. As a result, the SSC projection matrix of equation (11) contains the same submatrices as F0and their multiple phase rotations. SSC projection matrices containing multiple phase rotations of the same columns/submatrices as in F of equation (11) would introduce ambiguity in detection, as the projection of the received signal onto any of those columns would have the same value |⟨fi,r⟩|=|⟨fie^(jφ),r⟩|, φ∈[0,2π] (apart from the contribution of noise/distortion/impairments). Therefore, decoding based on maximization of the column projections would not work. However, in coherent receivers the carrier phase is recovered and used for coherent detection. A similar situation arises in OFDM systems where demodulation reference signals are transmitted interleaved with data to allow receiver estimation of the channel magnitude and phase in each of the time-frequency resources used for transmission. The projection operation performed by a coherent SSC receiver is the following: {circumflex over (p)}i=ℛ⟨fi,r⟩, i=1, . . . ,N  (12) Taking the real part in the projection and selecting as in (4) the columns that correspond to the maximum projections in equation (12) eliminates the aforementioned ambiguity and thus enables using extended projection matrices. Table 3 below summarizes a few relevant SSC matrix extension types. The portmanteau quadriorthogonal refers to a combination of quadrature extension and biorthogonal extension.
TABLE 3 SSC projection matrix extension types.
Extension type      Q    (φ1, . . . , φQ−1)
Quadrature          2    (π/2)
Biorthogonal        2    (π)
Quadriorthogonal    4    (π/2, π, 3π/2)
SSC projection matrix concatenation generates larger SSC matrices, thereby potentially supporting transmission of longer messages and ultimately achieving increased spectral efficiency. One drawback of SSC projection matrix concatenation is that the corresponding SSC scheme might not be uniquely decodable. For example, any biorthogonal concatenation (φ1=π) contains a set of columns and their opposites. Each column in the left half of the SSC matrix has a corresponding opposite column in the right half. If any information message m that selects any of the columns in the left half together with its opposite in the right half is transmitted, then the two columns cancel each other in the superposition and the resulting transmitted signal is all-zero. As such a cancellation may happen for more than one message, multiple messages would be transmitted with the same all-zero signal. The resulting SSC scheme would not be uniquely decodable. A similar drawback exists in quadriorthogonal concatenations, as any column, its opposite and its quadrature phase rotations are in the SSC matrix.
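The QO-ZC construction of equations (8)-(10) and the 1/√M cross-correlation bound can be illustrated with the following sketch. A small prime length is used for the example; the code is not tied to any particular numerology.

```python
import numpy as np

def zc_sequence(u: int, M: int) -> np.ndarray:
    """Zadoff-Chu sequence of prime length M with root index u (equation (8))."""
    k = np.arange(M)
    return np.exp(-1j * np.pi * u * k * (k + (M % 2)) / M)

def zc_submatrix(u: int, M: int) -> np.ndarray:
    """QO-ZC submatrix F_c: its j-th column is the root-u ZC sequence circularly
    shifted by j positions, so all columns of one submatrix are orthogonal."""
    z = zc_sequence(u, M)
    return np.stack([np.roll(z, j) for j in range(M)], axis=1)

# Two submatrices with different roots are quasi-orthogonal:
M = 17                                          # prime sequence length
F1, F2 = zc_submatrix(1, M), zc_submatrix(2, M)
cross = np.abs(F1.conj().T @ F2) / M            # normalized cross-correlations
assert np.all(cross <= 1 / np.sqrt(M) + 1e-9)   # bounded by 1/sqrt(M)
```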
In order to obtain unique decodability, the columns of the extended SSC projection matrix F in equation (11) are permuted in a way that any column cannot be selected in combination with its opposite or any of its quadrature phase rotations, thereby achieving unique decodability. Thanks to the sparseness of x, any permutation that groups together in nearby positions any given column, its opposite and its quadrature phase rotations is enough to achieve unique decodability, with the condition that B is an integer multiple of Q. As an example, an extended SSC projection matrix F obtained by quadriorthogonal concatenation would be permuted as {tilde over (F)}=[F(1) F(1+N0) F(1+2N0) . . . F(1+(Q−1)N0) F(2) F(2+N0) . . . ]  (13) where F(i) denotes the i-th column of F and N0is the number of columns in F0. A graphical representation of such a permutation, with Q=4, is shown inFIG.7where each square represents a matrix column, the upper part shows the arrangement of columns in the original matrix, i.e. F, and the lower part shows the arrangement of columns in the matrix after permutation, i.e. {tilde over (F)}. In an embodiment, the subset of columns of the SSC projection matrix of the previous embodiments is selected according to a sparse vector generated by dividing the information message m into L segments of size b bits each. Each segment is mapped to one of L subvectors x1, . . . , xL of the same length B=2^b, where the l-th subvector has h=1 non-zero elements. The locations of the non-zero elements are obtained based on the bits in the l-th message segment. For example, the location of the non-zero element could be the integer value of the corresponding message segment. Hence, in other words, the transmitter device100is configured to select one column from each submatrix [F1F2. . . FC]. In an embodiment, the subset of columns of the SSC projection matrix of the previous embodiments is selected according to a sparse vector generated by dividing the information message m into L segments of size b bits each. Each segment is mapped to one of L subvectors x1, . . . , xL of the same length B, where the l-th subvector has h>1 non-zero elements and b≤log2 C(B,h). The non-zero elements in the l-th subvector are [xl]il,1, . . . , [xl]il,h, and the set of indices (il,1, . . . , il,h) is selected among the C(B,h) combinations of h out of B elements. For example, when h=2 the locations of the non-zero elements in the l-th segment could be obtained by mapping the integer value vl of the corresponding message segment xl to one of the C(B,2) combinations as a1=⌊vl/(B−1)⌋; a2=vl mod (B−1) if a1>vl mod (B−1), and a2=vl mod (B−1)+1 otherwise; and then taking (il,1, il,2)=(a1+1, a2+1) if a1<a2, and (il,1, il,2)=(B−a1, B−a2) otherwise. Hence, in other words, the transmitter device100is configured to select two or more columns from each submatrix [F1F2. . . FC]. In some cases rate adaptation may be needed for the transmission of the information message m so as to adapt to the number of available time-frequency resources for transmission. Therefore, methods for puncturing and extension are hereby also presented. In the puncturing case, so as to increase the rate, the transmitter device100punctures symbols of the transmission signal z when the number of time-frequency resources available for transmission is smaller than the M number of transmission symbols of the transmission signal z.
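The h=2 index mapping described above can be written out directly; the sketch below follows the stated a1/a2 rule and checks that the C(B,2) smallest segment values map to distinct pairs. It is an illustration of the mapping only.

```python
import numpy as np

def pair_from_segment(v: int, B: int) -> tuple:
    """Map the integer value v of a message segment to a pair of non-zero
    positions (1-based), following the h = 2 rule described above."""
    a1, r = divmod(v, B - 1)
    a2 = r if a1 > r else r + 1
    return (a1 + 1, a2 + 1) if a1 < a2 else (B - a1, B - a2)

def svc_subvector(v: int, B: int) -> np.ndarray:
    """Length-B sub-vector with h = 2 unit entries at the mapped positions."""
    x = np.zeros(B)
    i1, i2 = pair_from_segment(v, B)
    x[i1 - 1] = x[i2 - 1] = 1.0
    return x

# Sanity check for B = 8: the 28 = C(8,2) smallest segment values map to
# 28 distinct unordered pairs of positions.
B = 8
pairs = {tuple(sorted(pair_from_segment(v, B))) for v in range(B * (B - 1) // 2)}
assert len(pairs) == B * (B - 1) // 2
```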
On the other hand, so as to decrease the rate, the transmitter device100repeats symbols of the transmission signal z when the number of time-frequency resources available for transmission is larger than the M number of transmission symbols of the transmission signal z. In an embodiment, the length-M signal is punctured as the number of time-frequency channel resources available for transmission is M′<M. Thus, M−M′ symbols in the generated length-M signal are punctured or removed, i.e., they are not transmitted. The same punctured signal can be obtained by removing M−M′ rows in the SSC projection matrix F, thereby obtaining a new projection matrix Fpwith the remaining rows according to a predefined pattern p. As a first example, a uniform puncturing pattern could be conveniently obtained as p=[1, ⌊1+M/M′⌋, ⌊1+2M/M′⌋, . . . , ⌊1+(M′−1)M/M′⌋]  (14) where p contains the indices of the selected rows of the projection matrix F used to generate Fp. According to equation (14), the punctured symbols are evenly spaced along the signal. As a second example, M−M′ consecutive symbols are punctured. Thus, the corresponding pattern is p=[1, . . . , p0, p0+M−M′+1, . . . , M], where p0is any integer between 0 and M′. In an embodiment, the length-M signal is extended as the number of time-frequency channel resources available for transmission is M″>M. Thus, M″−M symbols in the generated signal are repeated/duplicated, i.e., transmitted twice. The same extended signal can be obtained by duplication of M″−M rows in the SSC projection matrix F, thereby obtaining a new projection matrix Fdwith repeated rows according to a predefined pattern. As a first example, a uniform repetition pattern could be conveniently obtained as d=[1, ⌊1+M/M″⌋, ⌊1+2M/M″⌋, . . . , ⌊1+(M″−1)M/M″⌋]  (15) where d contains the indices of the selected rows of the projection matrix F used to generate Fd. Each row of the projection matrix F can be selected more than once. According to equation (15), the duplicated symbols are evenly spaced along the signal. As a second example, M″−M consecutive symbols are repeated. Thus, the corresponding pattern is d=[1, . . . , M, d0, . . . , d0+M″−M−1], where d0is any integer between 1 and 2M−M″+1. The QO-SSC receiver device300recovers the information message from the received signal r=z+n, where n corresponds to, e.g., additive noise, transmitter distortion, interference or any other impairment. A simple projection receiver projects the received signal r onto each column of the matrix F as: p=ℛ(FHr)  (16) and then uses the columns corresponding to the highest correlation: {circumflex over (d)}=arg max d=1, . . . ,N pd  (17) for recovery of the transmitted message. Simple projection yields rather limited performance when the number of superposed columns is larger than 2. Thus, an enhanced receiver is needed. Enhanced performance is obtained by performing Iterative Successive Interference Cancellation (ISIC). The ISIC receiver operates according to the following algorithm (here it is assumed that h=1, the extension to h>1 being straightforward, and that the SSC projection matrix F is divided into L sub-matrices of size M×B).FIG.8illustrates the following ISIC SSC decoding algorithm with reference to the step numerals given herein below. Hence, with reference to the flow chart inFIG.8the decoding algorithm runs as follows:
(1) Inputs to the algorithm: the received signal r, the SSC projection matrix F, the parameters L and B, and the number of iterations Nit, at step (1) inFIG.8.
(2) Initialize the output vector {circumflex over (x)}=[0, . . . , 0]T at step (2) inFIG.8.
(3) Initialize the interference-canceled received signals rl←r, l=1, . . . , L at step (3) inFIG.8.
(4) For it=1 to Nit, at step (4) inFIG.8:
(a) Set of sub-matrices to be visited in the current iteration: V={1, . . . , L}.
(b) While V is not empty:
(i) Project each rl, l∈V, onto the corresponding sub-matrix Fland obtain the projection vectors pl=ℛ(FlHrl).
(ii) Select the largest projection value p{circumflex over (l)},{circumflex over (d)}among those in all projection vectors pl: ({circumflex over (l)},{circumflex over (d)})=arg max l,d (pl,d), l∈V, d∈{1, . . . , B}.
(iii) V←V\{{circumflex over (l)}}.
(iv) Set [{circumflex over (x)}{circumflex over (l)}]d=1 when d={circumflex over (d)} and [{circumflex over (x)}{circumflex over (l)}]d=0 otherwise.
(v) If it>1, then add the {circumflex over (d)}old({circumflex over (l)})-th column of F{circumflex over (l)}, i.e. f{circumflex over (l)},{circumflex over (d)}old({circumflex over (l)}), back to the interference-canceled received signals: rl←rl+f{circumflex over (l)},{circumflex over (d)}old({circumflex over (l)}), l∈{1, . . . , L}\{{circumflex over (l)}}.
(vi) Cancel the {circumflex over (d)}-th column of F{circumflex over (l)}, i.e. f{circumflex over (l)},{circumflex over (d)}, from the interference-canceled received signals: rl←rl−f{circumflex over (l)},{circumflex over (d)}, l∈{1, . . . , L}\{{circumflex over (l)}}.
(vii) Set {circumflex over (d)}old({circumflex over (l)})←{circumflex over (d)}.
(c) End While.
(5) End For, at step (5) inFIG.8.
(6) Return {circumflex over (x)}=[{circumflex over (x)}1T . . . {circumflex over (x)}LT]T at step (6) inFIG.8, wherein the non-zero elements of the vector {circumflex over (x)} indicate the subset of the columns of the projection matrix.
In its inner signal processing iterations, the ISIC receiver repeatedly executes a sequence of three basic steps:
Project an interference-canceled received signal onto the columns of a set of submatrices;
Select the largest projection value among those obtained in the previous step;
Cancel the column corresponding to the selected projection value from the interference-canceled received signal.
Computation of the projections in step (4)(b)(i) of the algorithm illustrated inFIG.8is the most demanding operation in the ISIC algorithm in terms of computational load. For a submatrix Flhaving size M×M, the projection computation FlHr has complexity M2, where M is the sequence length. When the SSC projection matrix is of QO-ZC type, the computational complexity of that step can be greatly reduced: as QO-ZC matrices contain circular shifts of the same sequences, the computation of the projection FlHr can be conveniently performed by computing the circular cross-correlation between the received signal and the ZC sequence in the Fourier-transform domain as FlHr=IFFT(FFT(zu*)∘FFT(r)), where (I)FFT denotes the (Inverse) Fast Fourier Transform. The complexity is reduced to 3M log2M+M (assuming the FFT is computed using the radix-2 algorithm). A similar complexity reduction can be obtained for QO-K SSC by computing the correlations in the Hadamard transform domain. It has been observed in performance evaluations that, for L≲√{square root over (M)}/2, the ISIC decoder performance approaches Maximum-Likelihood (ML) receiver performance. The spectral efficiency (SE) performance of SSC with QO-K and QO-ZC has been evaluated. Results are shown inFIG.9where QO-K SSC and QO-ZC SSC schemes are compared with NR polar codes.
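For reference, the following NumPy sketch mirrors the listed ISIC steps for the h=1 case in a direct, unoptimized form; the FFT/Hadamard-domain shortcut for the projections mentioned above is deliberately omitted, and real-part projections are assumed as in coherent reception.

```python
import numpy as np

def isic_decode(r: np.ndarray, F: np.ndarray, L: int, n_iter: int = 10) -> np.ndarray:
    """Simplified ISIC decoder for h = 1 (one selected column per sub-matrix).

    Per iteration every sub-matrix is visited once; the strongest remaining
    projection fixes that sub-matrix's column, which is then cancelled from the
    interference-cancelled signals of the other sub-matrices.
    """
    M, N = F.shape
    B = N // L
    subs = [F[:, l * B:(l + 1) * B] for l in range(L)]   # L sub-matrices
    rl = [r.copy() for _ in range(L)]                    # interference-cancelled signals
    d_old = [None] * L                                   # previous decision per sub-matrix
    x_hat = np.zeros(N)
    for _ in range(n_iter):
        visit = set(range(L))
        while visit:
            # Largest real-part projection over the sub-matrices still to visit.
            best = max(((l, d, (subs[l][:, d].conj() @ rl[l]).real)
                        for l in visit for d in range(B)), key=lambda t: t[2])
            l_hat, d_hat, _ = best
            visit.discard(l_hat)
            x_hat[l_hat * B:(l_hat + 1) * B] = 0.0       # keep one entry per sub-vector
            x_hat[l_hat * B + d_hat] = 1.0
            for l in range(L):
                if l == l_hat:
                    continue
                if d_old[l_hat] is not None:             # undo the previous cancellation
                    rl[l] = rl[l] + subs[l_hat][:, d_old[l_hat]]
                rl[l] = rl[l] - subs[l_hat][:, d_hat]    # cancel the new decision
            d_old[l_hat] = d_hat
    return x_hat                                         # non-zero entries mark the columns
```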
Performance comparison with NR LDPC codes is not shown, as it has been verified that NR polar codes perform better than NR LDPC codes for the considered SNR ranges, rates and spectral efficiencies. The number of selected columns per message subvector is h=1. The SSC matrix size is 256×2^16. The codeword length is M=256 symbols. The QO-ZC matrix is obtained by puncturing the last symbol of length-257 ZC sequences. The SSC matrix consists of 256 submatrices, where each submatrix contains 256 different circular shifts of a ZC sequence with a given root index. Each submatrix corresponds to a different ZC root index. The QO-K SSC matrix has been obtained by quadrature concatenation from a 256×2^15 QO-K matrix. The channel model used for evaluation is AWGN. The SSC receiver performs ISIC decoding with 10 iterations. The BLER is evaluated by Monte Carlo simulation. The spectral efficiency is computed as SE=(1−BLER)·K/M  (18) It is observed that QO-ZC and QO-K SSC have approximately the same performance. SSC has higher SE than NR polar codes for SNR<7 dB and SE<0.12 bits/s/Hz.FIG.10shows the BLER performance of QO-K and QO-ZC SSC compared with NR polar codes. The SSC matrix size is 64×2^12. The codeword length is M=64 symbols. The number of selected columns per message subvector is h=1. The QO-ZC matrix is obtained by puncturing three symbols of length-67 ZC sequences according to (14). The SSC matrix consists of 64 submatrices, where each submatrix contains 64 different circular shifts of a ZC sequence with a given root index. Each submatrix corresponds to a different ZC root index. The QO-K SSC matrix has been obtained by quadrature concatenation from a 64×2^11 QO-K matrix. The channel model used for evaluation is AWGN. The SSC receiver performs ISIC decoding with 10 iterations.FIG.10shows the Block Error Rate of the QO-K/QO-ZC SSC schemes (M: codeword length [symbols]; K: message length [bits]). It is observed that QO-ZC and QO-K SSC have approximately the same performance. QO-K and QO-ZC SSC schemes have better BLER than NR polar codes as they achieve BLER=10^−5 at a lower SNR than NR polar codes.FIG.11shows the BLER performance of QO-ZC SSC compared with prior-art SVC. A randomly generated matrix with elements in {−1, +1} was used. InFIG.11, the performance of SVC is evaluated using a Maximum-Likelihood (ML) decoder and a Multipath Matching Pursuit (MMP) decoder. QO-ZC SSC performance is better than that of SVC in the whole range of SNRs. The client device900herein, which may be denoted as a user device, a User Equipment (UE), a mobile station, an internet of things (IoT) device, a sensor device, a wireless terminal and/or a mobile terminal, is enabled to communicate wirelessly in a wireless communication system, sometimes also referred to as a cellular radio system. The UEs may further be referred to as mobile telephones, cellular telephones, computer tablets or laptops with wireless capability. The UEs in this context may be, for example, portable, pocket-storable, hand-held, computer-comprised, or vehicle-mounted mobile devices, enabled to communicate voice and/or data, via the radio access network, with another entity, such as another receiver or a server. The UE can be a Station (STA), which is any device that contains an IEEE 802.11-conformant Media Access Control (MAC) and Physical Layer (PHY) interface to the Wireless Medium (WM). The UE may also be configured for communication in 3GPP related LTE and LTE-Advanced, in WiMAX and its evolution, and in fifth generation wireless technologies, such as New Radio.
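The spectral-efficiency metric of equation (18) is simple enough to state as a one-line helper; the example values below are arbitrary.

```python
# Sketch of the spectral-efficiency metric of equation (18) used in the evaluations.
def spectral_efficiency(bler: float, k_bits: int, m_symbols: int) -> float:
    """SE = (1 - BLER) * K / M, in bits per symbol (bits/s/Hz for unit-bandwidth symbols)."""
    return (1.0 - bler) * k_bits / m_symbols

# Example: K = 16 information bits over M = 256 symbols at BLER = 1e-2.
se = spectral_efficiency(1e-2, 16, 256)   # about 0.062 bits/s/Hz
```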
The network access node800herein may also be denoted as a radio network access node, an access network access node, an access point, or a base station, e.g. a Radio Base Station (RBS), which in some networks may be referred to as transmitter, “gNB”, “gNodeB”, “eNB”, “eNodeB”, “NodeB” or “B node”, depending on the technology and terminology used. The radio network access node may be of different classes such as e.g. macro eNodeB, home eNodeB or pico base station, based on transmission power and thereby also cell size. The radio network access node can be a Station (STA), which is any device that contains an IEEE 802.11-conformant Media Access Control (MAC) and Physical Layer (PHY) interface to the Wireless Medium (WM). The radio network access node may also be a base station corresponding to the fifth generation (5G) wireless systems. Furthermore, any method according to embodiments of the disclosure may be implemented in a computer program, having code means, which when run by processing means causes the processing means to execute the steps of the method. The computer program is included in a computer readable medium of a computer program product. The computer readable medium may comprise essentially any memory, such as a ROM (Read-Only Memory), a PROM (Programmable Read-Only Memory), an EPROM (Erasable PROM), a Flash memory, an EEPROM (Electrically Erasable PROM), or a hard disk drive. Moreover, it is realized by the skilled person that embodiments of the transmitter device100and the receiver device300comprises the communication capabilities in the form of e.g., functions, means, units, elements, etc., for performing the solution. Examples of other such means, units, elements and functions are: processors, memory, buffers, control logic, encoders, decoders, rate matchers, de-rate matchers, mapping units, multipliers, decision units, selecting units, switches, interleavers, de-interleavers, modulators, demodulators, inputs, outputs, antennas, amplifiers, receiver units, transmitter units, DSPs, MSDs, TCM encoder, TCM decoder, power supply units, power feeders, communication interfaces, communication protocols, etc. which are suitably arranged together for performing the solution. Especially, the processor(s) of the transmitter device100and the receiver device300may comprise, e.g., one or more instances of a Central Processing Unit (CPU), a processing unit, a processing circuit, a processor, an Application Specific Integrated Circuit (ASIC), a microprocessor, or other processing logic that may interpret and execute instructions. The expression “processor” may thus represent a processing circuitry comprising a plurality of processing circuits, such as, e.g., any, some or all of the ones mentioned above. The processing circuitry may further perform data processing functions for inputting, outputting, and processing of data comprising data buffering and device control functions, such as call processing control, user interface control, or the like. Finally, it should be understood that the invention is not limited to the embodiments described above, but relates to and incorporates all embodiments within the scope of the appended independent claims.
43,316
11863492
DESCRIPTION OF EMBODIMENTS To make objectives, technical solutions and advantages of the embodiments of this application clearer, the following further describes the embodiments of this application in detail with reference to the accompanying drawings. The following describes some terms in the embodiments of this application, to facilitate understanding of a person skilled in the art.(1) A terminal device includes a device that provides a user with voice and/or data connectivity, for example, may include a handheld device having a wireless connection function, or a processing device connected to a wireless modem. The terminal device may communicate with a core network through a radio access network (RAN), and exchange a voice and/or data with the RAN. The terminal device may include user equipment (UE), a wireless terminal device, a mobile terminal device, a subscriber unit (subscriber unit), a subscriber station, a mobile station, a mobile console, a remote station, an access point (AP), a remote terminal, an access terminal, a user terminal, a user agent, a user device, or the like. For example, the terminal device may include a mobile phone (or referred to as a “cellular” phone), a computer having a mobile terminal device, a portable, pocket-sized, handheld, computer built-in, or vehicle-mounted mobile apparatus, and a smart wearable device. For example, the terminal device is a device such as a personal communications service (PCS) phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, or a personal digital assistant (PDA). The terminal device further includes a limited device, for example, a device with low power consumption, a device with a limited storage capability, or a device with a limited computing capability. For example, the terminal device may be an information sensing device, for example, a barcode, radio frequency identification (RFID), a sensor, a global positioning system (GPS), or a laser scanner. As an example instead of a limitation, in the embodiments of this application, the terminal device may alternatively be a wearable device. The wearable device may also be referred to as a wearable intelligent device, and is a general term for wearable devices such as glasses, gloves, watches, clothes, and shoes that are developed by applying wearable technologies in intelligent designs of daily wear. The wearable device is a portable device that can be directly worn on a body or integrated into clothes or an accessory of a user. The wearable device is not merely a hardware device, but is used to implement a powerful function through software support, data interaction, and cloud interaction. In a broad sense, the wearable intelligent device includes full-featured and large-sized devices that can implement all or some functions without depending on smartphones, for example, smart watches or smart glasses, and devices that focus on only one type of application function and need to work with other devices such as smartphones, for example, various smart bands, smart helmets, or smart jewelry for monitoring physical signs.(2) A network device is a device in a wireless network. The network device may be a radio access network (RAN) node (or device) that enables a terminal device to access the wireless network, or may be referred to as a base station. 
Currently, examples of some network devices are a next-generation NodeB (gNB), a transmission reception point (TRP), an evolved NodeB (eNB), a NodeB (NB), a home evolved NodeB (for example, a home evolved NodeB or a home NodeB, HNB), a baseband unit (BBU), or a wireless fidelity (Wi-Fi) access point (AP). In addition, in a network structure, the RAN may include a centralized unit (CU) node and a distributed unit (DU) node. In this structure, protocol layers of the base station are split: functions of some protocol layers are centrally controlled by the CU, functions of some or all of the remaining protocol layers are distributed in the DU, and the CU centrally controls the DU. A specific technology and a specific device form used by the base station are not limited in the embodiments of this application. In addition, in the embodiments of this application, the network device provides a service for a cell. The terminal device communicates with the network device through a transmission resource (for example, a frequency domain resource or a spectrum resource) used in the cell. The cell may be a cell corresponding to the network device (for example, a base station). The cell may belong to a macro base station, or a base station corresponding to a small cell. The small cell herein may include a metro cell, a micro cell, a pico cell, a femto cell, or the like. The small cells have features such as small coverage and low transmit power, and are used to provide high-rate data transmission services.(3) A subcarrier spacing is the spacing between the central locations or peak locations of two adjacent subcarriers in frequency domain in an orthogonal frequency division multiplexing (OFDM) system. For example, the subcarrier spacing in a long term evolution (LTE) system is 15 kilohertz (kHz), and a subcarrier spacing in a next generation new radio (NR) system may be 15 kHz, 30 kHz, 60 kHz, 120 kHz, or the like. For details, refer to Table 1. Table 1 shows the subcarrier spacings that can be currently supported in the 5G NR system.
TABLE 1
μ    Subcarrier spacing = 2^μ · 15 (kHz)    CP type
0    15                                     Normal
1    30                                     Normal
2    60                                     Normal or extended
3    120                                    Normal
4    240                                    Normal
μ is used to determine a subcarrier spacing. For example, when μ=0, the subcarrier spacing is 15 kHz; when μ=1, the subcarrier spacing is 30 kHz. (A short numeric illustration of this relation and of the slot lengths discussed below is given after this passage.)(4) URLLC service: The URLLC service has a very high requirement on latency. The latency of unidirectional transmission from a transmit end to a receive end is required to be within 0.5 ms, and transmission reliability needs to reach 99.999% within 1 ms. To meet the transmission latency requirement of the URLLC service, a shorter time scheduling unit may be used for data transmission over the radio air interface; for example, a mini-slot or a slot with a larger subcarrier spacing is used as the minimum time scheduling unit. One mini-slot includes one or more time domain symbols. The time domain symbol herein may be an orthogonal frequency division multiplexing (OFDM) symbol. One slot whose subcarrier spacing is 15 kHz includes 6 or 7 time domain symbols, and the corresponding time length is 0.5 ms. For one slot whose subcarrier spacing is 60 kHz, the corresponding time length is shortened to 0.125 ms. Data of URLLC services usually uses a relatively short time scheduling unit, to meet the requirement for an ultra-short latency. 
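As a small numeric illustration of the relation "subcarrier spacing = 2^μ · 15 kHz" from Table 1 and of the slot lengths mentioned in item (4), the following sketch (illustration only; it assumes the 7-symbol slot used in the example above, so that μ = 0 gives 0.5 ms) prints the spacing and slot duration for each μ.

# Subcarrier spacing and example slot length for the NR numerologies listed in Table 1.
for mu in range(5):
    scs_khz = 15 * 2 ** mu                 # subcarrier spacing = 2^mu * 15 kHz
    seven_symbol_slot_ms = 0.5 / 2 ** mu   # a 7-symbol slot shrinks by a factor 2^mu
    print(f"mu={mu}: {scs_khz} kHz, 7-symbol slot = {seven_symbol_slot_ms} ms")

For μ = 0 this prints 15 kHz and 0.5 ms, and for μ = 2 it prints 60 kHz and 0.125 ms, matching the values given above.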
For example, two time domain symbols having a subcarrier spacing of 15 kHz or one slot having a subcarrier spacing of 60 kHz are used to correspond to 7 time domain symbols, and a corresponding time length is 0.125 ms. To better quantize performance indicators of the URLLC service to provide a reference input and evaluation criterion for designing the 5G system, the performance indicators defined by the third generation partnership project (3GPP) working groups for the URLLC service include a latency and reliability. Specifically, the latency is a transmission time that is required when an application layer data packet of a user reaches a radio protocol stack layer 2/3 service data unit (SDU) of a receive end from a radio protocol stack layer 2/3 SDU of a transmit end. When neither a network device nor a terminal device is in a discontinuous reception (DRX) state, a user plane latency requirement of the URLLC service is 0.5 ms in both uplink and downlink. It should be noted that a performance requirement of 0.5 ms herein means an average latency of data packets. The reliability is a probability that X-bit data is correctly transmitted from the transmit end to the receive end within a specific time. The specific time is still defined as the time required when the application layer data packet of the user reaches the radio protocol stack layer 2/3 SDU of the receive end from the radio protocol stack layer 2/3 SDU of the transmit end. For the URLLC service, a typical requirement is that reliability of sending 32-bytes data within 1 ms reaches 99.999%. It should be noted that the foregoing performance indicators are merely typical values. A specific URLLC service may have a different requirement for reliability. For example, extremely stringent industrial control requires a transmission success probability of 99.9999999% within a 0.25 ms end-to-end latency.(5) Start and length indicator value table: In this specification, a start and length indicator value may be referred to as an SLIV for short. Correspondingly, in this specification, the start and length indicator value table may be referred to as an SLIV table for short. The SLIV table may include a physical downlink shared channel (PDSCH) mapping type (PDSCH mapping type) and a demodulation reference signal (DMRS)-type A-position (dmrs-TypeA-Position), a slot offset K0from a slot in which a physical downlink control channel (PDCCH) is located to a slot in which an uplink channel of a PDSCH scheduled by the PDCCH is located, a start symbol S of the PDSCH in a slot, and a quantity L of symbols occupied by the PDSCH. One SLIV table may include at least one type of SLIV information, and each type of SLIV information has a corresponding number (that is, an SLIV index). For example, referring to Table 2 that is Table 5.1.2.1.1-2 in the protocol NR R15 38.214 v 15.2.0, a row index in the table is an SLIV index, and the SLIV table and the SLIV indexes may be configured by using a higher-layer parameter or predefined. In the existing protocol, an SLIV index is carried in DCI on the PDCCH, and is used to indicate time domain resource allocation of a PDSCH scheduled by the DCI, that is, a combination of a start time domain symbol and a length of consecutive time domain symbols of the PDSCH. 
TABLE 2
Row index   dmrs-TypeA-Position   PDSCH mapping type   K0   S    L
1           2                     Type A               0    2    12
            3                     Type A               0    3    11
2           2                     Type A               0    2    10
            3                     Type A               0    3    9
3           2                     Type A               0    2    9
            3                     Type A               0    3    8
4           2                     Type A               0    2    7
            3                     Type A               0    3    6
5           2                     Type A               0    2    5
            3                     Type A               0    3    4
6           2                     Type B               0    9    4
            3                     Type B               0    10   4
7           2                     Type B               0    4    4
            3                     Type B               0    6    4
8           2, 3                  Type B               0    5    7
9           2, 3                  Type B               0    5    2
10          2, 3                  Type B               0    9    2
11          2, 3                  Type B               0    12   2
12          2, 3                  Type A               0    1    13
13          2, 3                  Type A               0    1    6
14          2, 3                  Type A               0    2    4
15          2, 3                  Type B               0    4    7
16          2, 3                  Type B               0    8    4
(6) An uplink channel of hybrid automatic repeat request-acknowledgment (HARQ)-ACK may be understood as an uplink channel used to carry the HARQ-ACK, or may be described as an uplink channel corresponding to the HARQ-ACK.(7) That a first parameter is related to DCI may include a plurality of understandings. For example, one understanding is that the first parameter is included, carried, or borne in the DCI; another is that the first parameter may be derived from a parameter carried in the DCI; or the first parameter is a parameter related to a PDCCH in which the DCI is located; or the first parameter is a parameter for scrambling the DCI. These different understandings are described in detail below, and details are not described herein.(8) A HARQ-ACK corresponding to a PDSCH may also be described as a HARQ-ACK of the PDSCH, which indicates that the HARQ-ACK is feedback information for the PDSCH. For example, the HARQ-ACK may include an acknowledgment (ACK) or a negative acknowledgment (NACK). When a terminal device correctly receives a PDSCH sent by a network device, the terminal device may feed back an ACK for the correctly received PDSCH. When the terminal device fails to correctly receive a PDSCH sent by the network device, the terminal device may feed back a NACK for the PDSCH that is not correctly received.(9) PUCCH resource set: Currently, K (1 ≤ K ≤ 4) PUCCH resource sets are configured in the 5G NR system, and the value range of $N_{UCI}$, that is, the quantity of bits (payload size) corresponding to a PUCCH resource set n (n = 0, 1, 2, 3) to carry an ACK/NACK, is $N_n \le N_{UCI} \le N_{n+1}$. Currently, $N_0 = 1$ and $N_1 = 3$ are specified in the 5G NR system.(10) A PUCCH resource group is a new concept proposed in this application. In this application, one PUCCH resource group may include one or more PUCCH resource sets. The PUCCH resource set may be defined in an existing protocol, or may be newly defined in this application. Detailed examples are provided below, and details are not described herein.(11) An eMBB PDSCH is a PDSCH corresponding to an eMBB service, or may be described as a PDSCH of an eMBB service. Similarly, a URLLC PDSCH is a PDSCH corresponding to a URLLC service, or may be described as a PDSCH of a URLLC service.(12) A K1 value is the time unit offset from a time unit in which a PDSCH is located to a time unit in which an uplink channel of a HARQ-ACK corresponding to the PDSCH is located. In an existing protocol mechanism, a PDSCH-to-HARQ-timing-indicator field carried in DCI is used to indicate the K1 value. The field includes three bits, and the value of the field may range from "000" to "111". A specific K1 value indicated in one piece of DCI is configured by using RRC or is predefined.(13) A first time length represents a time length corresponding to a K1 value, and may also be referred to as the unit of the K1 value or the granularity of the K1 value.(14) A time unit in the embodiments of this application may be used to carry information. For example, one time unit may include one or more consecutive transmission time intervals (TTI), one or more consecutive slots, or one or more consecutive time domain symbols. 
The slot may be a full slot, or may be a mini-slot (or referred to as a non-slot). The mini-slot includes less than 14 orthogonal frequency division multiplexing (OFDM) symbols. One mini-slot may include 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, or 13 OFDM symbols. Different time units are used to carry different data packets or different repetitions (or referred to as repeated versions) of a same data packet.(15) A time-frequency resource in the embodiments of this application is a general term of a time domain resource and a frequency domain resource. The time-frequency resource includes the time domain resource and the frequency domain resource, and the time-frequency resource may be used to carry signaling or data during communication between a terminal device and a network device. The time domain resource may be a resource in a time unit.(16) “A plurality of” in the embodiments of this application means two or more than two. In view of this, “a plurality of” in the embodiments of this application may alternatively be understood as “at least two”. “And/or” describes an association relationship between associated objects and represents that three relationships may exist. For example, A and/or B may represent three cases: There is only A, there are both A and B, and there is only B. In addition, the character “/” generally indicates an “or” relationship between the associated objects unless otherwise specified. In the embodiments described in this application, “number” and “index” may be understood as a same concept, and both the “number” and the “index” are indexes in English. For example, an SLIV index may also be described as an SLIV number, and the two concepts may be interchanged. In addition, unless otherwise stated, ordinal numbers such as “first” and “second” in the embodiments of this application are used to distinguish between a plurality of objects, but are not intended to limit a sequence, a time sequence, priorities, or importance of the plurality of objects. The following describes a technical background of the embodiments of this application. In a 5G NR system, there is a scenario in which a URLLC service and an eMBB service coexist. The eMBB service is transmitted at a scheduling granularity of a slot, and the URLLC service is usually transmitted at a scheduling granularity of a mini-slot (for example, 2, 4, or 7 time domain symbols). If transmission granularities of the two services are different, a PUCCH carrying a HARQ-ACK corresponding to an eMBB PDSCH and a PUCCH carrying a HARQ-ACK corresponding to a URLLC PDSCH may need to be transmitted in one time unit (for example, a slot). Currently, in the prior art, a HARQ-ACK may be determined in one slot. In other words, in the prior art, transmission of a plurality of physical uplink control channels (PUCCH) carrying HARQ-ACKs is not supported in one slot. That is, in the prior art, only one PUCCH carrying a HARQ-ACK can be transmitted in one slot. In the prior art, when HARQ-ACKs corresponding to a plurality of PDSCHs need to be transmitted in one slot, the plurality of HARQ-ACKs that need to be transmitted in one slot are jointly encoded into one HARQ-ACK codebook and the HARQ-ACK codebook is transmitted on one PUCCH. For example, as shown inFIG.1, it is assumed that a terminal device needs to feed back HARQ-ACKs for different PDSCHs (a PDSCH1and a PDSCH2) in one slot. The PDSCH1may be a URLLC PDSCH, and the PDSCH2may be an eMBB PDSCH. 
It is assumed that a HARQ-ACK that is fed back for the PDSCH1and that is determined by the terminal device is a HARQ-NACK1, and a HARQ-ACK that is fed back for the PDSCH2is a HARQ-ACK2. It is further assumed that a 30 kHz subcarrier spacing is used for downlink transmission, and a 15 kHz subcarrier spacing is used for uplink transmission. Limited by a data decoding capability of the terminal device, the HARQ-NACK1of the PDSCH1may be fed back at a start location of the second uplink slot at the earliest, and the HARQ-ACK2of the PDSCH2that is subsequently scheduled arrives at a relatively late moment, and may be fed back at an end location of the uplink slot at the earliest. Because an existing protocol restricts that only one uplink HARQ-ACK can be transmitted in one slot, for the foregoing example, by using the method in the prior art, the HARQ-NACK1needs to wait for a specific time and is fed back together with the HARQ-ACK2. After the HARQ-ACK2is determined, the HARQ-NACK1and the HARQ-ACK2are combined into one HARQ-ACK to be carried in one PUCCH for feedback. In this way, transmission of the NACK1of the PDSCH1is delayed, and correspondingly, retransmission by a network device is also delayed. Because slot lengths for uplink and downlink transmission are inconsistent, a retransmission latency may exceed one downlink slot (for example, 1 ms). However, the URLLC service has a relatively high requirement (a 0.5 ms end-to-end latency) on a transmission latency. Therefore, the existing mechanism cannot meet the latency requirement required by the URLLC service. In view of this, the embodiments of this application provide a communications method, apparatus, and device, to reduce a transmission latency of an uplink channel when uplink channels that carry a plurality of HARQ-ACKs are transmitted in one time unit. The communications method provided in the embodiments of this application may be applied to a 5G NR system or an LTE system, or may be applied to a future mobile communications system, for example, a 6th generation mobile communications system. This is not limited in this application. In addition, in the following description, an example in which the technical solutions provided in the embodiments of this application are applied to a URLLC service and an eMBB service is mainly used. This is not limited in actual application. For example, the technical solutions provided in the embodiments of this application may also be applied to other services. FIG.2is a schematic diagram of a network architecture to which an embodiment of this application is applied. As shown inFIG.2, the network architecture includes a network device and at least one terminal device. The terminal device may be at a fixed location, or may be movable. The terminal device may be connected to the network device wirelessly. The network device may be, for example, a base station, and the terminal device may be, for example, UE. The network device and the terminal device may work in an NR system, and the terminal device may communicate with the network device through the NR system.FIG.2is merely a schematic diagram, and the mobile communications system may further include another network device, for example, may further include a wireless relay device and a wireless backhaul device that are not shown inFIG.2. Quantities of network devices, and terminal devices included in the mobile communications system are not limited in the embodiments of this application. 
FIG.3is a schematic diagram of another network architecture to which an embodiment of this application is applied. As shown inFIG.3, a network device and a terminal device1to a terminal device6form a wireless communications network. In the wireless communications network, the terminal device1to the terminal device6are used as entities for sending uplink data, and may transmit an uplink channel (the uplink channel may carry uplink data) to the network device. Certainly, the terminal device1to the terminal device6may also receive downlink data sent by the network device. In addition, the terminal device4to the terminal device6may also form a communications system. In the communications system, the network device may send downlink data to the terminal device1, the terminal device2, the terminal device3, and the terminal device5, and the terminal device5may also send downlink data to the terminal device4and the terminal device6. It should be understood that an example in which the network architecture shown inFIG.3includes only one network device is used for description. However, the embodiments of this application are not limited thereto. For example, the network architecture may further include more network devices. Similarly, the network architecture may also include more terminal devices, and may further include another device, which is not shown inFIG.3. Referring toFIG.4, an embodiment of this application provides a communications method. In the following description, an example in which the method is applied to the application scenario shown inFIG.2is used. A procedure of the method is described as follows: S101. A terminal device obtains a grouping relationship. The grouping relationship represents a correspondence between a first parameter and N groups of time-frequency resources, the N groups of time-frequency resources are obtained by grouping time-frequency resources in one time unit, each group of time-frequency resources in the N groups of time-frequency resources corresponds to one or more first parameters, the first parameter is related to downlink control information (DCI), a time-frequency resource in each group of time-frequency resources is a time-frequency resource of an uplink channel that carries a HARQ-ACK, and N is a positive integer greater than or equal to 2. Each group of time-frequency resources may include one or more time-frequency resources. In this embodiment of this application, the terminal device may receive the grouping relationship from a network device, or the terminal device locally obtains the grouping relationship. When the terminal device locally obtains the grouping relationship, the terminal device may locally prestore the grouping relationship. The grouping relationship may be obtained by the terminal device from the network device in advance, or may be preset. The following provides description by using an example in which the terminal device receives the grouping relationship from the network device. The first parameter may include one or more of a K1 value (or written as a K1value), a first time length, a codebook identifier (codebook ID), a radio network temporary identifier (RNTI), an uplink channel end symbol, a PDCCH monitoring occasion, or an SLIV index. 
For example, the first parameter includes the K1 value and the first time length, or includes the K1 value and the radio network temporary identifier, or includes the K1 value and the SLIV index, or includes the K1 value and the PDCCH monitoring occasion, or includes the K1 value and the codebook identifier, or includes the codebook identifier and the uplink channel end symbol, or the like. In this application, the K1 value is the quantity of time units offset from a time unit in which a physical downlink shared channel (PDSCH) is located to a time unit in which an uplink channel of a HARQ-ACK corresponding to the PDSCH is located. The first time length represents a time length corresponding to the K1 value. In this embodiment of this application, the first time length may include a first time unit length and a second time unit length. For example, the first time unit length is a slot, and the slot may include 14 time domain symbols. For example, the second time unit length is a mini-slot, and the mini-slot may include 2, 4, or 7 time domain symbols. Meanings of the K1 value and the first time length that are described below in this application are the same as those described herein. Details are not described. It should be noted that, that the first parameter is related to the DCI in this application may include: The first parameter is carried in the DCI, or the first parameter may be derived from a parameter carried in the DCI, or the first parameter is a parameter related to a PDCCH in which the DCI is located, or the first parameter is a parameter used to scramble the DCI. For example, the first parameter carried in the DCI may include the K1 value, the SLIV index, and the codebook identifier. For another example, the first parameter that may be derived based on the parameter carried in the DCI may include the first time length derived based on the K1 value and the uplink channel end symbol that is derived based on an uplink channel time-frequency resource allocation parameter. For another example, the parameter related to the PDCCH in which the DCI is located may include the PDCCH monitoring occasion. For another example, the parameter used to scramble the DCI is the RNTI. In this embodiment of this application, the uplink channel may include a physical uplink control channel (PUCCH) or a physical uplink shared channel (PUSCH). It may be understood that the grouping relationship may be in a form of a list, or may be in another form. This is not limited in this application. S102. The terminal device receives first DCI. InFIG.4, an example in which the terminal device receives the first DCI from the network device is used for illustration. A first parameter related to the first DCI corresponds to the ithgroup of time-frequency resources in the N groups of time-frequency resources, and i is a positive integer less than or equal to N. S103. The terminal device determines, in the N groups of time-frequency resources based on the obtained grouping relationship, the ithgroup of time-frequency resources corresponding to the first parameter related to the first DCI. S104. The terminal device determines a first uplink channel that carries a first HARQ-ACK on a first time-frequency resource in the ithgroup of time-frequency resources. The first HARQ-ACK corresponds to a PDSCH scheduled by the first DCI. It may be understood that the first HARQ-ACK is feedback information for the PDSCH scheduled by the first DCI. The first HARQ-ACK may be an ACK or a NACK. 
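The grouping relationship of S101 and the lookup of S103 can be pictured with a short sketch. The dictionary below is an assumed example configuration (K1 values 1 to 4 mapped to a first group and 5 to 8 to a second group, mirroring the examples given later in this description); the function simply returns the group index i for the first parameter related to the received DCI.

# Hypothetical grouping relationship: first parameter (here, a K1 value) -> group index.
# The values and the two-group split are illustrative assumptions only.
grouping_relationship = {
    1: 1, 2: 1, 3: 1, 4: 1,   # first group of time-frequency resources
    5: 2, 6: 2, 7: 2, 8: 2,   # second group of time-frequency resources
}

def group_for_dci(first_parameter):
    # S103: determine the i-th group of time-frequency resources for the received DCI.
    return grouping_relationship[first_parameter]

k1_from_first_dci = 3          # S102: the first DCI carries K1 = 3
i = group_for_dci(k1_from_first_dci)
print(f"The first HARQ-ACK is carried on a resource of group {i}")   # -> group 1

S104 would then pick a concrete time-frequency resource inside that group, for example based on the quantity of bits of the first HARQ-ACK and the PUCCH resource indicator in the first DCI, as discussed next.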
In this embodiment of this application, the first time-frequency resource may be some time-frequency resources in the ithgroup of time-frequency resources, or may be all time-frequency resources in the ithgroup of time-frequency resources. The following describes how the terminal device determines the first uplink channel that carries the first HARQ-ACK on the first time-frequency resource in the ithgroup of time-frequency resources. In a possible implementation, the terminal device determines a corresponding PUCCH resource set based on the quantity of bits (payload size) of the first HARQ-ACK, and then determines, in the PUCCH resource set based on a PUCCH resource indicator (ARI) in the first DCI, the first time-frequency resource that carries the first uplink channel. For example, assuming that the ARI is “000”, it may be determined that the resource carrying the first uplink channel is the 1stPUCCH resource in the PUCCH resource set. In other words, the first time-frequency resource is the 1stPUCCH resource in the PUCCH resource set. In another possible implementation, the terminal device determines, in a PUCCH resource group configured via higher layer signaling, a corresponding PUCCH resource set based on the quantity of bits (payload size) of the first HARQ-ACK, and then determines, in the PUCCH resource set based on a PUCCH resource indicator in the first DCI, the first time-frequency resource that carries the first uplink channel. It should be noted that, in this implementation, the PUCCH resource group is a new concept proposed in this application. Because quantities of bits of different HARQ-ACKs may differ greatly, in this application, different PUCCH resource groups may be configured for different quantities of bits of the HARQ-ACKs via higher layer signaling, and each PUCCH resource group includes one or more PUCCH resource sets. According to the communications method provided in this embodiment of this application, time-frequency resources in one time unit are grouped into N groups of time-frequency resources, and each group of time-frequency resources in the N groups of time-frequency resources is available for transmitting an uplink channel that carries a HARQ-ACK. In other words, in comparison with the prior art in which one time unit can be used to transmit only one uplink channel that carries a HARQ-ACK, in the method provided in this embodiment of this application, one time unit is available for transmitting N uplink channels that carry HARQ-ACKs. In this way, when a plurality of uplink channels that carry HARQ-ACKs need to be transmitted in one time unit, an uplink channel that carries a HARQ-ACK and that needs to be sent earlier in time domain in the time unit does not need to be sent on a same PUCCH resource as a last uplink channel that carries a HARQ-ACK. In other words, according to the method in this application, a HARQ-ACK that arrives earlier can be sent earlier, to reduce a transmission latency and improve transmission efficiency. 
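The resource selection inside the determined group, as described in the two implementations above, can be sketched as follows. The two resource sets, the thresholds implied by N0 = 1 and N1 = 3 from item (9), and the resource labels are simplifying assumptions for illustration; the ARI is the PUCCH resource indicator carried in the first DCI.

# Hypothetical PUCCH resource sets of one resource group; each entry stands for one
# PUCCH time-frequency resource (identified here only by a label).
pucch_resource_sets = {
    0: ["set0-res0", "set0-res1", "set0-res2", "set0-res3"],
    1: ["set1-res0", "set1-res1", "set1-res2", "set1-res3"],
}

def select_pucch_resource(num_harq_ack_bits, ari_bits):
    # Pick the PUCCH resource set from the HARQ-ACK payload size, then index it by the ARI.
    # Assumed two-set simplification: small payloads go to set 0, larger ones to set 1.
    resource_set = 0 if num_harq_ack_bits < 3 else 1
    index = int(ari_bits, 2)          # e.g. ARI "000" selects the 1st resource of the set
    return pucch_resource_sets[resource_set][index]

print(select_pucch_resource(num_harq_ack_bits=1, ari_bits="000"))   # -> set0-res0

In the second implementation, the same selection would be performed within the PUCCH resource group configured via higher layer signaling for the relevant payload size, rather than among all configured sets.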
In this embodiment of this application, if the grouping relationship is received by the terminal device from the network device, before the network device sends the grouping relationship to the terminal device, the network device may further determine the grouping relationship based on one or more of the following conditions: condition 1: the K1 value, where the K value may be semi-statically configured or predefined; condition 2: the first time length, where the first time length may be semi-statically configured or predefined; condition 3: the SLIV index, where the SLIV index may be semi-statically configured or predefined, and in this embodiment of this application, an SLIV is an SLIV of a PDSCH corresponding to a HARQ-ACK; condition 4: the codebook identifier, where the codebook identifier is used to indicate a group of time-frequency resources that carry a HARQ-ACK in the N groups of time-frequency resources, the codebook identifier may include N values, each value corresponds to a group of time-frequency resources in the N groups of time-frequency resources, and the codebook identifier may be carried in DCI. condition 5: the RNTI, where the RNTI is used to scramble DCI; condition 6: the uplink channel end symbol; and condition 7: the PDCCH monitoring occasion. The following describes a process in which the network device determines the grouping relationship based on the K1 value. An example in which the K1 value is semi-statically configured is used for description. When the K1 value is semi-statically configured, before the network device determines the grouping relationship based on the K1 value, the network device may further obtain several K1 values configured by a higher layer. For ease of description, the several K1 values are described as a K1 value set below. After obtaining the K1 value set configured by the higher layer, the network device may divide the K1 value set into N K1 value subsets, then may establish a one-to-one correspondence between the N K1 value subsets and the N groups of time-frequency resources, and determine the one-to-one correspondence between the N K1 subsets and the N groups of time-frequency resources as the grouping relationship. Optionally, the network device may divide the K1 value set into the N subsets based on indexes (which may be understood as numbers) of the several K1 values. For example, it is assumed that the indexes of the several K1 values are 1 to 8, and a corresponding K1 value set may be denoted as {1, 2, 3, 4, 5, 6, 7, 8}. N=2 is used as an example. The network device may divide the K1 value set {1, 2, 3, 4, 5, 6, 7, 8} into a first K1 value subset {1, 2, 3, 4} and a second K1 value subset {5, 6, 7, 8} based on the indexes of the K1 values. After dividing the K1 value set into the first K1 value subset and the second K1 value subset, the network device may establish a one-to-one correspondence between the two K1 value subsets and the two groups of time-frequency resources. For ease of description, the two groups of time-frequency resources are denoted as a first group of time-frequency resources and a second group of time-frequency resources below. 
For example, the network device may map the first K1 value subset to the first group of time-frequency resources, and map the second K1 value subset to the second group of time-frequency resources, to further determine a correspondence between the first K1 value subset and the first group of time-frequency resources and a correspondence between the second K1 value subset and the second group of time-frequency resources as the grouping relationship. In addition, in the foregoing example, the network device may alternatively divide the K1 value set {1, 2, 3, 4, 5, 6, 7, 8} into a first K1 value subset {1, 2, 3} and a second K1 value subset {4, 5, 6, 7, 8}. Certainly, the first K1 value subset and the second K1 value subset may alternatively be obtained through division in another manner. This is not limited in this application. In this example, the grouping relationship may be in the form of a list. Table 3 shows a possible form of the grouping relationship. In Table 3, the first parameter is the K1 value for illustration. When the K1 value ranges from 1 to 4, the K1 value corresponds to the first group of time-frequency resources. When the K1 value ranges from 5 to 8, the K1 value corresponds to the second group of time-frequency resources.
TABLE 3
K1 value    N groups of time-frequency resources
1           First group of time-frequency resources
2
3
4
5           Second group of time-frequency resources
6
7
8
For another example, it is assumed that the indexes of the several K1 values are 1 to 8, and a corresponding K1 value set may be denoted as {1, 2, 3, 4, 5, 6, 7, 8}. N=3 is used as an example. The network device may divide the K1 value set {1, 2, 3, 4, 5, 6, 7, 8} into a first K1 value subset {1, 2, 3}, a second K1 value subset {4, 5, 6}, and a third K1 value subset {7, 8} based on the indexes of the K1 values. After dividing the K1 value set into the first K1 value subset, the second K1 value subset, and the third K1 value subset, the network device may establish a one-to-one correspondence between the three K1 value subsets and the three groups of time-frequency resources. For ease of description, the three groups of time-frequency resources are denoted as a first group of time-frequency resources, a second group of time-frequency resources, and a third group of time-frequency resources below. 
For example, after dividing the K1 value set into the first K1 value subset and the second K1 value subset, the network device may map the first K1 value subset to a second group of time-frequency resources, and map the second K1 value subset to a first group of time-frequency resources, to further determine a correspondence between the first K1 value subset and the second group of time-frequency resources and a correspondence between the second K1 value subset and the first group of time-frequency resources as the grouping relationship. In this embodiment of this application, if the network device determines the grouping relationship based on the K1 value, correspondingly, the terminal device may determine, in the N groups of time-frequency resources based on the obtained grouping relationship and the K1 value carried in the received first DCI, the ithgroup of time-frequency resources corresponding to the K1 value carried in the first DCI. The following describes an implementation. For example, assuming that N is 2 and i is 1, the grouping relationship determined by the network device includes: The first K1 value subset {1, 2, 3, 4} corresponds to the first group of time-frequency resources, and the second K1 value subset {5, 6, 7, 8} corresponds to the second group of time-frequency resources. After determining the grouping relationship, the network device sends the grouping relationship to the terminal device, and sends the first DCI to the terminal device, where the K1 value carried in the first DCI is 3. After receiving the grouping relationship and the first DCI that are sent by the network device, the terminal device may learn that a K1 value 3 (which may also be described as the K1 value carried in the first DCI) related to the first DCI belongs to the first K1 value subset, and the first K1 value subset corresponds to the first group of time-frequency resources. Therefore, the terminal device may determine, in the two groups of time-frequency resources based on the grouping relationship, the first group of time-frequency resources corresponding to the K1 value 3 related to the first DCI, and may further determine the first uplink channel that carries the first HARQ-ACK on the first time-frequency resource in the first group of time-frequency resources. It should be noted that when the K1 value is predefined, the network device may also determine the grouping relationship by using the foregoing method. A difference lies in that if the K1 value is predefined, the network device does not need to obtain the several K1 values configured by the higher layer, but directly performs the foregoing method by using the predefined K1 value. The following describes a process in which the network device determines the grouping relationship based on the first time length. An example in which the first time length is semi-statically configured is used for description. When the first time length is semi-statically configured, before the network device determines the grouping relationship based on the first time length, the network device may further obtain several first time lengths configured by a higher layer. For ease of description, the several first time lengths are described as a first time length set below. 
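Before turning to the first time length, the K1-based construction described above can be summarized in a short sketch on the network side: a configured K1 value set is divided into N subsets, and each subset is mapped to one of the N groups of time-frequency resources. The set {1, ..., 8}, N = 2, and the contiguous split are the illustrative values used in the examples above; division in another manner is equally possible.

# Network side: divide a configured K1 value set into N subsets and map each subset to a
# group of time-frequency resources (all values are illustrative assumptions).
k1_value_set = [1, 2, 3, 4, 5, 6, 7, 8]
N = 2

def build_grouping_relationship(values, n_groups):
    # Split the configured values into n_groups contiguous subsets (one possible manner)
    # and return a mapping: value -> group index (1-based).
    size = -(-len(values) // n_groups)   # ceiling division
    relationship = {}
    for group, start in enumerate(range(0, len(values), size), start=1):
        for value in values[start:start + size]:
            relationship[value] = group
    return relationship

grouping_relationship = build_grouping_relationship(k1_value_set, N)
print(grouping_relationship)   # {1: 1, 2: 1, 3: 1, 4: 1, 5: 2, 6: 2, 7: 2, 8: 2}

The resulting relationship is what the network device would signal to the terminal device, which then uses it for the lookup sketched earlier.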
After obtaining the first time length set configured by the higher layer, the network device may divide the first time length set into N first time length subsets, then may establish a one-to-one correspondence between the N first time length subsets and the N groups of time-frequency resources, and determine the one-to-one correspondence between the N first time length subsets and the N groups of time-frequency resources as the grouping relationship. For example, it is assumed that the several first time lengths obtained by the network device are 14 time domain symbols, 2 time domain symbols, 4 time domain symbols, and 7 time domain symbols, and a corresponding first time length set may be denoted as {2, 4, 7, 14}. N=2 is used as an example. The network device may divide the first time length set {2, 4, 7, 14} into a first time length subset {2, 4, 7} and a second time length subset {14} based on the first time lengths. After dividing the first time length set into the first time length subset and the second time length subset, the network device may establish a one-to-one correspondence between the two time length subsets and the two groups of time-frequency resources. For ease of description, the two groups of time-frequency resources are denoted as a first group of time-frequency resources and a second group of time-frequency resources below. For example, the network device may map the first time length subset {2, 4, 7} to the first group of time-frequency resources, and map the second time length subset {14} to the second group of time-frequency resources, to further determine a one-to-one correspondence between the two time length subsets and the two groups of time-frequency resources as the grouping relationship. In this example, the grouping relationship may be in the form of a list. Table 4 shows a possible form of the grouping relationship. In Table 4, the first parameter is the first time length for illustration. When the first time length is 2, 4, or 7, the first time length corresponds to the first group of time-frequency resources. When the first time length is 14, the first time length corresponds to the second group of time-frequency resources.
TABLE 4
First time length    N groups of time-frequency resources
2                    First group of time-frequency resources
4
7
14                   Second group of time-frequency resources
In this embodiment of this application, if the network device determines the grouping relationship based on the first time length, correspondingly, the terminal device may determine, in the N groups of time-frequency resources based on the obtained grouping relationship and the first time length, the ith group of time-frequency resources corresponding to the first time length related to the first DCI. The following describes an implementation. For example, assuming that N is 2 and i is 1, the grouping relationship determined by the network device includes: the first time length subset {2 time domain symbols, 4 time domain symbols, 7 time domain symbols} corresponds to the first group of time-frequency resources, and the second time length subset {14 time domain symbols} corresponds to the second group of time-frequency resources. After determining the grouping relationship, the network device sends the grouping relationship to the terminal device, and sends the first DCI to the terminal device. It is assumed that the first time length corresponding to the first DCI is 7 time domain symbols. 
After receiving the grouping relationship and the first DCI that are sent by the network device, the terminal device may learn that the first time length that is 7 time domain symbols related to the first DCI belongs to the first time length subset, and may learn, based on the grouping relationship, that the first time length subset corresponds to the first group of time-frequency resources. Therefore, the terminal device may determine, in the two groups of time-frequency resources based on the grouping relationship, the first group of time-frequency resources corresponding to the first time length that is 7 time domain symbols related to the first DCI, and may further determine the first uplink channel that carries the first HARQ-ACK on the first time-frequency resource in the first group of time-frequency resources. In this embodiment of this application, the foregoing description is provided by using an example in which the network device determines the grouping relationship based on the K1 value or the first time length. The network device may further determine the grouping relationship based on both the K1 value and the first time length. The following describes a method for determining the grouping relationship by the network device based on both the K1 value and the first time length. In a possible implementation, the network device may configure, as a time length subset based on the first time length, K1 values corresponding to time lengths that are a same first time length. For example, it is assumed that several K1 values obtained by the network device from a higher layer are indexed by 1 to 8, and a corresponding K1 value set may be denoted as {1, 2, 3, 4, 5, 6, 7, 8}. A time length (that is, a granularity of the K1 value) corresponding to K1 values indexed by 1 to 4 is a ½ slot, and a time length (that is, a granularity of the K1 value) corresponding to K1 values indexed by 5 to 8 is a slot. The network device may configure, based on the first time length, the K1 values corresponding to the ½ slot as a first time length subset {1, 2, 3, 4}, and may configure, based on the first time length, the K1 values corresponding to the slot as a second time length subset {5, 6, 7, 8}, so that the first time length subset {1, 2, 3, 4} may correspond to the first group of time-frequency resources, and the second time length subset {5, 6, 7, 8} may correspond to the second group of time-frequency resources. A one-to-one correspondence between the two time length subsets and the two groups of time-frequency resources is determined as the grouping relationship. It should be noted that when the first time length is predefined, the network device may also determine the grouping relationship by using the foregoing method. A difference lies in that if the first time length is predefined, the network device does not need to obtain the several first time lengths configured by the higher layer, but directly performs the foregoing method by using the predefined first time length. The following describes a process in which the network device determines the grouping relationship based on the SLIV index. An example in which the SLIV index is semi-statically configured is used for description. When the SLIV index is semi-statically configured, before the network device determines the grouping relationship based on the SLIV index, the network device may further obtain an SLIV table configured by a higher layer. The SLIV table may include a plurality of SLIV indices. 
For example, an SLIV table in an existing protocol includes a total of 16 SLIV indices: 1 to 16. For ease of description, the several SLIV indices are described as an SLIV index set below. After obtaining the SLIV table configured by the higher layer, the network device may determine the SLIV index set, further divide the SLIV index set into N SLIV index subsets, then establish a one-to-one correspondence between the N SLIV index subsets and the N groups of time-frequency resources, and determine the one-to-one correspondence between the N index subsets and the N groups of time-frequency resources as the grouping relationship. For example, it is assumed that the several SLIV indices are 1 to 16, and a corresponding SLIV index set may be denoted as {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}. N=2 is used as example. The network device may divide the SLIV index set {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16} into a first SLIV index subset {1, 2, 3, 4, 5, 6, 7, 8} and a second SLIV index subset {9, 10, 11, 12, 13, 14, 15, 16} based on the SLIV indices. After dividing the SLIV index set into the first SLIV index subset and the second SLIV index subset, the network device may establish a one-to-one correspondence between the two SLIV index subsets and the two groups of time-frequency resources. For ease of description, the two groups of time-frequency resources are denoted as a first group of time-frequency resources and a second group of time-frequency resources below. For example, the network device may map the first SLIV index subset to the first group of time-frequency resources, and map the second SLIV index subset to the second group of time-frequency resources, to further determine the one-to-one correspondence between the two SLIV index subsets and the two groups of time-frequency resources as the grouping relationship. It should be noted that, in the foregoing example, an example in which the SLIV indices included in the SLIV index set are equally divided into the N subsets is used for illustration. In this application, alternatively, the network device may unequally divide the SLIV indices included in the SLIV index set into the N subsets. The following describes an example. For example, it is assumed that the several SLIV indices are 1 to 16, and a corresponding SLIV index set may be denoted as {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}. N=2 is used as an example. The network device may unequally divide the SLIV index set {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16} into a first SLIV index subset {1, 2, 3, 4, 8, 12, 15} and a second SLIV index subset {5, 6, 7, 9, 10, 11, 13, 14, 16}. After dividing the SLIV index set into the SLIV index subsets, the network device may establish a one-to-one correspondence between the two SLIV index subsets and the two groups of time-frequency resources, for example, may map the first SLIV index subset to the first group of time-frequency resources, and map the second SLIV index subset to the second group of time-frequency resources. For another example, it is assumed that the several SLIV indices are 1 to 16, and a corresponding SLIV index set may be denoted as {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}. N=3 is used as an example. The network device may unequally divide the SLIV index set {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16} into a first SLIV index subset {1, 2, 3, 4, 8, 12, 15}, a second SLIV index subset {5, 6, 7, 13, 14, 16}, and a third SLIV index subset {9, 10, 11}. 
After dividing the SLIV index set into the SLIV index subsets, the network device may establish a one-to-one correspondence between the three SLIV index subsets and the three groups of time-frequency resources, for example, may map the first SLIV index subset to the first group of time-frequency resources, map the second SLIV index subset to the second group of time-frequency resources, and map the third SLIV index subset to the third group of time-frequency resources. For still another example, it is assumed that the several SLIV indices are 1 to 16, and a corresponding SLIV index set may be denoted as {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}. N=4 is used as an example. The network device may unequally divide the SLIV index set {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16} into a first SLIV index subset {1, 2, 3, 12}, a second SLIV index subset {5, 6, 7, 13, 14, 16}, a third SLIV index subset {9, 10, 11}, and a fourth SLIV index subset {4, 8, 15}. After dividing the SLIV index set into the SLIV index subsets, the network device may establish a one-to-one correspondence between the four SLIV index subsets and the four groups of time-frequency resources, for example, may map the first SLIV index subset to the first group of time-frequency resources, map the second SLIV index subset to the second group of time-frequency resources, map the third SLIV index subset to the third group of time-frequency resources, and map the fourth SLIV index subset to the fourth group of time-frequency resources. In this embodiment of this application, if the network device determines the grouping relationship based on the SLIV index, correspondingly, the terminal device may determine, in the N groups of time-frequency resources based on the obtained grouping relationship and the SLIV index, the ith group of time-frequency resources corresponding to the SLIV index related to the first DCI. The following describes an implementation. For example, assuming that N is 2 and i is 1, the grouping relationship determined by the network device includes: a first SLIV index subset {1, 2, 3, 4, 5, 6, 7, 8} corresponds to a first group of time-frequency resources, and a second SLIV index subset {9, 10, 11, 12, 13, 14, 15, 16} corresponds to a second group of time-frequency resources. After determining the grouping relationship, the network device sends the grouping relationship to the terminal device, and then may send the first DCI to the terminal device. It is assumed that the SLIV index carried in the first DCI is 8. After receiving the grouping relationship and the first DCI that are sent by the network device, the terminal device may learn that the SLIV index 8 related to the first DCI belongs to the first SLIV index subset, and may learn, based on the grouping relationship, that the first SLIV index subset corresponds to the first group of time-frequency resources. Therefore, the terminal device may determine, in the two groups of time-frequency resources based on the grouping relationship, the first group of time-frequency resources corresponding to the SLIV index related to the first DCI, and may further determine the first uplink channel that carries the first HARQ-ACK on the first time-frequency resource in the first group of time-frequency resources. In a possible implementation, the SLIV indices may be some SLIV indices included in the SLIV table. 
In this implementation, where only some of the SLIV indices in the SLIV table are used, the network device may equally or unequally divide these SLIV indices into the N SLIV index subsets, and establish a one-to-one correspondence between the N SLIV index subsets and the N groups of time-frequency resources. In still another possible implementation, all PUCCHs are piggybacked on PUSCHs. In this implementation, the network device may determine the grouping relationship based on SLIV indices corresponding to the PUSCHs. For a specific implementation, refer to the method for determining the grouping relationship based on the SLIV indices corresponding to the PDSCHs. Details are not described herein again. The following describes a process in which the network device determines the grouping relationship based on the codebook identifier. In a possible implementation, there may be N values of the codebook identifier, and each value corresponds to one group of time-frequency resources in the N groups of time-frequency resources. N=2 is used as an example. Values of the codebook identifier may include 0 and 1. The network device may map the value 0 to a first group of time-frequency resources in the two groups of time-frequency resources, and map the value 1 to a second group of time-frequency resources in the two groups of time-frequency resources. In this way, the network device may establish a one-to-one correspondence between the two values of the codebook identifier and the two groups of time-frequency resources. The terminal device may determine, in the N groups of time-frequency resources based on the value of the codebook identifier carried in the received DCI, a group of time-frequency resources corresponding to the value of the codebook identifier. The following describes a process in which the network device determines the grouping relationship based on the RNTI. Optionally, the network device may determine the grouping relationship based on a type of the RNTI. For example, it is assumed that there are three types of RNTIs: a C-RNTI, a CS-RNTI, and an MCS-C-RNTI. N=3 is used as an example. The network device may map the C-RNTI to a first group of time-frequency resources in the three groups of time-frequency resources, map the CS-RNTI to a second group of time-frequency resources in the three groups of time-frequency resources, and map the MCS-C-RNTI to a third group of time-frequency resources in the three groups of time-frequency resources. In this way, the network device may establish a one-to-one correspondence between the three types of RNTIs and the three groups of time-frequency resources. The terminal device may derive, based on the received DCI, the type of RNTI used to scramble the DCI, and then may determine, in the N groups of time-frequency resources based on that RNTI type, a group of time-frequency resources of an uplink channel that carries a HARQ-ACK of a PDSCH scheduled by the DCI. It should be noted that the MCS-C-RNTI is a new RNTI provided in this application, and has the following function: It may be determined, by using the MCS-C-RNTI, that data of a PDSCH corresponding to a HARQ-ACK is from a first-type service, where the first-type service may be, for example, a URLLC service. The MCS-C-RNTI is only a possible name, and the RNTI may alternatively be described as an X-RNTI. The name is not limited in this application, and is used only to distinguish between an RNTI having the foregoing function and an existing RNTI. The existing RNTI may include, for example, the C-RNTI, the CS-RNTI, a P-RNTI, or an SI-RNTI.
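The codebook-identifier and RNTI-type mappings described above amount to small lookup tables. The following non-normative sketch encodes the two example mappings; the dictionary names and the group numbering are assumptions made for illustration.

```python
# Illustrative sketch (not normative): grouping keyed by codebook
# identifier or by RNTI type, as described above. The dictionaries below
# simply encode the example mappings; values are group numbers.

CODEBOOK_ID_TO_GROUP = {0: 1, 1: 2}                      # N = 2 example

RNTI_TYPE_TO_GROUP = {                                    # N = 3 example
    "C-RNTI": 1,
    "CS-RNTI": 2,
    "MCS-C-RNTI": 3,   # new RNTI indicating, e.g., a URLLC-type service
}

def group_from_dci(codebook_id=None, rnti_type=None):
    """Pick the resource group from whichever first parameter is present."""
    if codebook_id is not None:
        return CODEBOOK_ID_TO_GROUP[codebook_id]
    return RNTI_TYPE_TO_GROUP[rnti_type]

assert group_from_dci(codebook_id=1) == 2
assert group_from_dci(rnti_type="MCS-C-RNTI") == 3
```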
The following describes a process in which the network device determines the grouping relationship based on the PDCCH monitoring occasion. Optionally, the network device may divide several PDCCH monitoring occasions into N subsets based on the several PDCCH monitoring occasions. For ease of description, the several PDCCH monitoring occasions are referred to as a PDCCH monitoring occasion set. For example, it is assumed that the several PDCCH monitoring occasions include symbols 0, 2, 4, 6, 8, 10, and 12 in a slot, and correspondingly, this may be understood as: the PDCCH monitoring occasion set is {0, 2, 4, 6, 8, 10, 12}. N=2 is used as an example. The network device may divide the PDCCH monitoring occasion set {0, 2, 4, 6, 8, 10, 12} into a first PDCCH monitoring occasion subset {0, 2, 4, 6}, and a second PDCCH monitoring occasion subset {8, 10, 12}. After dividing the PDCCH monitoring occasion set into the first PDCCH monitoring occasion subset and the second PDCCH monitoring occasion subset, the network device may establish a one-to-one correspondence between the two PDCCH monitoring occasion subsets and the two groups of time-frequency resources. For example, the network device may map the first PDCCH monitoring occasion subset to the first group of time-frequency resources, and map the second PDCCH monitoring occasion subset to the second group of time-frequency resources. The following describes a process in which the network device determines the grouping relationship based on the uplink channel end symbol. Before determining the grouping relationship based on the uplink channel end symbol, the network device may further obtain several start control channel element (CCE) indices configured by a higher layer. For example, the indices may be described as a CCE index set. After obtaining the CCE index set configured by the higher layer, the network device may divide the CCE index set into N CCE index subsets, and each CCE index subset may correspond to a group of uplink channel end symbols. In this way, the network device may establish a one-to-one correspondence between the N groups of uplink channel end symbols and the N groups of time-frequency resources, and further determine the one-to-one correspondence between the N groups of uplink channel end symbols and the N groups of time-frequency resources as the grouping relationship. Each CCE index subset may correspond to an uplink channel end symbol in a range. For example, it is assumed that there are two CCE index subsets. One CCE index subset may correspond to an uplink channel end symbol in a symbol range of 2 to 7, and the other CCE index subset may correspond to an uplink channel end symbol in a symbol range of 8 to 13. Optionally, the network device may divide the CCE index set into the N subsets based on values of the CCE indices. For example, it assumed that the values of the several CCE indices range from 1 to 8, and a corresponding CCE index set may be denoted as {1, 2, 3, 4, 5, 6, 7, 8}. N=4 is used as an example. The network device may divide the CCE index set {1, 2, 3, 4, 5, 6, 7, 8} into a first CCE index subset {1, 2}, a second CCE index subset {3, 4, 5}, a third CCE index subset {6, 7}, and a fourth CCE index subset {8}. After dividing the CCE index set into the CCE index subsets, the network device may determine a group of uplink channel end symbols corresponding to each CCE index subset. 
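The CCE-index-based grouping can be sketched along the same lines. The snippet below follows the two-subset example above (end symbols 2 to 7 and 8 to 13); the particular CCE subset boundaries chosen here are an assumption, not part of the described method.

```python
# Illustrative sketch (assumptions flagged): dividing a CCE index set into
# subsets, each tied to a range of uplink channel end symbols, as in the
# two-subset example above. The CCE membership of each subset is an
# arbitrary choice made for this illustration.

CCE_SUBSETS = [
    {"cce_indices": {1, 2}, "end_symbols": range(2, 8)},             # symbols 2..7
    {"cce_indices": {3, 4, 5, 6, 7, 8}, "end_symbols": range(8, 14)},  # symbols 8..13
]

def group_for_cce_index(cce_index):
    """Return (group number, allowed end symbols) for a start CCE index."""
    for group, subset in enumerate(CCE_SUBSETS, start=1):
        if cce_index in subset["cce_indices"]:
            return group, subset["end_symbols"]
    raise ValueError(f"CCE index {cce_index} not configured")

group, end_symbols = group_for_cce_index(4)   # -> group 2, symbols 8..13
```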
Continuing the four-subset example, it is assumed that a first group of uplink channel end symbols corresponding to the first CCE index subset {1, 2} is {3, 4, 6}, a second group of uplink channel end symbols corresponding to the second CCE index subset {3, 4, 5} is {7, 13}, a third group of uplink channel end symbols corresponding to the third CCE index subset {6, 7} is {10}, and a fourth group of uplink channel end symbols corresponding to the fourth CCE index subset {8} is {7}. Further, the network device may establish a one-to-one correspondence between the four groups of uplink channel end symbols and the four groups of time-frequency resources. For ease of description, the four groups of time-frequency resources are denoted as a first group of time-frequency resources, a second group of time-frequency resources, a third group of time-frequency resources, and a fourth group of time-frequency resources. For example, the network device may map the first group of uplink channel end symbols to the first group of time-frequency resources, map the second group of uplink channel end symbols to the second group of time-frequency resources, map the third group of uplink channel end symbols to the third group of time-frequency resources, and map the fourth group of uplink channel end symbols to the fourth group of time-frequency resources, so that the one-to-one correspondence between the four groups of uplink channel end symbols and the four groups of time-frequency resources can be determined as the grouping relationship. For another example, FIG. 5(a) is a schematic diagram of time unit grouping according to an embodiment of this application. In FIG. 5(a), it is assumed that a time unit is a slot, N is 4, and an uplink channel is a PUCCH. It is assumed that HARQ-ACKs corresponding to seven PDSCHs need to be transmitted in a slot n, one PUCCH carries a HARQ-ACK for one PDSCH, and time domain resources used to transmit PUCCHs are selected based on quantities of bits of the HARQ-ACKs, to obtain a PUCCH 1 to a PUCCH 7. Grouping is determined based on an end symbol of one PUCCH. There may be start symbols of several PUCCHs before the end symbol, and these PUCCHs overlap in time domain. In FIG. 5(a), there are start symbols of the PUCCH 2 and the PUCCH 3 before a last symbol of the PUCCH 1. In this case, the PUCCHs 1 to 3 are grouped into one group, and correspond to a first group of time-frequency resources. There is only a start symbol of the PUCCH 5 before an end symbol of the PUCCH 4. In this case, resources occupied by the PUCCH 4 and the PUCCH 5 correspond to a second group of time-frequency resources. There is no start symbol of another PUCCH before an end symbol of the PUCCH 6, and the PUCCH 6 is independently grouped into one group and corresponds to a third group of time-frequency resources. Similarly, the PUCCH 7 corresponds to a fourth group of time-frequency resources. Therefore, the network device may determine a one-to-one correspondence between the four groups of uplink channel end symbols and the four groups of time-frequency resources as the grouping relationship. In this embodiment of this application, the foregoing describes the method for determining the grouping relationship by the network device based on one condition. In addition, the network device may further determine the grouping relationship based on a combination of two conditions. The following describes an example in which the network device determines the grouping relationship based on two conditions.
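As a recap of the FIG. 5(a) example before moving on to the two-condition case, the following sketch groups PUCCHs by checking whether each PUCCH's start symbol falls before the end symbol of the first PUCCH in the current group. The symbol indices used below are invented solely to reproduce a four-group outcome and are not taken from the figure.

```python
# Illustrative sketch of the FIG. 5(a)-style grouping: a PUCCH whose start
# symbol falls before the end symbol of the first PUCCH in a group joins
# that group; otherwise a new group is started. PUCCHs are given as
# (start_symbol, end_symbol) pairs.

def group_pucchs_by_end_symbol(pucchs):
    """Group time-domain-overlapping PUCCHs around a common end symbol."""
    groups, current, current_end = [], [], None
    for start, end in sorted(pucchs):
        if current and start < current_end:
            current.append((start, end))     # starts before the group's end
        else:
            if current:
                groups.append(current)
            current, current_end = [(start, end)], end
    if current:
        groups.append(current)
    return groups

# Seven hypothetical PUCCHs in one slot; the exact symbols are made up.
pucchs = [(0, 3), (1, 4), (2, 5), (5, 7), (6, 8), (9, 10), (12, 13)]
print(group_pucchs_by_end_symbol(pucchs))    # four groups, as in FIG. 5(a)
```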
For example, the following describes an example in which the network device determines the grouping relationship based on the first time length and the RNTI. It is assumed that several first time lengths obtained by the network device are 14 time domain symbols, 2 time domain symbols, 4 time domain symbols, and 7 time domain symbols, and a corresponding first time length set may be denoted as {2, 4, 7, 14}. The network device may divide the first time length set {2, 4, 7, 14} into a first time length subset {2, 4, 7} and a second time length subset {14} based on the first time lengths. Further, it is assumed that there are three types of RNTIs: a C-RNTI, a CS-RNTI, and an MCS-C-RNTI. N=4 is used as an example. To be specific, time-frequency resources in one time unit are grouped into four groups of time-frequency resources, which are denoted as a first group of time-frequency resources, a second group of time-frequency resources, a third group of time-frequency resources, and a fourth group of time-frequency resources. That the network device groups the resources in the time unit based on a combination of the first time length and the RNTI may include: The network device maps the first time length subset {2, 4, 7} and DCI scrambled with the MCS-C-RNTI to the first group of time-frequency resources, maps the first time length subset {2, 4, 7} and DCI scrambled with the C-RNTI and the CS-RNTI to the second group of time-frequency resources, maps the second time length subset {14} and DCI scrambled with the MCS-C-RNTI to the third group of time-frequency resources, and maps the second time length subset {14} and DCI scrambled with the C-RNTI and the CS-RNTI to the fourth group of time-frequency resources. The network device may further determine the four groups of correspondences as the grouping relationship. For another example, an example in which the network device determines the grouping relationship based on the K1 value and the RNTI is used for description. It is assumed that several K1 values obtained by the network device are 1, 2, 3, 4, 5, 6, 7, and 8, and a corresponding K1 value set may be denoted as {1, 2, 3, 4, 5, 6, 7, 8}. The network device may divide the K1 value set {1, 2, 3, 4, 5, 6, 7, 8} into a first K1 value subset {1, 2, 3, 4} and a second K1 value subset {5, 6, 7, 8} based on the K1 values. Further, it is assumed that there are three types of RNTIs: a C-RNTI, a CS-RNTI, and an MCS-C-RNTI. N=3 is used as an example. To be specific, time-frequency resources in one time unit are grouped into three groups of time-frequency resources, which are denoted as a first group of time-frequency resources, a second group of time-frequency resources, and a third group of time-frequency resources. That the network device groups the resources in the time unit based on a combination of the K1 value and the RNTI may include: The network device maps the K1 values {5, 6, 7, 8} and DCI scrambled with the MCS-C-RNTI to the first group of time-frequency resources, maps the K1 values {1, 2, 3, 4} and DCI scrambled with the C-RNTI and the CS-RNTI to the second group of time-frequency resources, and maps the K1 values {1, 2, 3, 4} and DCI scrambled with the MCS-C-RNTI to the third group of time-frequency resources. The network device may further determine the three groups of correspondences as the grouping relationship. It may be understood that the foregoing examples are merely examples for description. The network device may alternatively determine the grouping relationship based on another condition combination.
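A grouping built from two conditions is just a lookup keyed by both conditions. The sketch below encodes the N=3 example above (K1 value subset combined with RNTI type); the subset labels and the handling of RNTIs other than the MCS-C-RNTI are illustrative assumptions.

```python
# Illustrative sketch of the two-condition (K1 value + RNTI) grouping in
# the N = 3 example above. Combinations not listed in the example (for
# instance, a high K1 value with the C-RNTI) are simply not mapped here.

K1_SUBSET = {k: ("low" if k <= 4 else "high") for k in range(1, 9)}

COMBINED_GROUPING = {            # (K1 subset, RNTI class) -> group number
    ("high", "MCS-C-RNTI"): 1,
    ("low", "other"): 2,         # DCI scrambled with the C-RNTI or CS-RNTI
    ("low", "MCS-C-RNTI"): 3,
}

def group_for(k1_value, rnti):
    rnti_class = "MCS-C-RNTI" if rnti == "MCS-C-RNTI" else "other"
    return COMBINED_GROUPING[(K1_SUBSET[k1_value], rnti_class)]

assert group_for(6, "MCS-C-RNTI") == 1
assert group_for(2, "C-RNTI") == 2
assert group_for(3, "MCS-C-RNTI") == 3
```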
For details, refer to the foregoing methods for determining the grouping relationship based on a single condition. Details are not described herein again. In addition, the network device may further determine the grouping relationship based on more than two of the foregoing conditions. For details, refer to the foregoing methods for determining the grouping relationship based on a single condition. Details are not described herein again. In this embodiment of this application, before determining the grouping relationship according to the foregoing method, the network device may further obtain a parameter N configured by a higher layer. In other words, the quantity of groups into which the network device groups time-frequency resources in one time unit may be configured by the higher layer. It may be understood that, in this application, N groups of time-frequency resources obtained by grouping time-frequency resources in one time unit may overlap or may not overlap. Overlapping may be partial or complete overlapping.FIG.5(b)is a schematic diagram of time unit grouping according to an embodiment of this application. InFIG.5(b), it is assumed that a time unit is a slot, and N is 3. In other words, inFIG.5(b), an example in which time-frequency resources in one slot are grouped into three groups of time-frequency resources is used for illustration. As shown inFIG.5(b), a first group of time-frequency resources occupies time-frequency resources in symbols 1 to 3 in the slot, a second group of time-frequency resources occupies time-frequency resources in symbols 4 to 9 in the slot, and a third group of time-frequency resources occupies time-frequency resources in symbols 8 to 14 in the slot. The first group of time-frequency resources does not overlap the second group of time-frequency resources, the first group of time-frequency resources does not overlap the third group of time-frequency resources, and the second group of time-frequency resources overlaps the third group of time-frequency resources. The foregoing mainly describes how the network device determines the grouping relationship. The following describes in detail how the terminal device implements the communications method provided in this application. It may be understood that the quantity of pieces of DCI received by the terminal device is not limited in this application. The method embodiment corresponding toFIG.4mainly describes how the terminal device performs the method provided in this application when the terminal device receives one piece of DCI. The following further describes the method provided in this embodiment of this application by using an example in which the terminal device receives two pieces of DCI. Certainly, the terminal device may alternatively receive more than two pieces of DCI. An implementation principle is similar. In this application, the following uses an example in which the terminal device receives two pieces of DCI. In a possible implementation, the terminal device may receive second DCI in addition to the first DCI. After receiving the second DCI, the terminal device may determine, in the N groups of time-frequency resources based on the obtained grouping relationship, a time-frequency resource corresponding to a first parameter related to the second DCI. In a possible case, both the first parameter corresponding to the second DCI and the first parameter corresponding to the first DCI correspond to the ithgroup of time-frequency resources in the N groups of time-frequency resources. 
In another possible case, the first parameter corresponding to the second DCI corresponds to the kthgroup of time-frequency resources in the N groups of time-frequency resources, where k is a positive integer less than or equal to N, and k and i are different values. The following separately describes the two possible cases. In the first case, both the first parameter corresponding to the second DCI and the first parameter corresponding to the first DCI correspond to the ithgroup of time-frequency resources in the N groups of time-frequency resources. The terminal device may combine the first HARQ-ACK corresponding to the PDSCH scheduled by the first DCI and a second HARQ-ACK corresponding to a PDSCH scheduled by the second DCI into one hybrid HARQ-ACK, and transmit, on the ithgroup of time-frequency resources, an uplink channel that carries the hybrid HARQ-ACK. In the second case, the first parameter corresponding to the second DCI corresponds to the kthgroup of time-frequency resources in the N groups of time-frequency resources, and the first parameter corresponding to the first DCI corresponds to the ithgroup of time-frequency resources in the N groups of time-frequency resources. The terminal device may determine a second uplink channel that carries a second HARQ-ACK on a second time-frequency resource in the kthgroup of time-frequency resources. In the foregoing second case, the first time-frequency resource may overlap or may not overlap the second time-frequency resource. The following separately describes a case in which the first time-frequency resource overlaps the second time-frequency resource and a case in which the first time-frequency resource does not overlap the second time-frequency resource. In a possible implementation, the first time-frequency resource does not overlap the second time-frequency resource. For example,FIG.5(b)is used as an example for illustration. It is assumed that N is 3, i is 1, and k is 2. The ithgroup of time-frequency resources corresponds to the first group of time-frequency resources inFIG.5(b), the kthgroup of time-frequency resources corresponds to the second group of time-frequency resources inFIG.5(b), and the first group of time-frequency resources does not overlap the second group of time-frequency resources inFIG.5(b). The first time-frequency resource is a time-frequency resource in the first group of time-frequency resources, and the second time-frequency resource is a time-frequency resource in the second group of time-frequency resources. Therefore, the first time-frequency resource does not overlap the second time-frequency resource inFIG.5(b). In this implementation, the terminal device sends the first uplink channel on the first time domain resource, and sends the second uplink channel on the second time domain resource. In this way, the terminal device may transmit, respectively on the two different groups of time-frequency resources in the N groups of time-frequency resources, the first uplink channel that carries the first HARQ-ACK and the second uplink channel that carries the second HARQ-ACK. The first uplink channel and the second uplink channel do not need to be sent on a same PUCCH resource, and a HARQ-ACK that arrives earlier in the first HARQ-ACK and the second HARQ-ACK may be fed back earlier. This can reduce a transmission latency to some extent. In another possible implementation, the first time-frequency resource partially or fully overlaps the second time-frequency resource. 
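The two cases just described, together with the overlap case discussed next, can be summarized in a short sketch. The helper below is purely illustrative: resources are modelled as sets of OFDM symbol indices, HARQ-ACKs as bit lists, and the return value of None stands in for the reselection or combining behaviour explained in the following paragraphs.

```python
# Illustrative recap (hypothetical helper names) of the cases described
# above: if both DCIs map to the same group, their HARQ-ACKs are combined
# on one uplink channel; if they map to different groups whose resources
# do not overlap, they are sent separately.

def plan_transmissions(group_i, res_1, ack_1, group_k, res_2, ack_2):
    """Return a list of (resource, HARQ-ACK bits) transmissions."""
    if group_i == group_k:
        return [(res_1, ack_1 + ack_2)]          # one combined HARQ-ACK
    if res_1.isdisjoint(res_2):
        return [(res_1, ack_1), (res_2, ack_2)]  # feed back independently
    # Overlapping resources from different groups: handled by the
    # reselection/combining methods described in the following paragraphs.
    return None

print(plan_transmissions(1, {2, 3}, [1], 1, {2, 3}, [0]))      # same group
print(plan_transmissions(1, {2, 3}, [1], 2, {5, 6}, [0]))      # disjoint
print(plan_transmissions(2, {7, 8, 9}, [1], 3, {8, 9}, [0]))   # overlap -> None
```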
For example,FIG.5(b)is used as an example for illustration. It is assumed that N is 3, i is 2, and k is 3. The ithgroup of time-frequency resources corresponds to the second group of time-frequency resources inFIG.5(b), the kthgroup of time-frequency resources corresponds to the third group of time-frequency resources inFIG.5(b), and the second group of time-frequency resources partially overlaps the third group of time-frequency resources inFIG.5(b). If the first time-frequency resource determined by the terminal device is symbols 7 to 9 in the second group of time-frequency resources, and the determined second time-frequency resource is symbols 8 to 11 in the third group of time-frequency resources, the first time-frequency resource partially overlaps the second time-frequency resource. If the first time-frequency resource determined by the terminal device is symbols 8 and 9 in the second group of time-frequency resources, and the determined second time-frequency resource is symbols 8 and 9 in the third group of time-frequency resources, the first time-frequency resource fully overlaps the second time-frequency resource. In this implementation, because only one PUCCH is allowed to be transmitted on the overlapping resources, the terminal device needs to reselect a resource regardless of whether the first time-frequency resource partially or fully overlaps the second time-frequency resource. In this application, the terminal device may reselect a resource in the following manner: The terminal device combines the first HARQ-ACK and the second HARQ-ACK into a third HARQ-ACK, and determines a third uplink channel that carries the third HARQ-ACK on a third time-frequency resource, where the third time-frequency resource is a time-frequency resource in a group of time-frequency resources included in the N groups of time-frequency resources. In this application, before determining the third uplink channel that carries the third HARQ-ACK on the third time-frequency resource, the terminal device may further determine the third time-frequency resource. The following provides two methods for determining the third time-frequency resource. In a possible implementation, the terminal device selects a group of time-frequency resources from the ithgroup of time-frequency resources or the kthgroup of time-frequency resources, and determines the third time-frequency resource in the group of time-frequency resources. The following provides description by using an example in which the terminal device determines the third time-frequency resource in the ithgroup of time-frequency resources. The terminal device may determine the third time-frequency resource in the ithgroup of time-frequency resources when determining that the first uplink channel meets one or more of the following conditions. Condition 1: A first time length corresponding to the first uplink channel is shorter than a first time length corresponding to the second uplink channel. It may be understood that when a plurality of time-frequency resources overlap, the terminal device may determine the third time-frequency resource in the ithgroup of time-frequency resources when determining that the first time length corresponding to the first uplink channel is a smallest first time length or is one of smallest first time lengths. Condition 2: The first uplink channel is an uplink channel corresponding to DCI scrambled by a first RNTI. 
The first RNTI is a new RNTI provided in this application, and has the following function: It may be determined, by using the first RNTI, that data of a PDSCH corresponding to a HARQ-ACK is from a first-type service, where the first-type service may be, for example, a URLLC service. Condition 3: The first uplink channel is an uplink channel carried on a time-frequency resource determined based on the K1 value or the SLIV index. In a possible implementation, it is assumed that the first uplink channel is a PUCCH. The terminal device may determine the third time-frequency resource in the ith group of time-frequency resources by using the following method: The terminal device determines, in a first PUCCH resource group, a first PUCCH resource set corresponding to the quantity of bits of the third HARQ-ACK, where the first PUCCH resource group corresponds to a PUCCH transmitted on the ith group of time-frequency resources, and the first PUCCH resource group includes one or more PUCCH resource sets. The terminal device may determine the third time-frequency resource in the first PUCCH resource set after determining the first PUCCH resource set. For example, the terminal device may determine, in the first PUCCH resource set based on a resource indicator value of a third PUCCH, the third time-frequency resource that carries a third HARQ-ACK codebook. The resource indicator of the third PUCCH is a value of a PUCCH resource indicator on a third PDCCH, where the third PDCCH is the last PDCCH that is detected by the terminal device and that is used to schedule a PDSCH in a PDSCH set. In another possible implementation, the terminal device determines the third time-frequency resource in a second PUCCH resource group specially configured for overlapping PUCCH resources. The following describes this implementation in detail by using an example in which the first uplink channel is a PUCCH. The terminal device determines, in the second PUCCH resource group, a second PUCCH resource set corresponding to the quantity of bits of the third HARQ-ACK, where the second PUCCH resource group is configured for a PUCCH that carries the third HARQ-ACK, the second PUCCH resource group includes one or more PUCCH resource sets, and the second PUCCH resource group comprises time-frequency resources in the jth group of time-frequency resources in the N groups of time-frequency resources. It may also be understood that the second PUCCH resource group corresponds to a PUCCH sent on the jth group of time-frequency resources. After determining the second PUCCH resource set, the terminal device may determine the third time-frequency resource in the second PUCCH resource set, where j is a positive integer less than or equal to N, and j, i, and k are different values. For example, the terminal device may determine, in the second PUCCH resource set based on a resource indicator value of a third PUCCH, the third time-frequency resource that carries a third HARQ-ACK codebook. The resource indicator of the third PUCCH is a value of a PUCCH resource indicator on the third PDCCH, and the third PDCCH is the last PDCCH that is detected by the terminal device and that is used to schedule a PDSCH in a PDSCH set. The following describes, by using an implementation, a case in which, when the first time-frequency resource overlaps the second time-frequency resource, the terminal device determines the third time-frequency resource in the second PUCCH resource group specially configured for the overlapping PUCCH resources.
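Both of the implementations above select a PUCCH resource set within a resource group based on the number of HARQ-ACK bits and then pick one resource inside that set using the PUCCH resource indicator from the last scheduling PDCCH. The following sketch shows that selection step only; the data layout and the bit-count thresholds are assumptions for illustration, not configured values.

```python
# Illustrative sketch (structure and field names are assumptions): picking
# a PUCCH resource set from a resource group based on the number of
# HARQ-ACK bits, then picking the resource indicated by the PUCCH
# resource indicator (PRI) in the last detected scheduling PDCCH.

def select_pucch_resource(resource_group, num_harq_ack_bits, pri):
    """resource_group: list of dicts with 'max_bits' and 'resources'."""
    for resource_set in resource_group:
        if num_harq_ack_bits <= resource_set["max_bits"]:
            return resource_set["resources"][pri]
    raise ValueError("HARQ-ACK payload too large for this resource group")

second_group = [
    {"max_bits": 2,  "resources": ["res-0", "res-1", "res-2", "res-3"]},
    {"max_bits": 20, "resources": ["res-4", "res-5", "res-6", "res-7"]},
]
# A 10-bit combined (third) HARQ-ACK with PRI = 1 from the last PDCCH.
print(select_pucch_resource(second_group, 10, 1))   # -> 'res-5'
```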
For example,FIG.6is a schematic diagram of time unit grouping according to an embodiment of this application. InFIG.6, a time unit is a slot, and an example in which time-frequency resources in one time unit are grouped into a first group of time-frequency resources and a second group of time-frequency resources is used for illustration. The first uplink channel is sent on the first group of time-frequency resources, and the second uplink channel is sent on the second group of time-frequency resources. The first uplink channel carries four HARQ-ACKs indicated by DCI #1 to DCI #4, and the four HARQ-ACKs are carried on the first uplink channel. A PUCCH resource set2is selected from the first PUCCH resource group based on the quantity of bits (for example, 10 bits) of the four HARQ-ACKs, because the PUCCH resource set2corresponds to a bit quantity range: 10 bits to 20 bits. Similarly, four HARQ-ACKs indicated by DCI #5 to DCI #8 are carried on the second uplink channel, and a PUCCH resource set3is selected from the second PUCCH resource group based on a quantity20of bits of the four HARQ-ACKs. When resources that carry a HARQ-ACK and that correspond to the PUCCH resource set2in the first PUCCH resource group overlap resources that carry a HARQ-ACK and that correspond to the PUCCH resource set3in the second PUCCH resource group, HARQ-ACK codebooks originally carried on the two PUCCHs may be jointly encoded as 30 bits by using the method in this application. If a PUCCH resource set is to be reselected from the second PUCCH resource group corresponding to the second uplink channel, the PUCCH resource set3is still selected. In this case, if the DCI #1 to the DCI #4 are lost during downlink transmission, only a 20-bit HARQ-ACK codebook indicated by the DCI #5 to the DCI #8 is transmitted on the second uplink channel. In this case, the network device is unaware of a loss of the DCI #1 to the DCI #4 during transmission. Therefore, when receiving the second uplink channel, the network device does not know whether the second uplink channel should be decoded by using 20 bits or 30 bits. For this case, in this embodiment of this application, a PUCCH resource group is specially configured for overlapping PUCCH resources. To be specific, in this application, PUCCH resource groups are separately configured for overlapping PUCCH resources and non-overlapping PUCCH resources. In this way, reliability of transmitting an uplink channel can be improved. In still another possible implementation, when the first time-frequency resource partially or fully overlaps the second time-frequency resource, the terminal device sends only an uplink channel that meets a preset condition, and discards the other uplink channel. It may be understood that when more than two time-frequency resources overlap, the terminal device may send one uplink channel that meets the preset condition, and discard other uplink channels. The foregoing preset condition is explained and described by using an example in which the terminal device sends the first uplink channel. If the terminal device sends the first uplink channel, the first uplink channel meets one or more of the following preset conditions: Condition 1: A first time length corresponding to the first uplink channel is shorter than a first time length corresponding to the second uplink channel. Condition 2: The first uplink channel is an uplink channel corresponding to DCI scrambled with a first RNTI. 
Condition 3: The first uplink channel is an uplink channel carried on a time-frequency resource determined based on the K1 value or the SLIV index. It should be noted that, in this embodiment of this application, the first DCI and the second DCI may be received by the terminal device from a same network device, or may be received by the terminal device from different network devices. When the first DCI and the second DCI are received from different network devices, the two different network devices may both be transport points (TRP). When the first DCI and the second DCI are received from different network devices, if there is a non-ideal backhaul line between the different network devices, because network devices on the non-ideal backhaul line cannot learn of scheduling statuses of each other in real time, in this scenario, a network device sending the first DCI and a network device sending the second DCI cannot decode a HARQ-ACK codebook obtained by jointly coding the first HARQ-ACK and the second HARQ-ACK. For example, referring toFIG.7, an example in which a network device is a TRP is used. It is assumed that the first DCI and the second DCI are received from different TRPs, the two different TRPs are a TRP #A and a TRP #B, the first DCI is sent by the TRP #A to the terminal device, the second DCI is sent by the TRP #B to the terminal device, and both feedback information HARQ-ACK #1 corresponding to a PDSCH #1 scheduled by the first DCI and feedback information HARQ-ACK #2 corresponding to a PDSCH #2 scheduled by the second DCI are indicated to be sent in a slot n. If there is a non-ideal backhaul line between the TRP #A and the TRP #B, that is, if the TRP #A and the TRP #B cannot learn of scheduling statuses of each other in real time, neither the TRP #A nor the TRP #B can decode a HARQ-ACK codebook obtained by jointly encoding the HARQ-ACK #1 and the HARQ-ACK #2. Based on the foregoing problem, this embodiment of this application further provides a HARQ-ACK sending method. In this method, when the first time-frequency resource partially or fully overlaps the second time-frequency resource, the terminal device determines the first uplink channel that carries the first HARQ-ACK on a fourth time-frequency resource, and the second uplink channel that carries the second HARQ-ACK on a fifth time-frequency resource, where the fourth time-frequency resource is a time-frequency resource in the mthgroup of time-frequency resources included in the N groups of time-frequency resources, the fifth time-frequency resource is a time-frequency resource in the nthgroup of time-frequency resources included in the N groups of time-frequency resources, m and n are positive integers less than or equal to N, and m and n are different values. Optionally, no time-frequency resource in the mthgroup of time-frequency resources overlaps a time-frequency resource in the nthgroup of time-frequency resources in time domain. An example in which the uplink channel is a PUCCH is used. The foregoing implementation may alternatively be understood as: When time-frequency resources corresponding to two PUCCHs overlap, a step of selecting time-frequency resources corresponding to the two PUCCHs is rolled back. In other words, non-overlapping time-frequency resources (which may alternatively be described as PUCCH resources) are reselected for the two PUCCHs that carry the HARQ-ACKs, so that the HARQ-ACKs carried on the two PUCCHs may be separately sent on the non-overlapping time-frequency resources. 
In the foregoing method, an example in which only two HARQ-ACKs are sent is used for description. When more than two HARQ-ACKs are sent, the foregoing method is still applicable. For example, when more than two HARQ-ACKs are sent, and time-frequency resources corresponding to PUCCHs that carry the HARQ-ACKs overlap, the foregoing method may still be used to determine to separately send the HARQ-ACKs on more than two non-overlapping PUCCH resources. It should be noted that the foregoing method is not limited to a scenario in which there is a non-ideal backhaul line between network devices. In another scenario, the foregoing method is still applicable. It should be further noted that, in the foregoing non-ideal backhaul line scenario, one HARQ-ACK may alternatively be discarded, and only the other HARQ-ACK is transmitted. For details, refer to the foregoing description of the method for discarding a HARQ-ACK. Details are not described herein again. Before the terminal device determines the first uplink channel that carries the first HARQ-ACK on the fourth time-frequency resource, and the second uplink channel that carries the second HARQ-ACK on the fifth time-frequency resource, the terminal device further needs to determine the fourth time-frequency resource and the fifth time-frequency resource. The following specifically describes how the terminal device determines the fourth time-frequency resource and the fifth time-frequency resource by using an example in which the first uplink channel and the second uplink channel are PUCCHs. In a possible implementation, the terminal device determines, in a third PUCCH resource group, a third PUCCH resource set corresponding to the quantity of bits of the first HARQ-ACK, and determines the fourth time-frequency resource in the third PUCCH resource set. The terminal device determines, in a fourth PUCCH resource group, a fourth PUCCH resource set corresponding to the quantity of bits of the second HARQ-ACK, and determines the fifth time-frequency resource in the fourth PUCCH resource set. The third PUCCH resource group includes one or more PUCCH resource sets, and the third PUCCH resource group comprises time-frequency resources in the mth group of time-frequency resources. The fourth PUCCH resource group includes one or more PUCCH resource sets, and the fourth PUCCH resource group comprises time-frequency resources in the nth group of time-frequency resources. The third PUCCH resource group and the fourth PUCCH resource group are used to select a PUCCH resource when the first time-frequency resource partially or fully overlaps the second time-frequency resource. The third PUCCH resource group and the fourth PUCCH resource group may be preconfigured, or may be configured by the network device, for example, may be configured by the network device (for example, a base station) by using a higher layer parameter or radio resource control (RRC) signaling. It should be further noted that the third PUCCH resource group and the fourth PUCCH resource group may meet the following condition: No PUCCH resource in the third PUCCH resource group overlaps a PUCCH resource in the fourth PUCCH resource group in time domain. This may alternatively be understood as: No PUCCH resource in the third PUCCH resource group has an OFDM symbol in common with any PUCCH resource in the fourth PUCCH resource group.
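The non-overlap condition on the third and fourth PUCCH resource groups can be checked mechanically. In the sketch below, each PUCCH resource is modelled only by the set of OFDM symbols it occupies; the example groups mirror the "first seven symbols / last seven symbols" split shown later in FIG. 8.

```python
# Illustrative sketch: verifying the stated condition that no PUCCH
# resource in the third resource group shares an OFDM symbol with any
# PUCCH resource in the fourth resource group.

def groups_are_time_disjoint(group_a, group_b):
    """True if no resource in group_a overlaps any resource in group_b."""
    return all(
        res_a.isdisjoint(res_b)
        for res_a in group_a
        for res_b in group_b
    )

third_group = [set(range(0, 4)), set(range(4, 7))]      # first 7 symbols
fourth_group = [set(range(7, 10)), set(range(10, 14))]  # last 7 symbols
assert groups_are_time_disjoint(third_group, fourth_group)
```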
In this way, when time-frequency resources corresponding to two PUCCHs overlap, non-overlapping PUCCH resources may be selected from two preconfigured PUCCH resource groups (for example, the third PUCCH resource group and the fourth PUCCH resource group) that have no overlapping PUCCH resource for HARQ-ACKs carried on the two PUCCHs. For example, the preconfigured PUCCH resource groups are the third PUCCH resource group and the fourth PUCCH resource group. The terminal device may select one PUCCH resource from each of the third PUCCH resource group and the fourth PUCCH resource group based on quantities of bits of the HARQ-ACKs and a PUCCH resource indicator (or referred to as an ACK-NACK resource indicator, ARI) in the last DCI, to transmit two HARQ-ACK codebooks that are carried on two originally overlapping PUCCHs. FIG.8is a possible schematic diagram of the third PUCCH resource group and the fourth PUCCH resource group. InFIG.8, an example in which the third PUCCH resource group includes a PUCCH resource set1to a PUCCH resource set4and the fourth PUCCH resource group includes a PUCCH resource set1to a PUCCH resource set4is used for illustration. It can be learned fromFIG.8that the third PUCCH resource group occupies the first seven symbols in a slot m, the fourth PUCCH resource group occupies the last seven symbols in the slot m, and no PUCCH resource in the third PUCCH resource group overlaps a PUCCH resource in the fourth PUCCH resource group in time domain. It may be understood thatFIG.8is merely a possible schematic diagram, and does not constitute a limitation. This embodiment of this application further provides another HARQ-ACK sending method. In the method, when the first time-frequency resource partially or fully overlaps the second time-frequency resource, the terminal device still uses the first time-frequency resource to carry the first uplink channel of the first HARQ-ACK, and the terminal device determines a second uplink channel that carries the second HARQ-ACK on a sixth time-frequency resource, where the sixth time-frequency resource is a time-frequency resource in the sthgroup of time-frequency resources included in the N groups of time-frequency resources, s is a positive integer less than or equal to N, and s and i are different values. Optionally, the sixth time-frequency resource in the sthgroup of time-frequency resources does not overlap the first time-frequency resource in the ithgroup of time-frequency resources in time domain. According to this implementation, when the first time-frequency resource partially or fully overlaps the second time-frequency resource, the terminal device may keep the first time-frequency resource that carries the first HARQ-ACK unchanged, and select the time-frequency resource that carries the second HARQ-ACK. For example, the uplink channels are PUCCHs. When time-frequency resources corresponding to the two PUCCHs overlap, the HARQ-ACKs may be sent on two non-overlapping PUCCH resources by using this method. Before the terminal device determines the first uplink channel that carries the first HARQ-ACK still on the first time-frequency resource, and determines the second uplink channel that carries the second HARQ-ACK on the sixth time-frequency resource, the terminal device further needs to determine the first time-frequency resource and the sixth time-frequency resource. 
The following specifically describes how the terminal device determines the first time-frequency resource and the sixth time-frequency resource by using an example in which the first uplink channel and the second uplink channel are PUCCHs. In a possible implementation, the terminal device determines, in a first PUCCH resource group, a fifth PUCCH resource set corresponding to the quantity of bits of the first HARQ-ACK, and determines the first time-frequency resource in the fifth PUCCH resource set. The terminal device determines, in a fifth PUCCH resource group, a sixth PUCCH resource set corresponding to the quantity of bits of the second HARQ-ACK, and determines the sixth time-frequency resource in the sixth PUCCH resource set. The first PUCCH resource group corresponds to a PUCCH transmitted on the ith group of time-frequency resources, the first PUCCH resource group includes one or more PUCCH resource sets, and the fifth PUCCH resource group includes one or more PUCCH resource sets. The fifth PUCCH resource group is used to reselect a PUCCH resource when the first time-frequency resource partially or fully overlaps the second time-frequency resource. The foregoing fifth PUCCH resource group may be preconfigured. It may be understood that the sixth time-frequency resource may be selected from a preconfigured PUCCH resource group. The preconfigured PUCCH resource group may be configured by the network device, for example, may be configured by the network device (for example, a base station) by using a higher layer parameter or radio resource control (RRC) signaling. Optionally, all PUCCH resources in the preconfigured PUCCH resource group may be located on several symbols at an edge of a slot, occupy relatively few time domain resources, and are unlikely to overlap another resource. For example, referring to FIG. 9, it is assumed that the preconfigured PUCCH resource group includes four PUCCH resource sets: a PUCCH resource set 1, a PUCCH resource set 2, a PUCCH resource set 3, and a PUCCH resource set 4. Each PUCCH resource set includes a PUCCH resource. In FIG. 9, these PUCCH resources are located at an edge of a slot m, occupy relatively few time domain resources, and are unlikely to overlap another resource. In a possible implementation of the foregoing method, the first uplink channel meets one or more of the following conditions: Condition 1: A first time length corresponding to the first uplink channel is shorter than a first time length corresponding to the second uplink channel. Condition 2: The first uplink channel is an uplink channel corresponding to DCI scrambled with a first RNTI, and the first RNTI may be an MCS-C-RNTI. Condition 3: The first uplink channel is an uplink channel carried on a time-frequency resource determined based on the K1 value or the SLIV index. According to the method provided in the foregoing implementation, when feedback information HARQ-ACKs corresponding to PDSCHs from at least two network devices (for example, TRPs connected by a non-ideal backhaul line) are transmitted in one slot, and a plurality of PUCCH resources that carry the HARQ-ACKs overlap, non-overlapping PUCCH resources can be determined for all the HARQ-ACKs by using the foregoing method, and a HARQ-ACK corresponding to a PDSCH sent by each network device is sent to the network device. In this way, a transmission latency can be reduced, and transmission efficiency is improved.
This can further avoid a problem that the network devices cannot decode a jointly encoded HARQ-ACK codebook that they receive, and can ensure that all HARQ-ACKs that need to be transmitted in one slot can be transmitted in a timely manner. In this embodiment of this application, when the first time-frequency resource overlaps the second time-frequency resource, the terminal device may reallocate time domain resources for transmitting PUCCHs, so that an error can be avoided when the first uplink channel and the second uplink channel are respectively transmitted on the first time-frequency resource and the second time-frequency resource that overlap, thereby improving reliability of transmitting an uplink channel. This embodiment of this application further provides a communications method. The method includes: The terminal device receives first DCI and second DCI; determines, in a preconfigured first PUCCH resource group, a first time-frequency resource used to send a first uplink channel; determines, in a preconfigured second PUCCH resource group, a second time-frequency resource used to send a second uplink channel; sends the first uplink channel on the first time-frequency resource; and sends the second uplink channel on the second time-frequency resource, where the first PUCCH resource group and the second PUCCH resource group are PUCCH resource groups configured for a same slot, the first uplink channel is used to carry a first HARQ-ACK scheduled by the first DCI, and the second uplink channel is used to carry a second HARQ-ACK scheduled by the second DCI. The first DCI corresponds to the first PUCCH resource group, and the second DCI corresponds to the second PUCCH resource group. This may be understood as: A PUCCH resource group may be preconfigured based on a DCI-related condition. In a possible design, the PUCCH resource group may be configured based on one or more of the following DCI-related conditions: 1. PDCCH monitoring occasion: A PDCCH monitoring occasion indicates a location, in a time unit (for example, a slot), of a start symbol of an occasion on which the terminal device detects a PDCCH. For example, the terminal device may obtain a potential time domain location of the PDCCH monitoring occasion in the slot based on higher-layer configuration information such as a PDCCH monitoring pattern parameter. When the start symbol of the PDCCH monitoring occasion belongs to the first half of the slot, DCI carried by the PDCCH may correspond to the first PUCCH resource group. When the start symbol of the PDCCH monitoring occasion belongs to the second half of the slot, the DCI carried by the PDCCH corresponds to the second PUCCH resource group. 2. Search space identity (SS ID): The terminal device monitors a PDCCH candidate set (or referred to as a search space), and attempts to decode each PDCCH in the set by monitoring a DCI format. For example, assuming that aggregation levels corresponding to a first SS ID are {1, 2, 4, 8}, and aggregation levels corresponding to a second SS ID are {1, 2, 8}, it may be preconfigured that the first SS ID corresponds to the first PUCCH resource group, and the second SS ID corresponds to the second PUCCH resource group.
In this configuration case, after the terminal device receives the first DCI, if the terminal device determines that the first DCI corresponds to the first SS ID, the terminal device may correspondingly determine, in the first PUCCH resource group, the first time-frequency resource used to send the first uplink channel that carries the first HARQ-ACK scheduled by the first DCI. Similarly, after the terminal device receives the second DCI, if the terminal device determines that the second DCI corresponds to the second SS ID, the terminal device may correspondingly determine, in the second PUCCH resource group, the second time-frequency resource used to send the second uplink channel that carries the second HARQ-ACK scheduled by the second DCI. 3. RNTI: An RNTI is used to scramble an information bit of DCI. The terminal device separately attempts descrambling with several possible RNTI values. If an information bit obtained after descrambling with an RNTI value passes a CRC check, it indicates that the DCI is scrambled with that RNTI. An RNTI of DCI carried by a PDCCH configured by a higher layer may include an existing RNTI such as a C-RNTI, a CS-RNTI, a P-RNTI, or an SI-RNTI, or may include a new RNTI. For example, the new RNTI may be referred to as an X-RNTI. The name of the new RNTI is not limited in this application, and the new RNTI may be referred to by another name. There may be one or more types of X-RNTIs. A typical feature is that a value of the new RNTI is not equal to a value of the existing RNTI (for example, the C-RNTI, the CS-RNTI, the P-RNTI, or the SI-RNTI). A typical function may include: The new RNTI is used to indicate that data of a PDSCH scheduled by a PDCCH is from a first-type service, for example, the URLLC service. The X-RNTI may be an MCS-C-RNTI or another RNTI that identifies an ultra-reliable low-latency service. In this case, a PUCCH resource group may be divided based on a type of an RNTI of a PDCCH. For example, DCI carried on a PDCCH corresponding to the existing RNTI (for example, the C-RNTI, the CS-RNTI, the P-RNTI, or the SI-RNTI) corresponds to the first PUCCH resource group, and DCI carried on a PDCCH corresponding to the new RNTI (for example, the X-RNTI) corresponds to the second PUCCH resource group. In this configuration case, after the terminal device receives the first DCI, if the terminal device determines that a PDCCH that carries the first DCI corresponds to the existing RNTI, the terminal device may correspondingly determine, in the first PUCCH resource group, the first time-frequency resource used to send the first uplink channel that carries the first HARQ-ACK scheduled by the first DCI. Similarly, after the terminal device receives the second DCI, if the terminal device determines that a PDCCH that carries the second DCI corresponds to the new RNTI, the terminal device may correspondingly determine, in the second PUCCH resource group, the second time-frequency resource used to send the second uplink channel that carries the second HARQ-ACK scheduled by the second DCI. 4. DCI format: The DCI format may be used to distinguish between DCI carried on PDCCHs. The terminal device may attempt to decode each DCI format with a different quantity of bits (payload size) through PDCCH blind detection, perform a cyclic redundancy check (CRC), determine, through the CRC, the quantity of bits of the DCI corresponding to the PDCCH, and further determine a DCI format of the PDCCH with reference to a format indicator byte in the decoded DCI.
A DCI format configured by a higher layer may include a format 1_0, a format 1_1, and a format 1_x. The format 1_0 and the format 1_1 may be existing DCI formats. The format 1_x may be a new DCI format different from the format 1_0 and the format 1_1. There may be one or more types of format 1_x. The format 1_x may be a DCI format that identifies an ultra-reliable low-latency service. A typical feature of the new DCI format may include: The format 1_x has a quantity of bits different from those of the format 1_0 and the format 1_1. In this application, a PUCCH resource group may be divided based on a type of a DCI format. For example, DCI whose DCI format is an existing DCI format (for example, the format 1_0 or the format 1_1) may correspond to the first PUCCH resource group, and DCI whose DCI format is the new DCI format (for example, the format 1_x) may correspond to the second PUCCH resource group. In this configuration case, after the terminal device receives the first DCI, if the terminal device determines that a DCI format corresponding to the first DCI is the existing DCI format, the terminal device may correspondingly determine, in the first PUCCH resource group, the first time-frequency resource used to send the first uplink channel that carries the first HARQ-ACK scheduled by the first DCI. Similarly, after the terminal device receives the second DCI, if the terminal device determines that a DCI format corresponding to the second DCI is the new DCI format, the terminal device may correspondingly determine, in the second PUCCH resource group, the second time-frequency resource used to send the second uplink channel that carries the second HARQ-ACK scheduled by the second DCI. 5. Network device that sends the DCI: If the first DCI and the second DCI are sent by a first network device and a second network device respectively, it may be configured, by using a higher layer parameter, that the first DCI corresponds to the first PUCCH resource group and the second DCI corresponds to the second PUCCH resource group. It should be noted that the first DCI and the second DCI may be from a same network device, or may be from different network devices. In other words, the first network device and the second network device may be a same network device, or may be different network devices. In a possible design, when the first time-frequency resource partially or fully overlaps the second time-frequency resource, the terminal device reselects, for the first uplink channel and/or the second uplink channel, a time-frequency resource used to send the first uplink channel and/or the second uplink channel. In a possible implementation, when the first time-frequency resource partially or fully overlaps the second time-frequency resource, the terminal device reselects, for the first uplink channel and the second uplink channel, time-frequency resources used to send the first uplink channel and the second uplink channel. In this implementation, the terminal device may determine, in a preconfigured third PUCCH resource group, a third time-frequency resource used to send the first uplink channel, determine, in a preconfigured fourth PUCCH resource group, a fourth time-frequency resource used to send the second uplink channel, send the first uplink channel on the third time-frequency resource, and send the second uplink channel on the fourth time-frequency resource.
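Putting the pieces together, the overlap-triggered reselection described in this implementation might be sketched as follows. The helper names, the use of the PUCCH resource indicator as a plain list index, and the concrete symbol sets are all assumptions made for illustration.

```python
# Illustrative sketch (hypothetical helpers): if the originally selected
# first and second time-frequency resources overlap, both uplink channels
# fall back to resources taken from the preconfigured third and fourth
# PUCCH resource groups, which do not share any OFDM symbol. Resources
# are modelled as sets of occupied symbols.

def resolve_overlap(first_res, second_res, third_group, fourth_group, pri=0):
    """Return the (first, second) resources actually used for transmission."""
    if first_res.isdisjoint(second_res):
        return first_res, second_res            # no overlap: keep both
    # Overlap: reselect one resource from each preconfigured group.
    return third_group[pri], fourth_group[pri]

third_group = [set(range(0, 4)), set(range(4, 7))]      # first half of slot
fourth_group = [set(range(7, 10)), set(range(10, 14))]  # second half of slot
first_res, second_res = {6, 7, 8}, {8, 9}               # overlapping picks
print(resolve_overlap(first_res, second_res, third_group, fourth_group, pri=1))
# -> ({4, 5, 6}, {10, 11, 12, 13})
```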
In a possible design, no PUCCH resource in the third PUCCH resource group has an OFDM symbol in common with any PUCCH resource in the fourth PUCCH resource group. In other words, a PUCCH resource in the third PUCCH resource group does not overlap a PUCCH resource in the fourth PUCCH resource group. In this design, the first uplink channel and the second uplink channel have no common OFDM symbol, so that the first HARQ-ACK and the second HARQ-ACK can be carried on different uplink channels in one slot for separate sending. In a possible implementation, when the first time-frequency resource partially or fully overlaps the second time-frequency resource, the terminal device reselects, for the second uplink channel, a time-frequency resource used to send the second uplink channel. In this implementation, the terminal device may determine, in a preconfigured fifth PUCCH resource group, a fifth time-frequency resource used to send the second uplink channel, and send the second uplink channel on the fifth time-frequency resource. In a possible design, no PUCCH resource in the first PUCCH resource group has an OFDM symbol in common with any PUCCH resource in the fifth PUCCH resource group. In other words, a PUCCH resource in the first PUCCH resource group does not overlap a PUCCH resource in the fifth PUCCH resource group. In this design, the first uplink channel and the second uplink channel have no common OFDM symbol, so that the first HARQ-ACK and the second HARQ-ACK can be carried on different uplink channels in one slot for separate sending. It should be noted that, in this application, reselecting a time-frequency resource means discarding a previously selected time-frequency resource and selecting a time-frequency resource again. For example, that the terminal device reselects, for the second uplink channel, a time-frequency resource used to send the second uplink channel may be understood as: The terminal device discards the time-frequency resource that is determined for the second uplink channel before the reselection, and reselects a time-frequency resource for the second uplink channel. The foregoing mainly describes the solutions provided in the embodiments of this application from a perspective of interaction between the terminal device and the network device. It may be understood that to implement the foregoing functions, the terminal device and the network device include corresponding hardware structures and/or software modules for performing the functions. A person skilled in the art should easily be aware that, in combination with the examples described in the embodiments disclosed in this specification, algorithm steps may be implemented by hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by hardware driven by computer software depends on particular applications and design constraints of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of this application. In the embodiments of this application, function modules of the terminal device and the network device may be divided based on the foregoing method examples. For example, each function module may be obtained through division based on each corresponding function, or two or more functions may be integrated into one processing module.
The integrated module may be implemented in a form of hardware, or may be implemented in a form of a software function module. It should be noted that, in the embodiments of this application, division into the modules is an example, and is merely logical function division. During actual implementation, another division manner may be used. Based on a same inventive concept, an embodiment of this application further provides an apparatus configured to implement any method in the embodiments of this application. For example, the apparatus includes units (or means) configured to implement steps performed by the terminal device in any method in the embodiments of this application. For another example, another apparatus is further provided, including units (or means) configured to implement the steps performed by the network device in any method in the embodiments of this application. In a possible implementation, an embodiment of this application provides a communications apparatus700. The communications apparatus700may be used in a terminal device.FIG.10is a schematic structural diagram of the communications apparatus700according to an embodiment of this application. Referring toFIG.10, the communications apparatus700may include an obtaining unit701, a receiving unit702, and a processing unit703. Based on the communications method shown inFIG.4, in the communications apparatus700shown inFIG.10, the obtaining unit701may be used by the communications apparatus700to perform the step shown in S101, the receiving unit702may be used by the communications apparatus700to perform the step shown in S102, and the processing unit703may be used by the communications apparatus700to perform the step shown in S103or S104. In another possible implementation, an embodiment of this application provides a communications apparatus800. The communications apparatus800may be used in a network device.FIG.11is a schematic structural diagram of the communications apparatus800according to an embodiment of this application. Referring toFIG.11, the communications apparatus800may include a sending unit801. In an implementation, the communications apparatus800may further include a processing unit802. Based on the communications method shown inFIG.4, in the communications apparatus800shown inFIG.11, the sending unit801may be used by the communications apparatus800to perform the step shown in S102. When the communications apparatus700is used in a terminal device, and the communications apparatus800is used in a network device, the following operations may be further performed: In a possible design, the first parameter includes one or more of a K1 value, a first time length, a codebook identifier, a radio network temporary identifier RNTI, an uplink channel end symbol, a physical downlink control channel PDCCH monitoring occasion, or a start and length indicator value SLIV index, the K1 value is a quantity of time units offset from a time unit in which a PDSCH is located to a time unit in which an uplink channel of a HARQ-ACK corresponding to the PDSCH is located, and the first time length represents a time length corresponding to the K1 value. In a possible design, the receiving unit702is further configured to: receive second DCI.
The processing unit703is further configured to: determine, in the N groups of time-frequency resources based on the grouping relationship, the kthgroup of time-frequency resources corresponding to a first parameter related to the second DCI received by the receiving unit702, where k is a positive integer less than or equal to N, and k and i are different values; and determine a second uplink channel that carries a second HARQ-ACK on a second time-frequency resource in the kthgroup of time-frequency resources. In a possible design, the processing unit703is further configured to: when the first time-frequency resource partially or fully overlaps the second time-frequency resource, combine the first HARQ-ACK and the second HARQ-ACK into a third HARQ-ACK, and determine a third uplink channel that carries the third HARQ-ACK on a third time-frequency resource, where the third time-frequency resource is a time-frequency resource in a group of time-frequency resources included in the N groups of time-frequency resources. In a possible design, the third time-frequency resource is a time-frequency resource in the ithgroup of time-frequency resources when the first uplink channel meets one or more of the following conditions: a first time length corresponding to the first uplink channel is shorter than a first time length corresponding to the second uplink channel; the first uplink channel is an uplink channel corresponding to DCI scrambled with a first RNTI; and the first uplink channel is an uplink channel carried on a time-frequency resource determined based on the K1 value or the SLIV index. In a possible design, the first uplink channel is a PUCCH; and the processing unit703is further configured to: determine, in a first PUCCH resource group, a first PUCCH resource set corresponding to the quantity of bits of the third HARQ-ACK, where the first PUCCH resource group corresponds to a PUCCH transmitted on the ithgroup of time-frequency resources, and the first PUCCH resource group includes one or more PUCCH resource sets; and determine the third time-frequency resource in the first PUCCH resource set. In a possible design, the first uplink channel is a PUCCH; and the processing unit703is further configured to: determine, in a second PUCCH resource group, a second PUCCH resource set corresponding to the quantity of bits of the third HARQ-ACK, where the second PUCCH resource group is configured for a PUCCH that carries the third HARQ-ACK, the second PUCCH resource group includes one or more PUCCH resource sets, and the second PUCCH resource group comprises time-frequency resources in the jthgroup of time-frequency resources in the N groups of time-frequency resources; and determine the third time-frequency resource in the second PUCCH resource set, where j is a positive integer less than or equal to N, and j, i, and k are different values.
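As a hedged illustration of selecting a PUCCH resource set according to the quantity of bits of the combined HARQ-ACK, the sketch below assumes a resource group organized as payload-size tiers; the bit thresholds and resource names are assumptions for illustration, not values specified in this application.

```python
# Illustrative sketch: choosing a PUCCH resource set inside a resource group
# according to the quantity of bits of the HARQ-ACK to be carried. The bit
# thresholds and resource names below are assumptions, not values defined here.

from typing import List, Tuple

# Each entry: (maximum HARQ-ACK bits the set can carry, PUCCH resources in the set)
PUCCH_RESOURCE_GROUP: List[Tuple[int, List[str]]] = [
    (2, ["short-pucch-a", "short-pucch-b"]),
    (16, ["pucch-set1-a", "pucch-set1-b"]),
    (128, ["pucch-set2-a"]),
]

def select_resource_set(num_harq_ack_bits: int) -> List[str]:
    """Return the first resource set whose capacity fits the HARQ-ACK payload."""
    for max_bits, resource_set in PUCCH_RESOURCE_GROUP:
        if num_harq_ack_bits <= max_bits:
            return resource_set
    raise ValueError("HARQ-ACK payload exceeds every configured resource set")

# Example: a combined (third) HARQ-ACK of 4 bits selects the second set.
print(select_resource_set(4))  # ['pucch-set1-a', 'pucch-set1-b']
```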
In a possible design, the processing unit703is further configured to: when the first time-frequency resource partially or fully overlaps the second time-frequency resource, determine the first uplink channel that carries the first HARQ-ACK on a fourth time-frequency resource, and the second uplink channel that carries the second HARQ-ACK on a fifth time-frequency resource, where the fourth time-frequency resource is a time-frequency resource in the mthgroup of time-frequency resources included in the N groups of time-frequency resources, the fifth time-frequency resource is a time-frequency resource in the nthgroup of time-frequency resources included in the N groups of time-frequency resources, m and n are positive integers less than or equal to N, and m and n are different values. In a possible design, the mthgroup of time-frequency resources does not overlap the nthgroup of time-frequency resources in time domain. In a possible design, the first uplink channel and the second uplink channel are PUCCHs, and the processing unit703is further configured to: determine, in a third PUCCH resource group, a third PUCCH resource set corresponding to the quantity of bits of the first HARQ-ACK, where the third PUCCH resource group includes one or more PUCCH resource sets, and the third PUCCH resource group comprises time-frequency resources in the mthgroup of time-frequency resources; determine the fourth time-frequency resource in the third PUCCH resource set; determine, in a fourth PUCCH resource group, a fourth PUCCH resource set corresponding to the quantity of bits of the second HARQ-ACK, where the fourth PUCCH resource group includes one or more PUCCH resource sets, and the fourth PUCCH resource group comprises time-frequency resources in the nth group of time-frequency resources; and determine the fifth time-frequency resource in the fourth PUCCH resource set, where both the third PUCCH resource group and the fourth PUCCH resource group are preconfigured. In a possible design, the processing unit703is further configured to: when the first time-frequency resource partially or fully overlaps the second time-frequency resource, determine the second uplink channel that carries the second HARQ-ACK on a sixth time-frequency resource, where the sixth time-frequency resource is a time-frequency resource in the sthgroup of time-frequency resources included in the N groups of time-frequency resources, s is a positive integer less than or equal to N, and s and i are different values. In a possible design, the sixth time-frequency resource in the sthgroup of time-frequency resources does not overlap the first time-frequency resource in the ithgroup of time-frequency resources in time domain.
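The time-domain non-overlapping condition above can be checked with a small helper. The following sketch is only one possible realization, and the dataclass fields (start symbol, number of symbols) are illustrative, not signalling fields of this application.

```python
# Illustrative helper for the time-domain non-overlapping condition above. A
# PUCCH resource is modelled only by its start OFDM symbol and symbol count
# within the slot; these fields are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class TimeFrequencyResource:
    start_symbol: int   # first OFDM symbol index in the slot (0..13)
    num_symbols: int    # number of consecutive OFDM symbols

    def symbols(self) -> set:
        return set(range(self.start_symbol, self.start_symbol + self.num_symbols))

def overlaps_in_time(a: TimeFrequencyResource, b: TimeFrequencyResource) -> bool:
    """True if the two resources share at least one OFDM symbol."""
    return bool(a.symbols() & b.symbols())

# Example: symbols 0-3 and 4-7 share no OFDM symbol, so the two HARQ-ACKs can
# be carried on different uplink channels in one slot for separate sending.
fourth = TimeFrequencyResource(start_symbol=0, num_symbols=4)
fifth = TimeFrequencyResource(start_symbol=4, num_symbols=4)
assert not overlaps_in_time(fourth, fifth)
```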
In a possible design, the first uplink channel and the second uplink channel are PUCCHs, and the processing unit703is further configured to: determine, in a first PUCCH resource group, a fifth PUCCH resource set corresponding to the quantity of bits of the first HARQ-ACK, where the first PUCCH resource group corresponds to a PUCCH transmitted on the ithgroup of time-frequency resources, and the first PUCCH resource group includes one or more PUCCH resource sets; determine the first time-frequency resource in the fifth PUCCH resource set; determine, in a fifth PUCCH resource group, a sixth PUCCH resource set corresponding to the quantity of bits of the second HARQ-ACK, where the fifth PUCCH resource group includes one or more PUCCH resource sets, and the fifth PUCCH resource group is preconfigured; and determine the sixth time-frequency resource in the sixth PUCCH resource set. In a possible design, the first uplink channel meets one or more of the following conditions: a first time length corresponding to the first uplink channel is shorter than a first time length corresponding to the second uplink channel; the first uplink channel is an uplink channel corresponding to DCI scrambled with a first RNTI; and the first uplink channel is an uplink channel carried on a time-frequency resource determined based on the K1 value or the SLIV index. In a possible design, the processing unit802is configured to determine the grouping relationship based on one or more of the following conditions: the K1 value; the first time length; the SLIV index; the codebook identifier; the RNTI; the uplink channel end symbol; and the PDCCH monitoring occasion. In another possible communications method, the communications apparatus700may further include a sending unit704. The receiving unit702is configured to receive first DCI and second DCI. The processing unit703is configured to determine, in a preconfigured first PUCCH resource group, a first time-frequency resource used to send a first uplink channel, and determine, in a preconfigured second PUCCH resource group, a second time-frequency resource used to send a second uplink channel. The sending unit704is configured to send the first uplink channel on the first time-frequency resource, and send the second uplink channel on the second time-frequency resource. The first PUCCH resource group and the second PUCCH resource group are PUCCH resource groups configured for a same slot, the first uplink channel is used to carry a first hybrid automatic repeat request-acknowledgment HARQ-ACK scheduled by the first DCI, and the second uplink channel is used to carry a second HARQ-ACK scheduled by the second DCI. It should be noted that the first DCI and the second DCI may be from a same network device, or may be from different network devices. In a possible design, the processing unit703is further configured to: when the first time-frequency resource partially or fully overlaps the second time-frequency resource, reselect, for the first uplink channel and/or the second uplink channel, a time-frequency resource used to carry sending of the first uplink channel and/or the second uplink channel. In a possible implementation, the processing unit703is further configured to: when the first time-frequency resource partially or fully overlaps the second time-frequency resource, reselect, for the first uplink channel and the second uplink channel, time-frequency resources used to carry sending of the first uplink channel and the second uplink channel. 
In this implementation, the processing unit703may determine, in a preconfigured third PUCCH resource group, a third time-frequency resource used to send the first uplink channel, determine, in a preconfigured fourth PUCCH resource group, a fourth time-frequency resource used to send the second uplink channel, send the first uplink channel on the third time-frequency resource by using the sending unit704, and send the second uplink channel on the fourth time-frequency resource by using the sending unit704. In a possible design, no PUCCH resource in the third PUCCH resource group has an OFDM symbol in common with any PUCCH resource in the fourth PUCCH resource group. In other words, a PUCCH resource in the third PUCCH resource group does not overlap a PUCCH resource in the fourth PUCCH resource group. In a possible implementation, the processing unit703is further configured to: when the first time-frequency resource partially or fully overlaps the second time-frequency resource, reselect, for the second uplink channel, a time-frequency resource used to carry sending of the second uplink channel. In this implementation, the processing unit703may determine, in a preconfigured fifth PUCCH resource group, a fifth time-frequency resource used to send the second uplink channel, and send the second uplink channel on the fifth time-frequency resource by using the sending unit704. In a possible design, no PUCCH resource in the first PUCCH resource group has an OFDM symbol in common with any PUCCH resource in the fifth PUCCH resource group. In other words, a PUCCH resource in the first PUCCH resource group does not overlap a PUCCH resource in the fifth PUCCH resource group. It should be understood that division into the units in the foregoing apparatuses is merely logical function division. In actual implementation, all or some of the units may be integrated into a physical entity, or may be physically separate. In addition, all the units in the apparatuses may be implemented in a form of software invoked by a processing element, or may be implemented in a form of hardware, or some units may be implemented in a form of software invoked by a processing element, and some units may be implemented in a form of hardware. For example, each unit may be an independently disposed processing element, or may be integrated into a chip of the apparatus for implementation. Alternatively, each unit may be stored in a memory in a form of a program to be invoked by a processing element of the apparatus to perform a function of the unit. In addition, all or some of the units may be integrated together, or may be implemented independently. The processing element herein may also be referred to as a processor, and may be an integrated circuit having a signal processing capability. In an implementation process, the steps in the foregoing methods or the foregoing units may be implemented by using a hardware integrated logic circuit in the processing element, or may be implemented in a form of software invoked by the processing element. In an example, a unit in any one of the foregoing apparatuses may be one or more integrated circuits configured to implement the foregoing methods, for example, one or more application specific integrated circuits (ASIC), one or more microprocessors (DSP), one or more field programmable gate arrays (FPGA), or a combination of at least two of these forms of integrated circuits. 
For another example, when a unit in the apparatus is implemented by a program invoked by a processing element, the processing element may be a general-purpose processor, for example, a central processing unit (CPU) or another processor that can invoke the program. For another example, the units may be integrated and implemented in a form of a system-on-a-chip (SOC). The foregoing receiving unit is an interface circuit of the apparatus, and is configured to receive a signal from another apparatus. For example, when the apparatus is implemented in a form of a chip, the receiving unit is an interface circuit that is of the chip and that is configured to receive a signal from another chip or apparatus. The foregoing sending unit is an interface circuit of the apparatus, and is configured to send a signal to another apparatus. For example, when the apparatus is implemented in a form of a chip, the sending unit is an interface circuit that is of the chip and that is configured to send a signal to another chip or apparatus. FIG.12is a schematic structural diagram of a terminal device according to an embodiment of this application. The terminal device is configured to implement operations of the terminal device in the foregoing embodiments. As shown inFIG.12, the terminal device includes an antenna901, a radio frequency part902, and a signal processing part903. The antenna901is connected to the radio frequency part902. In a downlink direction, the radio frequency part902receives, by using the antenna901, information sent by a network device, and sends, to the signal processing part903for processing, the information sent by the network device. In an uplink direction, the signal processing part903processes information of the terminal device, and sends the information to the radio frequency part902. The radio frequency part902processes the information of the terminal device, and then sends processed information to the network device through the antenna901. The signal processing part903may include a modem subsystem, configured to process data at each communications protocol layer. The signal processing part903may further include a central processing subsystem, configured to process an operating system and an application layer that are of the terminal device. In addition, the signal processing part903may further include another subsystem, for example, a multimedia subsystem, or a peripheral subsystem. The multimedia subsystem is configured to control a camera or a screen display of the terminal device. The peripheral subsystem is configured to connect to another device. The modem subsystem may be a separately disposed chip. Optionally, the foregoing apparatus used in the terminal device may be located in the modem subsystem. The modem subsystem may include one or more processing elements9031, for example, include one main control CPU and another integrated circuit. In addition, the modem subsystem may further include a storage element9032and an interface circuit9033. The storage element9032is configured to store data and a program. However, a program used to perform the methods performed by the terminal device in the foregoing methods may not be stored in the storage element9032, but is stored in a memory outside the modem subsystem, and is loaded and used by the modem subsystem when it is to be used. The interface circuit9033is configured to communicate with another subsystem.
The foregoing apparatus used in the terminal device may be located in the modem subsystem, and the modem subsystem may be implemented by a chip. The chip includes at least one processing element and an interface circuit. The processing element is configured to perform the steps of any one of the methods performed by the terminal device. The interface circuit is configured to communicate with another apparatus. In an implementation, units of the terminal device that implement the steps of the methods in the embodiments of this application may be implemented by a program invoked by a processing element. For example, the apparatus used in the terminal device includes a processing element and a storage element. The processing element invokes a program stored in the storage element, to perform the methods performed by the terminal device in the foregoing method embodiments. The storage element may be a storage element located on a same chip as the processing element, namely, an on-chip storage element. In another implementation, a program used to perform the methods performed by the terminal device in the methods according to the embodiments of this application may be in a storage element located on a different chip from the processing element, namely, an off-chip storage element. In this case, the processing element invokes or loads the program from the off-chip storage element to the on-chip storage element, to invoke and perform the methods performed by the terminal device in the foregoing method embodiments. In still another implementation, units that implement the steps in the foregoing methods in the embodiments of this application and that are in the apparatus used in the terminal device may be configured as one or more processing elements. These processing elements are disposed in the modem subsystem. The processing element herein may be an integrated circuit, for example, one or more ASICs, one or more DSPs, one or more FPGAs, or a combination of these types of integrated circuits. These integrated circuits may be integrated together to form a chip. Units of the terminal device that implement the steps in the methods in the embodiments of this application may be integrated together, and implemented in a form of a system-on-a-chip (SOC). The SOC chip is configured to implement the foregoing methods. At least one processing element and storage element may be integrated into the chip, and the processing element invokes a program stored in the storage element to implement the foregoing methods performed by the terminal device. Alternatively, at least one integrated circuit may be integrated into the chip, to implement the foregoing methods performed by the terminal device. Alternatively, with reference to the foregoing implementations, functions of some units may be implemented by the processing element invoking a program, and functions of some units may be implemented by the integrated circuit. It can be learned that the foregoing apparatus used in the terminal device may include at least one processing element and an interface circuit. The at least one processing element is configured to perform any one of the methods performed by the terminal device provided in the foregoing method embodiments. 
The processing element may perform some or all steps performed by the terminal device, in a first manner, to be specific, by invoking the program stored in the storage element; or may perform some or all steps performed by the terminal device, in a second manner, to be specific, by using a hardware integrated logic circuit in the processor element in combination with an instruction; or may certainly perform, by combining the first manner and the second manner, some or all steps performed by the terminal device. As described above, the processing element herein may be a general purpose processor, for example, a CPU, or may be one or more integrated circuits configured to implement the foregoing methods, for example, one or more ASICs, one or more microprocessors DSPs, one or more FPGAs, or a combination of at least two of these forms of the integrated circuits. A storage element may be one memory, or may be a general term of a plurality of storage elements. FIG.13is a schematic structural diagram of a network device according to an embodiment of this application. The network device is configured to implement operations of the network device in the foregoing embodiments. As shown inFIG.13, the network device includes an antenna1001, a radio frequency apparatus1002, and a baseband apparatus1003. The antenna1001is connected to the radio frequency apparatus1002. In an uplink direction, the radio frequency apparatus1002receives, through the antenna1001, information sent by a terminal device, and sends, to the baseband apparatus1003for processing, the information sent by the terminal device. In a downlink direction, the baseband apparatus1003processes information of a terminal device, and sends the information to the radio frequency apparatus1002. The radio frequency apparatus1002processes the information of the terminal device, and then sends the processed information to the terminal device through the antenna1001. The baseband apparatus1003may include one or more processing elements10031, for example, include a main control CPU and another integrated circuit. In addition, the baseband apparatus1003may further include a storage element10032and an interface circuit10033. The storage element10032is configured to store a program and data. The interface circuit10033is configured to exchange information with the radio frequency apparatus1002, and the interface circuit is, for example, a common public radio interface (CPRI). The foregoing apparatus used in the network device may be located in the baseband apparatus1003. For example, the foregoing apparatus used in the network device may be a chip in the baseband apparatus1003. The chip includes at least one processing element and an interface circuit. The processing element is configured to perform the steps of any method performed by the network device. The interface circuit is configured to communicate with another apparatus. In an implementation, units of the network device that implement the steps of the methods in the embodiments of this application may be implemented by a program invoked by a processing element. For example, the apparatus used in the network device includes a processing element and a storage element. The processing element invokes a program stored in the storage element, to perform the methods performed by the network device in the foregoing method embodiments. 
The storage element may be a storage element located on a same chip as the processing element, namely, an on-chip storage element, or may be a storage element located on a different chip from the processing element, namely, an off-chip storage element. In another implementation, units that implement the steps in the foregoing methods in the embodiments of this application and that are in the apparatus used in the network device may be configured as one or more processing elements. These processing elements are disposed in the baseband apparatus. The processing element herein may be an integrated circuit, for example, one or more ASICs, one or more DSPs, one or more FPGAs, or a combination of these types of integrated circuits. These integrated circuits may be integrated together to form a chip. Units of the network device that implement the steps in the methods in the embodiments of this application may be integrated together, and implemented in a form of a system-on-a-chip (SOC). For example, the baseband apparatus includes the SOC chip, configured to implement the foregoing methods. At least one processing element and storage element may be integrated into the chip, and the processing element invokes a program stored in the storage element to implement the foregoing methods performed by the network device. Alternatively, at least one integrated circuit may be integrated into the chip, to implement the foregoing methods performed by the network device. Alternatively, with reference to the foregoing implementations, functions of some units may be implemented by a program invoked by the processing element, and functions of some units may be implemented by the integrated circuit. It can be learned that the foregoing apparatus used in the network device may include at least one processing element and an interface circuit. The at least one processing element is configured to perform any method performed by the network device provided in the foregoing method embodiments. The processing element may perform some or all steps performed by the network device, in a first manner, to be specific, by invoking the program stored in the storage element; or may perform some or all steps performed by the network device, in a second manner, to be specific, by using a hardware integrated logic circuit in the processor element in combination with an instruction; or may certainly perform, by combining the first manner and the second manner, some or all steps performed by the network device. As described above, the processing element herein may be a general purpose processor, for example, a CPU, or may be one or more integrated circuits configured to implement the foregoing methods, for example, one or more ASICs, one or more microprocessors DSPs, one or more FPGAs, or a combination of at least two of the integrated circuits. The storage element may be one memory, or may be a general term of a plurality of storage elements. The embodiments of this application further provide a communications method. The method may be performed by a terminal device or a communications apparatus (for example, a chip system) that can support the terminal device in implementing the method. In this application, an example in which the terminal device performs the method is used for description. FIG.14shows another communications method according to an embodiment of this application. The method includes the following steps. S201. A terminal device obtains a first grouping relationship. 
The first grouping relationship represents a correspondence between a first time length and N groups of time-frequency resources, the N groups of time-frequency resources are obtained by grouping time-frequency resources in one time unit, each group of time-frequency resources corresponds to one or more first time lengths, the first time length is related to a K1 set, the K1 set includes a plurality of K1 values, the K1 value is the quantity of time units offset from a time unit in which a PDSCH is located to a time unit in which an uplink channel of a HARQ-ACK corresponding to the PDSCH is located, a time-frequency resource in each group of time-frequency resources is a time-frequency resource of an uplink channel that carries a HARQ-ACK, the first time length is a unit time length of the K1 value or the first time length represents a time length corresponding to the K1 value, and N is a positive integer greater than or equal to 2. In this embodiment of this application, the terminal device may receive the first grouping relationship from a network device, or the terminal device locally obtains the first grouping relationship. If the terminal device locally obtains the first grouping relationship, the first grouping relationship may be preset by the terminal device, or may be obtained from the network device and stored in advance. In this embodiment of this application, if the first grouping relationship is received by the terminal device from the network device, before the network device sends the first grouping relationship to the terminal device, the network device may further determine the first grouping relationship based on the first time length. For a method for determining the first grouping relationship by the network device based on the first time length, refer to the foregoing description. Details are not described herein again. In this application, the first time length may be a slot, or may be a mini-slot, for example, may be a ½ slot, or may be a ¼ slot, or may be M time domain symbols, where M is a positive integer less than 14. In this embodiment, the described HARQ-ACK may be a semi-static codebook. An example in which the HARQ-ACK is a semi-static codebook is used for description below. For a semi-static HARQ-ACK, the network device or a higher layer may configure several possible K1 values for the terminal device. In this application, the several possible K1 values are referred to as a K set. Certainly, this application is not limited thereto. A set including a plurality of K1 values may be referred to as a K1 set. In addition, in this application, that the first time length is related to the K1 set may mean that there is a correspondence between the first time length and the K1 set. The correspondence may be configured via higher layer signaling or by the network device. It may be understood that, after obtaining the K1 set, the terminal device may correspondingly determine the first time length corresponding to the K1 set. S202. The terminal device obtains a first K1 set and a second K1 set. Optionally, the first K1 set and the second K1 set may be locally obtained by the terminal device, or may be obtained from the network device, or may be configured via higher layer signaling. In this application, the first grouping relationship may be in a form of a list, or may be in another form. This is not limited. Table 5 shows a possible first grouping relationship. In Table 5, N=2 is used as an example.
In other words, one time unit is divided into two groups of time-frequency resources: a first group of time-frequency resources and a second group of time-frequency resources, and an example in which the first time length includes a slot and a ½ slot is used for illustration.

TABLE 5
First time length    N groups of time-frequency resources
½ slot               First group of time-frequency resources
Slot                 Second group of time-frequency resources

In this embodiment, after determining the correspondence between the first time length (the slot and the ½ slot) and the N groups of time-frequency resources (the first group of time-frequency resources and the second group of time-frequency resources) based on the first grouping relationship, the terminal device may further determine a correspondence between the K1 set and the N groups of time-frequency resources based on the correspondence between the first time length and the K1 set. Table 6 shows a correspondence between the K1 set, the first time length, and the N groups of time-frequency resources based on an assumed condition in Table 5. The first K1 set is {0, 1, 2, 3}, the second K1 set is {1, 2, 3, 4}, the first K1 set is related to the ½ slot, and the second K1 set is related to the slot.

TABLE 6
K1 set                        First time length    N groups of time-frequency resources
First K1 set {0, 1, 2, 3}     ½ slot               First group of time-frequency resources
Second K1 set {1, 2, 3, 4}    Slot                 Second group of time-frequency resources

S203. Based on the first grouping relationship, the terminal device determines, in the N groups of time-frequency resources, the ithgroup of time-frequency resources corresponding to a first time length related to the first K1 set, and determines, in the N groups of time-frequency resources, the kthgroup of time-frequency resources corresponding to a first time length related to the second K1 set. Table 6 is used as an example. Based on the first grouping relationship, the terminal device may determine, in the two groups of time-frequency resources, the first group of time-frequency resources corresponding to the first time length related to the first K1 set, and determine, in the N groups of time-frequency resources, the second group of time-frequency resources corresponding to the first time length related to the second K1 set. Herein, i is a positive integer less than or equal to N, k is a positive integer less than or equal to N, and k and i are different values.

S204. The terminal device determines a first uplink channel that carries a first HARQ-ACK on a first time-frequency resource in the ithgroup of time-frequency resources, and determines a second uplink channel that carries a second HARQ-ACK on a second time-frequency resource in the kthgroup of time-frequency resources. In this way, the terminal device may use different groups of time-frequency resources to carry the first uplink channel and the second uplink channel. Compared with the prior art in which only one uplink channel can be sent in one time unit, in the method in this application, a plurality of uplink channels can be sent in one time unit. The first HARQ-ACK corresponds to a first downlink association set, and the second HARQ-ACK corresponds to a second downlink association set. In this embodiment of this application, a downlink association set may be determined based on the K1 set.
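A minimal sketch of the Table 6 lookup may help; it assumes N = 2 as in the example, mirrors the table entries, and uses illustrative helper names that are not defined by this application.

```python
# Illustrative sketch of the Table 6 lookup with N = 2. The mappings mirror the
# table; the helper names and the dictionary keys are illustrative only.

from fractions import Fraction

# First grouping relationship: first time length (in slots) -> group of resources.
FIRST_GROUPING_RELATIONSHIP = {
    Fraction(1, 2): "first group of time-frequency resources",
    Fraction(1, 1): "second group of time-frequency resources",
}

# Correspondence between a K1 set and its first time length.
K1_SET_TO_FIRST_TIME_LENGTH = {
    "first K1 set {0, 1, 2, 3}": Fraction(1, 2),   # half-slot granularity
    "second K1 set {1, 2, 3, 4}": Fraction(1, 1),  # slot granularity
}

def group_for_k1_set(k1_set_name: str) -> str:
    """Map a K1 set to its group of time-frequency resources via the first time length."""
    return FIRST_GROUPING_RELATIONSHIP[K1_SET_TO_FIRST_TIME_LENGTH[k1_set_name]]

def harq_ack_time_unit(pdsch_time_unit: int, k1: int) -> int:
    """The HARQ-ACK is carried k1 time units (of the first time length) after the PDSCH."""
    return pdsch_time_unit + k1

print(group_for_k1_set("first K1 set {0, 1, 2, 3}"))   # first group of time-frequency resources
print(harq_ack_time_unit(pdsch_time_unit=10, k1=3))    # 13
```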
In this embodiment of this application, the first time-frequency resource may be some time-frequency resources in the ithgroup of time-frequency resources, or may be all time-frequency resources in the ithgroup of time-frequency resources. The second time-frequency resource may be some time-frequency resources in the kthgroup of time-frequency resources, or may be all time-frequency resources in the kthgroup of time-frequency resources. The uplink channel may include a physical uplink control channel (PUCCH) or a physical uplink shared channel (PUSCH). S205. When the first time-frequency resource partially or fully overlaps the second time-frequency resource, and a first downlink association subset in the first downlink association set fully overlaps a second downlink association subset in the second downlink association set, the terminal device takes a union of the first downlink association set and the second downlink association set, to obtain a third downlink association set. The first downlink association subset in the first downlink association set corresponds to a third HARQ-ACK, the second downlink association subset in the second downlink association set corresponds to a fourth HARQ-ACK, the third HARQ-ACK belongs to the first HARQ-ACK, and the fourth HARQ-ACK belongs to the second HARQ-ACK. In this embodiment of this application, the union of the first downlink association set and the second downlink association set may be a set obtained by combining resources included in the first downlink association set and resources included in the second downlink association set, that is, the third downlink association set. The third downlink association set includes resources included in the first downlink association set and the second downlink association set, but there is no repeated resource in the third downlink association set. The union of the first downlink association set and the second downlink association set may be denoted as the first downlink association set ∪ the second downlink association set. S206. The terminal device sends a fifth HARQ-ACK based on the third downlink association set, where the fifth HARQ-ACK includes the third HARQ-ACK or the fourth HARQ-ACK. In this embodiment, the fifth HARQ-ACK may further include a sixth HARQ-ACK, and the first HARQ-ACK may include the sixth HARQ-ACK and the third HARQ-ACK. The fifth HARQ-ACK may further include a seventh HARQ-ACK, and the second HARQ-ACK may include the seventh HARQ-ACK and the fourth HARQ-ACK. The terminal device may send the fifth HARQ-ACK, or send the fifth HARQ-ACK and the sixth HARQ-ACK to the network device. In this method, the terminal device sends only one of the third HARQ-ACK and the fourth HARQ-ACK based on the third downlink association set, thereby reducing the quantity of bits of a fed back HARQ-ACK, and improving HARQ-ACK transmission efficiency. The following further describes the foregoing method in an implementation. FIG.15is a schematic diagram of taking a union of resources according to an embodiment of this application. InFIG.15, it is assumed that the first grouping relationship obtained by the terminal device is the grouping relationship shown in Table 5, the first K1 set obtained by the terminal device is {0, 1, 2, 3}, and the second K1 set obtained by the terminal device is {1, 2, 3, 4}.
Further, based on the first grouping relationship, the terminal device may determine, in two groups of time-frequency resources, a first group of time-frequency resources corresponding to the first time length related to the first K1 set, and determine, in the two groups of time-frequency resources, a second group of time-frequency resources corresponding to the first time length related to the second K1 set. InFIG.15, it is assumed that a first time unit is a slot #k. In other words, the two groups of time-frequency resources are time-frequency resources in the slot #k. Further, the terminal device may determine the first uplink channel that carries the first HARQ-ACK on the first time-frequency resource in the first group of time-frequency resources, and may further determine the second uplink channel that carries the second HARQ-ACK on the second time-frequency resource in the second group of time-frequency resources. It can be learned fromFIG.15that a first downlink association set scheduled by or corresponding to the first K1 set may include a ½ slot #n−4, a ½ slot #n−3, a ½ slot #n−2, a ½ slot #n−1, a ½ slot #n, and a ½ slot #n+1, and the second downlink association set scheduled by or corresponding to the second K1 set may include a slot #k−4, a slot #k−3, a slot #k−2, and a slot #k−1. Assuming that the first time-frequency resource partially or fully overlaps the second time-frequency resource, and the first downlink association subset in the first downlink association set fully overlaps the second downlink association subset in the second downlink association set, as shown inFIG.15, the first downlink association subset includes the ½ slot #n−4, the ½ slot #n−3, the ½ slot #n−2, and the ½ slot #n−1, and the second downlink association subset includes the slot #k−2 and the slot #k−1. According to the method in this application, for the first downlink association subset and the second downlink association subset that overlap each other, only HARQ-ACKs corresponding to some resources (for example, the first downlink association subset or the second downlink association subset) are sent. Specifically, the terminal device may take a union of the first downlink association set and the second downlink association set to obtain the third downlink association set, and then may send the fifth HARQ-ACK based on the third downlink association set. The fifth HARQ-ACK includes the third HARQ-ACK or the fourth HARQ-ACK. As shown inFIG.15, the first downlink association subset corresponds to the third HARQ-ACK, and the second downlink association subset corresponds to the fourth HARQ-ACK. According to the method of this application, the fifth HARQ-ACK sent by the terminal device based on the third downlink association set includes only one of the third HARQ-ACK and the fourth HARQ-ACK, so that the quantity of bits for joint feedback can be reduced. In the implementation shown inFIG.15, if the first HARQ-ACK and the second HARQ-ACK are directly cascade combined, it may be understood that a HARQ-ACK obtained through the cascade combination includes the sixth HARQ-ACK, the third HARQ-ACK, the seventh HARQ-ACK, and the fourth HARQ-ACK. Assuming that a 1-bit HARQ-ACK is fed back on a resource with one granularity, the HARQ-ACK obtained through the cascade combination includes 10 bits. In other words, the terminal device needs to feed back a 10-bit HARQ-ACK. If a granularity of overlapping resources is a ½ slot, the fifth HARQ-ACK includes the third HARQ-ACK.
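For illustration, the following sketch reproduces this counting under the FIG.15 assumptions; the 8-bit and 6-bit totals derived in the next paragraph fall out directly, and the resource labels are purely illustrative.

```python
# Worked sketch of the FIG. 15 counting (1-bit HARQ-ACK per resource of one
# granularity). The resource labels are illustrative only.

first_set = {"1/2 slot #n-4", "1/2 slot #n-3", "1/2 slot #n-2",
             "1/2 slot #n-1", "1/2 slot #n", "1/2 slot #n+1"}
second_set = {"slot #k-4", "slot #k-3", "slot #k-2", "slot #k-1"}

# Overlapping portions: the four half-slots #n-4..#n-1 cover the same downlink
# resources as slots #k-2 and #k-1.
first_subset = {"1/2 slot #n-4", "1/2 slot #n-3", "1/2 slot #n-2", "1/2 slot #n-1"}
second_subset = {"slot #k-2", "slot #k-1"}

BITS_PER_RESOURCE = 1

cascade_bits = (len(first_set) + len(second_set)) * BITS_PER_RESOURCE
# Union counted at 1/2-slot granularity: keep the third HARQ-ACK, drop the fourth.
union_half_slot_bits = (len(first_set) + len(second_set - second_subset)) * BITS_PER_RESOURCE
# Union counted at slot granularity: keep the fourth HARQ-ACK, drop the third.
union_slot_bits = (len(first_set - first_subset) + len(second_set)) * BITS_PER_RESOURCE

print(cascade_bits, union_half_slot_bits, union_slot_bits)  # 10 8 6
```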
If the terminal device feeds back the fifth HARQ-ACK based on the third downlink association set, the terminal device needs to feed back only an 8-bit HARQ-ACK. If the granularity of the overlapping resources is a slot, the fifth HARQ-ACK includes the fourth HARQ-ACK. If the terminal device feeds back the fifth HARQ-ACK based on the third downlink association set, the terminal device needs to feed back only a 6-bit HARQ-ACK based on the third downlink association set. It may be understood that operations performed by the terminal device in the communications methods provided in the embodiments shown inFIG.14andFIG.15may be performed by a communications apparatus that is used in the terminal device and that is provided in the embodiments of this application, for example, the communications apparatus700, or may be performed by the terminal device provided in the embodiments of this application, for example, the terminal device shown inFIG.12. For example, the communications apparatus or the terminal device may include an obtaining unit, a processing unit, and a sending unit. The steps S201and S202may be performed by the obtaining unit, the steps S203to S205may be performed by the processing unit, and the step S206may be performed by the sending unit. Alternatively, the communications apparatus or the terminal device includes a processor and a transceiver that are coupled to a memory, and the steps S201to S206may be performed by the processor that is coupled to the memory. Alternatively, the steps S201to S205may be performed by the processor coupled to the memory, and the step S206is performed by the transceiver. Alternatively, the steps S202to S205may be performed by the processor coupled to the memory, and the steps S201and S206are performed by the transceiver. Details are not described. Operations performed by the network device in the communications methods provided in the embodiments shown inFIG.14andFIG.15may be performed by a communications apparatus that is used in the network device and that is provided in the embodiments of this application, for example, the communications apparatus800, or may be performed by the network device provided in the embodiments of this application, for example, the network device shown inFIG.13. Details are not described. According to the methods provided in the embodiments of this application, this application further provides a communications system. The communications system includes the foregoing terminal device and network device. The embodiments of this application further provide a computer storage medium. The computer storage medium stores a computer-executable instruction, and when the computer-executable instruction is invoked by a computer, the computer is enabled to perform any one of the foregoing methods. The embodiments of this application further provide a computer program product. The computer program product stores an instruction, and when the instruction is run on a computer, the computer is enabled to perform any one of the foregoing methods. The embodiments of this application further provide a chip system. The chip system includes a processor, and may further include a memory, to implement any one of the foregoing methods. The chip system may include a chip, or may include a chip and another discrete component. A person skilled in the art should understand that the embodiments of this application may be provided as a method, a system, or a computer program product.
Therefore, this application may use a form of hardware only embodiments, software only embodiments, or embodiments with a combination of software and hardware. In addition, this application may use a form of a computer program product that is implemented on one or more computer-usable storage media (including but not limited to a disk memory, a CD-ROM, an optical memory, and the like) that include computer-usable program code. This application is described with reference to the flowcharts and/or block diagrams of the method, the device (system), and the computer program product according to the embodiments of this application. It should be understood that computer program instructions may be used to implement each process and/or each block in the flowcharts and/or the block diagrams and a combination of a process and/or a block in the flowcharts and/or the block diagrams. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to generate a machine, so that an instruction that is executed by a processor of a computer or another programmable data processing device generates an apparatus configured to implement a specified function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams. These computer program instructions may alternatively be stored in a computer readable memory that can instruct the computer or the another programmable data processing device to work in a specific manner, so that the instructions stored in the computer readable memory generate an artifact that includes an instruction apparatus. The instruction apparatus implements a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams. These computer program instructions may be loaded onto a computer or another programmable data processing device, so that a series of operations and steps are performed on the computer or the another programmable device, thereby generating computer-implemented processing. Therefore, the instructions executed on the computer or the another programmable device provide steps for implementing a specified function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams. Although some possible embodiments of this application are described, a person skilled in the art can make changes and modifications to the embodiments once the basic inventive concept is learned. Therefore, the following claims are intended to be construed to cover the embodiments of this application and all changes and modifications falling within the scope of this application. It is clear that a person skilled in the art can make various modifications and variations to this application without departing from the spirit and scope of this application. If these modifications and variations of this application fall within the scope of the claims of this application and their equivalent technologies, this application is also intended to cover these modifications and variations.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present disclosure will be described below in connection with the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are part of the embodiments of the present disclosure, but not all the embodiments. Based on the embodiments in the present disclosure, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the protection scope of the present disclosure. The technical solutions of the embodiments of the present disclosure can be applied to various communication systems, such as a Global System for Mobile Communications (GSM) system, a Code Division Multiple Access (CDMA) system, a Wideband Code Division Multiple Access (WCDMA) system, a General Packet Radio Service (GPRS), a Long Term Evolution (LTE) system, an LTE Frequency Division Duplex (FDD) system, an LTE Time Division Duplex (TDD) system, a Universal Mobile Telecommunication System (UMTS), a Worldwide Interoperability for Microwave Access (WiMAX) communication system, a 5G system, or the like. By way of example, a communication system100employed in the embodiments of the present disclosure is shown inFIG.1. The communication system100can include a network device110, which can be a device that communicates with a terminal device120(or referred to as a communication terminal or a terminal). The network device110can provide communication coverage for a specific geographic area and can communicate with terminal devices located within the coverage area. Optionally, the network device110may be a base station (Base Transceiver Station, BTS) in a GSM system or a CDMA system, a base station (NodeB, NB) in a WCDMA system, an evolutional base station in an LTE system (Evolutional Node B, eNB or eNodeB), or a wireless controller in a Cloud Radio Access Network (CRAN), or the network device can be a mobile switching center, a relay station, an access point, an on-board device, a wearable device, a hub, a switch, a bridge, a router, a network device in a 5G network, or a network device in a future public land mobile network (PLMN), etc. The communication system100also includes at least one terminal device120located within the coverage of the network device110. As used herein, the "terminal device" includes, but is not limited to, a device connected via wired lines, such as a Public Switched Telephone Network (PSTN), a Digital Subscriber Line (DSL), a digital cable or direct cable connection; another data connection/network; wireless interfaces, such as those for cellular networks, a wireless local area network (WLAN), a digital TV network such as a DVB-H network, a satellite network, an AM-FM broadcast transmitter; a means of another terminal device configured to receive/transmit communication signals; and/or Internet of Things (IoT) devices.
A terminal device configured to communicate through a wireless interface may be referred to as a "wireless communication terminal," "wireless terminal," or "mobile terminal." Examples of the mobile terminal include but are not limited to a satellite or cellular phone; a personal communication system (PCS) terminal that can incorporate data processing, facsimile, and data communication capabilities in a cellular radio telephone; a PDA that can include a radio telephone, a pager, Internet/internal network access, a web browser, a notepad, a calendar, and/or a Global Positioning System (GPS) receiver; and a conventional laptop and/or palm-type receiver or other electronic devices including a radio telephone transceiver. The terminal device may refer to an access terminal, User Equipment (UE), a subscriber unit, a user station, a moving station, a mobile station, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent or a user device. The access terminal may be a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a Personal Digital Assistant (PDA), a handheld device with a wireless communication function, a computing device or other processing devices connected to a wireless modem, an on-board device, a wearable device, a terminal device in a 5G network, a terminal device in a future evolved PLMN, or the like. Optionally, Device to Device (D2D) communication can be performed between the terminal devices120. Alternatively, the 5G system or 5G network may also be referred to as a New Radio (NR) system or NR network. FIG.1illustrates a network device and two terminal devices. Alternatively, the communication system100may include a plurality of network devices, and a different number of terminal devices may be included within the coverage area of each of the network devices, which is not limited in the embodiments of the present disclosure. Optionally, the communication system100can further include other network entities such as a network controller and a mobility management entity, which is not limited in the embodiments of the present disclosure. It should be understood that the devices with the communication function in the network/system in the embodiments of the present disclosure can be referred to as a communication device. For instance, in the communication system100shown inFIG.1, the communication device can include the network device110and the terminal device120which have the communication function, and the network device110and the terminal device120can be any of the specific devices described above, which will not be repeated here. The communication device can also include other devices in the communication system100, such as a network controller, a mobility management entity, and other network entities, which are not limited in the embodiments of the present disclosure. It should be understood that the terms "system" and "network" are often used interchangeably herein. The term "and/or" used herein merely describes a relationship between related objects, indicating that there can be three kinds of relationships. For example, A and/or B can indicate three cases where A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" used herein generally indicates that the related objects before and after this character are in an "or" relationship.
The unlicensed spectrum is a spectrum allocated by countries and regions which is available for radio equipment communication. This spectrum is generally considered to be a shared spectrum. That is, communication devices in different communication systems can use this spectrum if they meet regulatory requirements specified by the countries or regions on the spectrum, and there is no need to apply to the government for a proprietary spectrum license. In order to allow various communication systems that use the unlicensed spectrum for wireless communication to coexist amicably on this spectrum, some countries or regions have stipulated the regulatory requirements that must be met when using the unlicensed spectrum. For example, in some regions, the communication device follows a "first listening and then speaking" principle, that is, the communication device needs to perform a channel listening before transmitting a signal on a channel of the unlicensed spectrum, and can perform signal transmission only when a result of the channel listening indicates an idle channel. If the channel listening result of the communication device on the channel of the unlicensed spectrum indicates that the channel is busy, the communication device cannot perform the signal transmission. In order to ensure fairness, the duration of the signal transmission by the communication device using the channel of the unlicensed spectrum in a transmission process cannot exceed a Maximum Channel Occupation Time (MCOT). With the development of wireless communication technologies, both LTE systems and NR systems will consider deploying networks on the unlicensed spectrum to perform data service transmission by using the unlicensed spectrum. In NR Release 15 (Rel-15), dynamically determining a HARQ feedback timing (HARQ-timing) is supported. The terminal device first determines a pre-configured HARQ timing set, and the base station indicates a value k in the HARQ timing set by using Downlink Control Information (DCI). If a Physical Downlink Shared Channel (PDSCH) scheduled by the DCI is transmitted in a slot n, corresponding acknowledgment/non-acknowledgment (ACK/NACK) information is transmitted in a slot n+k. The pre-configured HARQ-timing set can include up to eight values. For different DCI formats, the eight values can be different. For example, for DCI format 1_0, the set is agreed by the protocol, and for DCI format 1_1, the set can be configured by the base station. In addition, the NR Rel-15 system also supports ACK/NACK multiplexed transmission, that is, ACK/NACK information corresponding to multiple PDSCHs is transmitted through one channel. For ACK/NACK multiplexed transmission, two ACK/NACK information generation methods are further supported: a semi-static ACK/NACK codebook (semi-static HARQ-ACK codebook) and a dynamic ACK/NACK codebook (dynamic HARQ-ACK codebook). The semi-static ACK/NACK codebook is determined based on elements in the pre-configured feedback timing set. Since the feedback timing set is agreed by the protocol or configured semi-statically by higher layers, the number of ACK/NACK bits included in the ACK/NACK codebook will not change in accordance with an actual scheduling situation. The advantage of this solution is that there will be no ambiguity in understanding the number of bits of feedback information and a mapping relationship between the base station and the UE.
However, the disadvantage is that the feedback overhead is large, and even if only a small number of PDSCHs are scheduled, a complete ACK/NACK codebook should be transmitted, which may contain a large amount of redundant information. For example, as shown inFIG.2, in the case of single-carrier and single-codeword transmission, assuming that the HARQ-timing set indicated by DCI has 8 values, that is, the number of elements in the pre-configured feedback timing set is 8 and the pre-configured feedback timing set is {1, 2, 3, 4, 5, 6, 7, 8}, the number of ACK/NACK bits is also 8. Actually, however, as shown inFIG.2, only two PDSCHs are transmitted. That is, there are 6 bits of redundant information. The dynamic ACK/NACK codebook mainly solves the problem of feedback overhead, that is, in the downlink slot corresponding to the feedback timing set, the number of bits of the ACK/NACK information is determined based on the number of the PDSCHs which are actually scheduled. Specifically, the DCI which schedules the PDSCH transmission introduces a Downlink Assignment Index (DAI) information field to indicate a total number of PDSCHs that have been scheduled up to a currently scheduled PDSCH. For example, inFIG.2, in the case of the single-carrier and single-codeword transmission, the terminal device receives two PDSCHs, PDSCH 1 and PDSCH 2, and in this case, the terminal device only needs to feed back 2-bit information. The disadvantage of this method is that when the terminal device does not receive part of the PDSCHs transmitted by the base station, such as the last PDSCH 2 inFIG.2, the base station and the UE have inconsistent understandings of the number of the PDSCHs actually scheduled, resulting in an inconsistent understanding of the number of bits of the feedback information. For the NR-U in Rel-16, currently, how to transmit the feedback information on the unlicensed spectrum has not yet been determined. For example, NR-U supports the introduction of a case where the HARQ-timing value is infinite in the downlink control signaling. This value indicates that the transmission time and resources of the ACK/NACK feedback information corresponding to the PDSCH scheduled by the DCI cannot be determined for the time being, whereas the ACK/NACK codebook determination method in Rel-15 is based on the feedback timing set. Therefore, in the case where the infinite HARQ timing value is included, the existing Rel-15 scheme cannot be reused. Therefore, the embodiments of the present disclosure provide a method for transmitting feedback information in which the terminal device determines the ACK/NACK codebook based on an indication of trigger signaling, which can effectively reduce redundant information in the feedback information. FIG.3is a schematic flowchart of a method200for transmitting feedback information according to an embodiment of the present disclosure. The method200can be performed by a terminal device. For example, the terminal device can be the terminal device120shown inFIG.1. As shown inFIG.3, the method200includes: S210, receiving, by the terminal device, trigger signaling which is used for triggering transmission of feedback information for at least one downlink channel group by the terminal device; and S220, determining, by the terminal device based on the trigger signaling, a feedback information codebook which includes the feedback information for the at least one downlink channel group. 
The above trigger signaling can be transmitted to the terminal device120by the network device110shown inFIG.1. It should be understood that the embodiments of the present disclosure can be applied to unlicensed spectrum, or to licensed spectrum, and the embodiments of the present disclosure are not limited thereto. In the embodiments of the present disclosure, prior to S210, the method200can further include: receiving, by the terminal device, a downlink channel transmitted by the network device, where the downlink channel can include a downlink physical shared channel and/or a downlink physical control channel. Specifically, the network device transmits information of at least one downlink channel to the terminal device, and the terminal device may receive the information of some or all of the downlink channels in the at least one downlink channel, or may not receive any information of the at least one downlink channel. In S210, the terminal device receives the trigger signaling transmitted by the network device, and the trigger signaling can be configured to instruct the terminal device to transmit the feedback information of the at least one downlink channel group, where the at least one downlink channel group belongs to the at least one downlink channel transmitted by the network device. Specifically, the trigger signaling can include a group indication of the at least one downlink channel group, so that the terminal device determines the at least one downlink channel group for which the feedback information needs to be transmitted. Optionally, the feedback information in the embodiments of the present disclosure can be ACK/NACK information, which indicates whether the corresponding downlink channel information is successfully received by the terminal device or not, and the embodiments of the present disclosure are not limited thereto. It should be understood that the method200can further include: determining, by the terminal device, downlink channel group information corresponding to the received downlink channel, where the received downlink channel may be any downlink channel received by the terminal device. For example, the received downlink channel may be any downlink channel in the at least one downlink channel group indicated by the trigger signaling. Specifically, the terminal device can determine the downlink channel group information corresponding to the received downlink channel in various ways. For example, the terminal device can determine the downlink channel group information corresponding to the downlink channel based on a Channel Occupation Time (COT) for the received downlink channel or the DCI corresponding to the downlink channel. Optionally, in an embodiment, determining the downlink channel group information corresponding to the received downlink channel by the terminal device can include: determining by the terminal device the corresponding downlink channel group information based on the COT for the received downlink channel, where the downlink channel group information corresponding to the downlink channel can be an identification of the COT for the downlink channel. For example, the terminal device can determine the downlink channels in the same COT to belong to the same downlink channel group. As another example, the terminal device can also determine the downlink channels in multiple COTs to belong to the same downlink channel group, and the embodiments of the present disclosure are not limited thereto. 
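The COT-based grouping just described can be pictured with the short Python sketch below; the record layout and helper name are assumptions made only for illustration, with downlink channels received in the same COT being placed in the same group:

```python
from collections import defaultdict

# Illustrative records: each received PDSCH is tagged with the identification of the COT
# in which it was received (layout assumed for this sketch).
received_pdschs = [
    {"pdsch_id": 1, "cot_id": 1},
    {"pdsch_id": 2, "cot_id": 1},
    {"pdsch_id": 3, "cot_id": 2},
]

def group_by_cot(pdschs):
    """Downlink channels received in the same COT are treated as one downlink channel group."""
    groups = defaultdict(list)
    for channel in pdschs:
        groups[channel["cot_id"]].append(channel["pdsch_id"])
    return dict(groups)

print(group_by_cot(received_pdschs))  # -> {1: [1, 2], 2: [3]}
```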
Optionally, in another embodiment, determining the downlink channel group information corresponding to the received downlink channel by the terminal device can further include: receiving by the terminal device the downlink control information DCI corresponding to the downlink channel, where a Physical Uplink Control CHannel (PUCCH) resource indicator information field in the DCI is used for indicating the corresponding downlink channel group information if a feedback timing information field in the DCI indicates a predetermined value. Specifically, if the feedback timing information field in the DCI indicates the predetermined value, it indicates that the transmission time of the feedback information corresponding to the downlink channel is undetermined. Alternatively, if the feedback timing information field in the DCI indicates the predetermined value, it indicates that the transmission time of the feedback information can be determined by other information, for example, by first information which is used for triggering transmission of the feedback information of the downlink channel by the terminal device. For example, the first information can be the trigger signaling described above. Optionally, the predetermined value can be infinity, or the predetermined value can represent infinity. It should be understood that the downlink channel can include a physical downlink control channel carrying the DCI or a physical downlink shared channel scheduled by the DCI. Conversely, if the feedback timing information field in the DCI does not indicate the predetermined value, the PUCCH resource indicator information field may not be used for indicating the downlink channel group information corresponding to the downlink channel. For example, the PUCCH resource indicator information field can be used to determine the transmission resource of the feedback information of the downlink channel. It should be understood that the above method for determining the downlink channel group information corresponding to the downlink channel received by the terminal device can be used in any application scenario where the terminal device needs to determine the downlink channel group corresponding to the downlink channel, and is not limited to being applied to the trigger signaling in the method200of the present disclosure, and the embodiments of the present disclosure are not limited thereto. In S220, the terminal device determines the feedback information codebook based on the trigger signaling. The feedback information codebook includes the feedback information for the at least one downlink channel group indicated in the trigger signaling. Specifically, determining the feedback information codebook by the terminal device includes: determining by the terminal device the number of bits of the feedback information; and/or, determining by the terminal device a bit position of the feedback information of each downlink channel in the at least one downlink channel group. In the embodiments of the present disclosure, the terminal device can determine the number of bits of the feedback information for the at least one downlink channel group and/or the bit position of the feedback information for each downlink channel group in various ways. A detailed description will be provided below in connection with several specific embodiments. 
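Before turning to those embodiments, the conditional reuse of the PUCCH resource indicator field described above can be sketched as follows. The field names and the sentinel value are assumptions used only for illustration: when the feedback timing field carries the predetermined "infinite" value, the resource indicator is read as downlink channel group information; otherwise it keeps its usual role of selecting a PUCCH resource.

```python
PREDETERMINED_VALUE = "inf"  # stands in for the "infinite" feedback timing value described above

def interpret_dci(dci: dict) -> dict:
    """Decide how the PUCCH resource indicator field of a received DCI is to be read
    (field names are illustrative, not taken from any specification)."""
    if dci["feedback_timing"] == PREDETERMINED_VALUE:
        # Feedback time is not yet determined; the field carries downlink channel group information.
        return {"group_info": dci["pucch_resource_indicator"], "feedback_time_known": False}
    # Otherwise the field is used to determine the transmission resource of the feedback information.
    return {"pucch_resource": dci["pucch_resource_indicator"], "feedback_time_known": True}

print(interpret_dci({"feedback_timing": "inf", "pucch_resource_indicator": 2}))
print(interpret_dci({"feedback_timing": 4, "pucch_resource_indicator": 2}))
```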
Optionally, in an embodiment, the number of downlink channels in one downlink channel group can be pre-configured, or the number of bits of feedback information for one downlink channel group can be pre-configured, and then the terminal device can determine the number of bits of the feedback information based on this pre-configured number and the number of groups of the at least one downlink channel group. Specifically, if the number of downlink channels included in each downlink channel group in the at least one downlink channel group is pre-configured (for example, the pre-configured value can be a maximum number of downlink channels that can be included in each downlink channel group), the terminal device can determine the number of bits of the feedback information based on the pre-configured value, the number of bits of feedback information corresponding to each downlink channel and the number of groups of the at least one downlink channel group. Alternatively, the terminal device can determine the number of the bits of the feedback information corresponding to each downlink channel group based on the pre-configured value and the number of the bits of the feedback information corresponding to each downlink channel, and then determine the number of bits of the feedback information based on the number of bits of the feedback information corresponding to each downlink channel group and the number of groups of the at least one downlink channel group. The number of bits of the feedback information corresponding to each downlink channel can also be a preset value, and the numbers of bits of the feedback information corresponding to different downlink channels can be the same or different. In addition, since the number of downlink channels included in each downlink channel group is pre-configured, the pre-configured value can be the same as or different from the number of channels which are actually transmitted. For example, the pre-configured value can represent the maximum number of downlink channels that can be included in each downlink channel group, and in actual transmission, the number of downlink channels included in any downlink channel group may be less than or equal to this pre-configured value. Alternatively, if the number of bits of the feedback information corresponding to each downlink channel group in the at least one downlink channel group is pre-configured, the terminal device can determine the number of bits of the feedback information based on the number of bits of the feedback information corresponding to each downlink channel group and the number of groups of the at least one downlink channel group, where the number of bits of the feedback information corresponding to each downlink channel group can be the same, for example, the number of bits of the feedback information corresponding to each downlink channel group is equal to the maximum number of bits of the feedback information corresponding to one downlink channel group, or the number of bits of the feedback information corresponding to each downlink channel group can be different, and the embodiments of the present disclosure are not limited thereto. For example, as shown inFIG.4, a description will be provided in an example where the downlink channel is PDSCH, and the feedback information is ACK/NACK information. Assuming a single-codeword transmission mode, that is, one PDSCH carries one codeword, there is correspondingly one bit of ACK/NACK information. 
Each PDSCH group is pre-configured to include up to 4 PDSCHs, and then each PDSCH group corresponds to a maximum of 4 bits of ACK/NACK information. As shown inFIG.4, the network device transmits 3 PDSCH groups to the terminal device in COT1 and COT2, where PDSCH group 1 includes 4 PDSCHs, labeled as PDSCH 1 to PDSCH 4 respectively; PDSCH group 2 includes 3 PDSCHs, labeled as PDSCH 1 to PDSCH 3 respectively; and PDSCH group 3 includes one PDSCH, labeled as PDSCH 1. Assuming that the network device transmits the trigger signaling to instruct the terminal device to transmit the ACK/NACK feedback information corresponding to PDSCH group 1 and PDSCH group 2 in the last slot of COT2, based on that each PDSCH group includes at most 4 PDSCHs and one PDSCH corresponds to 1 bit of ACK/NACK information, the terminal device can determine that the number of bits of the ACK/NACK information corresponding to each PDSCH group is 4, and the number of bits of the ACK/NACK information corresponding to the two PDSCH groups that are to be transmitted is 4*2=8 bits. The feedback information corresponding to PDSCH group 1 can be mapped to the first 4 bits of the 8 bits, and the feedback information corresponding to PDSCH group 2 can be mapped to the last 4 bits. Alternatively, if a feedback order of the PDSCH groups indicated in the trigger signaling is PDSCH group 2 first, and then PDSCH group 1, the feedback information corresponding to PDSCH group 1 can also be mapped to the last 4 bits of the 8 bits, and the feedback information corresponding to PDSCH group 2 can be mapped to the first 4 bits. In addition, the order of the feedback information corresponding to each of the PDSCHs in each PDSCH group can be determined based on the transmission time sequence or reception time sequence of each PDSCH, or can also be determined based on an identification or a number of each PDSCH. For example, the feedback information can be sequentially mapped based on the DAI value corresponding to each PDSCH. The DAI values can be determined by using a counter-DAI method. That is, the DAI value corresponding to PDSCH 1 is 1, the DAI value corresponding to PDSCH 2 is 2, and so on. If the total number of the scheduled PDSCHs is less than 4, placeholder information is set at the end of the corresponding 4 bits. In summary, in this embodiment, the ACK/NACK information to be transmitted for the PDSCH groups shown inFIG.4can be {bG1,1, bG1,2, bG1,3, bG1,4, bG2,1, bG2,2, bG2,3, 0}, where bG1,1 represents the ACK/NACK information corresponding to PDSCH 1 in PDSCH group 1, and so on, and 0 is the placeholder information. Although the feedback information determined in the above pre-configured manner may still have some redundant information, the redundant information can be effectively avoided through proper scheduling by the network device. That is, when there are multiple downlink channels, the network device should try to ensure that the number of downlink channels allocated to each downlink channel group reaches or approaches the pre-configured maximum as much as possible. 
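The fixed-size codebook of this embodiment can be reproduced with the short sketch below; the helper is purely illustrative, assumes one bit per PDSCH and at most four PDSCHs per group as in FIG.4, and uses 0 as the placeholder information:

```python
MAX_PDSCHS_PER_GROUP = 4  # pre-configured upper limit assumed in this embodiment

def fixed_size_codebook(groups_to_feed_back):
    """Each triggered PDSCH group contributes exactly MAX_PDSCHS_PER_GROUP positions;
    unused positions at the end of a group are filled with placeholder 0."""
    codebook = []
    for ack_bits in groups_to_feed_back:
        codebook.extend(ack_bits + [0] * (MAX_PDSCHS_PER_GROUP - len(ack_bits)))
    return codebook

# FIG. 4: PDSCH group 1 has 4 PDSCHs, PDSCH group 2 has 3; each entry stands for one ACK/NACK bit.
group1 = ["bG1,1", "bG1,2", "bG1,3", "bG1,4"]
group2 = ["bG2,1", "bG2,2", "bG2,3"]
print(fixed_size_codebook([group1, group2]))
# -> ['bG1,1', 'bG1,2', 'bG1,3', 'bG1,4', 'bG2,1', 'bG2,2', 'bG2,3', 0]  (8 positions in total)
```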
Optionally, in another embodiment, the number of downlink channels included in each downlink channel group or the number of bits of the feedback information in each downlink channel group can also be indicated by the trigger signaling. That is, the trigger signaling received by the terminal device can also be used for indicating the number of bits of the feedback information corresponding to each downlink channel group, or for indicating the number of downlink channels included in each downlink channel group, so that the terminal device determines the number of bits of the feedback information for the at least one downlink channel group based on the trigger signaling. Alternatively, the terminal device determines the number of bits of the feedback information corresponding to each downlink channel group based on the trigger signaling, and then determines the number of bits of the feedback information for the at least one downlink channel group based on the number of bits of the feedback information corresponding to each downlink channel group and the number of the downlink channel groups. Specifically, if the trigger signaling indicates the number of downlink channels included in each downlink channel group, the terminal device can determine the number of bits of the feedback information for the at least one downlink channel group based on the number of downlink channels included in each downlink channel group, the number of bits of the feedback information corresponding to each downlink channel, and the number of groups of the at least one downlink channel group; or the terminal device can determine the number of bits of the feedback information corresponding to a first downlink channel group in the at least one downlink channel group based on the number of downlink channels included in the first downlink channel group and the number of bits of the feedback information corresponding to each downlink channel, so as to determine the number of bits of the feedback information corresponding to each downlink channel group in the at least one downlink channel group, and can then determine the number of bits of the feedback information for the at least one downlink channel group by summing up the numbers of bits of the feedback information corresponding to all the downlink channel groups in the at least one downlink channel group. The numbers of downlink channels included in different downlink channel groups indicated in the trigger signaling may be the same or different. In addition, the number of bits of the feedback information corresponding to each downlink channel here can be pre-configured; for example, the pre-configured value can represent the maximum number of bits of the feedback information corresponding to each downlink channel, and the number of bits of the feedback information corresponding to each downlink channel can be the same or different. Alternatively, if the trigger signaling indicates the number of bits of the feedback information corresponding to each downlink channel group in the at least one downlink channel group, the terminal device can determine the number of bits of the feedback information for the at least one downlink channel group by summing up the numbers of bits of the feedback information corresponding to all the downlink channel groups in the at least one downlink channel group. The numbers of bits of the feedback information corresponding to different downlink channel groups indicated in the trigger signaling can be the same or different. 
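When the trigger signaling itself indicates the group sizes in this way, the placeholder padding of the previous embodiment is no longer needed, as the following illustrative variant of the earlier sketch shows (names and layout are again assumptions):

```python
def signalled_size_codebook(groups_to_feed_back, signalled_sizes, bits_per_channel=1):
    """The number of bits per group follows the sizes indicated in the trigger signaling,
    so the codebook contains no placeholder padding."""
    total_bits = sum(size * bits_per_channel for size in signalled_sizes)
    codebook = [bit for ack_bits in groups_to_feed_back for bit in ack_bits]
    assert len(codebook) == total_bits  # sizes in the signaling match the groups being fed back
    return codebook

# FIG. 4 again: the trigger signaling indicates 4 PDSCHs in group 1 and 3 PDSCHs in group 2.
group1 = ["bG1,1", "bG1,2", "bG1,3", "bG1,4"]
group2 = ["bG2,1", "bG2,2", "bG2,3"]
print(signalled_size_codebook([group1, group2], signalled_sizes=[4, 3]))
# -> 7 bits: ['bG1,1', 'bG1,2', 'bG1,3', 'bG1,4', 'bG2,1', 'bG2,2', 'bG2,3']
```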
For example, as shown inFIG.4, the description is still made in the example where the downlink channel is PDSCH, and the feedback information thereof is ACK/NACK information. A single-codeword transmission mode is assumed, that is, one PDSCH carries one codeword and corresponds to one bit of ACK/NACK information. TakingFIG.4as an example, the specific distribution of the PDSCH groups is as shown inFIG.4and will not be repeated here. The network device transmits the trigger signaling to instruct the terminal device to transmit ACK/NACK feedback information corresponding to PDSCH group 1 and PDSCH group 2 in the last slot of COT2. In addition, the trigger signaling further indicates that PDSCH group 1 includes 4 PDSCHs, and PDSCH group 2 includes 3 PDSCHs. Then, based on that one PDSCH carries one codeword and corresponds to one bit of ACK/NACK information and based on the number of PDSCHs included in the PDSCH groups indicated in the trigger signaling, the terminal device can determine that the number of ACK/NACK bits to be transmitted is 4+3=7 bits. Similar to the previous embodiment, the order of the feedback information for the two PDSCH groups fed back by the terminal device and the order of the feedback information corresponding to the PDSCHs in each PDSCH group can be determined in accordance with the order of reception or transmission or in accordance with relevant indication information, for example, the trigger signaling; or can be determined based on the identification or number of each PDSCH, for example, based on the DAI of each PDSCH, and the embodiments of the present disclosure are not limited thereto. In summary, in this embodiment, the ACK/NACK information to be transmitted for the PDSCH groups as shown inFIG.4can be {bG1,1, bG1,2, bG1,3, bG1,4, bG2,1, bG2,2, bG2,3}, where bG1,1 represents the ACK/NACK information corresponding to PDSCH 1 in PDSCH group 1, bG1,2 represents the ACK/NACK information corresponding to PDSCH 2 in PDSCH group 1, and so on. Therefore, the feedback information codebook determined in the above manner can effectively avoid the ambiguity in understanding of the actually transmitted downlink channels by the network device and the terminal device, and reduce the uplink control signaling overhead while ensuring the consistency in the understanding of transmission signaling, thereby improving the transmission performance of the uplink control signaling. Optionally, in yet another embodiment, the number of downlink channels included in each downlink channel group or the number of bits of the feedback information corresponding to each downlink channel group can also be indicated by other information, for example, by downlink control signaling which schedules downlink channel transmission. Specifically, taking the first downlink channel group in the at least one downlink channel group as an example, the first downlink channel group is any downlink channel group in the at least one downlink channel group, and the terminal device can determine the number of downlink channels included in the first downlink channel group based on at least one piece of indication information corresponding to the first downlink channel group, or can further determine the number of bits of the feedback information corresponding to the first downlink channel group. 
Specifically, the terminal device can determine the number of downlink channels included in the first downlink channel group based on the at least one piece of indication information, and then determine the number of bits of the feedback information corresponding to the first downlink channel group based on the number of bits of the feedback information corresponding to each downlink channel, and similarly determine the number of bits of the feedback information corresponding to each downlink channel group in the at least one downlink channel group and sum them up, thereby determining the number of bits of the feedback information for the at least one downlink channel group. For example, as shown inFIG.4, the description will be made still in the example where the downlink channel is PDSCH, and the feedback information thereof is ACK/NACK information. Assuming the single-codeword transmission mode, that is, one PDSCH carries one codeword and corresponds to one-bit ACK/NACK information. TakingFIG.4as an example, the specific distribution of the PDSCH groups is as shown inFIG.4and will not be repeated here. The network device transmits the trigger signaling to instruct the terminal device to transmit the ACK/NACK feedback information corresponding to PDSCH group 1 and PDSCH group 2 in the last slot of COT2, and the terminal device can determine the number of bits of the feedback information corresponding to each PDSCH group based on the DCI which schedules the PDSCHs. Specifically, the description is made here in an example where the number of PDSCHs in each PDSCH group is determined based on the DAI included in the DCI. There are generally two situations. In one method, a continuous counting method is used for the DAI, that is, the DAI value corresponding to PDSCH 1 is 1, the DAI value corresponding to PDSCH 2 is 2, and so on. For any PDSCH group, the terminal device can determine the number of PDSCHs included in the PDSCH group based on the DAI value of the PDSCH received last or the maximum value of the received DAI values. For example, for the PDSCH group 1 shown inFIG.4, the last PDSCH received by the terminal device in the PDSCH group 1 is PDSCH4, the corresponding DAI value thereof is 4, and thus the terminal device can determine that the PDSCH group 1 includes 4 PDSCHs. However, the disadvantage of determining the number of downlink channels included in the downlink channel group by using this method is the same as that of the existing dynamic HARQ-ACK codebook in the NR system. That is, the loss of the DCI corresponding to the last PDSCH will cause inconsistent understanding of the number of bits of the feedback information by the terminal device and the network device. However, the probability of DCI loss is very low, and the network device can correct the ambiguity of understanding caused by DCI loss through a certain amount of blind detection. In another method, there can be two types of DAIs, and one can be referred to as a counter-DAI which can mark the PDSCHs in the PDSCH group by using the continuous counting method, for example, the first method described above; the other type can be referred to as a total DAI which can directly indicate the number of PDSCHs included in the group, and the terminal device can directly determine the number of PDSCHs included in the current PDSCH group based on the total DAI. 
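The two DAI-based ways of learning a group size that are described above can be sketched as follows; the field layout is assumed for illustration. With only a counter-DAI, the terminal device takes the largest value it has received, which under-estimates the group size if the last DCI is lost, whereas a total DAI states the size directly:

```python
def group_size_from_counter_dai(received_counter_dais):
    """Counter-DAI method: the largest received counter value is taken as the number of PDSCHs
    in the group; if the DCI carrying the last value is lost, the size is under-estimated."""
    return max(received_counter_dais, default=0)

def group_size_from_total_dai(received_dci):
    """Total-DAI method: any received DCI of the group directly states the number of PDSCHs."""
    return received_dci["total_dai"]

# PDSCH group 1 of FIG. 4 (four PDSCHs scheduled):
print(group_size_from_counter_dai([1, 2, 3, 4]))                      # -> 4
print(group_size_from_counter_dai([1, 2, 3]))                         # last DCI lost -> 3 (the ambiguity noted above)
print(group_size_from_total_dai({"counter_dai": 2, "total_dai": 4}))  # -> 4 even if some DCIs are lost
```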
The terminal device can determine, by any one of the above two methods, that the number of bits of the ACK/NACK information that needs to be fed back for the PDSCH group 1 and the PDSCH group 2 is 4+3=7 bits. Similar to the previous two embodiments, the order of the feedback information for the two PDSCH groups fed back by the terminal device and the order of the feedback information corresponding to the PDSCHs in each PDSCH group can be determined in accordance with the order of reception or transmission or based on relevant indication information, for example, the trigger signaling; or can be determined based on the identification or number of each PDSCH, for example, based on the DAI of each PDSCH, and the embodiments of the present disclosure are not limited thereto. In summary, in this embodiment, the ACK/NACK information to be transmitted for the PDSCH groups as shown inFIG.4can be {bG1,1, bG1,2, bG1,3, bG1,4, bG2,1, bG2,2, bG2,3}, where bG1,1 represents the ACK/NACK information corresponding to PDSCH 1 in PDSCH group 1, bG1,2 represents the ACK/NACK information corresponding to PDSCH 2 in PDSCH group 1, and so on. Therefore, in the method for transmitting feedback information according to the embodiments of the present disclosure, the terminal device can determine the feedback information codebook including the feedback information for the at least one downlink channel group based on the trigger signaling transmitted by the network device, which can effectively reduce redundant information in the feedback information and can also effectively avoid the ambiguity in understanding the actually transmitted downlink channels by the network device and the terminal device, and reduce the uplink control signaling overhead while ensuring the consistent understanding of transmission signaling, thereby improving the transmission performance of the uplink control signaling. Optionally, the embodiments of the present disclosure also provide a method300for transmitting feedback information in which the terminal device can also determine the ACK/NACK codebook based on an indication of the trigger signaling, which can effectively reduce redundant information in the feedback information. FIG.5shows a schematic flowchart of the method300for transmitting feedback information according to the embodiments of the present disclosure. The method300can be performed by a terminal device. For example, the terminal device can be the terminal device120shown inFIG.1. As shown inFIG.5, the method300includes S310, receiving, by the terminal device, trigger signaling, which is used for triggering transmission of feedback information for at least one downlink transmission channel and/or downlink transmission resource by the terminal device. The trigger signaling can further be used for indicating a total number of bits of feedback information to be transmitted, where the feedback information to be transmitted includes the feedback information for the at least one downlink transmission channel and/or downlink transmission resource. It should be understood that the embodiments of the present disclosure can be applied to unlicensed spectrum, or to licensed spectrum, and the embodiments of the present disclosure are not limited thereto. In the embodiment of the present disclosure, prior to S310, the method300can further include receiving, by the terminal device, information of the downlink transmission channel and/or downlink transmission resource scheduled by the network device. 
Specifically, the network device transmits at least one downlink transmission to the terminal device, and the terminal device may receive some or all of the downlink transmissions in the at least one downlink transmission, or may not receive any of them. In S310, the terminal device receives the trigger signaling and determines the total number of bits of the feedback information to be transmitted based on the trigger signaling, the feedback information to be transmitted includes the feedback information for the at least one downlink transmission channel and/or downlink transmission resource, and the at least one downlink transmission channel and/or downlink transmission resource is part or all of the downlink transmission channels and/or downlink transmission resources scheduled by the network device to be used by the terminal device. In the embodiment of the present disclosure, the trigger signaling can include a target value which can be used by the terminal device to determine the at least one downlink transmission channel and/or downlink transmission resource. Specifically, the target value can be used for indicating a time range. For example, the time range can represent a number of slots, so that the terminal device determines the number of the at least one downlink transmission channel and/or downlink transmission resource included within this time range. Specifically, the time range can be a time range relative to the trigger signaling. For example, the trigger signaling is used as a start timing or an end timing of the time range. Alternatively, the time range can also be a time range relative to a time at which the terminal device transmits the feedback information. For example, the start timing or the end timing of transmitting the feedback information to be transmitted can be used as the start timing or the end timing of the time range, and the embodiments of the present disclosure are not limited thereto. Optionally, the target value can also be configured to directly indicate the number of the at least one downlink transmission channel and/or downlink transmission resource. Alternatively, it can also be used for indicating HARQ progress information corresponding to the at least one downlink transmission channel, and the terminal device determines the at least one downlink transmission channel and/or downlink transmission resource that needs to be fed back based on the HARQ progress information. In the embodiment of the present disclosure, the terminal device can determine the number of the at least one downlink transmission channel and/or downlink transmission resource based on the target value in the trigger signaling, and then determine the total number of bits of the feedback information to be transmitted based on the number of bits of the feedback information corresponding to each downlink transmission channel and/or downlink transmission resource. The number of bits of the feedback information corresponding to each downlink transmission channel and/or downlink transmission resource can be a preset value, and the number of bits of the feedback information corresponding to each downlink transmission channel and/or downlink transmission resource can be the same; in this case, the product of this same value and the number of the at least one downlink transmission channel and/or downlink transmission resource is the total number of bits of the feedback information to be transmitted. 
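For the case just described, in which the target value resolves to a set of downlink transmissions and every transmission contributes the same pre-set number of feedback bits, the computation is a simple count followed by a product. The sketch below is illustrative only; the window interpretation, slot numbers and one-bit-per-transmission assumption are examples, not values taken from the disclosure:

```python
def transmissions_in_window(reception_slots, window_end_slot, window_length):
    """Target value interpreted as a time range: count the downlink transmissions whose
    reception slot falls inside the indicated window ending at window_end_slot."""
    window_start = window_end_slot - window_length + 1
    return sum(1 for slot in reception_slots if window_start <= slot <= window_end_slot)

def total_feedback_bits(num_transmissions, bits_per_transmission=1):
    """Each downlink transmission contributes the same pre-set number of bits,
    so the total is simply their product."""
    return num_transmissions * bits_per_transmission

# Example: the trigger signaling indicates a 4-slot window ending at the trigger slot 20,
# and downlink transmissions were received in slots 17, 18 and 20.
count = transmissions_in_window([17, 18, 20], window_end_slot=20, window_length=4)
print(count, total_feedback_bits(count))  # -> 3 transmissions, 3 bits to be transmitted
```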
Alternatively, the number of bits of the feedback information corresponding to each downlink transmission channel and/or downlink transmission resource can also be different from each other; in this case, the number of bits of the feedback information corresponding to each downlink transmission channel and/or downlink transmission resource can be determined separately, and by summing them up, the total number of bits of the feedback information corresponding to the at least one downlink transmission channel and/or downlink transmission resource can be determined. Optionally, the target value included in the trigger signaling can also be configured to directly indicate the total number of bits of the feedback information to be transmitted, and the terminal device transmits the feedback information to be transmitted based on the trigger signaling. Therefore, in the method for transmitting feedback information according to the embodiment of the present disclosure, the terminal device can determine the feedback information codebook including the feedback information for the at least one downlink transmission channel and/or downlink transmission resource based on the trigger signaling transmitted by the network device, which can effectively reduce the redundant information in the feedback information while ensuring the consistent understanding of the transmission signaling. The two methods for transmitting feedback information in the embodiments of the present disclosure are described above in detail from the perspective of the terminal device in connection withFIGS.1to5, respectively. Methods for transmitting feedback information according to the embodiments of the present disclosure will be described below from the perspective of the network device in connection withFIGS.6to7. FIG.6shows a schematic flowchart of a method400for transmitting feedback information according to an embodiment of the present disclosure. The method400can be performed by the network device such as that shown inFIG.1. As shown inFIG.6, the method400includes: S410, transmitting, by the network device, trigger signaling, which is used for triggering transmission of feedback information for at least one downlink channel group by a terminal device. The trigger signaling is used by the terminal device to determine a feedback information codebook, which includes the feedback information for the at least one downlink channel group. Optionally, in an embodiment, the trigger signaling includes a group indication of the at least one downlink channel group. Optionally, in an embodiment, a downlink channel in the at least one downlink channel group includes a downlink physical shared channel and/or a downlink physical control channel. Optionally, in an embodiment, the method400further includes transmitting, by the network device, at least one piece of indication information corresponding to a first downlink channel group in the at least one downlink channel group, where the at least one piece of indication information is used by the terminal device to determine the number of downlink channels included in the first downlink channel group. Optionally, in an embodiment, the trigger signaling is also used for indicating the number of downlink channels included in the first downlink channel group in the at least one downlink channel group. 
Optionally, in an embodiment, the method400further includes transmitting, by the network device, downlink control information DCI corresponding to a downlink channel in the at least one downlink channel group that is received by the terminal device, where, if a feedback timing information field in the DCI indicates a predetermined value, a physical uplink control channel PUCCH resource indication information field in the DCI is used for indicating downlink channel group information corresponding to the received downlink channel. Therefore, in the method for transmitting feedback information according to the embodiments of the present disclosure, the network device transmits the trigger signaling, and the terminal device can determine the feedback information codebook including the feedback information for the at least one downlink channel group based on the trigger signaling, which can effectively reduce redundant information in the feedback information and can also effectively avoid the ambiguity in understanding the actually transmitted downlink channels by the network device and the terminal device, and reduce the uplink control signaling overhead while ensuring the consistent understanding of transmission signaling, thereby improving the transmission performance of the uplink control signaling. FIG.7is a schematic flowchart of a method500for transmitting feedback information according to an embodiment of the present disclosure. The method500can be performed by a network device such as that shown inFIG.1. As shown inFIG.7, the method500includes: S510, transmitting, by the network device, trigger signaling which is used for triggering transmission of feedback information for at least one downlink transmission channel and/or downlink transmission resource by a terminal device and which is used for indicating a total number of bits of feedback information to be transmitted, where the feedback information to be transmitted includes the feedback information for the at least one downlink transmission channel and/or downlink transmission resource. Optionally, in an embodiment, the trigger signaling includes a target value that is used by the terminal device to determine the at least one downlink transmission channel and/or downlink transmission resource. Optionally, in an embodiment, the target value is used for indicating a time range that is used by the terminal device to determine the at least one downlink transmission channel and/or downlink transmission resource within the time range. Optionally, in an embodiment, the target value is the number of the at least one downlink transmission channel and/or downlink transmission resource, or the target value is HARQ progress information corresponding to the at least one downlink transmission channel and/or downlink transmission resource. Optionally, in an embodiment, the trigger signaling includes a target value, which is the total number of bits of the feedback information to be transmitted. Therefore, in the method for transmitting feedback information according to the embodiment of the present disclosure, the network device transmits the trigger signaling, the terminal device can determine the feedback information codebook including the feedback information for the at least one downlink transmission channel and/or downlink transmission resource based on the trigger signaling, which can effectively reduce the redundant information in the feedback information while ensuring the consistent understanding of the transmission signaling. 
It should be understood that in various embodiments of the present disclosure, the serial numbers of the above processes do not mean that the performing order thereof is sequential, and the performing order of the processes should be determined based on the functions and inherent logic thereof, which should not be construed as any limitation on implementation processes of the embodiments of the present disclosure. In addition, the term “and/or” used herein merely describes an association relationship between the related objects, indicating that there can be three relationships. For example, as for A and/or B, it can indicate three cases where A exists alone, A and B exist simultaneously, and B exists alone. In addition, the character “/” used herein generally indicates that the related objects before and after this character are in an “or” relationship. The methods for transmitting feedback information according to the embodiments of the present disclosure are described above in detail with reference toFIGS.1to7, and the terminal device and the network device according to the embodiments of the present disclosure will be described below with reference toFIGS.8to12. As shown inFIG.8, a terminal device600, according to the embodiment of the present disclosure, includes a processing unit610and a transceiver unit620. Specifically, the terminal device600can be configured to perform the method200in the embodiment of the present disclosure. That is, the transceiver unit620is configured to receive trigger signaling which is used for triggering transmission of feedback information for at least one downlink channel group by the terminal device, and the processing unit610is configured to determine a feedback information codebook based on the trigger signaling, where the feedback information codebook includes the feedback information for the at least one downlink channel group. Optionally, in an embodiment, the trigger signaling includes a group indication of the at least one downlink channel group. Optionally, in an embodiment, a downlink channel in the at least one downlink channel group includes a downlink physical shared channel and/or a downlink physical control channel. Optionally, in an embodiment, the processing unit610is configured to determine the number of bits of the feedback information and/or to determine a bit position of the feedback information for each downlink channel in the at least one downlink channel group. Optionally, in an embodiment, the processing unit610is configured to determine the number of bits of the feedback information based on the number of bits of the feedback information corresponding to each downlink channel group in the at least one downlink channel group and the number of groups of the at least one downlink channel group. Optionally, in an embodiment, the number of bits of the feedback information corresponding to each downlink channel group is pre-configured, or the trigger signaling is used for indicating the number of bits of the feedback information corresponding to each downlink channel group. Optionally, in an embodiment, the processing unit610is configured to determine the number of bits of the feedback information for a first downlink channel group in the at least one downlink channel group based on the number of downlink channels included in the first downlink channel group. 
Optionally, in an embodiment, the number of bits of the feedback information for each downlink channel group is determined based on a maximum number of downlink channels, where the maximum number of downlink channels represents a maximum number of downlink channels that can be included in each downlink channel group. Optionally, in an embodiment, the transceiver unit620is configured to receive at least one piece of indication information corresponding to the first downlink channel group, and the processing unit610is configured to determine the number of downlink channels included in the first downlink channel group based on the at least one piece of indication information. Optionally, in an embodiment, the trigger signaling is also used for indicating the number of downlink channels included in the first downlink channel group. Optionally, in an embodiment, the maximum number of downlink channels is preset. Optionally, in an embodiment, the processing unit610is configured to determine the number of bits of the feedback information based on a target parameter, the number of downlink channels included in each downlink channel group in the at least one downlink channel group, and the number of groups of the at least one downlink channel group, where a value of the target parameter is a maximum number of bits of the feedback information corresponding to each downlink channel in the at least one downlink channel group. Optionally, in an embodiment, the processing unit610is configured to determine downlink channel group information corresponding to the received downlink channel, which is a downlink channel in the at least one downlink channel group. Optionally, in an embodiment, the processing unit610is configured to determine the corresponding downlink channel group information based on a Channel Occupation Time (COT) for the received downlink channel. Optionally, in an embodiment, the corresponding downlink channel group information is an indication of the COT for the received downlink channel. Optionally, in an embodiment, the processing unit610is configured to receive downlink control information DCI corresponding to the received downlink channel, where, if a feedback timing information field in the DCI indicates a predetermined value, a physical uplink control channel PUCCH resource indication information field in the DCI is used for indicating the corresponding downlink channel group information. It should be understood that the terminal device600according to the embodiment of the present disclosure can correspondingly perform the method200in the embodiments of the present disclosure, and the above-mentioned and other operations and/or functions of each unit in the terminal device600are respectively for implementing the corresponding process of the terminal device in the methods shown inFIGS.1to4and will not be repeated here for the sake of brevity. 
Therefore, the terminal device according to the embodiments of the present disclosure can determine the feedback information codebook including the feedback information for the at least one downlink channel group based on the trigger signaling transmitted by the network device, which can effectively reduce redundant information in the feedback information and can also effectively avoid the ambiguity in understanding the actually transmitted downlink channels by the network device and the terminal device, and reduce the uplink control signaling overhead while ensuring the consistent understanding of transmission signaling, thereby improving the transmission performance of the uplink control signaling. Optionally, the terminal device600can also be correspondingly configured to perform the method300in the embodiment of the present disclosure. That is, the transceiver unit620is configured to receive trigger signaling which is used for triggering transmission of feedback information for at least one downlink transmission channel and/or downlink transmission resource by the terminal device and which is used for indicating a total number of bits of feedback information to be transmitted, where the feedback information to be transmitted includes the feedback information for the at least one downlink transmission channel and/or downlink transmission resource. Optionally, in an embodiment, the trigger signaling includes a target value, and the processing unit610is configured to determine the at least one downlink transmission channel and/or downlink transmission resource based on the target value. Optionally, in an embodiment, the target value is used for indicating a time range, and the processing unit610is configured to determine the at least one downlink transmission channel and/or downlink transmission resource within the time range. Optionally, in an embodiment, the target value is the number of the at least one downlink transmission channel and/or downlink transmission resource, or the target value is HARQ progress information corresponding to the at least one downlink transmission channel and/or downlink transmission resource. Optionally, in an embodiment, the processing unit610is configured to determine the total number of bits of the feedback information to be transmitted based on the number of the at least one downlink transmission channel and/or downlink transmission resource. Optionally, in an embodiment, the trigger signaling includes a target value, which is the total number of bits of the feedback information to be transmitted. It should be understood that the terminal device600according to the embodiment of the present disclosure can be correspondingly configured to perform the method300in the embodiments of the present disclosure, and the above-mentioned and other operations and/or functions of each unit in the terminal device600are respectively for implementing the corresponding process of the terminal device in the methods shown inFIG.5and will not be repeated here for the sake of brevity. Therefore, the terminal device according to the embodiment of the present disclosure can determine the feedback information codebook including the feedback information for the at least one downlink transmission channel and/or downlink transmission resource based on the trigger signaling transmitted by the network device, which can effectively reduce the redundant information in the feedback information while ensuring the consistent understanding of the transmission signaling. 
Optionally, the terminal device600can also be correspondingly configured to perform the following. The transceiver unit620is configured to receive DCI, where, if a feedback timing information field in the DCI indicates a predetermined value, a PUCCH resource indication information field in the DCI is used for indicating the group information of the downlink channel corresponding to the DCI. Optionally, in an embodiment, if the feedback timing information field in the DCI indicates the predetermined value, the transmission time of the feedback information corresponding to the downlink channel is undetermined; or, if the feedback timing information field in the DCI indicates the predetermined value, the transmission time of the feedback information is determined by first information which is used for triggering transmission of the feedback information by the terminal device. Optionally, in an embodiment, the predetermined value is infinity. Optionally, in an embodiment, the downlink channel includes a physical downlink control channel carrying the DCI or a physical downlink shared channel scheduled by the DCI. As shown inFIG.9, a network device700, according to an embodiment of the present disclosure includes a processing unit710and a transceiver unit720. Specifically, the network device700can be correspondingly configured to perform method400in the embodiment of the present disclosure. That is, the transceiver unit720is configured to transmit trigger signaling generated by the processing unit710, the trigger signaling being used for triggering transmission of feedback information for at least one downlink channel group by a terminal device and being configured to be used by the terminal device to determine a feedback information codebook which includes the feedback information for the at least one downlink channel group. Optionally, in an embodiment, the trigger signaling includes a group indication of the at least one downlink channel group. Optionally, in an embodiment, the downlink channel in the at least one downlink channel group includes a downlink physical shared channel and/or a downlink physical control channel. Optionally, in an embodiment, the transceiver unit720is configured to transmit at least one piece of indication information corresponding to a first downlink channel group in the at least one downlink channel group, the at least one piece of indication information being used by the terminal device to determine the number of downlink channels included in the first downlink channel group. Optionally, in an embodiment, the trigger signaling is also used for indicating the number of downlink channels included in the first downlink channel group in the at least one downlink channel group. Optionally, in an embodiment, the transceiver unit720is configured to transmit downlink control information DCI corresponding to any downlink channel in the at least one downlink channel group that is received by the terminal device, where, if a feedback timing information field in the DCI indicates a predetermined value, a physical uplink control channel PUCCH resource indication information field in the DCI is used for indicating downlink channel group information corresponding to the received downlink channel. 
It should be understood that the network device700according to the embodiment of the present disclosure can be correspondingly configured to perform the method400in the embodiments of the present disclosure, and the above-mentioned and other operations and/or functions of each unit in the network device700are respectively for implementing the corresponding process of the network device in the method shown inFIG.6and will not be repeated here for the sake of brevity. Therefore, the network device according to the embodiments of the present disclosure transmits the trigger signaling to the terminal device, and the terminal device can determine the feedback information codebook including the feedback information for the at least one downlink channel group based on the trigger signaling, which can effectively reduce redundant information in the feedback information and can also effectively avoid the ambiguity in understanding the actually transmitted downlink channels by the network device and the terminal device, and reduce the uplink control signaling overhead while ensuring the consistent understanding of transmission signaling, thereby improving the transmission performance of the uplink control signaling. Optionally, the network device700can also be correspondingly configured to perform the method500in the embodiment of the present disclosure. That is, the transceiver unit720is configured to transmit trigger signaling which is used for triggering transmission of feedback information for at least one downlink transmission channel and/or downlink transmission resource by the terminal device and which is used for indicating a total number of bits of feedback information to be transmitted, where the feedback information to be transmitted includes the feedback information for the at least one downlink transmission channel and/or downlink transmission resource. Optionally, in an embodiment, the trigger signaling includes a target value that is used by the terminal device to determine the at least one downlink transmission channel and/or downlink transmission resource. Optionally, in an embodiment, the target value is used for indicating a time range that is used by the terminal device to determine the at least one downlink transmission channel and/or downlink transmission resource within the time range. Optionally, in an embodiment, the target value is the number of the at least one downlink transmission channel and/or downlink transmission resource, or the target value is HARQ progress information corresponding to the at least one downlink transmission channel and/or downlink transmission resource. Optionally, in an embodiment, the trigger signaling includes a target value, which is the total number of bits of the feedback information to be transmitted. It should be understood that the network device700according to the embodiment of the present disclosure can be correspondingly configured to perform the method500in the embodiments of the present disclosure, and the above-mentioned and other operations and/or functions of each unit in the network device700are respectively for implementing the corresponding process of the network device in the method shown inFIG.7and will not be repeated here for the sake of brevity. 
Therefore, the network device according to the embodiments of the present disclosure transmits the trigger signaling to the terminal device so that the terminal device can determine the feedback information codebook including the feedback information for the at least one downlink channel group based on the trigger signaling, which can effectively reduce redundant information in the feedback information and can also effectively avoid the ambiguity in understanding the actually transmitted downlink channels by the network device and the terminal device, and reduce the uplink control signaling overhead while ensuring the consistent understanding of the transmission signaling, thereby improving the transmission performance of the uplink control signaling. Optionally, the network device700can also be correspondingly configured to perform the following. The processing unit710is configured to generate DCI, and the transceiver unit720is configured to transmit the DCI, where, if a feedback timing information field in the DCI indicates a predetermined value, a PUCCH resource indication information field in the DCI is used for indicating group information of the downlink channel corresponding to the DCI. Optionally, in an embodiment, if the feedback timing information field in the DCI indicates the predetermined value, the transmission time of the feedback information corresponding to the downlink channel is undetermined; or, if the feedback timing information field in the DCI indicates the predetermined value, the transmission time of the feedback information is determined by first information which is used for triggering transmission of the feedback information by the terminal device. Optionally, in an embodiment, the predetermined value is infinity. Optionally, in an embodiment, the downlink channel includes a physical downlink control channel carrying the DCI or a physical downlink shared channel scheduled by the DCI. FIG.10is a schematic structural diagram of a communication device800according to an embodiment of the present disclosure. The communication device800shown inFIG.10includes a processor810which can invoke and execute a computer program from a memory to implement the methods in the embodiments of the present disclosure. Optionally, as shown inFIG.10, the communication device800can further include a memory820from which the processor810can invoke and execute the computer program to implement the methods in the embodiments of the present disclosure. The memory820can be a separate device independent of the processor810or can be integrated in the processor810. Optionally, as shown inFIG.10, the communication device800can further include a transceiver830, which can be controlled by the processor810to communicate with other devices. Specifically, the transceiver830can transmit information or data to other devices or receive information or data transmitted by other devices. The transceiver830can include a transmitter and a receiver. The transceiver830can further include one or more antennas. Optionally, the communication device800can particularly be the network device according to the embodiments of the present disclosure, and the communication device800can implement the corresponding process implemented by the network device in the methods of the embodiments of the present disclosure which will not be repeated here for the sake of brevity. 
Optionally, the communication device800can particularly be the mobile terminal/terminal device according to the embodiments of the present disclosure, and the communication device800can implement the corresponding process implemented by the mobile terminal/terminal device in the methods of the embodiments of the present disclosure, which will not be repeated here for the sake of brevity. FIG.11is a schematic structural diagram of a chip according to an embodiment of the present disclosure. A chip900shown inFIG.11includes a processor910which can invoke and execute a computer program from a memory to implement the methods in the embodiments of the present disclosure. Optionally, as shown inFIG.11, the chip900can further include a memory920from which the processor910can invoke and execute the computer program to implement the methods in the embodiments of the present disclosure. The memory920can be a separate device independent of the processor910, or can be integrated in the processor910. Optionally, the chip900can further include an input interface930, which can be controlled by the processor910to communicate with other devices or chips. Specifically, it can obtain information or data transmitted by other devices or chips. Optionally, the chip900can further include an output interface940, which can be controlled by the processor910to communicate with other devices or chips. Specifically, it can output information or data to other devices or chips. Optionally, the chip can be applied to the network device in the embodiments of the present disclosure, and can implement the corresponding process implemented by the network device in the methods of the embodiments of the present disclosure, which will not be repeated here for the sake of brevity. Optionally, the chip can be applied to the mobile terminal/terminal device in the embodiments of the present disclosure, and the chip can implement the corresponding process implemented by the mobile terminal/terminal device in the methods of the embodiments of the present disclosure, which will not be repeated here for the sake of brevity. It should be understood that the chip mentioned in the embodiments of the present disclosure can also be referred to as a system-on-chip, a system chip, a chip system, or a chip of a system-on-chip. FIG.12is a schematic block diagram of a communication system1000according to an embodiment of the present disclosure. As shown inFIG.12, the communication system1000includes a terminal device1010and a network device1020. The terminal device1010can be configured to implement the corresponding functions implemented by the terminal device in the above methods, and the network device1020can be configured to implement the corresponding functions implemented by the network device in the above methods, which will not be repeated here for the sake of brevity. It should be understood that the processor in the embodiments of the present disclosure can be an integrated circuit chip, which has signal processing capabilities. In implementations, the steps of the foregoing method embodiments can be performed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The foregoing processor can be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or other programmable logic devices, discrete gate or transistor logic devices, and discrete hardware components. 
Such a processor can implement or perform the methods, steps, and logical blocks disclosed in the embodiments of the present disclosure. The general-purpose processor can be a microprocessor, any conventional processor, or the like. The steps of the methods disclosed in connection with the embodiments of the present disclosure can be directly embodied in and executed by a hardware decoding processor or can be implemented by a combination of hardware and software modules in the decoding processor. The software modules can be located in a mature storage medium in the art such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory or a register. The storage medium is located in the memory, and the processor reads information in the memory and implements the steps of the above methods in combination with the hardware thereof. It can be understood that the memory in the embodiments of the present disclosure can be a volatile memory or a non-volatile memory, or both. The non-volatile memory can be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically EPROM (EEPROM) or a flash memory. The volatile memory can be a Random Access Memory (RAM), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, such as a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synch-Link DRAM (SLDRAM) and a Direct Rambus RAM (DR RAM). It should be noted that the memories of the systems and methods described herein are intended to include, but are not limited to, these and any other suitable types of memories. It should be understood that the foregoing description of the memory is exemplary rather than limiting. For example, the memory in the embodiments of the present disclosure can also be a static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synch-Link DRAM (SLDRAM), a Direct Rambus RAM (DR RAM), among others. That is to say, the memory in the embodiments of the present disclosure is intended to include but is not limited to these and any other suitable types of memories. Embodiments of the present disclosure also provide a computer-readable storage medium for storing a computer program. Optionally, the computer-readable storage medium can be applied to the network device in the embodiments of the present disclosure, and the computer program causes the computer to perform the corresponding process implemented by the network device in the methods of the embodiments of the present disclosure, which will not be repeated here for the sake of brevity. Optionally, the computer-readable storage medium can be applied to the mobile terminal/terminal device in the embodiments of the present disclosure, and the computer program causes the computer to perform the corresponding process implemented by the mobile terminal/terminal device in the methods of the embodiments of the present disclosure, which will not be repeated here for the sake of brevity. The embodiments of the present disclosure also provide a computer program product including computer program instructions. 
Optionally, the computer program product can be applied to the network device in the embodiments of the present disclosure, and the computer program instructions cause the computer to perform the corresponding process implemented by the network device in the methods of the embodiments of the present disclosure, which will not be repeated here for the sake of brevity. Optionally, the computer program product can be applied to the mobile terminal/terminal device in the embodiments of the present disclosure, and the computer program instructions cause the computer to perform the corresponding process implemented by the mobile terminal/terminal device in the methods of the embodiments of the present disclosure, which will not be repeated here for the sake of brevity. The embodiments of the present disclosure also provide a computer program. Optionally, the computer program can be applied to the network device in the embodiments of the present disclosure, which, when being executed on the computer, causes the computer to perform the corresponding process implemented by the network device in the methods of the embodiments of the present disclosure, which will not be repeated here for the sake of brevity. Optionally, the computer program can be applied to the mobile terminal/terminal device in the embodiments of the present disclosure, which, when being executed on the computer, enables the computer to perform the corresponding process implemented by the mobile terminal/terminal device in the methods of the embodiments of the present disclosure, which will not be repeated here for the sake of brevity. With the above technical solutions, the terminal device can determine the feedback information codebook including the feedback information of the at least one downlink channel group based on the trigger signaling transmitted by the network device, which can effectively reduce redundant information in the feedback information and can also effectively avoid the ambiguity in understanding the actually transmitted downlink channel by the network device and the terminal device, and reduce the uplink control signaling overhead while ensuring the consistency in the understanding of transmission signaling, thereby improving the transmission performance of the uplink control signaling. Those of ordinary skill in the art can realize that the exemplary units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or a combination of computer software and the electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application of the technical solution and design constraints. Various methods can be used by professional technicians to implement the described functions for each specific application, and such implementations should not be considered as beyond the scope of the present disclosure. Those skilled in the art can clearly understand that for the convenience and conciseness of the description, the specific operating process of the systems, devices and units described above can refer to the corresponding process in the foregoing method embodiments, which will not be repeated here. According to the embodiments provided in the present disclosure, it should be understood that the systems, devices, and methods disclosed can be implemented in other ways. For example, the device embodiments described above are only illustrative. 
For example, the division of the units is only a logical function division, and in actual implementations, there can be other divisions. For example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented. In addition, the coupling or direct coupling or communication connection shown or discussed herein can also be indirect coupling or communication connection through some interfaces, devices or units, and can be in electrical, mechanical or other forms. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or may be distributed on multiple network units. Some or all of the units can be selected to achieve the purpose of the solutions of this embodiment according to actual requirements. In addition, the functional units in the embodiments of the present disclosure can be integrated into a processing unit, or individually exist physically, or two or more of the units can be integrated into one unit. If the functions are implemented in the form of software functional units that are sold or used as independent products, they can be stored in a computer-readable storage medium. Based on such an understanding, essentially, the technical solution of the present disclosure, a part thereof that contributes to the prior art, or a part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium and includes instructions which enable a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present disclosure. The foregoing storage medium includes various media such as a USB drive, a portable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk that can store program codes. Those described above are only specific implementations of the present disclosure, and the protection scope of the present disclosure is not limited thereto. Any person skilled in the art can easily think of alterations or substitutions, which should be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be defined by the claims.
83,336
11863494
DETAILED DESCRIPTION Various aspects of the disclosure are described more fully hereinafter with reference to the accompanying drawings. This disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Based on the teachings herein one skilled in the art should appreciate that the scope of the disclosure is intended to cover any aspect of the disclosure disclosed herein, whether implemented independently of or combined with any other aspect of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim. Several aspects of telecommunication systems will now be presented with reference to various apparatuses and techniques. These apparatuses and techniques will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, modules, components, circuits, steps, processes, algorithms, and/or the like (collectively referred to as “elements”). These elements may be implemented using hardware, software, or combinations thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. It should be noted that while aspects may be described herein using terminology commonly associated with 3G and/or 4G wireless technologies, aspects of the present disclosure can be applied in other generation-based communication systems, such as 5G and later, including NR technologies. FIG.1is a diagram illustrating a network100in which aspects of the present disclosure may be practiced. The network100may be an LTE network or some other wireless network, such as a 5G or NR network. Wireless network100may include a number of BSs110(shown as BS110a, BS110b, BS110c, and BS110d) and other network entities. A BS is an entity that communicates with user equipment (UEs) and may also be referred to as a base station, a NR BS, a Node B, a gNB, a 5G node B (NB), an access point, a transmit receive point (TRP), and/or the like. Each BS may provide communication coverage for a particular geographic area. In 3GPP, the term “cell” can refer to a coverage area of a BS and/or a BS subsystem serving this coverage area, depending on the context in which the term is used. A BS may provide communication coverage for a macro cell, a pico cell, a femto cell, and/or another type of cell. A macro cell may cover a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by UEs with service subscription. A pico cell may cover a relatively small geographic area and may allow unrestricted access by UEs with service subscription. 
A femto cell may cover a relatively small geographic area (e.g., a home) and may allow restricted access by UEs having association with the femto cell (e.g., UEs in a closed subscriber group (CSG)). A BS for a macro cell may be referred to as a macro BS. A BS for a pico cell may be referred to as a pico BS. A BS for a femto cell may be referred to as a femto BS or a home BS. In the example shown inFIG.1, a BS110amay be a macro BS for a macro cell102a, a BS110bmay be a pico BS for a pico cell102b, and a BS110cmay be a femto BS for a femto cell102c. A BS may support one or multiple (e.g., three) cells. The terms “eNB”, “base station”, “NR BS”, “gNB”, “TRP”, “AP”, “node B”, “5G NB”, and “cell” may be used interchangeably herein. In some aspects, a cell may not necessarily be stationary, and the geographic area of the cell may move according to the location of a mobile BS. In some aspects, the BSs may be interconnected to one another and/or to one or more other BSs or network nodes (not shown) in the access network100through various types of backhaul interfaces such as a direct physical connection, a virtual network, and/or the like using any suitable transport network. Wireless network100may also include relay stations. A relay station is an entity that can receive a transmission of data from an upstream station (e.g., a BS or a UE) and send a transmission of the data to a downstream station (e.g., a UE or a BS). A relay station may also be a UE that can relay transmissions for other UEs. In the example shown inFIG.1, a relay station110dmay communicate with macro BS110aand a UE120din order to facilitate communication between BS110aand UE120d. A relay station may also be referred to as a relay BS, a relay base station, a relay, and/or the like. Wireless network100may be a heterogeneous network that includes BSs of different types, e.g., macro BSs, pico BSs, femto BSs, relay BSs, and/or the like. These different types of BSs may have different transmit power levels, different coverage areas, and different impacts on interference in wireless network100. For example, macro BSs may have a high transmit power level (e.g., 5 to 40 Watts) whereas pico BSs, femto BSs, and relay BSs may have lower transmit power levels (e.g., 0.1 to 2 Watts). A network controller130may couple to a set of BSs and may provide coordination and control for these BSs. Network controller130may communicate with the BSs via a backhaul. The BSs may also communicate with one another, e.g., directly or indirectly via a wireless or wireline backhaul. UEs120(e.g.,120a,120b,120c) may be dispersed throughout wireless network100, and each UE may be stationary or mobile. A UE may also be referred to as an access terminal, a terminal, a mobile station, a subscriber unit, a station, and/or the like. 
A UE may be a cellular phone (e.g., a smart phone), a personal digital assistant (PDA), a wireless modem, a wireless communication device, a handheld device, a laptop computer, a cordless phone, a wireless local loop (WLL) station, a tablet, a camera, a gaming device, a netbook, a smartbook, an ultrabook, a medical device or equipment, biometric sensors/devices, wearable devices (smart watches, smart clothing, smart glasses, smart wrist bands, smart jewelry (e.g., smart ring, smart bracelet)), an entertainment device (e.g., a music or video device, or a satellite radio), a vehicular component or sensor, smart meters/sensors, industrial manufacturing equipment, a global positioning system device, or any other suitable device that is configured to communicate via a wireless or wired medium. Some UEs may be considered machine-type communication (MTC) or evolved or enhanced machine-type communication (eMTC) UEs. MTC and eMTC UEs include, for example, robots, drones, remote devices, sensors, meters, monitors, location tags, and/or the like, that may communicate with a base station, another device (e.g., remote device), or some other entity. A wireless node may provide, for example, connectivity for or to a network (e.g., a wide area network such as Internet or a cellular network) via a wired or wireless communication link. Some UEs may be considered Internet-of-Things (IoT) devices, and/or may be implemented as NB-IoT (narrowband internet of things) devices. Some UEs may be considered a Customer Premises Equipment (CPE). UE120may be included inside a housing that houses components of UE120, such as processor components, memory components, and/or the like. In general, any number of wireless networks may be deployed in a given geographic area. Each wireless network may support a particular RAT and may operate on one or more frequencies. A RAT may also be referred to as a radio technology, an air interface, and/or the like. A frequency may also be referred to as a carrier, a frequency channel, and/or the like. Each frequency may support a single RAT in a given geographic area in order to avoid interference between wireless networks of different RATs. In some cases, NR or 5G RAT networks may be deployed. In some aspects, two or more UEs120(e.g., shown as UE120aand UE120e) may communicate directly using one or more sidelink channels (e.g., without using a base station110as an intermediary to communicate with one another). For example, the UEs120may communicate using peer-to-peer (P2P) communications, device-to-device (D2D) communications, a vehicle-to-everything (V2X) protocol (e.g., which may include a vehicle-to-vehicle (V2V) protocol, a vehicle-to-infrastructure (V2I) protocol, and/or the like), a mesh network, and/or the like. In this case, UE120may perform scheduling operations, resource selection operations, and/or other operations described elsewhere herein as being performed by the base station110. As indicated above,FIG.1is provided merely as an example. Other examples may differ from what is described with regard toFIG.1. FIG.2shows a block diagram of a design200of base station110and UE120, which may be one of the base stations and one of the UEs inFIG.1. Base station110may be equipped with T antennas234athrough234t, and UE120may be equipped with R antennas252athrough252r, where in general T≥1 and R≥1. 
At base station110, a transmit processor220may receive data from a data source212for one or more UEs, select one or more modulation and coding schemes (MCS) for each UE based at least in part on channel quality indicators (CQIs) received from the UE, process (e.g., encode and modulate) the data for each UE based at least in part on the MCS(s) selected for the UE, and provide data symbols for all UEs. Transmit processor220may also process system information (e.g., for semi-static resource partitioning information (SRPI) and/or the like) and control information (e.g., CQI requests, grants, upper layer signaling, and/or the like) and provide overhead symbols and control symbols. Transmit processor220may also generate reference symbols for reference signals (e.g., the cell-specific reference signal (CRS)) and synchronization signals (e.g., the primary synchronization signal (PSS) and secondary synchronization signal (SSS)). A transmit (TX) multiple-input multiple-output (MIMO) processor230may perform spatial processing (e.g., precoding) on the data symbols, the control symbols, the overhead symbols, and/or the reference symbols, if applicable, and may provide T output symbol streams to T modulators (MODs)232athrough232t. Each modulator232may process a respective output symbol stream (e.g., for OFDM and/or the like) to obtain an output sample stream. Each modulator232may further process (e.g., convert to analog, amplify, filter, and upconvert) the output sample stream to obtain a downlink signal. T downlink signals from modulators232athrough232tmay be transmitted via T antennas234athrough234t, respectively. According to various aspects described in more detail below, the synchronization signals can be generated with location encoding to convey additional information. At UE120, antennas252athrough252rmay receive the downlink signals from base station110and/or other base stations and may provide received signals to demodulators (DEMODs)254athrough254r, respectively. Each demodulator254may condition (e.g., filter, amplify, downconvert, and digitize) a received signal to obtain input samples. Each demodulator254may further process the input samples (e.g., for OFDM and/or the like) to obtain received symbols. A MIMO detector256may obtain received symbols from all R demodulators254athrough254r, perform MIMO detection on the received symbols if applicable, and provide detected symbols. A receive processor258may process (e.g., demodulate and decode) the detected symbols, provide decoded data for UE120to a data sink260, and provide decoded control information and system information to a controller/processor280. A channel processor may determine reference signal received power (RSRP), received signal strength indicator (RSSI), reference signal received quality (RSRQ), channel quality indicator (CQI), and/or the like. In some aspects, one or more components of UE120may be included in a housing. On the uplink, at UE120, a transmit processor264may receive and process data from a data source262and control information (e.g., for reports comprising RSRP, RSSI, RSRQ, CQI, and/or the like) from controller/processor280. Transmit processor264may also generate reference symbols for one or more reference signals. The symbols from transmit processor264may be precoded by a TX MIMO processor266if applicable, further processed by modulators254athrough254r(e.g., for DFT-s-OFDM, CP-OFDM, and/or the like), and transmitted to base station110. 
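The spatial processing (precoding) step described above can be sketched in a few lines of numpy: layer symbols are multiplied by a precoding matrix to produce one output stream per transmit antenna. The dimensions and the precoder below are arbitrary illustrative values, not parameters taken from the figure.

```python
# A small numpy sketch of the spatial processing step described above: layer
# symbols are multiplied by a precoding matrix to produce one output stream
# per transmit antenna. Dimensions and the precoder itself are arbitrary
# illustrative values.

import numpy as np

rng = np.random.default_rng(0)
num_layers, num_tx_antennas, num_symbols = 2, 4, 6

# QPSK-like layer symbols (num_layers x num_symbols)
layer_symbols = (rng.choice([-1, 1], (num_layers, num_symbols))
                 + 1j * rng.choice([-1, 1], (num_layers, num_symbols))) / np.sqrt(2)

# Arbitrary precoder mapping layers onto antenna ports (num_tx x num_layers)
precoder = rng.standard_normal((num_tx_antennas, num_layers)) \
           + 1j * rng.standard_normal((num_tx_antennas, num_layers))
precoder /= np.linalg.norm(precoder, axis=0, keepdims=True)

antenna_streams = precoder @ layer_symbols  # num_tx_antennas x num_symbols
print(antenna_streams.shape)  # (4, 6)
```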
At base station110, the uplink signals from UE120and other UEs may be received by antennas234, processed by demodulators232, detected by a MIMO detector236if applicable, and further processed by a receive processor238to obtain decoded data and control information sent by UE120. Receive processor238may provide the decoded data to a data sink239and the decoded control information to controller/processor240. Base station110may include communication unit244and communicate to network controller130via communication unit244. Network controller130may include communication unit294, controller/processor290, and memory 292. Controller/processor240of base station110, controller/processor280of UE120, and/or any other component(s) ofFIG.2may perform one or more techniques associated with UE override and/or improvement for enhanced type-II CSI, as described in more detail elsewhere herein. For example, controller/processor240of base station110, controller/processor280of UE120, and/or any other component(s) ofFIG.2may perform or direct operations of, for example, process400ofFIG.4, process600ofFIG.6, and/or other processes as described herein. Memories242and282may store data and program codes for base station110and UE120, respectively. A scheduler246may schedule UEs for data transmission on the downlink and/or uplink. In some aspects, UE120may include means for determining that an enhanced type-II CSI report configuration, associated with transmitting CSI feedback to a base station, is to be overridden; means for transmitting, based at least in part on determining that the enhanced type-II CSI report configuration is to be overridden, a CSI report using another CSI report configuration, wherein the CSI report includes the CSI feedback and an indication that the enhanced type-II CSI report configuration has been overridden; and/or the like. In some aspects, such means may include one or more components of UE120described in connection withFIG.2. In some aspects, UE120may include means for determining that an identity matrix is to be used as a frequency domain basis for an enhanced type-II CSI report configuration associated with transmitting CSI feedback to a base station; means for transmitting a CSI report using the identity matrix as the frequency domain basis for the enhanced type-II CSI report configuration; and/or the like. In some aspects, such means may include one or more components of UE120described in connection withFIG.2. As indicated above,FIG.2is provided merely as an example. Other examples may differ from what is described with regard toFIG.2. A BS (e.g., BS110) may transmit many beams to a UE (e.g., UE120). For example, the BS may generate the beams using an antenna panel that generates beams at a spatial and/or phase displacement from each other. The BS and the UE may select a set of beams that are to be used for communication between the BS and the UE. For example, the set of beams transmitted from the BS to the UE may be referred to herein as a communication link, a downlink, and/or the like. The communication link between the BS and the UE may propagate in a medium and/or through various geometric paths, which are collectively referred to herein as a channel between the BS and the UE. In some aspects, the UE may select a set of beams for communication with the BS. 
For example, the UE may select the set of beams based at least in part on the set of beams being associated with favorable characteristics (e.g., a satisfactory receive power, a satisfactory signal to interference plus noise (SINR) value, etc.). The UE may generate a codeword that indicates the set of beams and parameters to be used for using a codebook based at least in part on performing channel estimation of the channel between the BS and the UE. One such codebook is the type-II codebook, prescribed in 5G/NR. The type-II codebook may use a two-stage procedure to generate the codeword: a first stage wherein the set of beams is selected for a wideband of the communication link (e.g., sometimes referred to herein as $W_1$), and a second stage wherein linear combination is performed, for a set of subbands, using the set of beams for each set of subbands. The codeword may be based at least in part on the linear combination, and may indicate the set of beams and/or respective amplitudes, phase coefficients, and/or the like. Thus, the UE may provide an indication of channel state at the UE and may request the set of beams to be used for the UE. The type-II codebook may provide more precise specification of the channel state than a type-I codebook, which may provide a predefined codeword-based approach to specifying selected beams. Thus, the type-II codebook may be referred to as a high resolution codebook in comparison to the type-I codebook. The type-II codebook may improve multi-user multiple input multiple output (MU-MIMO) performance on the communication link. For one type of type-II codebook (e.g., the codebook specified in Release 15 of the 3GPP standard for 5G/NR), the precoder of the codebook is based at least in part on a linear combination of discrete Fourier transform (DFT) beams. The linear combination may define the precoder $W$ as $W = W_1 W_2$, wherein the spatial domain compression matrix is $W_1 = \begin{bmatrix} v_0\, v_1 \cdots v_{L-1} & 0 \\ 0 & v_0\, v_1 \cdots v_{L-1} \end{bmatrix}$, wherein $\{v_i\}_{i=0}^{L-1}$ are L spatial domain basis vectors of dimension $N_1 N_2 \times 1$ (mapped to the two polarizations, so 2L in total), $P = 2N_1N_2$ indicates a number of dimensions (sometimes represented as D), and the combination coefficient matrix $W_2$ is composed of $K = 2L\nu$ linear combination coefficients, where $\nu$ indicates a total number of layers. Each column in $W_2$ indicates the linear combination of complex coefficients (i.e., amplitude and phase) for one layer, wherein the amplitude coefficients are given by $\{p_{l,i}^{(1)} p_{l,i}^{(2)}\}_{i=0}^{2L-1}$ for $l = 0, \ldots, \nu-1$, and $p_{l,i}^{(1)}$ and $p_{l,i}^{(2)}$ are the wideband and subband coefficients, respectively. The phase coefficients are given by $\{c_{l,i}\}_{i=0}^{2L-1}$ for $l = 0, \ldots, \nu-1$, and $c_{l,i}$ is one of the 8 phase shift keying (8PSK) or the quadrature phase shift keying (QPSK) constellation points. The UE may report the above values and/or other values associated with channel estimation using channel state information (CSI) feedback. CSI feedback for the type-II codebook may include two parts: a first part, sometimes referred to as CSI part I, and a second part, sometimes referred to as CSI part II. In some cases, the first part may have a smaller payload than the second part, and/or may have a fixed payload. For example, the first part may have a payload size of less than approximately 50 bits, whereas the second part may have a variable payload size that may be dependent on the first part. In some cases, the second part may have a payload size of approximately 100 bits to 600 bits, although other values may be used. 
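As a worked illustration of the $W = W_1 W_2$ structure described above, the following numpy sketch builds a block-diagonal $W_1$ from L DFT beams and applies the 2L combining coefficients of one layer. The beam indices, amplitudes, and phases are illustrative placeholders rather than codebook-defined values.

```python
# Worked numpy sketch of the type-II structure W = W1 @ W2 described above:
# W1 is block diagonal over the two polarizations with the same L DFT beams in
# each block, and each column of W2 holds the 2L complex combining
# coefficients of one layer. Sizes are illustrative.

import numpy as np

N1, N2, L, num_layers = 4, 1, 2, 1
P = 2 * N1 * N2                       # number of ports (two polarizations)

def dft_beam(n_points: int, index: int) -> np.ndarray:
    k = np.arange(n_points)
    return np.exp(2j * np.pi * index * k / n_points) / np.sqrt(n_points)

# L spatial-domain basis vectors of dimension N1*N2 x 1
beams = np.stack([dft_beam(N1 * N2, i) for i in range(L)], axis=1)  # (N1N2, L)

# Block-diagonal spatial compression matrix, shape (P, 2L)
W1 = np.zeros((P, 2 * L), dtype=complex)
W1[:N1 * N2, :L] = beams
W1[N1 * N2:, L:] = beams

# 2L amplitude/phase combining coefficients for one layer
amp = np.array([[1.0], [0.5], [0.5], [0.25]])                  # (2L, num_layers)
phase = np.exp(1j * np.pi / 4 * np.arange(2 * L)).reshape(-1, 1)
W2 = amp * phase                                               # (2L, num_layers)

W = W1 @ W2                                                    # per-layer precoder, (P, num_layers)
print(W.shape)  # (8, 1)
```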
In some cases, the first part may identify one or more of: a rank indicator (RI) (e.g., 1 bit to indicate one layer $\nu = 1$ or two layers $\nu = 2$ when the configured maximum rank is 2); wideband and subband differential channel quality indicators (CQI), for which a total payload size may be dependent on the number of subbands (e.g., approximately 4+18×2=40 bits for 19 subbands); an indication of the number of non-zero wideband amplitude coefficients $Q_l$ for each layer; and/or the like. In some cases, the second part may identify one or more of: wideband and/or subband precoding matrix indicators (PMIs) including a spatial basis vector selection indication; wideband and subband amplitude coefficients; subband phase coefficients; and/or the like. In some cases, the type-II CSI feedback may use a compressed type-II precoder. This may reduce overhead of type-II CSI feedback. The compressed precoder may exploit the sparsity of the spatial domain and/or the frequency domain. For example, an example of a compressed type-II precoder $W$ is given by $W = W_1 \tilde{W}_2 W_f^H$, wherein the precoder matrix $W$ has $P = 2N_1N_2$ rows (representing the spatial domain and the number of ports) and $N_3$ columns (wherein $N_3$ is a frequency-domain compression unit of resource blocks or reporting subbands). The $W_1$ matrix, described above, is the spatial basis consisting of L beams per polarization group (hence a total of 2L beams). The $\tilde{W}_2$ matrix indicates all of the required linear combination complex coefficients (amplitude and co-phasing), similarly to what is described above. The $W_f$ matrix is composed of the basis vectors used to perform compression in the frequency domain, $W_f = [f_0\ f_1 \cdots f_{M-1}]$, where $\{f_m\}_{m=0}^{M-1}$ are $M$ orthogonal DFT vectors of size $N_3 \times 1$ for each spatial basis $i = 0, \ldots, 2L-1$. The above type-II CSI feedback may be referred to in some cases as enhanced or modified type-II CSI feedback (e.g., enhanced relative to an approach that does not use basis vectors in the spatial and frequency domains to compress feedback size). The CSI feedback for this enhanced type-II CSI feedback may include a spatial domain basis vector selection that is similar to the approach described in connection with the type-II CSI feedback configuration. The CSI feedback may further include a frequency-domain (FD) basis subset selection (wherein $M$ out of a total of $N_3$ basis vectors are selected). In some cases, common FD basis vectors for all the 2L spatial beams may be used, which is referred to herein as Alternative 1. In these cases, M basis vectors are dynamically selected and reported. The value of M may be configured by the network or reported by the UE. In other cases, referred to herein as Alternative 2, independent FD basis vectors may be used for each spatial domain basis vector, with potentially different numbers and/or selections of FD basis vectors for each spatial domain basis vector. The total number of FD basis vectors across all the 2L spatial beams may be configured. The enhanced type-II CSI feedback may further include the FD coefficients (e.g., amplitude and phase) in $\tilde{W}_2$. For Alternative 1 (the common FD basis vector subset selection), the enhanced type-II CSI feedback may report only a subset $K_0 < K = 2LM$ of the coefficients. For Alternative 2 (the independent basis subset selection), the enhanced type-II CSI feedback may report $K = \sum_{i=0}^{2L-1} M_i$ amplitude and phase coefficients, wherein $M_i$ is the number of FD basis vectors associated with one spatial beam. 
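The frequency-compressed structure $W = W_1 \tilde{W}_2 W_f^H$ outlined above can be sketched in the same way; the snippet below only reproduces the shapes and the compression structure, with random placeholder coefficients and an arbitrary choice of M DFT columns for $W_f$.

```python
# Sketch of the frequency-compressed precoder W = W1 @ W2_tilde @ Wf^H outlined
# above, using M orthogonal DFT vectors of length N3 as the frequency-domain
# basis. The coefficient values are random placeholders; only the shapes and
# the compression structure follow the description.

import numpy as np

rng = np.random.default_rng(1)
N1, N2, L, N3, M = 2, 2, 2, 13, 4
P = 2 * N1 * N2                                  # spatial dimension (ports)

# Block-diagonal W1 of shape (P, 2L) built from any 2L unit-norm beams.
beams = np.linalg.qr(rng.standard_normal((N1 * N2, L))
                     + 1j * rng.standard_normal((N1 * N2, L)))[0]
W1 = np.zeros((P, 2 * L), dtype=complex)
W1[:N1 * N2, :L] = beams
W1[N1 * N2:, L:] = beams

# Frequency-domain basis: M columns of the size-N3 DFT matrix, shape (N3, M)
n = np.arange(N3).reshape(-1, 1)
Wf = np.exp(-2j * np.pi * n * np.arange(M) / N3) / np.sqrt(N3)

# Compressed coefficients: 2L beams x M frequency taps (amplitude and co-phase)
W2_tilde = rng.standard_normal((2 * L, M)) + 1j * rng.standard_normal((2 * L, M))

W = W1 @ W2_tilde @ Wf.conj().T                  # (P, N3): one precoder per subband
print(W.shape)  # (8, 13)
```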
A variety of quantization and reporting options may be used, two examples of which are provided below. As a first example, for each of the K or K0FD coefficients, the enhanced type-II CSI feedback may use 3-bit amplitude and QPSK or 8PSK phase. As a second example, the enhanced type-II CSI feedback may report a 3-bit wideband amplitude for each beam or spatial domain basis vector, a 2-bit or 3-bit differential amplitude for each FD coefficient, and a QPSK or 8PSK phase bit. However, in some cases, it may be desirable for the UE to override and/or alter a configuration that would otherwise cause the UE to transmit CSI feedback using the conventional enhanced type-II CSI report configuration described above. For example, if a configured number of FD basis vectors and/or a number of coefficients is insufficient, reconstruction performance may be degraded if the actual channel delay spread is significant. As a particular example, if a precoder comprises a large delay spread, a power of the precoder may not be well captured by the configured number of FD basis vectors and/or the number of coefficients. Thus, in some cases, it may be desirable for the UE to override the CSI report such that the UE uses a CSI report configuration other than the conventional enhanced type-II CSI report configuration. Similarly, in some cases, it may be desirable for the UE to use an improved type-II CSI report configuration rather than the conventional enhanced type-II CSI report configuration. Some techniques and apparatuses described herein provide UE override for enhanced type-II CSI. In some aspects, the UE may determine that an enhanced type-II CSI report configuration, associated with transmitting CSI feedback to a base station, is to be overridden, and may transmit a CSI report using another CSI report configuration. In some aspects, in such a case, the CSI report may include the CSI feedback and an indication that the enhanced type-II CSI report configuration has been overridden. Additionally, some techniques and apparatuses described herein provide an improved enhanced type-II CSI report configuration. In some aspects, the UE may determine that an identity matrix is to be used as a frequency domain basis for an enhanced type-II CSI report configuration associated with transmitting CSI feedback to a base station (e.g., rather than using a DFT matrix as the frequency basis), and may transmit a CSI report using the identity matrix as the frequency domain basis. In the case of either the override of the enhanced type-II CSI report configuration or use of the improved enhanced type-II CSI report configuration, communication performance between the UE and the BS is improved based at least in part on improved accuracy in reporting of CSI feedback. FIG.3is a diagram illustrating an example300of UE override for enhanced type-II CSI, in accordance with various aspects of the present disclosure. As shown, example300includes a UE120and a BS110that are associated with a communication link. As further shown, the communication link may be associated with a channel. For example, the communication link may be referred to as the channel, or may propagate via the channel. As shown inFIG.3, and by reference number305, BS110may transmit a reference signal transmission to UE120. The reference signal transmission may include, for example, a CSI reference signal, a demodulation reference signal, and/or the like. As shown by reference number310, UE120may perform CSI measurements on the reference signal transmission. 
For example, UE120may perform channel estimation or another operation based at least in part on the reference signal transmissions in order to determine CSI feedback. As shown by reference number315, UE120may determine that an enhanced type-II CSI report configuration, associated with transmitting the CSI feedback to BS110, is to be overridden. For example, UE120may be configured with a codebook indicating that UE120is to use the enhanced type-II CSI report configuration in association with transmitting a CSI report to BS110. However, in some aspects, as indicated by reference number315, UE120may determine that UE120is to override the enhanced type-II CSI report configuration. In some aspects, UE120may determine that the enhanced type-II CSI report configuration is to be overridden based at least in part on determining that a metric, associated with the enhanced type-II CSI report configuration, is inferior to a metric associated with a type-I CSI report configuration. For example, UE120may be configured with the type-I CSI report configuration. Here, based at least in part on the CSI measurements performed by UE120, UE120may compute a metric for the enhanced type-II CSI report configuration and a comparable metric for the type-I CSI report configuration. In this example, if the metric associated with the enhanced type-II CSI report configuration is inferior to (e.g., less than by a threshold amount) the metric associated with the type-I CSI report configuration, then UE120may determine that UE120is to override the enhanced type-II CSI report configuration. In some aspects, the metric used by UE120to make this determination may include a signal to noise ratio (SNR), a capacity metric, a channel quality indicator (CQI), and/or the like. In some aspects, UE120may determine that the enhanced type-II CSI report configuration is to be overridden based at least in part on determining that a difference between a metric, associated with the enhanced type-II CSI report configuration, and a metric associated with a type-II CSI report configuration satisfies a threshold. For example, UE120may be configured with the type-II CSI report configuration. Here, based at least in part on the CSI measurements performed by UE120, UE120may compute a metric for the enhanced type-II CSI report configuration and a comparable metric for the type-II CSI report configuration. In this example, if a difference between the metric associated with the enhanced type-II CSI report configuration and the metric associated with the type-II CSI report configuration satisfies the threshold, then UE120may determine that UE120is to override the enhanced type-II CSI report configuration. In some aspects, the metric used by UE120to make this determination may include, for example, an SNR, a capacity metric, a CQI, and/or the like. In some aspects, the threshold may be predefined on UE120(e.g., in accordance with an applicable 3GPP Standard), or may be configured on UE120(e.g., by BS110). In some aspects, UE120may determine that the enhanced type-II CSI report configuration is to be overridden based at least in part on a condition configured on or determined by UE120. The condition may include, for example, a determination by UE120that a configured parameter (e.g., a configured number of FD basis vectors, a configured number of coefficients, and/or the like) fails to satisfy a threshold (e.g., is less than a particular number) and/or is otherwise determined to be insufficient to a particular degree. 
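A minimal decision helper mirroring the metric comparison described above might look as follows; the metric names, scale, and threshold value are illustrative assumptions rather than values defined by the disclosure.

```python
# Minimal decision helper mirroring the override logic described above: the
# enhanced type-II configuration is overridden when its predicted metric is
# worse than the fallback configuration's metric by at least a threshold.
# Metric names and the threshold value are illustrative assumptions.

def should_override(enhanced_metric: float,
                    fallback_metric: float,
                    threshold_db: float = 1.0) -> bool:
    """Metrics could be SNR, capacity, or CQI-like values on a common scale."""
    return (fallback_metric - enhanced_metric) >= threshold_db

# Example: the fallback type-I/type-II reporting predicts 1.5 dB better SNR,
# so the UE would override the enhanced type-II report configuration.
print(should_override(enhanced_metric=10.0, fallback_metric=11.5))  # True
```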
As shown by reference number320, UE120may transmit, based at least in part on determining that the enhanced type-II CSI report configuration is to be overridden, a CSI report using another CSI report configuration. As shown, the CSI report may include the CSI feedback and an indication that the enhanced type-II CSI report configuration has been overridden. In some aspects, UE120may include the indication that the enhanced type-II CSI report configuration has been overridden by including a particular combination of values (e.g., one or more unused and/or invalid values) in the CSI report. For example, UE120may include the indication in the CSI report by including, in the CSI report, a particular combination of one or more unused and/or invalid rank indicators (RI), one or more unused and/or invalid precoding matrix indicators (PMI), one or more unused and/or invalid channel quality indicators (CQI), and/or the like. In such a case, the use of the particular combination may indicate, to BS110, that the enhanced type-II CSI report configuration has been overridden. As an example, in some aspects, UE120may include the indication that the enhanced type-II CSI report configuration has been overridden in the CSI report by using an out of range (OOR) CQI included in the CSI report. In such a case, the use of the OOR CQI may indicate, to BS110, that the enhanced type-II CSI report configuration has been overridden. As another example, in some aspects, UE120may include the indication that the enhanced type-II CSI report configuration has been overridden in the CSI report by using a dedicated PMI included in the CSI report. For example, the dedicated PMI may identify a number of FD basis vectors as zero, a number of coefficients as zero, amplitudes of all coefficients as zero, and/or the like. In such a case, the use of the dedicated PMI may indicate, to BS110, that the enhanced type-II CSI report configuration has been overridden. As another example, in some aspects, UE120may include the indication that the enhanced type-II CSI report configuration has been overridden using an indication that an identity matrix has been selected as a frequency domain basis (e.g., by including a dedicated basis selection indication that indicates that an identity matrix was selected as the frequency domain basis). In such a case, the other CSI report configuration used by UE120is an enhanced type-II CSI report configuration using the identity matrix as the frequency domain basis. In some aspects, the use of the identity matrix results in per subband level reporting in the CSI report. Additional detail regarding selection and use of the identity matrix as the frequency domain basis is provided below with regard toFIGS.5and6. In some aspects, UE120may include the indication that the enhanced type-II CSI report configuration has been overridden by including an addition in the CSI report (e.g., an additional item of information that would otherwise not be included in the CSI report). In some aspects, the addition may include, for example, one or more bits added to the CSI report. In such a case, the addition to the CSI report may indicate, to BS110, that the enhanced type-II CSI report configuration has been overridden. In some aspects, UE120may include the indication that the enhanced type-II CSI report configuration has been overridden in part 1 of the CSI report and/or in part 2 of the CSI report. 
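One possible encoding of the override indication described above is sketched below, using either a reserved out-of-range CQI index or a dedicated "zero FD basis / zero-amplitude coefficients" PMI as the marker. The concrete field values and report structure are assumptions made for illustration only.

```python
# Sketch of one way a CSI report could carry the override indication described
# above, using a reserved out-of-range CQI index or a dedicated "all-zero
# coefficients" PMI as the marker. The exact field values are illustrative
# assumptions, not specification-defined encodings.

from dataclasses import dataclass, field
from typing import List

OOR_CQI = 0          # assumed reserved CQI index meaning "out of range"

@dataclass
class CsiReport:
    cqi: int
    num_fd_basis: int
    coefficient_amplitudes: List[float] = field(default_factory=list)

def mark_overridden(report: CsiReport, use_oor_cqi: bool = True) -> CsiReport:
    if use_oor_cqi:
        report.cqi = OOR_CQI                     # OOR CQI signals the override
    else:
        report.num_fd_basis = 0                  # dedicated PMI: zero FD basis /
        report.coefficient_amplitudes = []       # zero-amplitude coefficients
    return report

def is_overridden(report: CsiReport) -> bool:
    # Either marker is treated as the override indication in this sketch.
    return report.cqi == OOR_CQI or report.num_fd_basis == 0

report = CsiReport(cqi=7, num_fd_basis=4, coefficient_amplitudes=[0.9, 0.4])
print(is_overridden(mark_overridden(report)))  # True
```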
In some aspects, the enhanced type-II CSI report configuration may be associated with a different codebook than that of the other CSI report configuration (i.e., the CSI report configuration with which UE120overrides the enhanced type-II CSI report configuration). For example, the enhanced type-II CSI report configuration may be associated with a first codebook configured on UE120, and the other CSI report configuration may be associated with a second codebook configured on UE120. In such a case, in some aspects, the other CSI report configuration may be a type-II CSI report configuration that is associated with the second codebook. For example, when the first codebook comprises Release 16 Type-II CSI reporting, the other report configuration may be associated with a codebook that comprises Release 15 Type-II CSI reporting. As another example, when the first codebook comprises Release 17 Type-II CSI reporting, the other report configuration may be associated with a codebook that comprises Release 16 Type-II CSI reporting. Alternatively, the other CSI report configuration may be a type-I CSI report configuration that is associated with the second codebook. In some aspects, the other CSI report configuration is a simplified enhanced type-II CSI report configuration. In other words, UE120may be configured with a simplified enhanced type-II CSI report configuration (e.g., a configuration that uses fewer and/or simpler parameters, matrices, and/or the like), and may override the enhanced type-II CSI report configuration using the simplified enhanced type-II CSI report configuration. In some aspects, the other CSI report configuration is a modified enhanced type-II CSI report configuration. In other words, UE120may be configured with a modified enhanced type-II CSI report configuration (e.g., a configuration that uses different and/or alternate parameters, matrices, and/or the like), and may override the enhanced type-II CSI report configuration using the modified enhanced type-II CSI report configuration. In some aspects, when using the other CSI report configuration in association with transmitting the CSI report, UE120may override one or more parameters of the other CSI report configuration. For example, UE120may override a quantization level, a spatial domain basis vector, a subband size, and/or another parameter of the other CSI report configuration when transmitting the CSI report. In some aspects, based at least in part on the other CSI report configuration, UE120may include information associated with a single coefficient and/or information associated with a single basis in the CSI report. For example, the other CSI report configuration may indicate that information associated with a single coefficient and/or information associated with a single basis is to be included in the CSI report, and UE120may transmit the CSI report according to the other CSI report configuration. In some aspects, the information associated with the single coefficient may include information that identifies the single coefficient or information that identifies a location of a strongest coefficient (e.g., such that the CSI report includes information associated with the single coefficient and/or the single basis). In some aspects, based at least in part on the other CSI report configuration, UE120may include only a spatial domain compression matrix (W1) in the CSI report. 
For example, the other CSI report configuration may indicate that only a spatial domain compression matrix is to be included in the CSI report, and UE120may transmit the CSI report according to the other CSI report configuration (e.g., such that the CSI report includes only W1). In some aspects, based at least in part on the other CSI report configuration, UE120may not include CSI part II in the CSI report. For example, the other CSI report configuration may indicate that CSI part II is not to be included in the CSI report, and UE120may transmit the CSI report according to the other CSI report configuration (e.g., such that the CSI report does not include part II). As shown by reference number325, BS110(and/or UE120) may perform communication on the communication link based at least in part on the CSI feedback included in the CSI report. For example, BS110may receive the CSI report, and may generate one or more beamformed beams for UE120using phase and amplitude FD coefficients, one or more spatial domain basis vectors, one or more frequency domain basis vectors, and/or other information included in the CSI feedback. In this way, UE120may override enhanced type-II CSI in order to improve accuracy in reporting of CSI feedback (e.g., in a case when the conventional enhanced type-II CSI report configuration is insufficient), thereby improving communication performance between UE120and BS110. As indicated above,FIG.3is provided as an example. Other examples may differ from what is described with respect toFIG.3. FIG.4is a diagram illustrating an example process400performed, for example, by a UE, in accordance with various aspects of the present disclosure. Example process400is an example where a UE (e.g., UE120) performs override for enhanced type-II CSI. As shown inFIG.4, in some aspects, process400may include determining that an enhanced type-II CSI report configuration, associated with transmitting CSI feedback to a base station, is to be overridden (block410). For example, the UE may determine (e.g., using antenna252, DEMOD254, MIMO detector256, receive processor258, transmit processor264, controller/processor280, and/or the like) that an enhanced type-II CSI report configuration, associated with transmitting CSI feedback to a base station, is to be overridden, as described above. As shown inFIG.4, in some aspects, process400may include transmitting, based at least in part on determining that the enhanced type-II CSI report configuration is to be overridden, a CSI report using another CSI report configuration (block420). For example, the UE may transmit (e.g., using antenna252, MOD254, TX MIMO processor266, transmit processor264, controller/processor280, and/or the like), based at least in part on determining that the enhanced type-II CSI report configuration is to be overridden, a CSI report using another CSI report configuration, as described above. In some aspects, the CSI report includes the CSI feedback and an indication that the enhanced type-II CSI report configuration has been overridden. Process400may include additional aspects, such as any single aspect and/or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. 
In a first aspect, the determination that the enhanced type-II CSI report configuration is to be overridden is made based at least in part on determining that a metric, associated with the enhanced type-II CSI report configuration, is inferior to a metric associated with a type-I CSI report configuration. In a second aspect, alone or in combination with the first aspect, the metric associated with the enhanced type-II CSI report configuration and the metric associated with the type-I CSI report configuration include at least one of: a signal to noise ratio (SNR); a capacity metric; or a channel quality indicator (CQI). In a third aspect, alone or in combination with any one or more of the first and second aspects, the determination that the enhanced type-II CSI report configuration is to be overridden is made based at least in part on determining that a difference between a metric, associated with the enhanced type-II CSI report configuration, and a metric associated with a type-II CSI report configuration, satisfies a threshold. In a fourth aspect, alone or in combination with any one or more of the first through third aspects, the metric associated with the enhanced type-II CSI report configuration and the metric associated with the type-II CSI report configuration include at least one of: a signal to noise ratio (SNR); a capacity metric; or a channel quality indicator (CQI). In a fifth aspect, alone or in combination with any one or more of the first through fourth aspects, the threshold is predefined on the UE. In a sixth aspect, alone or in combination with any one or more of the first through fifth aspects, the threshold is configured on the UE by the base station. In a seventh aspect, alone or in combination with any one or more of the first through sixth aspects, the determination that the enhanced type-II CSI report configuration is to be overridden is made based at least in part on an override condition configured on or determined by the UE. In an eighth aspect, alone or in combination with any one or more of the first through seventh aspects, the indication that the enhanced type-II CSI report configuration has been overridden is included in the CSI report using a combination of unused or invalid values associated with at least one of: a rank indicator (RI); a precoding matrix indicator (PMI); or a channel quality indicator (CQI). In a ninth aspect, alone or in combination with any one or more of the first through eighth aspects, the indication that the enhanced type-II CSI report configuration has been overridden is an out of range (OOR) channel quality indicator (CQI) included in the CSI report. In a tenth aspect, alone or in combination with any one or more of the first through ninth aspects, the indication that the enhanced type-II CSI report configuration has been overridden is a dedicated precoding matrix indicator (PMI) included in the CSI report. In an eleventh aspect, alone or in combination with any one or more of the first through tenth aspects, the dedicated PMI identifies a number of frequency domain (FD) basis vectors as zero or a number of coefficients as zero. In a twelfth aspect, alone or in combination with any one or more of the first through eleventh aspects, the dedicated PMI identifies amplitudes of all coefficients as zero. 
In a thirteenth aspect, alone or in combination with any one or more of the first through twelfth aspects, the indication that the enhanced type-II CSI report configuration has been overridden is included in the CSI report using an addition to the CSI report. In a fourteenth aspect, alone or in combination with any one or more of the first through thirteenth aspects, the addition includes one or more bits in the CSI report. In a fifteenth aspect, alone or in combination with any one or more of the first through fourteenth aspects, the indication that the enhanced type-II CSI report configuration has been overridden is included in part I of the CSI report. In a sixteenth aspect, alone or in combination with any one or more of the first through fifteenth aspects, the indication that the enhanced type-II CSI report configuration has been overridden is included in part II of the CSI report. In a seventeenth aspect, alone or in combination with any one or more of the first through sixteenth aspects, the enhanced type-II CSI report configuration is associated with a first codebook configured on the UE, and the other CSI report configuration is associated with a second codebook configured on the UE. In an eighteenth aspect, alone or in combination with any one or more of the first through seventeenth aspects, the other CSI report configuration is a type-II CSI report configuration associated with the second codebook. In a nineteenth aspect, alone or in combination with any one or more of the first through eighteenth aspects, the other CSI report configuration is a type-I CSI report configuration associated with the second codebook. In a twentieth aspect, alone or in combination with any one or more of the first through nineteenth aspects, the other CSI report configuration is a simplified enhanced type-II CSI report configuration. In a twenty-first aspect, alone or in combination with any one or more of the first through twentieth aspects, the other CSI report configuration is a modified type-II CSI report configuration. In a twenty-second aspect, alone or in combination with any one or more of the first through twenty-first aspects, one or more parameters, associated with the other CSI report configuration, are overridden in the CSI report. In a twenty-third aspect, alone or in combination with any one or more of the first through twenty-second aspects, based at least in part on the other CSI report configuration, the CSI report includes information associated with a single coefficient and information associated with a single basis. In a twenty-fourth aspect, alone or in combination with any one or more of the first through twenty-third aspects, the information associated with the single coefficient includes information that identifies the single coefficient. In a twenty-fifth aspect, alone or in combination with any one or more of the first through twenty-fourth aspects, the information associated with the single coefficient includes information that identifies a location of a strongest coefficient. In a twenty-sixth aspect, alone or in combination with any one or more of the first through twenty-fifth aspects, based at least in part on the other CSI report configuration, the CSI report includes only a spatial domain compression matrix (W1). In a twenty-seventh aspect, alone or in combination with any one or more of the first through twenty-sixth aspects, based at least in part on the other CSI report configuration, the CSI report does not include part II. 
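The thirteenth through sixteenth aspects describe carrying the override indication as one or more added bits in CSI part I or part II. The following sketch shows one hypothetical packing of such a flag alongside an RI and a wideband CQI; the field widths are arbitrary assumptions and do not reflect any standardized payload format.

```python
# Illustrative bit packing only: an added 1-bit override flag carried with CSI part I.

def pack_csi_part1(rank_indicator: int, wideband_cqi: int, overridden: bool) -> int:
    """Pack RI (2 bits), wideband CQI (4 bits), and a 1-bit override flag into an integer."""
    assert 0 <= rank_indicator < 4 and 0 <= wideband_cqi < 16
    bits = (rank_indicator & 0b11) << 5
    bits |= (wideband_cqi & 0b1111) << 1
    bits |= int(overridden)
    return bits

def unpack_csi_part1(bits: int) -> dict:
    return {
        "rank_indicator": (bits >> 5) & 0b11,
        "wideband_cqi": (bits >> 1) & 0b1111,
        "overridden": bool(bits & 0b1),
    }

payload = pack_csi_part1(rank_indicator=1, wideband_cqi=9, overridden=True)
print(unpack_csi_part1(payload))   # {'rank_indicator': 1, 'wideband_cqi': 9, 'overridden': True}
```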
In a twenty-eighth aspect, alone or in combination with any one or more of the first through twenty-seventh aspects, the indication that the enhanced type-II CSI report configuration has been overridden is an indication that an identity matrix has been selected as a frequency domain basis, where the other CSI report configuration is an enhanced type-II CSI report configuration using the identity matrix as the frequency domain basis. In a twenty-ninth aspect, alone or in combination with any one or more of the first through twenty-eighth aspects, the use of the identity matrix results in per subband level reporting in the CSI report. In a thirtieth aspect, alone or in combination with any one or more of the first through twenty-ninth aspects, the identity matrix is used based at least in part on determining that a metric, associated with using a DFT matrix as the frequency basis, is inferior to a metric associated with using the identity matrix as the frequency basis. In a thirty-first aspect, alone or in combination with any one or more of the first through thirtieth aspects, the metric associated with using the DFT matrix as the frequency basis and the metric associated with using the identity matrix include at least one of: a SNR; a capacity metric; or a CQI. In a thirty-second aspect, alone or in combination with any one or more of the first through thirty-first aspects, the identity matrix is used based at least in part on determining that a difference between a metric associated with using a DFT matrix as the frequency basis, and a metric associated with using the identity matrix as the frequency basis, satisfies a threshold. In a thirty-third aspect, alone or in combination with any one or more of the first through thirty-second aspects the metric associated with using the DFT matrix as the frequency basis and the metric associated with using the identity matrix include at least one of: a SNR; a capacity metric; or a CQI. In a thirty-fourth aspect, alone or in combination with any one or more of the first through thirty-third aspects the threshold is predefined on the UE. In a thirty-fifth aspect, alone or in combination with any one or more of the first through thirty-fourth aspects the threshold is configured on the UE by the base station. In a thirty-sixth aspect, alone or in combination with any one or more of the first through thirty-fifth aspects the determination that the identity matrix is to be used is made based at least in part on a selection condition configured on or determined by the UE. AlthoughFIG.4shows example blocks of process400, in some aspects, process400may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.4. Additionally, or alternatively, two or more of the blocks of process400may be performed in parallel. FIG.5is a diagram illustrating an example500of UE override for enhanced type-II CSI, in accordance with various aspects of the present disclosure. As shown, example500includes a UE120and a BS110that are associated with a communication link. As further shown, the communication link may be associated with a channel. For example, the communication link may be referred to as the channel, or may propagate via the channel. As shown inFIG.5, and by reference number505, BS110may transmit a reference signal transmission to UE120. The reference signal transmission may include, for example, a CSI reference signal, a demodulation reference signal, and/or the like. 
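To make the twenty-eighth through thirty-sixth aspects concrete, the following numerical sketch builds the two candidate frequency-domain bases: a truncated DFT basis that compresses across the N3 subbands, and the identity matrix, which keeps one coefficient per subband (per subband level reporting). It illustrates only the basis choice and is not a standardized codebook construction.

```python
# Minimal numerical sketch of the two frequency-domain basis options.
import numpy as np

def dft_fd_basis(n3: int, m: int) -> np.ndarray:
    """First m columns of an orthonormal N3-point DFT matrix (compressing FD basis)."""
    n = np.arange(n3)
    return np.exp(-2j * np.pi * np.outer(n, n[:m]) / n3) / np.sqrt(n3)

def identity_fd_basis(n3: int) -> np.ndarray:
    """Identity matrix as the frequency-domain basis: no compression across subbands."""
    return np.eye(n3)

n3, m = 8, 3
c = np.random.default_rng(1).standard_normal(n3) + 0j   # per-subband coefficients for one beam
wf_dft = dft_fd_basis(n3, m)
wf_eye = identity_fd_basis(n3)

# With the identity basis the reported coefficients equal the per-subband values exactly;
# with the truncated DFT basis only a projection onto m basis vectors is reported.
print(np.allclose(wf_eye @ (wf_eye.T @ c), c))              # True: lossless, per-subband
print(np.linalg.norm(c - wf_dft @ (wf_dft.conj().T @ c)))   # residual after compression
```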
As shown by reference number510, UE120may perform CSI measurements on the reference signal transmission. For example, UE120may perform channel estimation or another operation based at least in part on the reference signal transmissions in order to determine CSI feedback. As shown by reference number515, UE120may determine that an identity matrix is to be used as a frequency domain basis for an enhanced type-II CSI report configuration associated with transmitting CSI feedback to BS110. For example, UE120may be configured with a codebook indicating that UE120is to use the enhanced type-II CSI report configuration in association with transmitting a CSI report to BS110. However, as indicated by reference number515, UE120may determine that UE120is to use an identity matrix as a frequency basis for the enhanced type-II CSI report configuration, in some aspects. In some aspects, UE120may determine that UE120is to use the identity matrix as the frequency basis for the enhanced type-II CSI report configuration based at least in part on determining that a metric, associated with using a DFT matrix as the frequency basis for the enhanced type-II CSI report configuration, is inferior to a metric associated with using the identity matrix as the frequency basis for the enhanced type-II CSI report configuration. For example, based at least in part on the CSI measurements performed by UE120, UE120may compute a metric for the enhanced type-II CSI report configuration using the identity matrix as the frequency basis and a comparable metric for the enhanced type-II CSI report configuration using the DFT matrix as the frequency basis. In this example, if the metric associated with using the DFT matrix is inferior to (e.g., less than, by a threshold amount) the metric associated with the identity matrix, then UE120may determine that UE120is to use the identity matrix as the frequency basis for the enhanced type-II CSI report configuration. In some aspects, the metric used by UE120to make this determination may include a signal to noise ratio (SNR), a capacity metric, a channel quality indicator (CQI), and/or the like. In some aspects, UE120may determine that UE120is to use the identity matrix as the frequency basis for the enhanced type-II CSI report configuration based at least in part on determining that a difference between a metric associated with using a DFT matrix as the frequency basis for the enhanced type-II CSI report configuration, and a metric associated with using the identity matrix as the frequency basis for the enhanced type-II CSI report configuration, satisfies a threshold. For example, based at least in part on the CSI measurements performed by UE120, UE120may compute a metric for the enhanced type-II CSI report configuration using the identity matrix as the frequency basis and a comparable metric for the enhanced type-II CSI report configuration using the DFT matrix as the frequency basis. In this example, if a difference between the metric associated using the DFT matrix and the metric associated with using the identity matrix satisfies a threshold, then UE120may determine that UE120is to use the identity matrix as the frequency basis for the enhanced type-II CSI report configuration. In some aspects, the metric used by UE120to make this determination may include, for example, an SNR, a capacity metric, a CQI, and/or the like. In some aspects, the threshold may be predefined on UE120(e.g., in accordance with an applicable 3GPP standard), or may be configured on UE120(e.g., by BS110). 
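A hedged sketch of the selection logic described above follows: a comparable metric is computed for the DFT frequency basis and for the identity frequency basis, and the identity matrix is chosen when the DFT metric is inferior by at least a threshold. Captured energy stands in here for an SNR, capacity, or CQI metric, and the threshold value and its source (predefined versus configured by the base station) are abstracted away.

```python
# Illustrative comparison only: choose between a DFT FD basis and the identity FD basis.
import numpy as np

def captured_energy(coeffs: np.ndarray, basis: np.ndarray) -> float:
    """Fraction of per-subband coefficient energy captured by a given FD basis."""
    proj = basis @ (np.linalg.pinv(basis) @ coeffs)
    return float(np.linalg.norm(proj) ** 2 / np.linalg.norm(coeffs) ** 2)

def choose_fd_basis(coeffs, dft_basis, threshold=0.05):
    metric_dft = captured_energy(coeffs, dft_basis)
    metric_identity = 1.0                       # the identity basis is lossless by construction
    use_identity = (metric_identity - metric_dft) >= threshold
    return ("identity" if use_identity else "dft"), metric_dft

rng = np.random.default_rng(2)
n3, m = 12, 2
coeffs = rng.standard_normal(n3) + 1j * rng.standard_normal(n3)
n = np.arange(n3)
dft = np.exp(-2j * np.pi * np.outer(n, n[:m]) / n3) / np.sqrt(n3)
print(choose_fd_basis(coeffs, dft, threshold=0.05))
```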
In some aspects, UE120may determine that UE120is to use the identity matrix as the frequency basis for the enhanced type-II CSI report configuration based at least in part on a condition configured on or determined by UE120. The condition may include, for example, a determination by UE120that a configured parameter (e.g., a configured number of FD basis vectors, a configured number of coefficients, and/or the like) fails to satisfy a threshold (e.g., is less than a particular number) and/or is otherwise determined to be insufficient to a particular degree. As shown by reference number520, UE120may transmit a CSI report using the identity matrix as the frequency domain basis for the enhanced type-II CSI report configuration. In some aspects, the use of the identity matrix results in per subband level reporting in the CSI report (e.g., rather than a frequency domain compressed version of CSI reporting). In some aspects, UE120may transmit the CSI report in a manner similar to that described above in association withFIG.3. As shown by reference number525, BS110(and/or UE120) may perform communication on the communication link based at least in part on the CSI feedback. For example, BS110may generate one or more beamformed beams for UE120using phase and amplitude FD coefficients, one or more spatial domain basis vectors, one or more frequency domain basis vectors, and/or other information included in the CSI feedback. In this way, UE120may use an improved enhanced type-II CSI report configuration in order to improve accuracy in reporting of CSI feedback (e.g., in a case when the conventional enhanced type-II CSI report configuration is insufficient), thereby improving communication performance between UE120and BS110. As indicated above,FIG.5is provided as an example. Other examples may differ from what is described with respect toFIG.5. FIG.6is a diagram illustrating an example process600performed, for example, by a UE, in accordance with various aspects of the present disclosure. Example process600is an example where a UE (e.g., UE120) performs improvement of enhanced type-II CSI. As shown inFIG.6, in some aspects, process600may include determining that an identity matrix is to be used as a frequency domain basis for an enhanced type-II CSI report configuration associated with transmitting CSI feedback to a base station (block610). For example, the UE may determine (e.g., using antenna252, DEMOD254, MIMO detector256, receive processor258, transmit processor264, controller/processor280, and/or the like) that an identity matrix is to be used as a frequency domain basis for an enhanced type-II CSI report configuration associated with transmitting CSI feedback to a base station, as described above. As shown inFIG.6, in some aspects, process600may include transmitting a CSI report using the identity matrix as the frequency domain basis for the enhanced type-II CSI report configuration (block620). For example, the UE may transmit (e.g., using antenna252, MOD254, TX MIMO processor266, transmit processor264, controller/processor280, and/or the like) a CSI report using the identity matrix as the frequency domain basis for the enhanced type-II CSI report configuration, as described above. Process600may include additional aspects, such as any single aspect and/or any combination of aspects described herein and/or in connection with one or more other processes described elsewhere herein. In a first aspect, the use of the identity matrix results in per subband level reporting in the CSI report. 
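The following sketch illustrates the condition-based trigger described above, in which configured parameters that are judged insufficient (for example, too few FD basis vectors or coefficients) cause the UE to fall back to the identity frequency basis and report per subband. The minimum values used here are invented for the example.

```python
# Illustrative trigger and per-subband report assembly; the minima are assumptions.

MIN_FD_BASIS_VECTORS = 2   # assumption
MIN_COEFFICIENTS = 4       # assumption

def use_identity_basis(configured_fd_vectors: int, configured_coefficients: int) -> bool:
    """Return True when the configured compression is judged insufficient."""
    return (configured_fd_vectors < MIN_FD_BASIS_VECTORS or
            configured_coefficients < MIN_COEFFICIENTS)

def per_subband_report(per_subband_coeffs):
    """With the identity basis, the report simply carries one coefficient per subband."""
    return {"fd_basis": "identity", "coefficients": list(per_subband_coeffs)}

if use_identity_basis(configured_fd_vectors=1, configured_coefficients=8):
    print(per_subband_report([0.9, 0.7, 0.8, 0.65]))
```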
In a second aspect, alone or in combination with any one or more of the first and second aspects, the determination that the identity matrix is to be used is made based at least in part on determining that a metric, associated with using a DFT matrix as the frequency basis, is inferior to a metric associated with using the identity matrix as the frequency basis. In a third aspect, alone or in combination with any one or more of the first and second aspects, the metric associated with using the DFT matrix as the frequency basis and the metric associated with using the identity matrix include at least one of: a SNR; a capacity metric; or a CQI. In a fourth aspect, alone or in combination with any one or more of the first through third aspects, the determination that the identity matrix is to be used is made based at least in part on determining that a difference between a metric associated with using a DFT matrix as the frequency basis, and a metric associated with using the identity matrix as the frequency basis, satisfies a threshold. In a fifth aspect, alone or in combination with any one or more of the first through fourth aspects, the metric associated with using the DFT matrix as the frequency basis and the metric associated with using the identity matrix include at least one of: a SNR; a capacity metric; or a CQI. In a sixth aspect, alone or in combination with any one or more of the first through fifth aspects, the threshold is predefined on the UE. In a seventh aspect, alone or in combination with any one or more of the first through sixth aspects, the threshold is configured on the UE by the base station. In an eighth aspect, alone or in combination with any one or more of the first through seventh aspects, the determination that the identity matrix is to be used is made based at least in part on a selection condition configured on or determined by the UE. AlthoughFIG.6shows example blocks of process600, in some aspects, process600may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.6. Additionally, or alternatively, two or more of the blocks of process600may be performed in parallel. The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the aspects to the precise forms disclosed. Modifications and variations may be made in light of the above disclosure or may be acquired from practice of the aspects. As used herein, the term “component” is intended to be broadly construed as hardware, firmware, and/or a combination of hardware and software. As used herein, a processor is implemented in hardware, firmware, and/or a combination of hardware and software. Some aspects are described herein in connection with thresholds. As used herein, satisfying a threshold may refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, not equal to the threshold, and/or the like. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware, firmware, and/or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the aspects. 
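Because the disclosure defines satisfying a threshold broadly (greater than, greater than or equal to, less than, and so on), a small helper such as the following can express the comparison operator as a parameter. This is an editorial illustration only and is not part of the disclosed processes.

```python
# Illustrative helper reflecting the broad definition of "satisfying a threshold".
import operator

def satisfies_threshold(value, threshold, comparison=operator.ge):
    """Return True when `value` satisfies `threshold` under the chosen comparison."""
    return comparison(value, threshold)

print(satisfies_threshold(0.4, 0.3))                       # greater than or equal to
print(satisfies_threshold(2, 5, comparison=operator.lt))   # less than
```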
Thus, the operation and behavior of the systems and/or methods were described herein without reference to specific software code—it being understood that software and hardware can be designed to implement the systems and/or methods based, at least in part, on the description herein. Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various aspects. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various aspects includes each dependent claim in combination with every other claim in the claim set. A phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c). No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Furthermore, as used herein, the terms “set” and “group” are intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, and/or the like), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” and/or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise.
63,431
11863495
DETAILED DESCRIPTION Various aspects of the disclosure are described more fully below with reference to the accompanying drawings. This disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Based on the teachings, one skilled in the art should appreciate that the scope of the disclosure is intended to cover any aspect of the disclosure, whether implemented independently of or combined with any other aspect of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth. In addition, the scope of the disclosure is intended to cover such an apparatus or method, which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth. It should be understood that any aspect of the disclosure disclosed may be embodied by one or more elements of a claim. Several aspects of telecommunications systems will now be presented with reference to various apparatuses and techniques. These apparatuses and techniques will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, modules, components, circuits, steps, processes, algorithms, and/or the like (collectively referred to as “elements”). These elements may be implemented using hardware, software, or combinations thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. It should be noted that while aspects may be described using terminology commonly associated with 5G and later wireless technologies, aspects of the present disclosure can be applied in other generation-based communications systems, such as and including 3G and/or 4G technologies. Artificial intelligence (AI)/machine learning (ML) functions can improve wireless communications. Massive multiple-input multiple-output (MIMO) systems are an important area for 5G and later systems. To implement massive MIMO, downlink channel state information (CSI) is analyzed by a base station, having hundreds or even thousands of centralized or distributed antennas, to address inter-user interference and to increase channel capacity. The UE may perform CSI measurements based on signals, such as channel state information reference signals (CSI-RSs), received from the base station. The downlink CSI measurements are fed back from the UEs to the base station for processing. It is noted that although the term base station is used throughout this document, any network entity, such as a base station, transmission point, server or even another UE (in the case of sidelink communications) is contemplated. The large amount of CSI feedback can be compressed with neural network processing, for example, with an auto-encoder at the UE. The UE can encode the channel state feedback and transmit the encoded feedback over the air to the base station. Upon receiving the information, the base station inputs the received encoded channel state feedback values into the decoder to approximate the channel state feedback. 
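As an editorial illustration of the auto-encoder based CSI feedback described above, the following untrained toy model shows the interface only: a UE-side encoder compresses a channel estimate into a short code that is fed back over the air, and a network-side decoder reconstructs an approximation. The layer sizes are arbitrary assumptions, and the model is a stand-in rather than the disclosed design.

```python
# Toy, untrained stand-in for auto-encoder based channel state feedback compression.
import torch
import torch.nn as nn

NUM_PORTS, NUM_SUBBANDS, CODE_DIM = 32, 13, 64

encoder = nn.Sequential(                  # runs at the UE
    nn.Flatten(),
    nn.Linear(2 * NUM_PORTS * NUM_SUBBANDS, 256), nn.ReLU(),
    nn.Linear(256, CODE_DIM),
)
decoder = nn.Sequential(                  # runs at the base station
    nn.Linear(CODE_DIM, 256), nn.ReLU(),
    nn.Linear(256, 2 * NUM_PORTS * NUM_SUBBANDS),
)

# Real/imaginary parts of a downlink channel estimate, stacked as two "channels".
csi = torch.randn(1, 2, NUM_PORTS, NUM_SUBBANDS)
code = encoder(csi)                        # compressed feedback sent over the air
reconstruction = decoder(code).view_as(csi)
print(code.shape, reconstruction.shape)    # torch.Size([1, 64]) torch.Size([1, 2, 32, 13])
```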
In sub-6 GHz massive MIMO systems, it is common for a base station (e.g., gNB) to have a larger number of antenna ports than the number of channel state information reference signal (CSI-RS) ports configured for the UE (e.g., 256 vs. 32). In such cases, the UE only sees a snapshot of the entire channel. If the UE uses an auto-encoder for compressing (e.g., encoding) and feedback of the channel, then the auto-encoder works on this spatial snapshot. As the channel evolves in time, the time dependent machine learning blocks (e.g., recurrent neural network (RNN), long short term memory (LSTM), or gated recurring unit (GRU) blocks) in the auto-encoder capture the evolution of the complex coefficients over time. For example, with Doppler shifts, the time dependent machine learning blocks will capture the Doppler related channel variation. If the environment changes, however, and the set of beams used for the base station (e.g., gNB) itself changes, the change of the set of beams may impact performance of the UE's auto-encoder. In such cases, according to aspects of the present disclosure, the base station may notify the UE of a change in the set of downlink beams. The notification may trigger the UE to flush the hidden states of the auto-encoder and restart the compression algorithm with a fresh slate. That is, notifying the UE that the set of downlink beams used for the CSI-RS have changed, may help the UE reset the hidden states of its auto-encoder, thereby improving the optimization framework of the channel state feedback (CSF) performance, and thus the auto-encoder's performance. The signaling can also include context information. The context information may be associated with neural network weights and hidden and/or cell state values, and may be stored by the UE in memory for future use. This enables the UE to reduce training time and/or improve training performance. A certain amount of handshake can help the UE and the base station reset the hidden and/or cell states of the encoder and decoder at the same time. Once the UE resets the hidden and/or cell states, it may send another feedback signal to the base station, indicating that the reset has been performed. FIG.1is a diagram illustrating a network100in which aspects of the present disclosure may be practiced. The network100may be a 5G or NR network or some other wireless network, such as an LTE network. The wireless network100may include a number of BSs110(shown as BS110a, BS110b, BS110c, and BS110d) and other network entities. A BS is an entity that communicates with user equipment (UEs) and may also be referred to as a base station, a NR BS, a Node B, a gNB, a 5G node B (NB), an access point, a transmit receive point (TRP), and/or the like. Each BS may provide communications coverage for a particular geographic area. In 3GPP, the term “cell” can refer to a coverage area of a BS and/or a BS subsystem serving this coverage area, depending on the context in which the term is used. A BS may provide communications coverage for a macro cell, a pico cell, a femto cell, and/or another type of cell. A macro cell may cover a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by UEs with service subscription. A pico cell may cover a relatively small geographic area and may allow unrestricted access by UEs with service subscription. 
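The following framework-agnostic sketch illustrates the UE behavior described above when a beam-set-change notification arrives: the current hidden and cell states are saved under the signaled context, the states are flushed so that compression restarts with a fresh slate, and a reset-complete feedback message is produced. The message and state structures are assumptions made for the example.

```python
# Illustrative UE-side handling of a CSI-RS beam-set-change notification with context.

class CsfEncoderStateManager:
    def __init__(self):
        self.hidden_state = None        # e.g., LSTM hidden state h
        self.cell_state = None          # e.g., LSTM cell state c
        self.saved_contexts = {}        # context id -> (hidden, cell) for later reuse

    def on_beam_set_change(self, context_id):
        """Handle the network's notification that the CSI-RS downlink beams changed."""
        # Save the current states so they can be restored if this context recurs.
        self.saved_contexts[context_id] = (self.hidden_state, self.cell_state)
        # Flush the time-dependent states so compression restarts with a fresh slate.
        self.hidden_state = None
        self.cell_state = None
        return {"type": "reset_complete", "context": context_id}   # feedback to the network

    def on_context_revisited(self, context_id):
        """Restore previously saved states when a known context is signaled again."""
        self.hidden_state, self.cell_state = self.saved_contexts[context_id]

ue_states = CsfEncoderStateManager()
print(ue_states.on_beam_set_change(context_id="beam-set-7"))
```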
A femto cell may cover a relatively small geographic area (e.g., a home) and may allow restricted access by UEs having association with the femto cell (e.g., UEs in a closed subscriber group (CSG)). A BS for a macro cell may be referred to as a macro BS. A BS for a pico cell may be referred to as a pico BS. A BS for a femto cell may be referred to as a femto BS or a home BS. In the example shown inFIG.1, a BS110amay be a macro BS for a macro cell102a, a BS110bmay be a pico BS for a pico cell102b, and a BS110cmay be a femto BS for a femto cell102c. A BS may support one or multiple (e.g., three) cells. The terms “eNB,” “base station,” “NR BS,” “gNB,” “TRP,” “AP,” “node B,” “5G NB,” and “cell” may be used interchangeably. In some aspects, a cell may not necessarily be stationary, and the geographic area of the cell may move according to the location of a mobile BS. In some aspects, the BSs may be interconnected to one another and/or to one or more other BSs or network nodes (not shown) in the wireless network100through various types of backhaul interfaces such as a direct physical connection, a virtual network, and/or the like using any suitable transport network. The wireless network100may also include relay stations. A relay station is an entity that can receive a transmission of data from an upstream station (e.g., a BS or a UE) and send a transmission of the data to a downstream station (e.g., a UE or a BS). A relay station may also be a UE that can relay transmissions for other UEs. In the example shown inFIG.1, a relay station110dmay communicate with macro BS110aand a UE120din order to facilitate communications between the BS110aand UE120d. A relay station may also be referred to as a relay BS, a relay base station, a relay, and/or the like. The wireless network100may be a heterogeneous network that includes BSs of different types, e.g., macro BSs, pico BSs, femto BSs, relay BSs, and/or the like. These different types of BSs may have different transmit power levels, different coverage areas, and different impact on interference in the wireless network100. For example, macro BSs may have a high transmit power level (e.g., 5 to 40 Watts) whereas pico BSs, femto BSs, and relay BSs may have lower transmit power levels (e.g., 0.1 to 2 Watts). A network controller130may couple to a set of BSs and may provide coordination and control for these BSs. The network controller130may communicate with the BSs via a backhaul. The BSs may also communicate with one another, e.g., directly or indirectly via a wireless or wireline backhaul. UEs120(e.g.,120a,120b,120c) may be dispersed throughout the wireless network100, and each UE may be stationary or mobile. A UE may also be referred to as an access terminal, a terminal, a mobile station, a subscriber unit, a station, and/or the like. 
A UE may be a cellular phone (e.g., a smart phone), a personal digital assistant (PDA), a wireless modem, a wireless communications device, a handheld device, a laptop computer, a cordless phone, a wireless local loop (WLL) station, a tablet, a camera, a gaming device, a netbook, a smartbook, an ultrabook, a medical device or equipment, biometric sensors/devices, wearable devices (smart watches, smart clothing, smart glasses, smart wrist bands, smart jewelry (e.g., smart ring, smart bracelet)), an entertainment device (e.g., a music or video device, or a satellite radio), a vehicular component or sensor, smart meters/sensors, industrial manufacturing equipment, a global positioning system device, or any other suitable device that is configured to communicate via a wireless or wired medium. Some UEs may be considered machine-type communications (MTC) or evolved or enhanced machine-type communications (eMTC) UEs. MTC and eMTC UEs include, for example, robots, drones, remote devices, sensors, meters, monitors, location tags, and/or the like, that may communicate with a base station, another device (e.g., remote device), or some other entity. A wireless node may provide, for example, connectivity for or to a network (e.g., a wide area network such as Internet or a cellular network) via a wired or wireless communications link. Some UEs may be considered Internet-of-Things (IoT) devices, and/or may be implemented as NB-IoT (narrowband internet of things) devices. Some UEs may be considered a customer premises equipment (CPE). UE120may be included inside a housing that houses components of UE120, such as processor components, memory components, and/or the like. In general, any number of wireless networks may be deployed in a given geographic area. Each wireless network may support a particular RAT and may operate on one or more frequencies. A RAT may also be referred to as a radio technology, an air interface, and/or the like. A frequency may also be referred to as a carrier, a frequency channel, and/or the like. Each frequency may support a single RAT in a given geographic area in order to avoid interference between wireless networks of different RATs. In some cases, NR or 5G RAT networks may be deployed. In some aspects, two or more UEs120(e.g., shown as UE120aand UE120e) may communicate directly using one or more sidelink channels (e.g., without using a base station110as an intermediary to communicate with one another). For example, the UEs120may communicate using peer-to-peer (P2P) communications, device-to-device (D2D) communications, a vehicle-to-everything (V2X) protocol (e.g., which may include a vehicle-to-vehicle (V2V) protocol, a vehicle-to-infrastructure (V2I) protocol, and/or the like), a mesh network, and/or the like. In this case, the UE120may perform scheduling operations, resource selection operations, and/or other operations described elsewhere as being performed by the base station110. For example, the base station110may configure a UE120via downlink control information (DCI), radio resource control (RRC) signaling, a media access control-control element (MAC-CE), or via system information (e.g., a system information block (SIB). In certain aspects, a UE, such as the UE120, may include a reset component198configured to receive, from a network entity, a message indicating a change in a set of downlink beams for channel state information reference signals (CSI-RSs), and a context associated with the change. 
The reset component198may also be configured to save state values in an auto-encoder neural network in response to receiving the message; and to associate the saved state values in the auto-encoder neural network to the context in the received message. The reset component198may be configured to reset the state values in the auto-encoder neural network in response to receiving the message; and to estimate a channel state based on the CSI-RSs received on the changed set of downlink beams. The reset component198may also be configured to compress the channel state with the auto-encoder neural network based on the reset state values; and to send to the network entity, the compressed channel state. A base station, such as the base station110, may include a beam change signaling component199configured to change, for a user equipment (UE), a set of downlink beams for channel state information reference signals (CSI-RSs); and to transmit a message, to the UE, indicating the changing of the set of downlink beams and a context to associate with the changing. The beam change signaling component199may also be configured to receive, from the UE, a channel state compressed in accordance with the message. As indicated above,FIG.1is provided merely as an example. Other examples may differ from what is described with regard toFIG.1. FIG.2shows a block diagram of a design200of the base station110and UE120, which may be one of the base stations and one of the UEs inFIG.1. The base station110may be equipped with T antennas234athrough234t, and UE120may be equipped with R antennas252athrough252r, where in general T≥1 and R≥1. At the base station110, a transmit processor220may receive data from a data source212for one or more UEs, select one or more modulation and coding schemes (MCS) for each UE based at least in part on channel quality indicators (CQIs) received from the UE, process (e.g., encode and modulate) the data for each UE based at least in part on the MCS(s) selected for the UE, and provide data symbols for all UEs. Decreasing the MCS lowers throughput but increases reliability of the transmission. The transmit processor220may also process system information (e.g., for semi-static resource partitioning information (SRPI) and/or the like) and control information (e.g., CQI requests, grants, upper layer signaling, and/or the like) and provide overhead symbols and control symbols. The transmit processor220may also generate reference symbols for reference signals (e.g., the cell-specific reference signal (CRS)) and synchronization signals (e.g., the primary synchronization signal (PSS) and secondary synchronization signal (SSS)). A transmit (TX) multiple-input multiple-output (MIMO) processor230may perform spatial processing (e.g., precoding) on the data symbols, the control symbols, the overhead symbols, and/or the reference symbols, if applicable, and may provide T output symbol streams to T modulators (MODs)232athrough232t. Each modulator232may process a respective output symbol stream (e.g., for OFDM and/or the like) to obtain an output sample stream. Each modulator232may further process (e.g., convert to analog, amplify, filter, and upconvert) the output sample stream to obtain a downlink signal. T downlink signals from modulators232athrough232tmay be transmitted via T antennas234athrough234t, respectively. According to various aspects described in more detail below, the synchronization signals can be generated with location encoding to convey additional information. 
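For completeness, the following sketch illustrates the network-side counterpart suggested by the beam change signaling component199: change the CSI-RS downlink beam set, notify the UE together with a context to associate with the change, and accept channel state feedback compressed in accordance with that message. The message fields are assumptions and the transmit function is a placeholder.

```python
# Illustrative network-side beam change signaling; message fields are assumptions.

class BeamChangeSignaling:
    def __init__(self, transmit_fn):
        self.transmit = transmit_fn          # e.g., wraps RRC/MAC-CE/DCI signaling
        self.active_beams = None

    def change_beam_set(self, new_beams, context_id):
        self.active_beams = list(new_beams)
        self.transmit({
            "type": "csi_rs_beam_set_change",
            "beams": self.active_beams,
            "context": context_id,           # lets the UE store/restore encoder states
        })

    def on_compressed_csf(self, report):
        # The compressed channel state would be passed to the network-side decoder here.
        return report["compressed_channel_state"]

bs = BeamChangeSignaling(transmit_fn=print)
bs.change_beam_set(new_beams=[3, 7, 11, 14], context_id="beam-set-7")
print(bs.on_compressed_csf({"compressed_channel_state": [0.12, -0.4, 0.9]}))
```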
At the UE120, antennas252athrough252rmay receive the downlink signals from the base station110and/or other base stations and may provide received signals to demodulators (DEMODs)254athrough254r, respectively. Each demodulator254may condition (e.g., filter, amplify, downconvert, and digitize) a received signal to obtain input samples. Each demodulator254may further process the input samples (e.g., for OFDM and/or the like) to obtain received symbols. A MIMO detector256may obtain received symbols from all R demodulators254athrough254r, perform MIMO detection on the received symbols if applicable, and provide detected symbols. A receive processor258may process (e.g., demodulate and decode) the detected symbols, provide decoded data for the UE120to a data sink260, and provide decoded control information and system information to a controller/processor280. A channel processor may determine reference signal received power (RSRP), received signal strength indicator (RSSI), reference signal received quality (RSRQ), channel quality indicator (CQI), and/or the like. In some aspects, one or more components of the UE120may be included in a housing. On the uplink, at the UE120, a transmit processor264may receive and process data from a data source262and control information (e.g., for reports comprising RSRP, RSSI, RSRQ, CQI, and/or the like) from the controller/processor280. Transmit processor264may also generate reference symbols for one or more reference signals. The symbols from the transmit processor264may be precoded by a TX MIMO processor266if applicable, further processed by modulators254athrough254r(e.g., for DFT-s-OFDM, CP-OFDM, and/or the like), and transmitted to the base station110. At the base station110, the uplink signals from the UE120and other UEs may be received by the antennas234, processed by the demodulators254, detected by a MIMO detector236if applicable, and further processed by a receive processor238to obtain decoded data and control information sent by the UE120. The receive processor238may provide the decoded data to a data sink239and the decoded control information to a controller/processor240. The base station110may include communications unit244and communicate to the network controller130via the communications unit244. The network controller130may include a communications unit294, a controller/processor290, and a memory292. The controller/processor240of the base station110, the controller/processor280of the UE120, and/or any other component(s) ofFIG.2may perform one or more techniques associated with machine learning for non-QCL CSI-RSs, as described in more detail elsewhere. For example, the controller/processor240of the base station110, the controller/processor280of the UE120, and/or any other component(s) ofFIG.2may perform or direct operations of, for example, the processes ofFIGS.7-8and/or other processes as described. Memories242and282may store data and program codes for the base station110and UE120, respectively. A scheduler246may schedule UEs for data transmission on the downlink and/or uplink. In some aspects, the UE120may include means for receiving, means for saving, means for associating, means for resetting, means for estimating, means for compressing, means for sending, means for transmitting, and/or means for feeding. The base station110may include means for receiving, means for transmitting, and/or means for changing. Such means may include one or more components of the UE120or base station110described in connection withFIG.2. 
As indicated above,FIG.2is provided merely as an example. Other examples may differ from what is described with regard toFIG.2. In some cases, different types of devices supporting different types of applications and/or services may coexist in a cell. Examples of different types of devices include UE handsets, customer premises equipment (CPEs), vehicles, Internet of Things (IoT) devices, and/or the like. Examples of different types of applications include ultra-reliable low-latency communications (URLLC) applications, massive machine-type communications (mMTC) applications, enhanced mobile broadband (eMBB) applications, vehicle-to-anything (V2X) applications, and/or the like. Furthermore, in some cases, a single device may support different applications or services simultaneously. FIG.3illustrates an example implementation of a system-on-a-chip (SOC)300, which may include a central processing unit (CPU)302or a multi-core CPU configured for signaling a change of downlink transmission beams, in accordance with certain aspects of the present disclosure. The SOC300may be included in the base station110or UE120. Variables (e.g., neural signals and synaptic weights), system parameters associated with a computational device (e.g., neural network with weights), delays, frequency bin information, and task information may be stored in a memory block associated with a neural processing unit (NPU)308, in a memory block associated with a CPU302, in a memory block associated with a graphics processing unit (GPU)304, in a memory block associated with a digital signal processor (DSP)306, in a memory block318, or may be distributed across multiple blocks. Instructions executed at the CPU302may be loaded from a program memory associated with the CPU302or may be loaded from a memory block318. The SOC300may also include additional processing blocks tailored to specific functions, such as a GPU304, a DSP306, a connectivity block310, which may include fifth generation (5G) connectivity, fourth generation long term evolution (4G LTE) connectivity, Wi-Fi connectivity, USB connectivity, Bluetooth connectivity, and the like, and a multimedia processor312that may, for example, detect and recognize gestures. In one implementation, the NPU is implemented in the CPU, DSP, and/or GPU. The SOC300may also include a sensor processor314, image signal processors (ISPs)316, and/or navigation module320, which may include a global positioning system. The SOC300may be based on an ARM instruction set. In aspects of the present disclosure, the instructions loaded into the general-purpose processor302may comprise code to receive, from a base station, a message indicating a change in a set of downlink beams for channel state information reference signals (CSI-RSs), and a context associated with the change; code to save state values in an auto-encoder neural network in response to receiving the message; code to associate the saved state values in the auto-encoder neural network to the context in the received message; and code to reset the state values in the auto-encoder neural network in response to receiving the message. The instructions may also comprise code to estimate a channel state based on the CSI-RSs received on the changed set of downlink beams; code to compress the channel state with the auto-encoder neural network based on the reset state values; and code to send to the base station, the compressed channel state and an indication that resetting occurred. 
In other aspects of the present disclosure, the instructions loaded into the general-purpose processor302may comprise code to change, for a user equipment (UE), a set of downlink beams for channel state information reference signals (CSI-RSs); code to transmit a message, to the UE, indicating the changing of the set of downlink beams and a context to associate with the changing; and code to receive, from the UE, a channel state compressed in accordance with the message. Deep learning architectures may perform an object recognition task by learning to represent inputs at successively higher levels of abstraction in each layer, thereby building up a useful feature representation of the input data. In this way, deep learning addresses a major bottleneck of traditional machine learning. Prior to the advent of deep learning, a machine learning approach to an object recognition problem may have relied heavily on human engineered features, perhaps in combination with a shallow classifier. A shallow classifier may be a two-class linear classifier, for example, in which a weighted sum of the feature vector components may be compared with a threshold to predict to which class the input belongs. Human engineered features may be templates or kernels tailored to a specific problem domain by engineers with domain expertise. Deep learning architectures, in contrast, may learn to represent features that are similar to what a human engineer might design, but through training. Furthermore, a deep network may learn to represent and recognize new types of features that a human might not have considered. A deep learning architecture may learn a hierarchy of features. If presented with visual data, for example, the first layer may learn to recognize relatively simple features, such as edges, in the input stream. In another example, if presented with auditory data, the first layer may learn to recognize spectral power in specific frequencies. The second layer, taking the output of the first layer as input, may learn to recognize combinations of features, such as simple shapes for visual data or combinations of sounds for auditory data. For instance, higher layers may learn to represent complex shapes in visual data or words in auditory data. Still higher layers may learn to recognize common visual objects or spoken phrases. Deep learning architectures may perform especially well when applied to problems that have a natural hierarchical structure. For example, the classification of motorized vehicles may benefit from first learning to recognize wheels, windshields, and other features. These features may be combined at higher layers in different ways to recognize cars, trucks, and airplanes. Neural networks may be designed with a variety of connectivity patterns. In feed-forward networks, information is passed from lower to higher layers, with each neuron in a given layer communicating to neurons in higher layers. A hierarchical representation may be built up in successive layers of a feed-forward network, as described above. Neural networks may also have recurrent or feedback (also called top-down) connections. In a recurrent connection, the output from a neuron in a given layer may be communicated to another neuron in the same layer. A recurrent architecture may be helpful in recognizing patterns that span more than one of the input data chunks that are delivered to the neural network in a sequence. 
A connection from a neuron in a given layer to a neuron in a lower layer is called a feedback (or top-down) connection. A network with many feedback connections may be helpful when the recognition of a high-level concept may aid in discriminating the particular low-level features of an input. The connections between layers of a neural network may be fully connected or locally connected.FIG.4Aillustrates an example of a fully connected neural network402. In a fully connected neural network402, a neuron in a first layer may communicate its output to every neuron in a second layer, so that each neuron in the second layer will receive input from every neuron in the first layer.FIG.4Billustrates an example of a locally connected neural network404. In a locally connected neural network404, a neuron in a first layer may be connected to a limited number of neurons in the second layer. More generally, a locally connected layer of the locally connected neural network404may be configured so that each neuron in a layer will have the same or a similar connectivity pattern, but with connections strengths that may have different values (e.g.,410,412,414, and416). The locally connected connectivity pattern may give rise to spatially distinct receptive fields in a higher layer, because the higher layer neurons in a given region may receive inputs that are tuned through training to the properties of a restricted portion of the total input to the network. One example of a locally connected neural network is a convolutional neural network.FIG.4Cillustrates an example of a convolutional neural network406. The convolutional neural network406may be configured such that the connection strengths associated with the inputs for each neuron in the second layer are shared (e.g.,408). Convolutional neural networks may be well suited to problems in which the spatial location of inputs is meaningful. One type of convolutional neural network is a deep convolutional network (DCN).FIG.4Dillustrates a detailed example of a DCN400designed to recognize visual features from an image426input from an image capturing device430, such as a car-mounted camera. The DCN400of the current example may be trained to identify traffic signs and a number provided on the traffic sign. Of course, the DCN400may be trained for other tasks, such as identifying lane markings or identifying traffic lights. The DCN400may be trained with supervised learning. During training, the DCN400may be presented with an image, such as the image426of a speed limit sign, and a forward pass may then be computed to produce an output422. The DCN400may include a feature extraction section and a classification section. Upon receiving the image426, a convolutional layer432may apply convolutional kernels (not shown) to the image426to generate a first set of feature maps418. As an example, the convolutional kernel for the convolutional layer432may be a 5×5 kernel that generates 28×28 feature maps. In the present example, because four different feature maps are generated in the first set of feature maps418, four different convolutional kernels were applied to the image426at the convolutional layer432. The convolutional kernels may also be referred to as filters or convolutional filters. The first set of feature maps418may be subsampled by a max pooling layer (not shown) to generate a second set of feature maps420. The max pooling layer reduces the size of the first set of feature maps418. 
That is, a size of the second set of feature maps420, such as 14×14, is less than the size of the first set of feature maps418, such as 28×28. The reduced size provides similar information to a subsequent layer while reducing memory consumption. The second set of feature maps420may be further convolved via one or more subsequent convolutional layers (not shown) to generate one or more subsequent sets of feature maps (not shown). In the example ofFIG.4D, the second set of feature maps420is convolved to generate a first feature vector424. Furthermore, the first feature vector424is further convolved to generate a second feature vector428. Each feature of the second feature vector428may include a number that corresponds to a possible feature of the image426, such as “sign,” “60,” and “100.” A softmax function (not shown) may convert the numbers in the second feature vector428to a probability. As such, an output422of the DCN400is a probability of the image426including one or more features. In the present example, the probabilities in the output422for “sign” and “60” are higher than the probabilities of the others of the output422, such as “30,” “40,” “50,” “70,” “80,” “90,” and “100”. Before training, the output422produced by the DCN400is likely to be incorrect. Thus, an error may be calculated between the output422and a target output. The target output is the ground truth of the image426(e.g., “sign” and “60”). The weights of the DCN400may then be adjusted so the output422of the DCN400is more closely aligned with the target output. To adjust the weights, a learning algorithm may compute a gradient vector for the weights. The gradient may indicate an amount that an error would increase or decrease if the weight were adjusted. At the top layer, the gradient may correspond directly to the value of a weight connecting an activated neuron in the penultimate layer and a neuron in the output layer. In lower layers, the gradient may depend on the value of the weights and on the computed error gradients of the higher layers. The weights may then be adjusted to reduce the error. This manner of adjusting the weights may be referred to as “back propagation” as it involves a “backward pass” through the neural network. In practice, the error gradient of weights may be calculated over a small number of examples, so that the calculated gradient approximates the true error gradient. This approximation method may be referred to as stochastic gradient descent. Stochastic gradient descent may be repeated until the achievable error rate of the entire system has stopped decreasing or until the error rate has reached a target level. After learning, the DCN may be presented with new images (e.g., the speed limit sign of the image426) and a forward pass through the network may yield an output422that may be considered an inference or a prediction of the DCN. Deep belief networks (DBNs) are probabilistic models comprising multiple layers of hidden nodes. DBNs may be used to extract a hierarchical representation of training data sets. A DBN may be obtained by stacking up layers of Restricted Boltzmann Machines (RBMs). An RBM is a type of artificial neural network that can learn a probability distribution over a set of inputs. Because RBMs can learn a probability distribution in the absence of information about the class to which each input should be categorized, RBMs are often used in unsupervised learning. 
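The classification and weight-update steps described above can be illustrated numerically as follows: a softmax converts the final feature vector into class probabilities, the error against the target ("sign" and "60") yields a gradient, and one gradient step is taken. The class scores used here are invented for the example.

```python
# Illustrative softmax classification and one gradient-style update step.
import numpy as np

classes = ["sign", "30", "40", "50", "60", "70", "80", "90", "100"]
scores = np.array([4.0, 0.2, 0.1, 0.3, 3.5, 0.4, 0.2, 0.1, 0.3])   # final feature vector
target = np.zeros_like(scores)
target[[0, 4]] = 0.5                       # ground truth: "sign" and "60"

def softmax(x):
    e = np.exp(x - x.max())                # subtract the max for numerical stability
    return e / e.sum()

probs = softmax(scores)
error_gradient = probs - target            # gradient of cross-entropy w.r.t. the scores
learning_rate = 0.1
scores_updated = scores - learning_rate * error_gradient   # one "backward pass" style step

print({c: round(float(p), 3) for c, p in zip(classes, probs)})
```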
Using a hybrid unsupervised and supervised paradigm, the bottom RBMs of a DBN may be trained in an unsupervised manner and may serve as feature extractors, and the top RBM may be trained in a supervised manner (on a joint distribution of inputs from the previous layer and target classes) and may serve as a classifier. Deep convolutional networks (DCNs) are networks of convolutional networks, configured with additional pooling and normalization layers. DCNs have achieved state-of-the-art performance on many tasks. DCNs can be trained using supervised learning in which both the input and output targets are known for many exemplars and are used to modify the weights of the network by use of gradient descent methods. DCNs may be feed-forward networks. In addition, as described above, the connections from a neuron in a first layer of a DCN to a group of neurons in the next higher layer are shared across the neurons in the first layer. The feed-forward and shared connections of DCNs may be exploited for fast processing. The computational burden of a DCN may be much less, for example, than that of a similarly sized neural network that comprises recurrent or feedback connections. The processing of each layer of a convolutional network may be considered a spatially invariant template or basis projection. If the input is first decomposed into multiple channels, such as the red, green, and blue channels of a color image, then the convolutional network trained on that input may be considered three-dimensional, with two spatial dimensions along the axes of the image and a third dimension capturing color information. The outputs of the convolutional connections may be considered to form a feature map in the subsequent layer, with each element of the feature map (e.g.,220) receiving input from a range of neurons in the previous layer (e.g., feature maps218) and from each of the multiple channels. The values in the feature map may be further processed with a non-linearity, such as a rectification, max(0, x). Values from adjacent neurons may be further pooled, which corresponds to down sampling, and may provide additional local invariance and dimensionality reduction. Normalization, which corresponds to whitening, may also be applied through lateral inhibition between neurons in the feature map. The performance of deep learning architectures may increase as more labeled data points become available or as computational power increases. Modern deep neural networks are routinely trained with computing resources that are thousands of times greater than what was available to a typical researcher just fifteen years ago. New architectures and training paradigms may further boost the performance of deep learning. Rectified linear units may reduce a training issue known as vanishing gradients. New training techniques may reduce over-fitting and thus enable larger models to achieve better generalization. Encapsulation techniques may abstract data in a given receptive field and further boost overall performance. FIG.5is a block diagram illustrating a deep convolutional network550. The deep convolutional network550may include multiple different types of layers based on connectivity and weight sharing. As shown inFIG.5, the deep convolutional network550includes the convolution blocks554A,554B. Each of the convolution blocks554A,554B may be configured with a convolution layer (CONV)556, a normalization layer (LNorm)558, and a max pooling layer (MAX POOL)560. 
The convolution layers556may include one or more convolutional filters, which may be applied to the input data to generate a feature map. Although only two of the convolution blocks554A,554B are shown, the present disclosure is not so limiting, and instead, any number of the convolution blocks554A,554B may be included in the deep convolutional network550according to design preference. The normalization layer558may normalize the output of the convolution filters. For example, the normalization layer558may provide whitening or lateral inhibition. The max pooling layer560may provide down sampling aggregation over space for local invariance and dimensionality reduction. The parallel filter banks, for example, of a deep convolutional network may be loaded on a CPU302or GPU304of an SOC300to achieve high performance and low power consumption. In alternative embodiments, the parallel filter banks may be loaded on the DSP306or an ISP316of an SOC300. In addition, the deep convolutional network550may access other processing blocks that may be present on the SOC300, such as sensor processor314and navigation module320, dedicated, respectively, to sensors and navigation. The deep convolutional network550may also include one or more fully connected layers562(FC1 and FC2). The deep convolutional network550may further include a logistic regression (LR) layer564. Between each layer556,558,560,562,564of the deep convolutional network550are weights (not shown) that are to be updated. The output of each of the layers (e.g.,556,558,560,562,564) may serve as an input of a succeeding one of the layers (e.g.,556,558,560,562,564) in the deep convolutional network550to learn hierarchical feature representations from input data552(e.g., images, audio, video, sensor data and/or other input data) supplied at the first of the convolution blocks554A. The output of the deep convolutional network550is a classification score566for the input data552. The classification score566may be a set of probabilities, where each probability is the probability of the input data, including a feature from a set of features. Artificial intelligence (AI)/machine learning (ML) algorithms can improve wireless communications. An AI/ML module can run at the UE, the base station or in the case of distributed algorithms, jointly across the UE and base station. In an auto-encoder scenario, joint training may occur across the UE and the base station. Massive multiple-input multiple-output (MIMO) systems are an important area for 5G and later systems. To implement massive MIMO, downlink channel state information (CSI) is analyzed by a base station, having hundreds or even thousands of centralized or distributed antennas, to address inter-user interference and to increase channel capacity. The UE measures the CSI based on signals, such as channel state information reference signals (CSI-RSs), received from the base station. The downlink CSI measurements are fed back from the UEs to the base station for processing. The large amount of CSI feedback can be compressed (e.g., encoded) with neural network processing, for example, with an auto-encoder at the UE. The UE can encode the channel state feedback and transmit the encoded feedback over the air to the base station. The channel state feedback is sent from the UE in accordance with timelines configured by radio resource control (RRC) signaling. 
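The layer sequence ofFIG.5, two convolution blocks of CONV, LNorm and MAX POOL followed by fully connected layers and a logistic-regression style output, might be approximated as below. The channel counts, kernel sizes, input resolution, and number of classes are assumptions for illustration and do not reproduce the actual deep convolutional network550.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """One convolution block: CONV, normalization (LNorm-like), then MAX POOL."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.LocalResponseNorm(size=5),   # lateral-inhibition style normalization
        nn.MaxPool2d(kernel_size=2),    # down-sampling over space
    )

# Assumed input: a 3-channel 32x32 image; assumed output: scores over 10 classes.
dcn = nn.Sequential(
    conv_block(3, 16),                  # block 554A (assumed widths)
    conv_block(16, 32),                 # block 554B
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 128),         # FC1
    nn.Linear(128, 10),                 # FC2
    nn.LogSoftmax(dim=1),               # logistic-regression style classification scores
)

classification_score = dcn(torch.randn(1, 3, 32, 32))   # forward pass; shape (1, 10)
```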
Upon receiving the information, the base station inputs the received compressed channel state feedback values into the decoder to approximate the channel state feedback. FIG.6is a block diagram illustrating an exemplary auto-encoder600, in accordance with aspects of the present disclosure. The auto-encoder600includes an encoder610having a convolutional layer (Conv) and a fully connected layer (FC). The encoder610receives the channel realization and/or interference realization as an input, and compresses the channel/interference realization. The channel realization can also be referred to as a channel estimate. The interference realization can also be referred to as an interference estimate. Interference depends upon the environment and can address uplink interference or inter-stream interference in MIMO scenarios. The compressed channel state feedback is output from the encoder610. The auto-encoder600also has a decoder620that receives the compressed channel state feedback output from the encoder610. The decoder620passes the received information through a fully connected layer and a series of convolutional layers to recover the channel state (e.g., approximate channel state). The UE trains the encoder610and decoder620, and occasionally transmits the decoder coefficients to the base station. At a higher frequency, the UE sends the outputs of the encoder610(e.g., channel state feedback or compressed output of the encoder610) to the base station. As the UE moves from location to location, the weights of the decoder620may change. That is, when the channel environment changes, the decoder weights (e.g., coefficients) may change. Updated decoder coefficients can thus be fed back to the base station from the UE to reflect the changing environment. In other words, the UE can train the decoder620, and not just the encoder610, based on the existing environment. The coefficients can be sent from the UE in accordance with timelines configured by RRC signaling. In one configuration, the coefficients are sent less frequently in comparison to a frequency of the channel state feedback. Each UE sends the decoder coefficients and the encoder coefficients. In massive multiple input multiple output (MIMO), a number of downlink antenna ports at the base station (e.g., gNB) can be greater than a number of ports on which channel state information reference signals (CSI-RSs) are sent to the UE. For example, the base station (e.g., gNB) may have 256 or 512 ports, while a UE may be sent only a 32-port CSI-RS. In sub-6 GHz massive MIMO systems, it is common for the base station (e.g., gNB) to have a larger number of antenna ports than the number of CSI-RS ports configured for the UE (e.g., 256 vs. 32). In such cases, the UE only sees a snapshot of the entire channel. If the UE uses an auto-encoder for compressing and feedback of the channel, then the auto-encoder works on this spatial snapshot. As the channel evolves in time, the time dependent machine learning blocks (e.g., recurrent neural network (RNN), long short term memory (LSTM), or gated recurring unit (GRU) blocks) in the auto-encoder capture the evolution of the complex coefficients over time. For example, with Doppler, the time dependent machine learning blocks will capture the Doppler related channel variation. Although the complex numbers of the machine learning coefficients evolve in time, the best downlink (DL) CSI-RS ports (e.g., DL beam indices) may not change.
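A rough sketch of an auto-encoder of the kind shown inFIG.6, with a convolutional plus fully connected encoder and a fully connected plus convolutional decoder. The input shape, layer widths, and size of the compressed feedback are assumptions chosen for illustration; they do not reproduce the encoder610or decoder620.

```python
import torch
import torch.nn as nn

# Assumed shapes: a channel/interference realization of 2 x 32 x 32 real values
# (real and imaginary parts over 32 x 32 channel coefficients), compressed into a
# 64-element channel state feedback vector.
class CsfEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 4, kernel_size=3, padding=1)   # convolutional layer
        self.fc = nn.Linear(4 * 32 * 32, 64)                    # fully connected layer

    def forward(self, channel):
        x = torch.relu(self.conv(channel))
        return self.fc(x.flatten(start_dim=1))                  # compressed feedback

class CsfDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(64, 2 * 32 * 32)                    # fully connected layer
        self.convs = nn.Sequential(                             # series of convolutional layers
            nn.Conv2d(2, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 2, kernel_size=3, padding=1),
        )

    def forward(self, feedback):
        x = self.fc(feedback).view(-1, 2, 32, 32)
        return self.convs(x)                                    # approximate channel state

encoder, decoder = CsfEncoder(), CsfDecoder()
channel = torch.randn(1, 2, 32, 32)                             # assumed channel realization
recovered = decoder(encoder(channel))                           # end-to-end auto-encoder pass
```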
In a stationary channel, the fact that only a portion of the channel is observed by the UE may not make much of a difference to the UE, as the 32 ports are likely to remain unchanged. If the environment changes, however, and the set of beams used for the base station (e.g., gNB) itself changes, the change may impact performance of the UE's auto-encoder. In such cases, according to aspects of the present disclosure, the base station may notify the UE of a change in the set of downlink beams. The notification may trigger the UE to flush its hidden states and restart the compression algorithm with a fresh slate. That is, notifying the UE that the set of downlink transmit beams used for the CSI-RS have changed, can help the UE reset the hidden states of its auto-encoder, thereby improving the optimization framework of the channel state feedback (CSF) performance, and thus the auto-encoder's performance. In other aspects of the present disclosure, signaling from the base station indicates a context to which the UE can associate the hidden state values and/or cell state values. The context information includes information about neural network weights and hidden and/or cell states saved by the UE in memory for future use. Hidden state and/or cell values may be associated with each context. The UE may use the context information to reduce training time of the neural network or to improve training performance for a given context. With mobility, or change of the environment around the UE, it is likely that the best CSI-RS ports (e.g., DL beams) for the UE itself may change. For example, a UE may move into a new location, making a subset of previous scatterers irrelevant. In another example, a UE's environment may change without a significant change in location, such as when reflectors change while the UE remains stationary. For example, a truck parked in front of a café, which acted as a reflector for a signal from the base station, may leave. In another example, some UEs may enter or leave the cell, thereby changing a load on the cell. Thus, a set of downlink beams used for CSI-RS may change for the current UE, especially in the case of multi-user (MU)-MIMO where orthogonalizing will result in a different set of beams for the current UE. A change in the set of DL CSI-RS beams sent to the UE may cause the current and previous sets of CSI-RSs to become non-quasi-located (non-QCL'd). According to aspects of the present disclosure, when the CSI-RS ports change, a UE's channel compression algorithm accounts for the change. As described previously, the UE is only observing a snapshot of the channel and not the entire channel. The change in the snapshot may be captured by discarding the hidden and/or cell states in the time dependent machine learning blocks of the auto-encoder and starting the hidden states afresh. Because these hidden states are accumulated over a long period of time, resetting the hidden states improves performance of UE auto-encoders. According to aspects of the present disclosure, whenever the set of downlink beams used by the base station (e.g., gNB) to send CSI-RSs to the UE changes, the base station sends a message, such as a reset signal and context related information, to the UE. In some aspects, the message may be a bit within a radio resource control (RRC) message or a media access control-control element (MAC-CE) message. 
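The save, associate, and reset behavior described above might be captured by a small store for the hidden and cell states of the time dependent blocks. The class below is a hypothetical illustration; the state size and the context identifier assumed to arrive with the RRC or MAC-CE message are not specified by the passage above.

```python
import numpy as np

class HiddenStateStore:
    """Keeps the time dependent block's hidden/cell states and ties saved states to contexts."""

    def __init__(self, state_size=64):
        self.hidden = np.zeros(state_size)   # current hidden state
        self.cell = np.zeros(state_size)     # current cell state (e.g., of an LSTM block)
        self.saved = {}                      # context id -> (hidden, cell) kept for future use

    def on_beam_change(self, context_id):
        """Beam-change notification: save the states under the context, then flush them."""
        self.saved[context_id] = (self.hidden.copy(), self.cell.copy())
        self.hidden[:] = 0.0                 # restart the compression algorithm with a fresh slate
        self.cell[:] = 0.0

    def restore(self, context_id):
        """Optionally reuse states previously associated with a context to shorten training."""
        if context_id in self.saved:
            hidden, cell = self.saved[context_id]
            self.hidden, self.cell = hidden.copy(), cell.copy()

# Example: states accumulated over time are saved and reset when a change is signaled.
store = HiddenStateStore()
store.hidden[:] = np.random.randn(64)        # pretend state accumulated over a long period
store.on_beam_change(context_id="beam_set_A")
```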
The UE (based on its capability to read this bit and context information) flushes the hidden state values in its auto-encoder neural networks, so as to improve the accuracy of the channel state feedback. In other aspects, the UE transmits updated auto-encoder weights in response to receiving the message, after flushing the hidden states. The reset signal and context information help the UE flush the hidden and/or cell states, and save the previous states and weights in memory, associated to the context received. In other aspects, a handshake may occur between the UE and the base station to synchronize the states. The UE may send feedback to the base station when a hidden state and/or cell state discard has occurred. In still other aspects, synchronization is maintained by the base station sending its own hidden states to the UE. As indicated above,FIGS.1-6are provided as examples. Other examples may differ from what is described with respect toFIGS.1-6. FIG.7is a flow diagram illustrating an example process700performed, for example, by a UE, in accordance with various aspects of the present disclosure. The example process700is an example of signaling for a change in channel state information reference signals (CSI-RSs). As shown inFIG.7, in some aspects, the process700may include receiving, from a network entity, a message indicating a change in a set of downlink beams for channel state information reference signals (CSI-RSs), and a context associated with the change (block702). For example, the user equipment (UE) (e.g., using the antenna252, DEMOD/MOD254, MIMO detector256, receive processor258, controller/processor280, and/or memory282) can receive the message and the context. The context may be the network environment. The changed set and a previous set of downlink beams may be non-quasi-collocated. In some aspects, the message may be a radio resource control (RRC) message or a media access control-control element (MAC-CE) message. In other aspects, the process700may include saving state values in an auto-encoder neural network in response to receiving the message (block704). For example, the UE (e.g., using the controller/processor280and/or memory282) can save state values. The state values in the auto-encoder neural network may be hidden and/or cell state values in a long short term memory (LSTM) network, a gated recurring unit (GRU), or a recurrent neural network (RNN). As shown inFIG.7, in some aspects, the process700may include associating the saved state values in the auto-encoder neural network to the context in the received message (block706). For example, the UE (e.g., using the controller/processor280and/or memory282) can associate the saved state values in the auto-encoder neural network. In other aspects, the process700may include resetting the state values in the auto-encoder neural network in response to receiving the message (block708). For example, the UE (e.g., using the controller/processor280and/or memory282) can reset the state values. In other words, the UE may restart the compression algorithm with a fresh slate. By resetting the hidden states of its auto-encoder, the UE may improve the optimization framework of the channel state feedback (CSF) performance, and thus the auto-encoder's performance. Because these hidden states are accumulated over a long period of time, resetting the hidden states improves performance of UE auto-encoders.
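The ordering of blocks702through714might be expressed procedurally as in the sketch below. The StubUe class and its methods are hypothetical stand-ins for the UE's receive, estimation, and transmit chains; only the sequence of steps mirrors the process700.

```python
import numpy as np

class StubUe:
    """Hypothetical UE stand-in with just enough state to walk through process 700."""
    def __init__(self):
        self.hidden = np.zeros(64)                        # auto-encoder state values
        self.saved_states = {}                            # context -> saved state values

    def estimate_channel(self):                           # block 710 (stubbed with random data)
        return np.random.randn(128)

    def compress(self, channel):                          # block 712 (stubbed "encoder")
        return channel[:32] + self.hidden[:32]

def ue_process_700(ue, message):
    """Walk through blocks 702 to 714 in order (illustrative only)."""
    context = message["context"]                          # block 702: message and context received
    ue.saved_states[context] = ue.hidden.copy()           # blocks 704/706: save and associate
    ue.hidden[:] = 0.0                                    # block 708: reset the state values
    channel = ue.estimate_channel()                       # block 710: estimate the channel state
    feedback = ue.compress(channel)                       # block 712: compress with reset states
    return {"csf": feedback, "reset_occurred": True}      # block 714: report to the network entity

report = ue_process_700(StubUe(), {"context": "beam_set_B", "beams_changed": True})
```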
As shown inFIG.7, in some aspects, the process700may include estimating a channel state based on the CSI-RSs received on the changed set of downlink beams (block710). For example, the UE (e.g., using the antenna252, DEMOD/MOD254, MIMO detector256, receive processor258, controller/processor280, and/or memory282) can estimate a channel state based on the CSI-RSs. For example, when the CSI-RS ports change, a UE's channel compression algorithm accounts for the change when estimating the channel state. In still other aspects, the process700may include compressing the channel state with the auto-encoder neural network based on the reset state values (block712). For example, the UE (e.g., using the controller/processor280and/or memory282) can compress the channel state. In yet other aspects, the process700may include sending to the network entity, the compressed channel state and optionally an indication that resetting occurred (block714). For example, the UE (e.g., using the antenna252, DEMOD/MOD254, TX MIMO processor266, transmit processor264, controller/processor280, and/or memory282) can send the compressed channel state and optionally the indication. FIG.8is a flow diagram illustrating an example process800performed, for example, by a network entity, in accordance with various aspects of the present disclosure. The example process800is an example of signaling for a change in beams for a channel state information reference signal (CSI-RS). As shown inFIG.8, in some aspects, the process800may include changing, for a user equipment (UE), a set of downlink beams for channel state information reference signals (CSI-RSs) (block802). For example, the network entity (e.g., using the antenna234, MOD/DEMOD232, TX MIMO processor230, transmit processor220, controller/processor240, and/or memory242) can change the set of downlink beams. With mobility, or change of the environment around the UE, it is likely that the best CSI-RS ports (e.g., DL beams) for the UE itself may change. For example, a UE may move into a new location, making a subset of previous scatterers irrelevant. In another example, a UE's environment may change without a significant change in location, such as when reflectors change while the UE remains stationary. Thus, a set of downlink beams used for CSI-RS may change for the current UE, especially in the case of multi-user (MU)-MIMO where orthogonalizing will result in a different set of beams for the current UE. In other aspects, the process800may include transmitting a message, to the UE, indicating the changing of the set of downlink beams and a context to associate with the changing (block804). For example, the network entity (e.g., using the antenna234, MOD/DEMOD232, TX MIMO processor230, transmit processor220, controller/processor240, and/or memory242) can transmit the message. Whenever the set of downlink beams used by the base station (e.g., gNB) to send CSI-RSs to the UE changes, the base station sends a message, such as a reset signal and context related information, to the UE. In some aspects, the message may be a bit within a radio resource control (RRC) message or a media access control-control element (MAC-CE) message. The process800may include receiving, from the UE, a channel state compressed in accordance with the message (block806). For example, the network entity (e.g., using the antenna234, MOD/DEMOD232, MIMO detector236, receive processor238, controller/processor240, and/or memory242) can receive the compressed channel state. 
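The network-entity side, blocks802through806, can be sketched in the same style. The StubGnb class and its methods are hypothetical placeholders, not an actual base station interface.

```python
class StubGnb:
    """Hypothetical network-entity stand-in for walking through process 800."""
    def configure_csi_rs_beams(self, ue_id, beams):
        self.current_beams = {ue_id: beams}

    def send_message(self, ue_id, message):
        self.last_message = (ue_id, message)

    def receive_csf(self, ue_id):
        return [0.0] * 32                                  # stubbed compressed channel state

def gnb_process_800(gnb, ue_id, new_beam_set, context):
    """Walk through blocks 802 to 806 in order (illustrative only)."""
    gnb.configure_csi_rs_beams(ue_id, new_beam_set)        # block 802: change the DL CSI-RS beams
    gnb.send_message(ue_id, {"beams_changed": True,        # block 804: message indicating the
                             "context": context})          #            change, plus a context
    return gnb.receive_csf(ue_id)                          # block 806: compressed channel state

csf = gnb_process_800(StubGnb(), ue_id=1, new_beam_set=[0, 3, 7, 12], context="load_change")
```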
The UE (based on its capability to read this message) flushes the hidden state values in its auto-encoder neural networks so as to improve the accuracy of the channel state feedback. The channel state may be compressed in accordance with the updated auto-encoder. Implementation examples are described in the following numbered clauses.1. A method of wireless communication by a user equipment (UE), comprising:receiving, from a network entity, a message indicating a change in a set of downlink beams for channel state information reference signals (CSI-RSs), and a context associated with the change;saving state values in an auto-encoder neural network in response to receiving the message;associating the saved state values in the auto-encoder neural network to the context in the received message;resetting the state values in the auto-encoder neural network in response to receiving the message;estimating a channel state based on the CSI-RSs received on the changed set of downlink beams;compressing the channel state with the auto-encoder neural network based on the reset state values; andsending to the network entity, the compressed channel state.2. The method of clause 1, in which the changed set of downlink beams and a previous set of downlink beams are non-quasi-collocated.3. The method of clause 1 or 2, further comprising transmitting auto-encoder weights to the network entity in response to receiving the message.4. The method of any of the preceding clauses, in which the state values in the auto-encoder neural network comprise hidden and/or cell state values in a long short term memory (LSTM) network, a gated recurring unit (GRU) or a recurrent neural network (RNN).5. The method of any of the preceding clauses, in which the changed set of beams comprises a subset of network entity downlink transmit beams.6. The method of any of the preceding clauses, in which the message comprises a radio resource control (RRC) message or a media access control-control element (MAC-CE) message.7. The method of any of the preceding clauses, further comprising feeding back an indication that the resetting occurred.8. The method of any of the preceding clauses, in which the message further comprises hidden and/or cell states of the network entity.9. A method of wireless communication by a network entity, comprising:changing, for a user equipment (UE), a set of downlink beams for channel state information reference signals (CSI-RSs);transmitting a message, to the UE, indicating the changing of the set of downlink beams and a context to associate with the changing; and receiving, from the UE, a channel state compressed in accordance with the message.10. The method of clause 9, in which current and previous sets of downlink beams are non-quasi-collocated.11. The method of clause 9 or 10, in which the set of downlink beams comprises a subset of network entity beams.12. The method of any of the clauses 9-11, further comprising receiving, from the UE, updated auto-encoder weights in response to transmitting the message.13. The method of any of the clauses 9-12, further comprising receiving, from the UE, an indication that state values have been reset.14. The method of any of the clauses 9-13, further comprising receiving, from the UE, hidden and/or cell states of the UE.15. 
An apparatus for wireless communications at a user equipment (UE), comprising:a processor,memory coupled with the processor; andinstructions stored in the memory and operable, when executed by the processor, to cause the apparatus:to receive, from a network entity, a message indicating a change in a set of downlink beams for channel state information reference signals (CSI-RSs), and a context associated with the change;to save state values in an auto-encoder neural network in response to receiving the message;to associate the saved state values in the auto-encoder neural network to the context in the received message;to reset the state values in the auto-encoder neural network in response to receiving the message;to estimate a channel state based on the CSI-RSs received on the changed set of downlink beams;to compress the channel state with the auto-encoder neural network based on the reset state values; andto send to the network entity, the compressed channel state.16. The apparatus of clause 15, in which the changed set of downlink beams and a previous set of downlink beams are non-quasi-collocated.17. The apparatus of clause 15 or 16, in which the processor causes the apparatus to transmit auto-encoder weights to the network entity in response to receiving the message.18. The apparatus of any of the clauses 15-17, in which the state values in the auto-encoder neural network comprise hidden and/or cell state values in a long short term memory (LSTM) network, a gated recurring unit (GRU) or a recurrent neural network (RNN).19. The apparatus of any of the clauses 15-18, in which the changed set of beams comprises a subset of network entity downlink transmit beams.20. The apparatus of any of the clauses 15-19, in which the message comprises a radio resource control (RRC) message or a media access control-control element (MAC-CE) message.21. The apparatus of any of the clauses 15-20, in which the processor causes the apparatus to feed back an indication that the resetting occurred.22. The apparatus of any of the clauses 15-21, in which the message further comprises hidden and/or cell states of the network entity.23. An apparatus for wireless communications at a network entity, comprising:a processor,memory coupled with the processor; andinstructions stored in the memory and operable, when executed by the processor, to cause the apparatus:to change, for a user equipment (UE), a set of downlink beams for channel state information reference signals (CSI-RSs);to transmit a message, to the UE, indicating the changing of the set of downlink beams and a context to associate with the changing; andto receive, from the UE, a channel state compressed in accordance with the message.24. The apparatus of clause 23, in which current and previous sets of downlink beams are non-quasi-collocated.25. The apparatus of clause 23 or 24, in which the set of downlink beams comprises a subset of network entity beams.26. The apparatus of any of the clauses 23-25, in which the processor causes the apparatus to receive updated auto-encoder weights in response to transmitting the message.27. The apparatus of any of the clauses 23-26, in which the processor causes the apparatus to receive, from the UE, an indication that state values have been reset.28. The apparatus of any of the clauses 23-27, in which the processor causes the apparatus to receive, from the UE, hidden and/or cell states of the UE. 
The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the aspects to the precise form disclosed. Modifications and variations may be made in light of the above disclosure or may be acquired from practice of the aspects. As used, the term “component” is intended to be broadly construed as hardware, firmware, and/or a combination of hardware and software. As used, a processor is implemented in hardware, firmware, and/or a combination of hardware and software. Some aspects are described in connection with thresholds. As used, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, not equal to the threshold, and/or the like. It will be apparent that systems and/or methods described may be implemented in different forms of hardware, firmware, and/or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the aspects. Thus, the operation and behavior of the systems and/or methods were described without reference to specific software code—it being understood that software and hardware can be designed to implement the systems and/or methods based, at least in part, on the description. Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various aspects. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various aspects includes each dependent claim in combination with every other claim in the claim set. A phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c). No element, act, or instruction used should be construed as critical or essential unless explicitly described as such. Also, as used, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Furthermore, as used, the terms “set” and “group” are intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, and/or the like), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used, the terms “has,” “have,” “having,” and/or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise.
64,712
11863496
DETAILED DESCRIPTION Now, example embodiments will be described in detail with reference to the accompanying drawings. Embodiment 1 FIG.5shows the configuration of base station100according to Embodiment 1, andFIG.6shows the configuration of mobile station200according to Embodiment 1. To avoid complicated explanation,FIG.5shows components involving SRS reception closely relating to the present disclosure, and drawings and explanations of the components involving uplink and downlink data transmission and reception are omitted. Likewise,FIG.6shows components involving SRS transmission closely relating to the present disclosure and, drawings and explanations of the components involving uplink and downlink data transmission and reception are omitted. In base station100shown inFIG.5, SRS allocation determination section101determines allocation of SRSs in the frequency domain and the time domain based on the number of PUCCH channels, and outputs information related to the determined SRS allocation (hereinafter “SRS allocation information”), to control signal generation section102and SRS extraction section108. The processing in SRS allocation determination section101will be described later in detail. Control signal generation section102generates a control signal including SRS allocation information, and outputs the generated control signal to modulation section103. Modulation section103modulates the control signal, and outputs the modulated control signal to radio transmitting section104. Radio transmitting section104performs transmitting processing including D/A conversion, up-conversion and amplification, on the modulated signal, and transmits the resulting signal from antenna105. Radio receiving section106receives SRSs via radio from mobile station200via antenna105, performs receiving processing including down-conversion and A/D conversion on the SRSs and outputs the SRSs after receiving processing to demodulation section107. Demodulation section107demodulates the received SRSs and outputs the demodulated SRSs to SRS extraction section108. SRS extraction section108extracts SRSs allocated in the frequency domain and the time domain based on the SRS allocation information received as input from SRS allocation determination section101, and outputs the extracted SRSs to CQI/timing offset estimation section109. CQI/timing offset estimation section109estimates CQIs and timing offset from the SRSs. In mobile station200shown inFIG.6, SRS code generation section201generates a code sequence used as an SRS for measuring uplink data channel quality, that is, generates an SRS code, and outputs the SRS code to SRS allocation section202. SRS allocation section202maps the SRS code to resources in the time domain and frequency domain according to SRS allocation control section208, and outputs the mapped SRS code to modulation section203. Modulation section203modulates the SRS code and outputs the modulated SRS code to radio transmitting section204. Radio transmitting section204performs transmitting processing including D/A conversion, up-conversion and amplification, on the modulated signal, and transmits the resulting signal from antenna205. Radio receiving section206receives a control signal via radio from base station100via antenna205, performs receiving processing including down-conversion and A/D conversion on the control signal and outputs the control signal after receiving processing to demodulation section207. 
Demodulation section207demodulates the received control signal and outputs the demodulated control signal to SRS allocation control section208. SRS allocation control section208controls SRS allocation section202according to the SRS allocation information included in the demodulated control signal. Next, the processing in SRS allocation determination section101in base station100will be explained in detail. FIG.7is a flow chart showing the processing steps in SRS allocation determination section101. First, in step (hereinafter "ST")1010, SRS allocation determination section101determines an SRS bandwidth based on the accuracy of CQI estimation and the accuracy of timing offset estimation. Next, in ST1020, SRS allocation determination section101calculates the number of SRSs to be multiplexed in the frequency domain based on the system bandwidth, the number of PUCCH channels and the SRS bandwidth. To be more specific, the number of SRSs to be multiplexed in the frequency domain is the maximum number of SRSs which can be multiplexed on the SRS transmission bandwidth obtained by subtracting the PUCCH transmission bandwidth from the system bandwidth, and which each have a bandwidth of one transmission unit determined in ST1010. That is, the number of SRSs to be multiplexed in the frequency domain is the integer part of the quotient obtained by dividing the SRS transmission bandwidth by the SRS bandwidth determined in ST1010. Here, the PUCCH transmission bandwidth is determined by the number of PUCCH channels, and varies according to the number of items of control data to be accommodated. Next, in ST1030, SRS allocation determination section101first determines allocation of SRSs such that the SRSs are frequency-hopped (frequency-multiplexed) in the SRS transmission bandwidth at predetermined time intervals. To be more specific, SRS allocation determination section101determines that SRSs are mapped in the frequency domain and time domain such that the SRSs cover the frequency band to be subject to CQI estimation evenly and are mapped at predetermined time intervals in the time domain. FIGS.8A and8Bshow examples of SRS allocation determined in SRS allocation determination section101.FIG.8Ashows a case where the number of PUCCH channels is two, andFIG.8Bshows a case where the number of PUCCH channels is four. InFIGS.8A and8B, the SRS bandwidths are determined so as to fulfill the required accuracy of CQI estimation and the required accuracy of timing offset, and are not changed even when the number of PUCCH channels and SRS transmission bandwidth vary. Further, the number of PUCCH channels varies betweenFIGS.8A and8B, and therefore, the SRS transmission bandwidth varies and the number of SRSs to be frequency-multiplexed, that is, the number of SRS hopping, obtained by dividing the SRS transmission bandwidth by the SRS bandwidths determined in ST1010, varies. When the number of PUCCH channels is two inFIG.8A, the number of SRSs to be frequency-multiplexed is four, and, when the number of PUCCH channels is four inFIG.8B, the number of SRSs to be frequency-multiplexed is three. Then, as shown inFIG.8, the positions where SRSs are frequency-multiplexed in the SRS transmission bandwidth are positions to cover the SRS transmission band evenly, that is, the frequency band subject to CQI estimation.
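The calculation of ST1020and the even placement of ST1030can be illustrated with a short sketch. The resource-block figures below are assumptions chosen so that two PUCCH channels yield four SRS positions and four PUCCH channels yield three, mirroring the counts inFIGS.8A and8B; they are not values taken from the embodiment, and the PUCCH region is assumed to be split across the two band edges.

```python
def srs_allocation(system_bw_rb, pucch_bw_rb, srs_bw_rb):
    """ST1020/ST1030: number of frequency-multiplexed SRSs and evenly spread start positions."""
    srs_tx_bw = system_bw_rb - pucch_bw_rb        # band left after removing the PUCCH bandwidth
    n_srs = srs_tx_bw // srs_bw_rb                # integer part of the quotient (ST1020)
    spare_rb = srs_tx_bw - n_srs * srs_bw_rb      # RBs not covered by any SRS
    starts = []
    for n in range(n_srs):                        # spread the spare RBs evenly between SRSs
        gap = (n + 1) * spare_rb // (n_srs + 1)
        starts.append(pucch_bw_rb // 2 + n * srs_bw_rb + gap)
    return n_srs, starts

# Assumed numerology: 26-RB system band, 6-RB SRSs, one RB of PUCCH per channel.
print(srs_allocation(system_bw_rb=26, pucch_bw_rb=2, srs_bw_rb=6))   # 2 PUCCH channels -> 4 SRSs
print(srs_allocation(system_bw_rb=26, pucch_bw_rb=4, srs_bw_rb=6))   # 4 PUCCH channels -> 3 SRSs
```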
This results in dividing the band in which SRSs are not transmitted into a number of bands having smaller bandwidths, that is, this prevents a situation in which no SRS is transmitted over a wide contiguous range of the band, so that it is possible to reduce the deterioration of the accuracy of CQI estimation due to bands in which SRSs are not transmitted. In this way, according to the present embodiment, in accordance with an increase and decrease of the number of PUCCH channels, SRS allocation is changed to cover a CQI estimation bandwidth with fixed SRS bandwidths evenly, so that, when the PUCCH transmission bandwidth varies, it is possible to prevent interference between SRSs and PUCCHs while maintaining the accuracy of CQI estimation and the accuracy of timing offset estimation, and reduce the deterioration of the accuracy of CQI estimation due to bands in which SRSs are not transmitted. Embodiment 2 The base station and the mobile station according to Embodiment 2 adopt the same configurations and basically perform the same operations as the base station and the mobile station according to Embodiment 1. Therefore, block diagrams are not shown here, and the description will be omitted in detail. The base station and the mobile station according to the present embodiment are different from the base station and the mobile station according to Embodiment 1 in the SRS allocation determination section in the base station. The SRS allocation determination section provided in the base station according to the present embodiment is different from SRS allocation determination section101provided in the base station according to Embodiment 1 in part of processing. Now, the processing in the SRS allocation determination section according to the present embodiment will be explained. FIG.9is a flow chart showing the processing steps in the SRS allocation determination section according to the present embodiment. The steps shown inFIG.9are basically the same as shown inFIG.7and the same reference numerals are assigned to the same steps, and therefore the explanation thereof will be omitted. The steps shown inFIG.9are different from the steps shown inFIG.7in having ST2030instead of ST1030. In ST2030, the SRS allocation determination section first calculates the time interval at which SRSs are mapped in the frequency domain and time domain according to the following equation 1. If the SRSs are transmitted using time interval τ(cPUCCH) calculated according to equation 1, the CQI estimation period in the CQI estimation target band is fixed even if the number of PUCCH channels varies. [1] τ(cPUCCH) ≈ T/n(cPUCCH) (Equation 1) In equation 1, T represents the CQI estimation period in the CQI estimation target band and cPUCCH represents the number of PUCCH channels. n(cPUCCH) represents the number of SRSs to be frequency-multiplexed, that is, the number of frequency hopping positions, when the number of PUCCH channels is cPUCCH. The transmission interval is based on a time slot unit, and therefore τ(cPUCCH) is a result of the value on the right hand side of equation 1 matched with a time slot. Further, in ST2030, the SRS allocation determination section determines allocation of SRSs such that SRSs are frequency-multiplexed in the SRS transmission bandwidth at the calculated time interval τ. To be more specific, the SRS allocation determination section determines to map SRSs so as to cover the frequency band subject to CQI estimation evenly in the frequency domain and to cover CQI estimation period T evenly in the time domain.
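A minimal sketch of equation 1: the SRS transmission interval scales with the number of frequency-multiplexed SRSs so that the CQI estimation period T stays fixed. The slot-based rounding and the example values of n(cPUCCH) are assumptions for illustration.

```python
def srs_interval_slots(cqi_period_slots, n_srs):
    """Equation 1: tau(c_PUCCH) ~ T / n(c_PUCCH), matched to a whole number of time slots."""
    return max(1, round(cqi_period_slots / n_srs))

T = 12                                    # assumed CQI estimation period, in time slots
print(srs_interval_slots(T, n_srs=4))     # 2 PUCCH channels -> 4 hopping positions -> interval 3
print(srs_interval_slots(T, n_srs=3))     # 4 PUCCH channels -> 3 hopping positions -> interval 4
```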
FIGS.10A and10Bshow examples of SRS allocation determined in the SRS allocation determination section according to the present embodiment.FIG.10is basically the same asFIG.8and the overlapping explanation will be omitted. InFIGS.10A and10B, the SRS bands are not changed in accordance with a variation of SRS transmission bandwidth, and SRSs are frequency-multiplexed so as to cover the SRS transmission bandwidth evenly. Further, inFIG.10A, SRSs are mapped using time interval τ(2), and inFIG.10B, SRSs are mapped using time interval τ(4). That is, in the present embodiment, when the number of PUCCH channels decreases, the SRS transmission interval is made shorter and when the number of PUCCH channels increases, the SRS transmission interval is made longer. By this means, even when the number of PUCCH channels varies, CQI estimation period T does not vary. In this way, according to the present embodiment, in accordance with an increase and decrease of the number of PUCCH channels, SRS allocation is changed such that a CQI estimation bandwidth is covered with fixing SRS bandwidths evenly. Accordingly, when the PUCCH transmission bandwidth varies, it is possible to prevent SRSs and PUCCHs from interfering each other while maintaining the accuracy of CQI estimation and the accuracy of timing offset, and reduce the deterioration of the accuracy of CQI estimation due to bands in which SRSs are not transmitted. Further, according to the present embodiment, when the number of PUCCH channels decreases, the SRS transmission interval is made shorter and when the number of PUCCH channels increases, the SRS transmission interval is made longer. By this means, when the PUCCH transmission bandwidth varies, it is possible to maintain a constant CQI estimation period and prevent the accuracy of CQI estimation from deteriorating. Embodiment 3 The base station and the mobile station according to Embodiment 3 adopt the same configurations and basically perform the same operations as the base station and the mobile station according to Embodiment 1. Therefore, block diagrams are not shown here, and the description will be omitted in detail. The base station and the mobile station according to the present embodiment are different from the base station and the mobile station according to Embodiment 1 in the SRS allocation determination section in the base station. The SRS allocation determination section provided in the base station according to the present embodiment is different from SRS allocation determination section101provided in the base station according to Embodiment 1 in part of processing. Now, the allocation of SRSs determined in the SRS allocation determination section according to the present embodiment will be explained. FIGS.11A and11Bshow examples of SRS allocation determined in the SRS allocation determination section according to the present embodiment.FIG.11is basically the same asFIG.10and the overlapping explanation will be omitted. InFIGS.11A and11B, the SRS bands are not changed in accordance with a variation of SRS transmission bandwidth, and SRSs are frequency-multiplexed so as to cover the SRS transmission bandwidth evenly. Further, as shown inFIGS.11A and11B, the number of SRSs to be frequency-multiplexed is the number of when the number of PUCCH channels is the maximum, regardless of whether the number of PUCCHs increases or decreases. Here, the maximum value for the number of PUCCH channels is four and the number of SRSs to be frequency-multiplexed is three. 
Further, as shown inFIGS.11A and11B, a transmission interval between SRSs is the transmission interval of when the number of PUCCH channels is the maximum, regardless of whether the number of PUCCHs increases or decreases. Here, the maximum value for the number of PUCCH channels is four and the transmission interval is represented by τ(4). According to the method as shown inFIG.11, it is not necessary to calculate a transmission interval every time the number of PUCCH channels varies and it is possible to simplify the determination processing of SRS allocation. In this way, according to the present embodiment, in accordance with an increase and decrease of the number of PUCCH channels, SRS allocation is changed such that a CQI estimation bandwidth is evenly covered with fixing SRS bandwidths. By this means, when the PUCCH transmission bandwidth varies, it is possible to prevent SRSs and PUCCHs from interfering each other while maintaining the accuracy of CQI estimation and the accuracy of timing offset, and reduce the deterioration of the accuracy of CQI estimation due to bands in which SRSs are not transmitted. Furthermore, according to the present embodiment, in accordance with an increase and decrease of the number of PUCCH channels, SRSs are mapped without changing the number of SRSs to be frequency-multiplexed and the SRS transmission interval, so that it is possible to simplify the SRS allocation process. Embodiment 4 In Embodiment 4, the method of SRS allocation from a plurality of mobile stations in accordance with a variation of the PUCCH transmission bandwidth, will be explained. The base station and the mobile station according to Embodiment 4 adopt the same configurations and basically perform the same operations as the base station and the mobile station according to Embodiment 1. Therefore, block diagrams are not shown here, and the description will be omitted in detail. The base station and the mobile station according to the present embodiment are different from the base station and the mobile station according to Embodiment 1 in the SRS allocation determination section in the base station. The SRS allocation determination section provided in the base station according to the present embodiment is different from SRS allocation determination section101provided in the base station according to Embodiment 1 in part of processing. Now, the allocation of SRSs determined in the SRS allocation determination section according to the present embodiment will be explained. FIGS.12A and12Bshow examples of SRS allocation determined in the SRS allocation determination section according to the present embodiment.FIG.12is basically the same asFIG.8and the overlapping explanation will be omitted. InFIGS.12A and12B, the SRS bands are not changed in accordance with a variation of SRS transmission bandwidth, and SRSs are frequency-multiplexed so as to cover the SRS transmission bandwidth evenly. Further, as shown inFIGS.12A and12B, in accordance with the variation of the PUCCH transmission bandwidth, the SRS allocation determination section according to the present embodiment maps SRSs without changing the hopping pattern of SRSs in a predetermined frequency band. In other words, SRS allocation to be changed is controlled so as to make different hopping patterns in the same band. 
To be more specific, by transmitting and not transmitting SRSs mapped to the specific band according to an increase and decrease of the PUCCH transmission bandwidth, it is not necessary to change the hopping pattern in other bands. In this way, according to the present embodiment, in accordance with an increase and decrease of the number of PUCCH channels, SRS allocation is changed such that a CQI estimation bandwidth is evenly covered with fixing SRS bandwidths. By this means, when the PUCCH transmission bandwidth varies, it is possible to prevent SRSs and PUCCHs from interfering each other while maintaining the accuracy of CQI estimation and the accuracy of timing offset, and reduce the decrease of the accuracy of CQI estimation due to bands in which SRSs are not transmitted. Further, according to the present embodiment, in accordance with an increase and decrease of the number of PUCCH channels, SRSs are mapped in the frequency domain and time domain without changing the SRS hopping pattern, so that, when the PUCCH transmission bandwidth varies, it is possible to maintain the number of SRSs from mobile stations to be multiplexed and the CQI estimation period in the CQI estimation target band of each mobile station. Embodiment 5 The base station and the mobile station according to Embodiment 5 adopt the same configurations and basically perform the same operations as the base station and the mobile station according to Embodiment 1. Therefore, block diagrams are not shown here, and the description will be omitted in detail. The base station and the mobile station according to the present embodiment are different from the base station and the mobile station according to Embodiment 1 in the SRS allocation determination section in the base station. The SRS allocation determination section provided in the base station according to the present embodiment is different from SRS allocation determination section101provided in the base station according to Embodiment 1 in part of processing. Now, the allocation of SRSs determined in the SRS allocation determination section according to the present embodiment will be explained. FIGS.13A and13Bshow examples of SRS allocation determined in the SRS allocation determination section according to the present embodiment. InFIGS.13A and13B, the SRS bands are not changed in accordance with a variation of SRS transmission bandwidth, and SRSs are frequency-multiplexed so as to cover the SRS transmission bandwidth evenly. Further, inFIGS.13A and13B, the number of SRSs to be frequency-multiplexed is the number of when the number of PUCCH channels is the minimum and is fixed regardless of whether the number of PUCCHs increases or decreases. InFIGS.13A and13B, the minimum value for the number of PUCCH channels is two and the number of SRSs to be frequency-multiplexed is four. Further, inFIGS.13A and13B, while the SRS transmission bandwidth varies in accordance with an increase and decrease of the number of PUCCH channels, the number of SRSs to be frequency-multiplexed is fixed, and therefore SRSs are mapped in the frequency domain such that a plurality of SRSs partly overlap. Further, inFIGS.13A and13B, the number of SRSs to be frequency-multiplexed does not change in accordance with an increase and decrease of the number of PUCCH channels, and therefore SRS transmission intervals do not change. 
In this way, according to the present embodiment, in accordance with an increase and decrease of the number of PUCCH channels, SRS allocation is changed such that a CQI estimation bandwidth is covered with fixing SRS bandwidths evenly. Accordingly, when the PUCCH transmission bandwidth varies, it is possible to prevent interference between an SRS and a PUCCH while maintaining the accuracy of CQI estimation and the accuracy of timing offset, and reduce the deterioration of the accuracy of CQI estimation due to bands in which SRSs are not transmitted. Further, according to the present embodiment, in accordance with an increase and decrease of the number of PUCCH channels, SRS are mapped such that bands of frequency-multiplexed SRSs partly overlap, without changing the number of SRSs to be frequency-multiplexed, so that it is possible to improve the accuracy of CQI estimation more and prevent the accuracy of CQI estimation from deteriorating due to bands in which SRSs are not transmitted. The example embodiments have been explained. Although cases have been explained with the above embodiments where the number of PUCCH channels is two or four, the number is explained with examples only and the present disclosure is not limited to this. Further, although cases have been explained with the above embodiments where the SRS transmission bandwidth is the band obtained by subtracting the PUCCH transmission bandwidth from the system bandwidth, the present disclosure is not limited to this, and the SRS transmission bandwidth may be a specific band varying according to an increase and decrease of the number of PUCCH channels. Further, although cases have been explained with the above embodiments as examples where the SRS bands are not changed in accordance with an increase and decrease of the number of PUCCH channels and the positions on which SRSs are frequency-multiplexed in the SRS transmission band change, the present disclosure is not limited to this, and it is possible to change the positions where SRSs are frequency-multiplexed in the SRS transmission band according to an increase and decrease of the number of PUCCH channels, and change the SRS bandwidths. A variation of an SRS bandwidth may be limited within a range in which the deterioration of the accuracy of CQI estimation and the accuracy of timing offset can be ignored, for example within ±1 to 2 RBs, and this facilitates reducing the deterioration of the accuracy of CQI estimation. Here, an RB (Resource Block) refers to a unit representing a specific range of radio resources.FIG.14Ashows an example where the SRS bands extend in a predetermined range and the range of each extended band inFIG.14Ais 1 RB or less. Further, to extend and contract the SRS transmission band here, CAZAC (Constant Amplitude Zero Auto-Correlation) sequence or cyclic extension and truncation of a sequence having the same characteristics as CAZAC may be adopted. Further, it is possible to allocate uplink data channels for which CQIs cannot be estimated using narrowband SRSs with the above embodiments, to mobile stations transmitting wideband SRSs with priority.FIG.14Billustrates to explain a case where uplink data channels for which CQIs cannot be estimated using narrowband SRSs are allocated with priority to mobile stations transmitting wideband SRSs. The above packet allocation method makes it possible to prevent the frequency scheduling effect from lowering. Further, as shown inFIG.15A, SRSs may be mapped so as to neighbor PUCCHs. 
Further, as shown inFIG.15B, allocation of SRSs may vary between hopping cycles. Further, an SRS may be named as simply a "pilot signal," "reference signal" and so on. Further, a known signal used for an SRS may include a CAZAC sequence or a sequence having the same characteristics as a CAZAC. Further, the SRS allocation information acquired in the base station according to the above embodiments may be reported to mobile stations using a PDCCH (Physical Downlink Control Channel), which is an L1/L2 control channel, or using a PDSCH (Physical Downlink Shared Channel) as an L3 message. Further, in the above embodiments, DFT-s-OFDM (Discrete Fourier Transform-spread-Orthogonal Frequency Division Multiplexing) employed in LTE may be adopted to the uplink. Further, in the above embodiments, OFDM employed in LTE may be adopted to downlink. Further, the SRS allocation information according to the above embodiments may be uniquely associated in advance with a broadcast channel, for example, PUCCH configuration information reported in a BCH (Broadcast Channel). By this means, it is not necessary to transmit SRS allocation information on a per UE basis, so that signaling overhead is reduced. For example, each UE may calculate SRS allocation from the number of PUCCH channels as follows. Now, an example of equations to calculate SRS allocation from the number of PUCCH channels will be shown below. If the subcarrier to which an SRS starts to be mapped in the frequency domain is k_0, k_0 is represented as the following equation 2. [2] k_0 = k_RB(n)·N_SC^RB (Equation 2) In equation 2, n represents the multiplexing number of an SRS in the frequency domain and N_SC^RB represents the number of subcarriers per RB. Further, k_RB(n) represents the RB number to which the SRS with frequency multiplex number n is mapped and is represented by the following equation 3 or 4. [3] k_RB(n) = n·N_SRS^BASE + ⌊(n+1)·(N_RB^UL − N_RB^PUCCH − N_SRS^BASE·N_SRS)/(N_SRS + 1)⌋ + ⌊N_RB^PUCCH/2⌋, n = 0, 1, … N_SRS − 1 (Equation 3) [4] k_RB(n) = n·N_SRS^BASE + ⌊(2n+1)·(N_RB^UL − N_RB^PUCCH − N_SRS^BASE·N_SRS)/(2·N_SRS)⌋ + ⌊N_RB^PUCCH/2⌋, n = 0, 1, … N_SRS − 1 (Equation 4) In equations 3 and 4, N_SRS represents the number of SRSs to be frequency-multiplexed and is represented by the following equation 5. [5] N_SRS = ⌊(N_RB^UL − N_RB^PUCCH)/N_SRS^BASE⌋ (Equation 5) In equations 3, 4 and 5, N_RB^PUCCH represents the number of RBs included in the PUCCH transmission band and N_RB^UL represents the number of RBs included in the system band. N_SRS^BASE represents the number of RBs included in the SRS transmission bandwidth. In the above parameters, the parameters other than N_RB^PUCCH are system parameters, so that the system parameters can be used in a fixed manner once they are signaled or reported. Accordingly, when a mobile station is given N_RB^PUCCH, SRS allocation is able to be derived according to the above equation 2 to equation 5. Here, N_RB^PUCCH is the parameter determined by the number of PUCCH channels, so that a mobile station is able to derive SRS allocation and transmit SRSs if the mobile station is provided the number of PUCCH channels from the base station. Further, the mobile station may derive SRS allocation from the number of PUCCH channels with reference to an SRS allocation definition table instead of above equation 2 to equation 5.FIG.16shows an example of the SRS allocation definition table. The SRS allocation definition table shown inFIG.16defines the RB numbers of RBs to which SRSs are mapped in cases where the number of PUCCH channels is one and four.
Further, t represents a transmission timing in hopping cycles. Further, as shown inFIG.16, the hopping patterns vary according to varying multiplexing number of SRSs to n. Further, “−” in the table shows that SRSs are not allocated. By holding an SRS allocation definition table, a mobile station is able to derive SRS allocation and transmit SRSs if the mobile station is provided the number of PUCCH channels from the base station. Further, the information uniquely associated in advance with PUCCH configuration information may include other SRS configuration information including variable information about the above SRS bandwidth and SRS sequence information, in addition to the SRS allocation information. Further, although examples have been explained with the above embodiments where the narrowband SRS bandwidths evenly cover one SRS transmission bandwidth in the frequency domain, the present disclosure is not limited to this, and, with the present disclosure, one SRS transmission bandwidth may be divided into a plurality of smaller SRS transmission bandwidths (hereinafter “SRS subbands”) and the narrowband SRS bandwidths may be mapped so as to cover each SRS subband bandwidth evenly in the frequency domain. FIGS.17A and17Bshow an example of a case where two SRS subbands1and2are provided in one SRS transmission bandwidth and three SRSs are mapped to each subband. In the example shown inFIG.17A, the allocation and the intervals of SRSs mapped in SRS subband1are changed according to the variation of a bandwidth of SRS subband1such that CQI estimation bandwidth is covered evenly in SRS subband1. Likewise, the allocation and the intervals of SRSs mapped in SRS subband2are changed according to the variation of a bandwidth of SRS subband2such that CQI estimation bandwidth is covered evenly in SRS subband2. Further, as in the example shown inFIG.17B, the bandwidths of SRS subbands may vary. In this case, the allocation and the intervals of SRSs mapped in SRS subbands may be changed on a per SRS subband basis such that CQI estimation bandwidth is evenly covered. Although a case has been explained as an example where the number of SRS subbands is two inFIGS.17A and17B, the number of SRS subbands may be three or more with the present disclosure. Further, although a case has been explained as an example where the number of SRSs in the SRS subband is three inFIGS.17A and17B, with the present disclosure, a plurality of SRSs besides three SRSs may be mapped in the SRS subband. Further, although mapping examples have been explained with the above embodiments where SRSs are neighboring each other evenly in the SRS transmission bandwidth, in practical systems, SRS bandwidths and positions where SRSs are allocated in the frequency domain are discrete values. Therefore, cases may occur where the SRS transmission bandwidth is not divided by one SRS band. In this case, without using frequency allocation units that have fractions left as a remainder of division, it is also possible to map SRSs so as to cover the CQI estimation bandwidth evenly in the frequency domain in a range that is divisible (FIG.18A). Further, it is also possible to allocate frequency allocation units that have fractions left as a remainder of division between SRSs on a per frequency unit basis (FIG.18B). Here, the RB (Resource Block) inFIGS.18A and18Brepresents an allocation unit in the frequency domain.FIGS.18A and18Bare examples where the SRS bandwidth is 4 RBs and the SRS transmission bandwidth is 18 RBs. 
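A sketch of the derivation a mobile station could perform from the reported PUCCH bandwidth using equations 2 to 5 above, taking the equation 3 form for k_RB(n). The system bandwidth, SRS bandwidth, and the 12-subcarriers-per-RB figure are illustrative assumptions only.

```python
def derive_srs_allocation(n_rb_ul, n_rb_pucch, n_srs_base, n_sc_rb=12):
    """Derive SRS allocation from N_RB^PUCCH via equations 2 to 5 (equation 3 form)."""
    n_srs = (n_rb_ul - n_rb_pucch) // n_srs_base                  # Equation 5
    spare_rb = n_rb_ul - n_rb_pucch - n_srs_base * n_srs          # RBs left over after N_SRS SRSs
    allocation = []
    for n in range(n_srs):
        k_rb = (n * n_srs_base                                    # Equation 3: RB number of the
                + (n + 1) * spare_rb // (n_srs + 1)               # n-th SRS, spreading the spare
                + n_rb_pucch // 2)                                # RBs evenly across the band
        k0 = k_rb * n_sc_rb                                       # Equation 2: starting subcarrier
        allocation.append((n, k_rb, k0))
    return allocation

# Assumed values: 20-RB system band, 2 PUCCH RBs, 4-RB SRS bandwidth (giving an 18-RB SRS
# transmission bandwidth, as in the FIG.18 example), 12 subcarriers per RB.
for n, k_rb, k0 in derive_srs_allocation(n_rb_ul=20, n_rb_pucch=2, n_srs_base=4):
    print(f"SRS {n}: RB {k_rb}, first subcarrier {k0}")
```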
Further, although cases have been explained with the above embodiments where SRSs are frequency-hopped (frequency-multiplexed) in the SRS transmission bandwidth at predetermined time intervals, the present disclosure is not limited to this, and provides the same advantage as in cases where frequency hopping is not carried out, as explained with the above embodiments. The SRSs in the above embodiments may be mapped in RB units or subcarrier units, and may not be limited to any unit. Further, a CQI showing channel quality information may be referred to as “CSI (Channel State Information).” Further, a base station apparatus may be referred to as “Node B” and a mobile station may be referred to as “UE.” Further, although cases have been described with the above embodiments as examples where the present disclosure is configured by hardware, the present disclosure can also be realized by software. Each function block employed in the description of each of the aforementioned embodiments may typically be implemented as an LSI constituted by an integrated circuit. These may be individual chips or partially or totally contained on a single chip. “LSI” is adopted here but this may also be referred to as “IC,” “system LSI,” “super LSI,” or “ultra LSI” depending on differing extents of integration. Further, the method of circuit integration is not limited to LSIs, and implementation using dedicated circuitry or general purpose processors is also possible. After LSI manufacture, utilization of a programmable FPGA (Field Programmable Gate Array) or a reconfigurable processor where connections and settings of circuit cells within an LSI can be reconfigured is also possible. Further, if integrated circuit technology comes out to replace LSI's as a result of the advancement of semiconductor technology or a derivative other technology, it is naturally also possible to carry out function block integration using this technology. Application of biotechnology is also possible. The disclosures of Japanese Patent Application No. 2007-211548, filed on Aug. 14, 2007, and Japanese Patent Application No. 2008-025535, filed on Feb. 5, 2008, including the specifications, drawings and abstracts, are incorporated herein by reference in their entirety. INDUSTRIAL APPLICABILITY The present disclosure is applicable to, for example, mobile communication systems.
33,632
11863497
DETAILED DESCRIPTION
An example implementation will now be described in the context of a system operating according to 4G LTE or 5G NR by way of example. It should be understood, however, that the principles disclosed herein could extend to apply with respect to other RATs as well. Further, it should be understood that other variations from the specific arrangements and processes described are possible. For instance, various described entities, connections, functions, and other elements could be added, omitted, distributed, re-located, re-ordered, combined, or changed in other ways. In addition, it will be understood that technical operations disclosed as being carried out by one or more entities could be carried out at least in part by a processing unit programmed to carry out the operations or to cause one or more other entities to carry out the operations. Referring to the drawings, as noted above,FIG.1is a simplified block diagram of an example wireless communication system in which features of the present disclosure can be implemented. In particular,FIG.1depicts a representative network that functions primarily to serve UEs with wireless packet data communication service, including possibly voice-over-packet service, but may also provide other functions. As shown, the network includes an example access node12, which could be a 4G LTE access node (e.g., evolved Node-B (eNB)) or a 5G NR access node (e.g., next generation Node-B (gNB)), among other possibilities. This access node could be a macro access node of the type configured to provide a wide range of coverage or could take other forms, such as a small cell access node, a relay node, a femtocell access node, or the like, possibly configured to provide a smaller range of coverage. Further, the access node could be configured to provide coverage on at least one carrier14, which could be FDD or TDD as discussed above. In an example implementation, the air interface on this carrier could be configured to define various air-interface resources for carrying communications between the access node and UEs. By way of example, in the time domain, the air interface could define a continuum of 10-millisecond (ms) frames, each divided into ten 1-ms subframes as TTIs, and each TTI could be further divided into a number of timeslots, each additionally divided into symbol time segments. And in the frequency domain, the bandwidth of the carrier could be divided into subcarriers with specified subcarrier spacing on the order of 15 to 240 kHz. With this example arrangement, the air interface would define the array of resource elements as noted above, with each resource element spanning a respective symbol time segment and occupying a respective subcarrier, and the access node and UEs could communicate with each other through modulation of the subcarriers to carry data in those resource elements. Further, particular sets of resource elements on the air interface could be grouped together to define the PRBs discussed above. In an example implementation, each PRB could span one timeslot in the time domain and a group of subcarriers in the frequency domain. Depending on the carrier bandwidth, the air interface could thus support a certain finite number of such PRBs across the bandwidth of the carrier within each TTI. In addition, certain resource elements on the downlink and uplink of the example air interface could be designated for particular use as discussed above.
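As a rough numerical illustration of this resource-grid arithmetic (not a statement of the exact standardized numerology, which also accounts for guard bands and other factors), a sketch like the following estimates how many PRBs fit across a carrier, assuming 12 subcarriers per PRB; the function and parameter names are illustrative.

```python
# Illustrative arithmetic only; the real numerology (guard bands, usable
# bandwidth, symbols per slot) is defined by the applicable 4G/5G standard.

SUBCARRIERS_PER_PRB = 12  # assumption used throughout this sketch

def prbs_across_carrier(bandwidth_hz: float, subcarrier_spacing_hz: float) -> int:
    """Approximate number of PRBs that fit across the carrier bandwidth."""
    return int(bandwidth_hz // (subcarrier_spacing_hz * SUBCARRIERS_PER_PRB))

# e.g., a 20 MHz carrier with 15 kHz subcarrier spacing, ignoring guard bands:
print(prbs_across_carrier(20e6, 15e3))  # -> 111 (LTE specifies 100 once guard bands are applied)
```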
For instance, on the downlink, certain resource elements per TTI could define a downlink control region for carrying control signaling such as scheduling directives and HARQ messaging from the access node to UEs. And other resource elements per TTI could define a shared channel in which the access node could allocate PRBs on an as-needed basis to carry data communications from the access node to UEs. Further, resource elements distributed in a predefined pattern throughout the carrier bandwidth per TTI could carry a broadcast reference signal as noted above, which UEs could measure as a basis to evaluate coverage strength and quality and to provide channel estimates to facilitate precoding, beamforming, or the like. In addition, in certain downlink subframes, a group of resource elements centered on the center frequency of the carrier in certain TTIs could carry the broadcast synchronization signals noted above, which UEs could detect as a way to discover coverage of the access node on the carrier and to establish frame timing. And in certain downlink subframes, a group of resource elements also centered on the center frequency of the carrier in certain TTIs could carry broadcast system information messages, such as a master information block (MIB) and system information block (SIB) messages that UEs could read to obtain operational parameters such as carrier bandwidth (e.g., downlink bandwidth and/or uplink bandwidth) and other information. On the uplink, on the other hand, certain resource elements per TTI, such as sets of PRBs at the low-frequency end of the carrier and at the high-frequency end of the carrier, could define an uplink control region for carrying control signaling such as access requests, channel-quality reports, scheduling requests, and HARQ messaging, from UEs to the access node. And other resource elements per TTI could define a shared channel in which the access node could allocate PRBs on an as-needed basis to carry data communications from UEs to the access node. Further, still other resources on the uplink could be used for other purposes as well, such as to carry uplink reference signals or the like. In the example ofFIG.1, the access node is shown interconnected with a core network16that provides connectivity with a transport network18. The core network16could be a packet-switched network configured as an Evolved Packet Core (EPC) network or a Next Generation Core (NGC) core network, among other possibilities, with entities having Internet Protocol (IP) addresses and being configured to communicate with each other through virtual packet-tunnels or the like. In an example EPC arrangement, as shown, the core network16includes a serving gateway (SGW)20and a packet-data-network gateway (PGW)22, for carrying user-plane communications through the core network16between the access node12and the transport network18. Further, the core network16includes a mobility management entity (MME)24, which functions as a core-network controller, responsible for managing UE attachment and bearer setup, among other operations, and a home subscriber server (HSS)26, which stores UE profile records and may specify service-subscription plans, UE device type and configuration, and/or other such UE profile information. 
The example core network16is also shown including an element management system (EMS)28, which could operate as a central repository of operational data for the wireless communication network and to control and manage operation of various network elements, to help ensure optimal use of their resources. In practice, entities such as the access node12could regularly report to the EMS28various operational data, such as data regarding connectivity and service of UEs, and data regarding access node load (e.g., PRB utilization) and performance, among others. And the EMS28could oversee operation of the access node12and other entities, providing operation directives or the like to which the entities could be configured to respond accordingly. In addition, as further shown, the core network16and/or transport network18in the example arrangement could include or provide connectivity with an example Internet Multimedia Subsystem (IMS)30. The IMS30could include various proxy servers and media servers configured to provide packet-based real-time media services, such as VoIP-call services for served UEs. For instance, to facilitate VoIP-call service, a UE served by access node12might engage in packet-based call-setup signaling, such as Session Initiation Protocol (SIP) signaling, with the IMS30to establish a packet-based real-time media session that extends between the UE and the IMS30via the access node12and the core network16, and the IMS30might establish a connection with a remote call party and bridge that connection with the UE's packet-based real-time media session, so that the UE and remote party could then engage in voice-call communication. For representative VoIP communication, voice could be digitized and encoded using a codec that might encode and output voice frames of 20 milliseconds each or so. The encoded data could then be packetized and transmitted to the other end, where the data could be de-packetized, decoded, and played out. Thus, as a UE is engaged in a VoIP call, a sequence of voice packets could pass respectively in each direction to and from the UE, carrying voice communications respectively in each direction. FIG.1depicts multiple example UEs32that may be within coverage of and connect with access node12from time to time. These UEs could be of various types, including for instance any of the types noted above, among other possibilities. When each such UE initially enters into coverage of the system, the UE could discover threshold strong coverage of access node12and, as noted above, could then engage in random-access and connection signaling, to establish an RRC connection with the access node12, thus putting the UE in an RRC connected mode. Further, the UE could engage in attach signaling through the access node12with the MME24. And after authentication of the UE and/or at other times during service of the UE, the MME24could coordinate setup for the UE of one or more user-plane bearers each including a radio-access bearer (RAB) that has a data radio bearer (DRB) extending over the air between the access node12and the UE and an S1-U tunnel extending between the access node12and the SGW20, and including an S5 tunnel extending between the SGW20and the PGW22. In addition, the access node12could establish for the UE a context record, indicating the UE's connected state and identifying each such bearer configured for the UE, and could report the UE connection data to the EMS28.
In relation to this attachment process or at another time, the access node12could also obtain configuration and capabilities data regarding the UE, such as data indicating the UE device type (e.g., whether the device is an IoT device or rather a consumer device such as a cell phone) and service subscription details (e.g., whether the device supports VoIP-call communication, etc.), and could store this data in the UE context record for reference while serving the UE. For instance, during the attachment process, the MME24could obtain this data from the HSS26and could convey the data to the access node12for storage, and/or the UE could provide the access node12with a report of this data. Further, the EMS28could also have access to this data regarding the UE, perhaps obtaining the data from the HSS26or access node12, among other possibilities. Each bearer that the MME24sets up for the UE could have a corresponding quality of service class indicator (QCI) level, which could indicate a class or type of communication that would be carried by the bearer, and which the access node12could note in its context record for the UE connection. For instance, upon initial attachment, the MME might set up for the UE a best-efforts general Internet bearer (e.g., QCI 8 or 9) for use to carry general Internet communications. And if the UE is a particular type of device, such as an IoT device, the MME24might set up a bearer with a QCI level deemed appropriate for that type of UE (e.g., QCI 7). Further, if the UE subscribes to VoIP service, the MME24might set up for the UE an IMS-signaling bearer (e.g., QCI 5) for carrying SIP signaling between the UE and the IMS. And if and when a VoIP call is set up for the UE, the MME24might set up for the UE a dedicated VoIP bearer (e.g., QCI 1). Other examples are possible as well.
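As a compact illustration of this bearer-to-QCI mapping, the following sketch mirrors the example QCI levels given above; the data structure and function names are illustrative and not part of any actual MME implementation.

```python
# A minimal sketch of the bearer-selection logic described above. The QCI
# values mirror the examples in the text; the names are illustrative only.
from dataclasses import dataclass

@dataclass
class UeProfile:
    is_iot_device: bool
    subscribes_to_voip: bool

def bearers_on_attach(ue: UeProfile) -> list[int]:
    """Return the QCI levels of bearers to set up when the UE attaches."""
    qcis = [9]                     # best-efforts general Internet bearer (QCI 8 or 9)
    if ue.is_iot_device:
        qcis.append(7)             # bearer type deemed appropriate for IoT devices
    if ue.subscribes_to_voip:
        qcis.append(5)             # IMS-signaling bearer for SIP signaling
    return qcis

def bearer_for_voip_call() -> int:
    return 1                       # dedicated VoIP bearer set up when a call starts

print(bearers_on_attach(UeProfile(is_iot_device=False, subscribes_to_voip=True)))  # [9, 5]
```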
Likewise, when the UE has data to transmit, the UE could send a scheduling request to the access node, the access node could allocate uplink PRBs of an upcoming TTI to carry a block of the data from the UE and could transmit to the UE a DCI message designating the allocated PRBs of the upcoming TTI, and the UE could accordingly transmit the block of data to the access node12in the allocated PRBs of that TTI. The access node12might then determine if the access node successfully receives the scheduled transmission (e.g., based on a CRC analysis) and, as noted above, then transmit to the UE either an ACK, which would signal successful completion of the transmission, or a NACK, which may cause the UE to engage in retransmission. Alternatively, for this uplink transmission from the UE, the access node12might similarly apply TTI bundling. For instance, the access node could allocate uplink PRBs of each of a series of upcoming TTIs to carry respective transmissions of the block of data from the UE, perhaps each with different error-correction coding, and the access node could transmit to the UE a DCI message that designates the TTI bundling factor and allocated PRBs per TTI. The UE could then accordingly engage in the multiple transmissions to the access node12. And based on whether the access node12successfully receives the block of data through these multiple transmissions from the UE, the access node12could then transmit to the UE either an ACK or NACK. In an example implementation, the access node12could have a default configuration and thus default mode of operation in which the access node12is configured to apply TTI bundling for a given communication based on the communication type, among other possible factors. For instance, the access node12could be provisioned with data that specifies various communication types as to which the access node12is to automatically apply TTI bundling and/or various types of communications as to which the access node12is to not apply TTI bundling. Example communication types as to which the access node12could be so configured to apply TTI bundling might include VoIP communication and other latency-sensitive communications. Whereas, example communication types as to which the access node could be so configured to not apply TTI bundling might include best-efforts communications such as general Internet communications for instance. With this default configuration, if and when the access node12is serving a VoIP communication (e.g., as indicated by the communication being on a QCI-1 bearer or when deep packet inspection or other analysis so indicates), the access node12could automatically apply TTI bundling to the communication. Whereas, if and when the access node12is serving a best-efforts communication (e.g., as indicated by the communication being on a QCI-9 bearer or when deep packet inspection or other analysis so indicates), the access node12could automatically not apply TTI bundling to the communication. As further indicated, the access node12could also be configured to proactively reserve some of the PRBs of carrier14in response to the access node12detecting that at least a predefined threshold high number of UEs of a particular type are RRC connected with access node12. This could be on the downlink and/or the uplink.
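To summarize this default policy in code form, the sketch below decides on TTI bundling from the bearer QCI as a stand-in for communication type; the bundling factor of 4 and the QCI sets are assumptions for illustration, not values stated by the disclosure.

```python
# A minimal sketch of the default policy described above: TTI bundling is
# applied for latency-sensitive traffic (e.g., VoIP on a QCI-1 bearer) and not
# for best-efforts traffic. The policy table and factor of 4 are illustrative.

LATENCY_SENSITIVE_QCIS = {1}       # e.g., dedicated VoIP bearers
BEST_EFFORTS_QCIS = {8, 9}         # e.g., general Internet bearers

def tti_bundling_factor(qci: int, default_bundling_factor: int = 4) -> int:
    """Return the TTI-bundling factor to use for a communication (1 = no bundling)."""
    if qci in LATENCY_SENSITIVE_QCIS:
        return default_bundling_factor   # transmit the block in a bundle of TTIs
    return 1                             # single transmission, rely on HARQ retransmission

print(tti_bundling_factor(1))   # -> 4 (bundle VoIP)
print(tti_bundling_factor(9))   # -> 1 (no bundling for best-efforts traffic)
```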
For instance, the access node12might be configured to proactively reserve some of the PRBs of the carrier14for use to serve Cat-M1 IoT devices in response to the access node12determining that at least a predefined threshold high number of Cat-M1 IoT devices are currently RRC-connected with the access node12. This predefined threshold high number could be set by engineering design and/or business policy to be a number where it would be important or useful to help ensure that there will be sufficient PRB availability for use to serve such devices. Further, the threshold could be predefined to vary per time of day and/or based on consideration of one or more other factors, such as load (e.g., PRB utilization) on the carrier, among other possibilities. And the device type at issue could be a particular class of devices that is defined in advance to be at issue. As an example implementation of this process, assume that carrier14defines 100 PRBs per TTI. In a default mode of operation, the access node12might not have a portion of those PRBs proactively reserved for use to carry communications with Cat-M1 devices. Upon determining that the number of Cat-M1 devices currently RRC-connected with the access node12has risen to the predefined threshold number, the access node12could then reconfigure itself from the default mode of operation to a mode of operation in which the access node12has a portion of the PRBs proactively reserved for use to carry communications with Cat-M1 devices. For instance, the access node12might designate 8 PRBs per TTI as PRBs reserved for use to carry communications with the Cat-M1 device and may record this proactive resource reservation in its internal memory or other data storage for reference when it later becomes necessary to schedule communications with the connected Cat-M1 devices. With this proactive resource reservation, the access node12may then more likely and readily schedule communications with Cat-M1 devices when necessary. As noted above, however, this proactive resource reservation could also contribute to reduced PRB availability for other UEs served by the access node12. In line with the discussion above, to help address this issue, when the access node12proactively imposes this or another such resource reservation on the carrier, the access node12could responsively also reconfigure itself to reduce the access node's application of TTI bundling on the carrier. For instance, the access node12could responsively reconfigure itself from (i) a default mode of operation where the access node12would automatically apply TTI bundling in response to particular TTI-bundling triggers such as communication type to (ii) a revised mode of operation in which the access node12would not apply TTI bundling in response to such triggers. Or the access node12could responsively reconfigure itself from (i) a default mode of operation in which, when the access node12applies TTI bundling, the access node would apply the TTI bundling with a first TTI-bundling factor to (ii) a revised mode of operation in which, when the access node12applies TTI bundling, the access node12would apply the TTI bundling with a second TTI-bundling factor that defines a smaller bundle, with less automatic repeat transmission, than the first TTI-bundling factor. In an example implementation, the access node12could so reconfigure itself by setting a flag or other configuration setting in its internal memory or other data storage specifying the reconfigured state of operation. 
For instance, the access node12might normally have a stored setting indicating that the access node12is to apply TTI bundling when the communication type at issue is VoIP communication. But in response to the access node12proactively imposing a PRB reservation for Cat-M1 devices in view of the access node12having at least a threshold high number of RRC-connected Cat-M1 devices, the access node12could clear that stored setting or change the stored setting to no longer indicate that the access node12is to apply TTI bundling when the communication type at issue is VoIP communication. In accordance with the default setting, the access node12would thus apply TTI bundling when the communication type at issue is VoIP communication. But then in accordance with the revised setting, the access node12would not apply TTI bundling when the communication type at issue is VoIP communication. Similar processing could apply to cause a reduction in TTI-bundling factor rather than disabling TTI bundling. And as noted above, the access node12could base the extent of its reduction in TTI-bundling factor on various considerations. For instance, the access node12could base the extent of its reduction in TTI-bundling factor on the size of its proactive PRB reservation. In addition or alternatively, the access node12could base the extent of its reduction in TTI-bundling factor on an evaluation of load, such as PRB utilization (e.g., percentage of PRBs allocated per unit time) on the carrier14, perhaps reducing the TTI-bundling factor more when the carrier is more highly loaded and less when the carrier is less highly loaded. When the access node imposes such a reduction in application of TTI bundling in response to the access node proactively imposing the resource reservation due to having a threshold high number of connected UEs of a particular class, the access node12could impose the reduction in application of TTI bundling generally for all UEs served by the access node12or just for specific UEs or in specific situations. For example, the access node could impose the reduction in TTI bundling for UEs of a relatively low service-level class and/or only when carrier14has at least a predefined high level of load, among other possibilities. Further, as noted above, when the access node12has applied this reduction in its application of TTI bundling, the access node12could keep the reduction in place temporarily and could automatically revert to its default mode of operation in response to one or more reversion triggers. For example, the access node12could automatically revert to its default mode of operation when the number of RRC-connected devices of the class at issue drops to below (e.g., sufficiently below) the predefined threshold level, as a result of UEs transitioning to RRC-idle mode or otherwise disconnecting from the access node12, among other possibilities. Note also that, while various operations have been described so far as being carried out by the access node12, various such operations could be coordinated and/or carried out by one or more other entities. By way of example, in the arrangement ofFIG.1, the EMS28might coordinate the disclosed process.
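Tying these pieces together, a minimal sketch of this reconfiguration and reversion flow might look as follows; the threshold of 50 devices, the 8-PRB reservation, the 80% load cut-off, and the reduced bundling factors are all illustrative assumptions rather than values required by the disclosure.

```python
# A minimal sketch of the reconfiguration flow described above: when at least
# a threshold number of devices of a given class (e.g., Cat-M1) are connected,
# reserve PRBs for them and scale back the TTI-bundling factor; revert when the
# count drops again. All thresholds and sizes are illustrative assumptions.

DEFAULT_BUNDLING_FACTOR = 4
CAT_M1_THRESHOLD = 50          # assumed threshold number of connected Cat-M1 devices
RESERVED_PRBS_PER_TTI = 8      # size of the proactive reservation (per the example above)

class AccessNodeConfig:
    def __init__(self):
        self.reserved_prbs = 0
        self.bundling_factor = DEFAULT_BUNDLING_FACTOR

    def update(self, connected_cat_m1: int, prb_utilization: float) -> None:
        if connected_cat_m1 >= CAT_M1_THRESHOLD:
            self.reserved_prbs = RESERVED_PRBS_PER_TTI
            # Reduce bundling more aggressively when the carrier is highly loaded.
            self.bundling_factor = 1 if prb_utilization > 0.8 else 2
        else:
            # Reversion trigger: device count dropped back below the threshold.
            self.reserved_prbs = 0
            self.bundling_factor = DEFAULT_BUNDLING_FACTOR

cfg = AccessNodeConfig()
cfg.update(connected_cat_m1=60, prb_utilization=0.9)
print(cfg.reserved_prbs, cfg.bundling_factor)   # -> 8 1
cfg.update(connected_cat_m1=10, prb_utilization=0.3)
print(cfg.reserved_prbs, cfg.bundling_factor)   # -> 0 4
```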
For instance, the EMS28might learn, from reports from access node12or the like, when the access node has at least the predefined threshold high number of RRC-connected UEs of the class at issue, and the EMS28might responsively transmit to the access node12a signaling message to which the access node12is configured to respond by proactively imposing an associated PRB reservation and, per the present disclosure, therefore also transitioning to reduce the extent of TTI-bundling that the access node is configured to apply. FIG.2is a flow chart depicting a method that could be carried out in accordance with the present disclosure, to control application of TTI bundling on an RF carrier on which an access node provides wireless communication service, where the carrier defines air-interface resources (e.g., PRBs), and where the access node supports application of TTI bundling on the carrier. For instance, the access node could be access node12as described above, configured by default to apply TTI bundling for certain types of communications. And the method could be carried out by the access node and/or carried out or otherwise coordinated by an external computing system such as the EMS28discussed above. As shown inFIG.2, at block34, the method includes detecting that at least a predefined threshold number of devices of a predefined class are connected with the access node on the carrier. And at block36, the method includes, responsive to the detecting that at least the predefined threshold number of devices of the predefined class are connected with the access node on the carrier, (i) proactively reserving a portion of the air-interface resources for use to serve communications between the access node and the devices of the predefined class and (ii) in view of (e.g., in response to) the proactive reserving of the portion of the air-interface resources, imposing a reduction in the application of the TTI bundling by the access node on the carrier. In line with the discussion above, the predefined class of devices could be IoT devices or a particular type of IoT devices, among other possibilities. In line with the discussion above, the reduction in the application of the TTI bundling by the access node could additionally be based on a determination that the carrier has at least a predefined threshold high level of load. For instance, the access node could impose the reduction in the application of the TTI bundling in view of the proactive resource reservation being made and there being at least a predefined threshold high level of PRB utilization on the carrier. Further, as discussed above, the imposing of the reduction in the application of the TTI bundling by the access node could take various forms. For instance, it could involve reconfiguring the access node from (i) a first mode in which the access node is configured to apply the TTI bundling in response to a TTI-bundling trigger (e.g., the communication at issue being VoIP communication) to (ii) a second mode in which the access node is not configured to apply TTI bundling in response to the TTI-bundling trigger. 
Alternatively or additionally, it could involve reconfiguring the access node from (i) a first mode in which, when the access node applies TTI bundling, the access node applies the TTI bundling with a first bundling factor defining a first quantity of repeated transmissions per block of data to (ii) a second mode in which, when the access node applies TTI bundling, the access node applies the TTI bundling with a second bundling factor that defines a second quantity of repeated transmissions per block of data, the second quantity being less than the first quantity. And in this case, the method could also include setting an extent of the reduction based on a size of the portion of the air-interface resources proactively reserved. Still further, as discussed above, the method could additionally include, responsive to a reversion trigger, undoing the imposed reduction in the application of the TTI bundling. And the reversion trigger could include detecting that fewer than the predefined threshold number of devices of the predefined class are connected with the access node on the carrier. FIG.3is next a simplified block diagram of an example computing system that could carry out various features as described above, to control application of TTI bundling on an RF carrier on which an access node provides wireless communication service, where the carrier defines air-interface resources, and where the access node supports application of TTI bundling on the carrier. As noted above, this computing system could be the EMS28, among other possibilities. As shown inFIG.3, the example computing system includes a network communication interface38, a processing unit40, and non-transitory data storage42, which could be integrated or communicatively linked together by a system bus, network, or other connection mechanism44. The network communication interface38could comprise a wired or wireless network communication module, such as an Ethernet interface, through which the computing system can communicate with other entities. And the processing unit40could comprise one or more processors, such as one or more general purpose processors (e.g., microprocessors) and/or specialized processors (e.g., application specific integrated circuits). Further, the non-transitory data storage42could comprise one or more volatile and/or non-volatile storage components, such as magnetic, optical, or flash storage media. And as shown, the data storage42could hold, store, encode, or otherwise embody program instructions46. In a representative implementation, those program instructions46could be executable by the processing unit40to carry out various features described herein such as those described with respect toFIG.2for instance. Finally,FIG.4is a simplified block diagram of an example access node, such as access node12discussed above for instance, operable in accordance with the present disclosure to control application of TTI bundling on an RF carrier on which the access node provides wireless communication service, where the carrier defines air-interface resources, and where the access node supports application of TTI bundling on the carrier. As shown, the example access node includes a wireless communication interface48, a backhaul communication interface50, and a controller52, which could be integrated together in various ways and/or interconnected by a system bus, network, or other communication mechanism54as shown.
The wireless communication interface48could include a radio and antenna structure through which the first access node could be configured to communicate with and serve UEs on the carrier. And the backhaul communication interface50could comprise a wired or wireless network communication module, such as an Ethernet interface, through which to communicate with other entities, such as entities on or via a core network. Further, the controller52could comprise a processing unit (e.g., one or more processing units such as microprocessors and/or specialized processors), non-transitory data storage (e.g., one or more volatile and/or non-volatile storage components, such as magnetic, optical, or flash storage), and program instructions stored in the data storage and executable by the processing unit to carry out (e.g., cause the access node to carry out) various operations as described herein. Various features discussed above can be implemented in this context as well, and vice versa. Further, the present disclosure contemplates a computer-readable medium encoded with, storing, or otherwise embodying program instructions executable by a processing unit to carry out various operations described herein. Exemplary embodiments have been described above. Those skilled in the art will understand, however, that changes and modifications may be made to these embodiments without departing from the true scope and spirit of the invention.
31,689
11863498
DESCRIPTION The following contains specific information related to implementations of the present disclosure. The drawings and their accompanying detailed disclosure are merely directed to implementations. However, the present disclosure is not limited to these implementations. Other variations and implementations of the present disclosure will be obvious to those skilled in the art. Unless noted otherwise, like or corresponding elements among the drawings may be indicated by like or corresponding reference numerals. Moreover, the drawings and illustrations in the present disclosure are generally not to scale and are not intended to correspond to actual relative dimensions. For the purpose of consistency and ease of understanding, like features may be identified (although, in some examples, not illustrated) by the same numerals in the drawings. However, the features in different implementations may be different in other respects and shall not be narrowly confined to what is illustrated in the drawings. The phrases “in one implementation,” or “in some implementations,” may each refer to one or more of the same or different implementations. The term “coupled” is defined as connected whether directly or indirectly via intervening components and is not necessarily limited to physical connections. The term “comprising” means “including, but not necessarily limited to” and specifically indicates open-ended inclusion or membership in the so-disclosed combination, group, series or equivalent. The expression “at least one of A, B and C” or “at least one of the following: A, B and C” means “only A, or only B, or only C, or any combination of A, B and C.” For the purposes of explanation and non-limitation, specific details such as functional entities, techniques, protocols, and standards are set forth for providing an understanding of the disclosed technology. In other examples, detailed disclosure of well-known methods, technologies, systems, and architectures are omitted so as not to obscure the present disclosure with unnecessary details. Persons skilled in the art will immediately recognize that any network function(s) or algorithm(s) disclosed may be implemented by hardware, software or a combination of software and hardware. Disclosed functions may correspond to modules which may be software, hardware, firmware, or any combination thereof. A software implementation may include computer executable instructions stored on a computer readable medium such as memory or other type of storage devices. One or more microprocessors or general-purpose computers with communication processing capability may be programmed with corresponding executable instructions and perform the disclosed network function(s) or algorithm(s). The microprocessors or general-purpose computers may include Applications Specific Integrated Circuitry (ASIC), programmable logic arrays, and/or using one or more Digital Signal Processor (DSPs). Although some of the disclosed implementations are oriented to software installed and executing on computer hardware, alternative implementations implemented as firmware or as hardware or as a combination of hardware and software are well within the scope of the present disclosure. 
The computer readable medium includes but is not limited to Random Access Memory (RAM), Read Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory, Compact Disc Read-Only Memory (CD-ROM), magnetic cassettes, magnetic tape, magnetic disk storage, or any other equivalent medium capable of storing computer-readable instructions. A radio communication network architecture such as a Long-Term Evolution (LTE) system, an LTE-Advanced (LTE-A) system, an LTE-Advanced Pro system, or a 5G NR Radio Access Network (RAN) typically includes at least one base station (BS), at least one UE, and one or more optional network elements that provide connection within a network. The UE communicates with the network such as a Core Network (CN), an Evolved Packet Core (EPC) network, an Evolved Universal Terrestrial RAN (E-UTRAN), a 5G Core (5GC), or an internet via a RAN established by one or more B Ss. A UE may include but is not limited to a mobile station, a mobile terminal or device, or a user communication radio terminal. The UE may be a portable radio equipment that includes but is not limited to a mobile phone, a tablet, a wearable device, a sensor, a vehicle, or a Personal Digital Assistant (PDA) with wireless communication capability. The UE is configured to receive and transmit signals over an air interface to one or more cells in a RAN. A BS may be configured to provide communication services according to at least a Radio Access Technology (RAT) such as Worldwide Interoperability for Microwave Access (WiMAX), Global System for Mobile communications (GSM) that is often referred to as 2G, GSM Enhanced Data rates for GSM Evolution (EDGE) RAN (GERAN), General Packet Radio Service (GPRS), Universal Mobile Telecommunication System (UMTS) that is often referred to as 3G based on basic wideband-code division multiple access (W-CDMA), high-speed packet access (HSPA), LTE, LTE-A, evolved LTE (eLTE) that is LTE connected to 5GC, NR (often referred to as 5G), and/or LTE-A Pro. However, the scope of the present disclosure is not limited to these protocols. The BS may include but is not limited to a node B (NB) in the UMTS, an evolved node B (eNB) in LTE or LTE-A, a radio network controller (RNC) in UMTS, a BS controller (BSC) in the GSM/GERAN, an ng-eNB in an Evolved Universal Terrestrial Radio Access (E-UTRA) BS in connection with 5GC, a next generation Node B (gNB) in the 5G-RAN, or any other apparatus capable of controlling radio communication and managing radio resources within a cell. The BS may serve one or more UEs via a radio interface. The BS is operable to provide radio coverage to a specific geographical area using a plurality of cells forming the RAN. The BS supports the operations of the cells. Each cell is operable to provide services to at least one UE within its radio coverage. Each cell (often referred to as a serving cell) may provide services to serve one or more UEs within its radio coverage such that each cell schedules the DL and optionally UL resources to at least one UE within its radio coverage for DL and optionally UL packet transmissions. The BS can communicate with one or more UEs in the radio communication system via the plurality of cells. A cell may allocate sidelink (SL) resources for supporting Proximity Service (ProSe) or Vehicle to Everything (V2X) service. Each cell may have overlapped coverage areas with other cells. 
In Multi-RAT Dual Connectivity (MR-DC) cases, the primary cell of a Master Cell Group (MCG) or a Secondary Cell Group (SCG) may be called a Special Cell (SpCell). A Primary Cell (PCell) may refer to the SpCell of an MCG. A Primary SCG Cell (PSCell) may refer to the SpCell of an SCG. MCG may refer to a group of serving cells associated with the Master Node (MN), comprising of the SpCell and optionally one or more Secondary Cells (SCells). An SCG may refer to a group of serving cells associated with the Secondary Node (SN), comprising of the SpCell and optionally one or more SCells. As discussed above, the frame structure for NR supports flexible configurations for accommodating various next generation (e.g., 5G) communication requirements such as Enhanced Mobile Broadband (eMBB), Massive Machine Type Communication (mMTC), and Ultra-Reliable and Low-Latency Communication (URLLC), while fulfilling high reliability, high data rate and low latency requirements. The Orthogonal Frequency-Division Multiplexing (OFDM) technology in the 3rd Generation Partnership Project (3GPP) may serve as a baseline for an NR waveform. The scalable OFDM numerology such as adaptive sub-carrier spacing, channel bandwidth, and Cyclic Prefix (CP) may also be used. Additionally, two coding schemes are considered for NR, specifically Low-Density Parity-Check (LDPC) code and Polar Code. The coding scheme adaption may be configured based on channel conditions and/or service applications. Moreover, it is also considered that in a transmission time interval TX of a single NR frame, downlink (DL) transmission data, a guard period, and uplink (UL) transmission data should at least be included, where the respective portions of the DL transmission data, the guard period, and the UL transmission data should also be configurable, for example, based on the network dynamics of NR. In addition, sidelink resources may also be provided in an NR frame to support ProSe services, (E-UTRA/NR) sidelink services, or (E-UTRA/NR) V2X services. In addition, the terms “system” and “network” herein may be used interchangeably. The term “and/or” herein is only an association relationship for describing associated objects, and represents that these relationships may exist. For example, A and/or B may indicate that: A exists alone, A and B exist at the same time, or B exists alone. In addition, the character “/” herein generally represents that the former and latter associated objects are in an “or” relationship. Examples of some selected terms are provided as follows. Half duplex-frequency division duplex (HD-FDD): It is a duplex scheme whereby communication is possible in two directions, but communication is not possible in both directions at a time. Bandwidth part (BWP) is a new feature introduced in NR to enable more flexibility in the way resources are assigned in a given carrier, and each BWP may be applied with a different sub-carrier spacing (SCS). Thus, there is a need to clarify the behavior when a BWP for an UL transmission and a BWP for a DL reception are applied with a different SCS for HD-FDD operation. Since the BWP for the DL reception and the BWP for the UL transmission may be applied with different sub-carrier spacing configurations, the starting point of a switching time period and the duration of the switching time period may become unclear. 
In the present disclosure, an original BWP may be an active BWP where an UL transmission or a DL reception is configured/scheduled before the switching time period, and a target BWP may be an active BWP where an UL transmission or a DL reception is scheduled/configured after the switching time period. FIG.1is a schematic diagram illustrating that the starting symbol of a switching time period does not align with a symbol boundary in an active UL BWP according to an example implementation of the present disclosure. As illustrated inFIG.1, if a UE is provided with the active DL BWP101with the SCS configuration μDL and the active UL BWP103with the SCS configuration μUL, and the SCS configuration μDL is larger than the SCS configuration μUL, the end of a DL reception (e.g., physical downlink control channel (PDCCH) reception105) in the active DL BWP101may not align with a symbol boundary in the active UL BWP103. In this circumstance, the starting symbol of a switching time period107may not align with a symbol boundary in the active UL BWP103. FIG.2is a schematic diagram illustrating that the starting symbol of a switching time period aligns with a symbol boundary in an active UL BWP according to an example implementation of the present disclosure. As illustrated inFIG.2, if a UE is provided with the active DL BWP201with the SCS configuration μDL and the active UL BWP203with the SCS configuration μUL, and the SCS configuration μDL is larger than the SCS configuration μUL, the end of a DL reception (e.g., PDCCH reception205) in the active DL BWP201may not align with a symbol boundary in the active UL BWP203. In this circumstance, the starting symbol of a switching time period207may align with a symbol boundary in the active UL BWP203. In the circumstances as illustrated inFIG.1andFIG.2, which symbol in the active UL BWP (e.g., symbol #1 or symbol #2 in the active UL BWP) should be scheduled as the starting symbol of the switching time period may lead to ambiguity. Thus, the starting symbol of the switching time period may need to be defined. FIG.3is a schematic diagram illustrating that the switching time period is different based on the different SCS configurations according to an example implementation of the present disclosure. As illustrated inFIG.3, if a UE is provided with the active DL BWP301with the SCS configuration μDL and the active UL BWP303with the SCS configuration μUL, and the SCS configuration μDL and the SCS configuration μUL are different, which SCS configuration (e.g., the SCS configuration μDL of the active DL BWP301or the SCS configuration μUL of the active UL BWP303) is the reference for a switching time period305may remain unclear. For example, if the SCS configuration μDL=1 and the SCS configuration μUL=0, and 3 symbols for the switching time period305are provided, the absolute time for the switching time period305may become different while applying different SCS configurations for the switching time period305. On the other hand, if the SCS configuration μDL=1 and the SCS configuration μUL=0, and an absolute time for the switching time period305is provided, the numbers of symbols on the active UL BWP303and the active DL BWP301will be different, and scheduling ambiguity may then arise. To solve the problems as illustrated inFIGS.1to3, the starting symbol of the switching time period may be defined and the SCS configuration for the switching time period may be defined.
The switching time period may start from the end of the last symbol of a DL reception or an UL transmission with the larger SCS configuration, no matter whether the starting symbol of the switching time period aligns with a symbol boundary of an active BWP with a smaller SCS configuration or not. After the end of a DL reception or an UL transmission on the active BWP with the larger SCS configuration, the switching time period may start from the first symbol with an aligned symbol boundary between the different active BWPs with different SCS configurations. A value K indicating a gap between the end of a transmission and the first symbol of the switching time period may be defined; the value K may differ for different SCS configurations. The SCS configuration applied to the switching time period may be the SCS configuration μUL for an active UL BWP. The SCS configuration applied to the switching time period may be the SCS configuration μDL for an active DL BWP. The SCS configuration applied to the switching time period may be the minimum or smallest SCS configuration among the active BWPs. The SCS configuration applied to the switching time period may be the maximum or largest SCS configuration μmax among the active BWPs.
I. Starting Point of Switching Time
In some implementations, the switching time period may start from the end of the last symbol of a DL reception or/and an UL transmission corresponding to the active BWP with the largest SCS configuration.FIG.4is a schematic diagram illustrating that the starting symbol of a switching time period is started from the end of a last symbol of a physical downlink control channel (PDCCH) corresponding to an active DL BWP with the largest SCS configuration according to an example implementation of the present disclosure. As illustrated inFIG.4, if a UE is provided with the active DL BWP401with the SCS configuration μ=1, the active DL BWP403with the SCS configuration μ=0, and the active UL BWP405with the SCS configuration μ=0, the starting symbol of the switching time period407may be started from the end of the last symbol of a DL reception (e.g., the PDCCH reception409) corresponding to the active DL BWP401with the largest SCS configuration μ=1. In some implementations, the starting symbol of the switching time period may be started from the end of the last symbol of a DL reception or/and an UL transmission corresponding to the active BWP with the largest SCS configuration when the last transmission/reception before the switching direction is a dynamically scheduled transmission/reception and the first transmission/reception after the switching direction is a configured transmission/reception. In some implementations, the starting symbol of the switching time period may depend on the starting symbol of the earliest symbol with a configured/scheduled transmission/reception which has a different transmission direction from the previous scheduling. The configured/scheduled transmission may be an UL transmission, and the previous scheduling may be a DL reception (e.g., a PDCCH reception). Specifically, the switching time period may be started from the symbol which is several symbols earlier than the starting symbol of the earliest transmission. The switching time period may use the target BWP as reference. The switching time period may use the active UL BWP as reference. The switching time period may be regarded as a part of the UL transmission.
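As a numerical illustration of the boundary-alignment alternative above (assuming that symbol duration scales as 1/2^μ with the SCS configuration and ignoring cyclic-prefix details), the following sketch finds the first symbol of the smaller-SCS BWP whose boundary falls at or after the end of a reception in the larger-SCS BWP; the function and parameter names are illustrative.

```python
# A minimal sketch of the "first aligned symbol boundary" alternative described
# above, assuming a BWP with SCS configuration mu has symbols 2**mu times
# shorter than at mu=0, and ignoring cyclic-prefix details. Names are
# illustrative only.

def first_aligned_symbol(last_rx_symbol: int, mu_large: int, mu_small: int) -> int:
    """Index of the first symbol of the smaller-SCS BWP whose start is at or after
    the end of symbol `last_rx_symbol` in the larger-SCS BWP."""
    ratio = 2 ** (mu_large - mu_small)   # larger-SCS symbols per smaller-SCS symbol
    end_boundary = last_rx_symbol + 1    # boundary at the end of the reception
    return -(-end_boundary // ratio)     # ceiling division

# Example in the spirit of the symbol #1 / symbol #2 ambiguity discussed with
# FIG.1 and FIG.2: a PDCCH ending at symbol 2 of a mu=1 DL BWP ends in the middle
# of symbol #1 of a mu=0 UL BWP, so the first aligned UL symbol is symbol #2.
print(first_aligned_symbol(last_rx_symbol=2, mu_large=1, mu_small=0))  # -> 2
```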
In some implementations, the starting symbol of the switching time period may depend on the starting symbol of the earliest symbol with a configured/scheduled transmission/reception which has a different transmission direction from the previous scheduling when the last transmission/reception before the switching direction is a configured transmission/reception and the first transmission/reception after the switching direction is a dynamically scheduled transmission/reception, or when the last transmission/reception before the switching direction and the first transmission/reception after the switching direction are both dynamically scheduled transmission/reception or are both configured transmission/reception. In some implementations, the starting symbol of the switching time period may depend on the starting symbol of the earliest symbol with a configured/scheduled transmission/reception which has a different transmission direction from the previous scheduling. The configured/scheduled transmission may be an UL transmission, and the previous scheduling may be a DL reception (e.g., a PDCCH reception). Specifically, the switching time period may be started from the symbol which is a number of symbols earlier than the starting symbol of the earliest transmission, and the number of symbols is the smallest number of symbols greater than a duration of the switching time period plus a timing advance (TA) time period. The starting symbol of the switching time period may correspond to the first symbol after the symbol in an active UL/DL BWP which overlaps with the last symbol of the transmission/reception in an active UL/DL BWP with the largest SCS configuration. If the collision between the configured receptions/transmissions and the switching time period occurs, the UE may perform the switching and may omit the configured receptions/transmissions. The configured reception may be a synchronization signal block (SSB), a channel status information-reference signal (CSI-RS) or a semi persistent scheduling-physical downlink shared channel (SPS-PDSCH), and the configured transmission may be a physical uplink control channel (PUCCH), a sounding reference signal (SRS) or a configured grant-physical uplink shared channel (CG-PUSCH). The UE may not be dynamically scheduled with a reception/transmission that overlaps with the switching time period. When determining whether a DL reception before the switching direction is colliding with a switching time period, the TA time period is applied to the switching time period. When determining whether an UL transmission after the switching direction is colliding with a switching time period, the TA time period is not applied to the switching time period. When determining whether an UL transmission before the switching direction is colliding with a switching time period, the TA time period is not applied to the switching time period. When determining whether a DL reception after the switching direction is colliding with a switching time period, the TA time period is applied to the switching time period. The end of a last symbol of the DL reception or/and the UL transmission may not align with the symbol/slot/sub-slot boundary of the transmission/reception in other active BWPs. The active BWP may be the original BWP or/and the target BWP. The active BWP may be a BWP between the original BWP and the target BWP. 
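A minimal sketch of this collision check, assuming the TA time period extends the effective switching window toward earlier times when a DL reception is being checked (the exact direction of the shift is an assumption), might look as follows; times are in arbitrary units and all names are illustrative.

```python
# A minimal sketch of the collision rule described above. Whether the TA time
# period is added to the switching time period depends only on whether the
# transmission/reception being checked is a DL reception (TA applied) or an
# UL transmission (TA not applied). The direction of the TA shift (earlier
# start here) is an illustrative assumption. Times are in arbitrary units.

def overlaps(a_start: float, a_end: float, b_start: float, b_end: float) -> bool:
    return a_start < b_end and b_start < a_end

def collides_with_switching(rx_or_tx: tuple[float, float], is_dl_reception: bool,
                            switch_start: float, switch_duration: float,
                            ta: float) -> bool:
    start = switch_start - (ta if is_dl_reception else 0.0)
    end = switch_start + switch_duration
    return overlaps(rx_or_tx[0], rx_or_tx[1], start, end)

# Example: a DL reception ending just before the nominal switching start still
# collides once the TA period is applied; an UL transmission in the same span does not.
print(collides_with_switching((0.0, 0.95), True, 1.0, 0.5, ta=0.1))   # -> True
print(collides_with_switching((0.0, 0.95), False, 1.0, 0.5, ta=0.1))  # -> False
```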
FIG.5is a schematic diagram illustrating that the starting symbol of the transmission/reception is configured/scheduled in the symbol with an aligned symbol boundary after the switching time period according to an example implementation of the present disclosure. As illustrated in theFIG.5, if the switching time period505is configured in the original BWP (e.g., the active DL BWP501) with the SCS configuration μ=1, and the ending symbol of switching time period505does not correspond to the symbol boundary of the target BWP (e.g., the active UL BWP503) with the SCS configuration μ=0, the starting symbol of the transmission/reception (e.g., the PUSCH transmission507) may be configured/scheduled in the symbol with an aligned symbol boundary after the switching time period505. In some implementations, the starting symbol of the transmission/reception (e.g., the PUSCH transmission507) may not need to be configured/scheduled in the symbol with an aligned symbol boundary after the switching time period505. If the starting symbol of the transmission/reception is configured/scheduled in the symbol with aligned symbol boundary after the switching time period, the switching time period may be considered as a variable time that may be required to be larger than a pre-determined/(pre-)configured/indicated value. The switching time period may be started from the active BWP with larger transmission unit (e.g., slot>sub-slot>symbol). The switching time period may be started from the active BWP with smaller transmission unit (e.g., slot>sub-slot>symbol). The end of a last symbol of the DL reception or/and the UL transmission may align with the symbol/slot/sub-slot boundary of the transmission/reception in other active BWPs. The active BWP may be a reference BWP, and the starting symbol of the switching time period may be defined/determined based on the reference BWP. The reference BWP may be (pre-)configured/indicated, or pre-determined (e.g., the active BWP of the lowest component carrier (CC) index or the active BWP of special cell (SPCell)). The reference BWP may be the active BWP of a configured CC with largest SCS configuration. If the starting symbol of the switching time period corresponds to the last X symbols of a slot in the target BWP, the switching time period may be determined to cross a slot boundary; otherwise, the switching time period may not be determined to cross a slot boundary.FIG.6is a schematic diagram illustrating that a switching time period is configured starting from the last X symbols (X=2) of a slot in the target BWP according to an example implementation of the present disclosure. As illustrated in theFIG.6, if the starting symbol of the switching time period601corresponds to the last X symbols (e.g., X=2) of a slot in the target BWP (e.g., slot #0 in the UL BWP603), the switching time period601may be determined to cross a slot boundary; otherwise, the switching time period601may not be determined to cross a slot boundary. Whether to cross the boundary may be configured or/and indicated. X may be a pre-defined value. X may be configured by a radio resource control (RRC) message. X may be reported by UE capability. X may be dynamically indicated. In some implementations, the switching time period may not be configured to cross a slot boundary. 
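As a small illustration of this slot-boundary rule, assuming a 14-symbol slot with a normal cyclic prefix, the check could be sketched as follows; the names and the example value of X are illustrative.

```python
# A minimal sketch of the rule described above: if the starting symbol of the
# switching time period falls within the last X symbols of a slot in the target
# BWP, the switching time period is regarded as crossing a slot boundary.
# The 14-symbol slot length is an assumption for a normal cyclic prefix.

SYMBOLS_PER_SLOT = 14

def crosses_slot_boundary(start_symbol_in_slot: int, x: int) -> bool:
    return start_symbol_in_slot >= SYMBOLS_PER_SLOT - x

print(crosses_slot_boundary(12, x=2))  # -> True  (symbol 12 is among the last 2 symbols)
print(crosses_slot_boundary(11, x=2))  # -> False
```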
In some implementations, the starting symbol of the switching time period may correspond to the first symbol after a symbol (e.g., symbol x) of the active BWP with the smallest SCS configuration, and the symbol x overlaps with the last symbol of the DL reception or/and UL transmission before the switching direction. FIG. 7 is a schematic diagram illustrating that the starting symbol of a switching time period corresponds to the active BWP with the smallest SCS according to an example implementation of the present disclosure. As illustrated in FIG. 7, if a UE is provided with the active DL BWP 701 with the SCS configuration μ=1, the active DL BWP 703 with the SCS configuration μ=0, and the active UL BWP 705 with the SCS configuration μ=0, the starting symbol of the switching time period 707 may correspond to the first symbol (e.g., symbol #2) after a symbol (e.g., symbol x) of the active UL BWP 705 with the smallest SCS configuration μ=0, and the symbol x overlaps with the last symbol of a DL reception (e.g., the PDCCH reception 709) before the switching direction. In some implementations, the starting symbol of the switching time period may correspond to the first symbol after a symbol (e.g., symbol x) of the active BWP with the smallest SCS configuration, and the symbol x overlaps with the last symbol of the DL reception or/and UL transmission before the switching direction when the last transmission/reception before the switching direction is a dynamically scheduled transmission/reception and the first transmission/reception after the switching direction is a configured transmission/reception. The starting symbol of the switching time period may be started from the beginning of a symbol/sub-slot/slot in the target BWP with an aligned boundary. Specifically, the symbol/sub-slot/slot where the switching time period begins may be in the first symbol after the end of a last symbol of the DL reception or/and UL transmission in the original BWP. In some implementations, the end of a last symbol of the DL reception or/and UL transmission in the original BWP may not align with the symbol/slot/sub-slot boundary of the transmission/reception in the target BWP. FIG. 8 is a schematic diagram illustrating that the end of a last symbol of the DL reception in the original BWP does not align with the symbol/slot/sub-slot boundary of the UL transmission in the target BWP according to an example implementation of the present disclosure. As illustrated in FIG. 8, if the original BWP (e.g., the active DL BWP 801) with an SCS configuration μ=1 and the target BWP (e.g., the active UL BWP 803) with an SCS configuration μ=0 are configured with different scheduling units, the end of a last symbol of the DL reception (e.g., the PDSCH reception 805) in the original BWP may not align with the symbol/slot/sub-slot boundary of the UL transmission in the target BWP. If the starting symbol of the switching time period corresponds to the last X symbols of a slot in the target BWP, the switching time period may be determined to cross a slot boundary; otherwise, the switching time period may not be determined to cross a slot boundary. Whether to cross the boundary may be configured or/and indicated. X may be a pre-defined value. X may be configured by an RRC message. X may be reported by UE capability. X may be dynamically indicated. In some implementations, the switching time period may not be configured to cross a slot boundary. 
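A minimal sketch of the symbol-grid mapping just described: the last symbol of the pre-switch reception, indexed in a higher-SCS BWP, is mapped to the overlapping symbol of the active BWP with the smallest SCS, and the switching time period starts at the next symbol. The concrete index used (a PDCCH ending at symbol 3 of the μ=1 BWP) is assumed for the example and is not taken from the figure.

```python
def switching_start_in_smallest_scs(last_symbol_large: int,
                                    mu_large: int,
                                    mu_small: int) -> int:
    """Map the last symbol of the DL reception (indexed in the BWP with SCS
    2^mu_large * 15 kHz) onto the symbol grid of the active BWP with the
    smallest SCS (2^mu_small * 15 kHz), then start the switching period at
    the first symbol after the overlapping one."""
    ratio = 2 ** (mu_large - mu_small)   # 'large' symbols per 'small' symbol
    overlapping = last_symbol_large // ratio
    return overlapping + 1

# Example in the spirit of FIG. 7: PDCCH assumed to end at symbol 3 of a mu=1 BWP;
# the overlapping mu=0 symbol is symbol 1, so the switching period starts at symbol 2.
print(switching_start_in_smallest_scs(3, mu_large=1, mu_small=0))  # 2
```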
In some implementations, the switching time period may be started at the beginning of the first symbol with a fixed/pre-defined SCS configuration for a frequency range (FR) (e.g., 15 kHz for FR1 and 60 kHz for FR2), after the last symbol of the end of a DL reception or/and an UL transmission before the switching direction. In some implementations, a value K indicating a gap may be indicated/defined between the end of a transmission and the first symbol of the switching time period or between the end of the switching time period and the starting symbol of a transmission/reception after switching the transmission direction, value K may or may not be different corresponding to different SCS configurations. The value K may be determined based on the configured/indicated value corresponding to the active BWP with largest SCS configuration. The value K may be determined based on the configured/indicated value corresponding to the active BWP with smallest SCS configuration. The value K may be determined based on the indicated value corresponding to the active BWP where the indicator is detected. The value K may be activated or/and deactivated based on some conditions. The condition may be referred to as a specific configuration. The condition may be referred to as a specific parameter. The condition may be referred to as an invalid symbol pattern. The value K may be an absolute value (e.g., in millisecond, microsecond unit) regardless of the applied SCS configuration. The value K may be a variable value based on different SCS configurations or based on whether a symbol boundary between BWPs of different CCs is aligned. The value K may be regarded as a part of the switching time period. The value K may include the PDCCH decoding time, the TA time period, or/and the configured reception/transmission symbols. Each active BWP may have a priority or/and a configured order, and the priority may be used to determine where the starting symbol of the switching time period is configured or/and which SCS configuration the switching time period is applied. The priority index may be configured in the BWP configuration. The priority index with higher value may be referred to as a high priority. The priority index with lower value may be referred to as a low priority. II. Length of Switching Time Period The length of the switching time period may be defined in number of symbols, which may be the same or different for each BWP depending on the SCS of the applied BWPs, and the length of the switching time period may be defined based on the SCS configuration μULfor an active UL BWP. The SCS configuration μULmay refer to the SCS configuration for the switching time period. The SCS configuration μULmay correspond to the largest SCS configuration among the SCS configurations (e.g., μUL1, μUL2, μUL3, . . . ) for all active UL BWPs of the configured CCs. That is, μUL=max (μUL1, μUL2, μUL3, . . . ).FIG.9is a schematic diagram illustrating that the length of a switching time period is determined based on the active UL BWP with the largest SCS configuration according to an example implementation of the present disclosure. As illustrated inFIG.9, if a UE is provided with the active DL BWP901with the SCS configuration μ=1, the active UL BWP903with the SCS configuration μUL1=1, and the active UL BWP905with the SCS configuration μUL2=0, the length of the switching time period907may be determined based on the active UL BWP903with the larger SCS configuration μUL1=1 between the active UL BWP903and the active UL BWP905. 
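The selection of the SCS configuration μUL used for the switching-period length, and the resulting symbol count, can be sketched as follows. The re-expression of a symbol count between numerologies (with rounding up) is an editorial assumption used only to show why the chosen SCS matters.

```python
import math

def switching_scs_ul(active_ul_bwp_scs: list[int], rule: str = "max") -> int:
    """Pick the SCS configuration mu_UL used to express the switching-period length,
    taken over the active UL BWPs of the configured CCs (largest or smallest)."""
    return max(active_ul_bwp_scs) if rule == "max" else min(active_ul_bwp_scs)

def length_in_symbols(length_ref_symbols: int, mu_ref: int, mu_applied: int) -> int:
    """Re-express a gap of length_ref_symbols symbols at SCS configuration mu_ref
    as symbols at mu_applied (symbol duration halves each time mu increases by 1;
    rounding up is an assumption, not something stated in the text)."""
    return math.ceil(length_ref_symbols * 2 ** (mu_applied - mu_ref))

# FIG. 9 style example: active UL BWPs with mu = 1 and mu = 0, largest-SCS rule.
mu_ul = switching_scs_ul([1, 0], rule="max")
print(mu_ul)                              # 1
print(length_in_symbols(1, 0, mu_ul))     # a 1-symbol gap at mu=0 spans 2 symbols at mu=1
```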
If more than one active UL BWP is provided, the length of the switching time period may be determined based on the active UL BWP with the largest SCS configuration among the SCS configurations (e.g., μUL1, μUL2, μUL3, . . . ) for these active UL BWPs. The SCS configuration μUL may correspond to the smallest SCS configuration among the SCS configurations (e.g., μUL1, μUL2, μUL3, . . . ) for all active UL BWPs of the configured CCs. That is, μUL=min (μUL1, μUL2, μUL3, . . . ). FIG. 10 is a schematic diagram illustrating that the length of a switching time period is determined based on the active UL BWP with the smallest SCS configuration according to an example implementation of the present disclosure. As shown in FIG. 10, if a UE is provided with the active DL BWP 1001 with the SCS configuration μ=1, the active UL BWP 1003 with the SCS configuration μUL1=1, and the active UL BWP 1005 with the SCS configuration μUL2=0, the length of the switching time period 1007 may be determined based on the active UL BWP 1005 with the smaller SCS configuration μUL2=0 between the active UL BWP 1003 and the active UL BWP 1005. If more than one active UL BWP is provided, the length of the switching time period may be determined based on the active UL BWP with the smallest SCS configuration among the SCS configurations (e.g., μUL1, μUL2, μUL3, . . . ) for all active UL BWPs. The SCS configuration μUL may correspond to the smallest SCS configuration among the SCS configurations (e.g., μUL1, μUL2, μUL3, . . . ) for all configured UL BWPs of the CCs in which UL transmissions are performed after the switching direction. That is, μUL=min (μUL1, μUL2, μUL3, . . . ). The SCS configuration μUL may correspond to the smallest SCS configuration among the SCS configurations (e.g., μUL1, μUL2, μUL3, . . . ) provided in scs-SpecificCarrierList of FrequencyInfoUL or FrequencyInfoUL-SIB of the CCs in which UL transmissions are performed after the switching direction. That is, μUL=min (μUL1, μUL2, μUL3, . . . ). The SCS configuration μUL may correspond to the smallest SCS configuration among the SCS configurations (e.g., μUL1, μUL2, μUL3, . . . ) for all configured UL BWPs of the configured CCs. That is, μUL=min (μUL1, μUL2, μUL3, . . . ). The SCS configuration μUL may correspond to the smallest SCS configuration among the SCS configurations (e.g., μUL1, μUL2, μUL3, . . . ) provided in scs-SpecificCarrierList of FrequencyInfoUL or FrequencyInfoUL-SIB of the configured CCs. That is, μUL=min (μUL1, μUL2, μUL3, . . . ). In some implementations, the SCS configuration μUL may correspond to the SCS configuration for the UL active BWP where the earliest configured/scheduled transmission is performed after the switching time period. FIG. 11 is a schematic diagram illustrating that the length of a switching time period is determined based on the active UL BWP with the earliest transmission after the switching time period according to an example implementation of the present disclosure. 
As illustrated inFIG.11, if a UE is provided with the active DL BWP1101with the SCS configuration μ=1, the active UL BWP1103with the SCS configuration μ=1, and the active UL BWP1105with the SCS configuration μ=0, the SCS configuration μULmay correspond to the SCS configuration μ=1 for the active UL BWP1103with earliest transmission (e.g., the PUCCH transmission1109) after the switching time period1107, and the length of the switching time period1107may be determined based on the active UL BWP1103with the earliest transmission (e.g., the PUCCH transmission1109) after the switching time period. In some implementations the SCS configuration μULmay correspond to the SCS configuration for the UL active BWP where the latest configured/scheduled transmission is performed before the switching time period1107. The length of the switching time period may be defined in number of symbols, which may be the same or different for each BWP depending on the SCS of the applied BWPs, and the length of the switching time period may be defined based on the SCS configuration μDL for an active DL BWP. The SCS configuration μDLmay refer to the SCS configuration for switching time period. The SCS configuration μDLmay correspond to the largest SCS configuration among the SCS configurations (e.g., μDL1, μDL2, μDL3, . . . ) for all DL active BWPs of the configured CCs. That is, μDL=max (μDL1, μDL2, μDL3, . . . ). The SCS configuration μDLmay correspond to the smallest SCS configuration among the SCS configurations (e.g., μDL1, μDL2, μDL3, . . . ) for all DL active BWPs of the configured CCs. That is, μDL=min (μDL1, μDL2, μDL3, . . . ). The SCS configuration μDL may correspond to the smallest SCS configuration among the SCS configurations (e.g., μDL1, μDL2, μDL3, . . . ) for all configured DL BWPs of the CCs in which DL reception are performed before the switching direction. That is, μDL=min (μDL1, μDL2, μDL3, . . . ). The SCS configuration μDLmay correspond to the smallest SCS configuration among the SCS configurations (e.g., μDL1, μDL2, μDL3, . . . ) provided in scs-SpecificCarrierList of FrequencyInfoDL or FrequencyInfoDL-SIB of the CCs in which DL reception are performed before the switching direction. That is, μDL=min (μDL1, μDL2, μDL3, . . . ). The SCS configuration μDL may correspond to the smallest SCS configuration among the SCS configurations (e.g., μDL1, μDL2, μDL3, . . . ) between all configured DL BWPs of the configured CCs. That is, μDL=min (μDL1, μDL2, μDL3, . . . ). The SCS configuration μDLmay correspond to the smallest SCS configuration among the SCS configurations (e.g., μDL1, μDL2, μDL3, . . . ) provided in scs-SpecificCarrierList of FrequencyInfoDL or FrequencyInfoDL-SIB of the configured CCs. That is, μDL=min (μDL1, μDL2, μDL3, . . . ). The SCS configuration μDLmay correspond to the SCS configuration for the DL active BWP where the earliest configured/scheduled reception is performed after the switching time period. In some implementations, the SCS configuration μDLmay correspond to the SCS configuration for the DL active BWPs where the latest configured/scheduled reception are performed before the switching time period.FIG.12is a schematic diagram illustrating that the length of a switching time period is determined based on the active DL BWP with the latest reception before the switching time period according to an example implementation of the present disclosure. 
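Two of the alternatives above tie the applied SCS configuration to a particular scheduled communication: the active UL BWP carrying the earliest transmission after the switching time period (FIG. 11) or the active DL BWP carrying the latest reception before it (FIG. 12). A small sketch of both selections, with made-up timing values, is given below.

```python
from dataclasses import dataclass

@dataclass
class ScheduledBwp:
    mu: int            # SCS configuration of the active BWP
    start_us: float    # start time of its configured/scheduled transmission or reception
    end_us: float      # end time of that transmission or reception

def mu_from_earliest_after(switch_end_us: float, ul_bwps: list[ScheduledBwp]) -> int:
    """SCS configuration of the active UL BWP whose earliest transmission
    follows the switching time period (the FIG. 11 alternative)."""
    after = [b for b in ul_bwps if b.start_us >= switch_end_us]
    return min(after, key=lambda b: b.start_us).mu

def mu_from_latest_before(switch_start_us: float, dl_bwps: list[ScheduledBwp]) -> int:
    """SCS configuration of the active DL BWP whose latest reception
    precedes the switching time period (the FIG. 12 alternative)."""
    before = [b for b in dl_bwps if b.end_us <= switch_start_us]
    return max(before, key=lambda b: b.end_us).mu

# Illustrative numbers only.
ul = [ScheduledBwp(mu=1, start_us=1000.0, end_us=1500.0),
      ScheduledBwp(mu=0, start_us=2000.0, end_us=3000.0)]
dl = [ScheduledBwp(mu=1, start_us=0.0, end_us=400.0),
      ScheduledBwp(mu=0, start_us=0.0, end_us=800.0)]
print(mu_from_earliest_after(900.0, ul))   # 1 (earliest post-gap transmission is on the mu=1 BWP)
print(mu_from_latest_before(900.0, dl))    # 0 (latest pre-gap reception is on the mu=0 BWP)
```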
As illustrated in FIG. 12, if a UE is provided with the active DL BWP 1201 with the SCS configuration μ=1, the active DL BWP 1203 with the SCS configuration μ=0, and the active UL BWP 1205 with the SCS configuration μ=0, the SCS configuration μDL may correspond to the SCS configuration μ=0 for the active DL BWP 1203 with the latest reception (e.g., the PDSCH reception 1209) before the switching time period 1207, and the length of the switching time period 1207 may be determined based on the active DL BWP 1203 with the latest reception (e.g., the PDSCH reception 1209) before the switching time period 1207. The SCS configuration applied to the length of the switching time period may correspond to the SCS configuration μ for an active BWP. The SCS configuration μ may correspond to the largest SCS configuration among the SCS configurations (e.g., μ1, μ2, μ3, . . . ) for all active BWPs. That is, μ=max (μ1, μ2, μ3, . . . ). The SCS configuration μ may correspond to the smallest SCS configuration among the SCS configurations (e.g., μ1, μ2, μ3, . . . ) for all active BWPs. That is, μ=min (μ1, μ2, μ3, . . . ). If the configured scheduling is in different units (e.g., symbol, sub-slot, or slot), the SCS configuration μ applied to the length of the switching time period may depend on the unit in which the scheduling is applied. The SCS configuration μ may correspond to the BWP with scheduling in a larger unit (e.g., slot>sub-slot>symbol). For example, if a DL reception with SCS μDL is in symbol unit and an UL transmission with SCS μUL is in sub-slot unit, the μ applied to the switching time period may be the μUL. The SCS configuration μ may correspond to the BWP with scheduling in a smaller unit (e.g., slot>sub-slot>symbol). For example, if a DL reception with SCS μDL is in symbol unit and an UL transmission with SCS μUL is in sub-slot unit, the μ applied to the switching time period may be the μDL. The SCS configuration applied to the length of the switching time period may be dynamically indicated or/and configured by a higher layer. The indication may be a new downlink control information (DCI) format, a new field in existing DCI formats, a new medium access control-control element (MAC-CE) or/and a new field in a MAC-CE. The SCS may be configured in a dedicated configuration for scheduling half-duplex (HD) operation. The SCS may be configured by a new parameter in an existing configuration. For example, the configuration may be a time division duplex (TDD) configuration. The switching time period may be an absolute value. For example, the switching time period may not be in symbol unit but in a millisecond (ms) or microsecond (μs) unit. FIG. 13 is a flowchart illustrating a method 1300 performed by a UE for handling a switching time period of downlink (DL)-uplink (UL) switching for half duplex-frequency division duplex (HD-FDD) operation according to an example implementation of the present disclosure. Although actions 1302, 1304, 1306, 1308 and 1310 are illustrated as separate actions represented as independent blocks in FIG. 13, these separately illustrated actions should not be construed as necessarily order dependent. The order in which the actions are performed in FIG. 13 is not intended to be construed as a limitation, and any number of the disclosed blocks may be combined in any order to implement the method, or an alternate method. Moreover, each of actions 1302, 1304, 1306, 1308 and 1310 may be performed independently of other actions and can be omitted in some implementations of the present disclosure. 
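When the DL and UL sides are scheduled in different units, the text lets either the larger or the smaller scheduling unit decide which SCS configuration applies to the switching-period length. The following sketch encodes that priority (slot > sub-slot > symbol); the tie handling and parameter names are editorial assumptions.

```python
# Larger scheduling unit wins under the first alternative, smaller under the second.
UNIT_RANK = {"symbol": 0, "sub-slot": 1, "slot": 2}

def mu_by_scheduling_unit(dl: tuple[int, str], ul: tuple[int, str],
                          prefer: str = "larger") -> int:
    """dl and ul are (SCS configuration, scheduling unit) pairs; return the SCS
    configuration of the side scheduled in the larger (or smaller) unit.
    Ties fall to the UL side here, which is an assumption."""
    (mu_dl, unit_dl), (mu_ul, unit_ul) = dl, ul
    dl_wins = UNIT_RANK[unit_dl] > UNIT_RANK[unit_ul]
    if prefer == "smaller":
        dl_wins = not dl_wins
    return mu_dl if dl_wins else mu_ul

# Text example: DL in symbol unit, UL in sub-slot unit -> mu_UL under the
# "larger unit" alternative, mu_DL under the "smaller unit" alternative.
print(mu_by_scheduling_unit((1, "symbol"), (0, "sub-slot"), prefer="larger"))   # 0 (mu_UL)
print(mu_by_scheduling_unit((1, "symbol"), (0, "sub-slot"), prefer="smaller"))  # 1 (mu_DL)
```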
In action1302, the UE may receive a first configuration for a first active bandwidth part (BWP) with a first sub-carrier spacing (SCS). The first active BWP is one of an active UL BWP and an active DL BWP. In action1304, the UE may receive a second configuration for a second active BWP with a second SCS. The second active BWP is another one of the active UL BWP and the active DL BWP. That is, the second active BWP is the active UL BWP if the first active BWP is the active DL BWP, and the second active BWP is the active DL BWP if the first active BWP is the active UL BWP. In action1306, the UE may receive a third configuration for the switching time period. The switching time period has a unit of symbol. That is, the unit for switching time period is per symbol. If the first SCS is larger than the second SCS, the communication (e.g., a DL reception or an UL transmission) on the first active BWP ends at an ending symbol, and the switching time period begins at a starting symbol, the starting symbol of the switching time period may be determined based on the ending symbol of the communication on the first active BWP. That is, a starting symbol or a starting position of a switching time period may be determined based on the ending symbol of a communication (e.g., a DL reception or an UL transmission) in an active BWP with the largest SCS among the SCSs of all active DL and UL BWPs. If the first SCS is larger than the second SCS, the communication (e.g., a DL reception or an UL transmission) on the second active BWP begins at a first starting symbol, the switching time period begins at a second starting symbol, and the second starting symbol of the switching time period may be determined based on the first starting symbol of the communication (e.g., a DL reception or an UL transmission) on the second active BWP. That is, a starting symbol or a starting position of a switching time period may be determined based on the starting symbol of a communication (e.g., a DL reception or an UL transmission) in an active BWP with the smallest SCS among the SCSs of all active DL and UL BWPs. If the communication (e.g., a DL reception or an UL transmission) on the second BWP begins at a first starting symbol, and the switching time period begins at a second starting symbol, the second starting symbol of the switching time period may be determined based on the first starting symbol of the communication (e.g., a DL reception or an UL transmission). That is, a starting symbol or a starting position of a switching time period may be determined based on the starting symbol of a communication (e.g., a DL reception or an UL transmission) in a target BWP. If the first active BWP is the active DL BWP, and the second active BWP is the active UL BWP, the switching time period may include a timing advance (TA) time period. A length of the switching time period may be determined based on the second SCS. That is, a length of the switching time period may be determined based on the SCS of the target BWP. In action1308, the UE may perform communication with a Base Station (BS) on the first active BWP with the first SCS. The communication may be a DL reception or an UL transmission. In action1310, the UE may perform, after the switching time period, communication with the BS on the second active BWP with the second SCS. The communication may be a DL reception or an UL transmission. 
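The following toy helper strings actions 1302 to 1310 together for a single DL-to-UL switch. It is only an editorial summary: the anchoring alternatives, the symbol-level TA handling and the returned fields are simplifications of the several options described above, not a definitive procedure.

```python
from dataclasses import dataclass

@dataclass
class ActiveBwp:
    scs_mu: int
    direction: str            # "DL" or "UL"

def plan_switch(first: ActiveBwp, second: ActiveBwp,
                gap_symbols: int, ta_symbols: int,
                anchor_rule: str = "end-of-largest-scs") -> dict:
    """Toy summary of actions 1302-1310: pick one of the anchoring alternatives for
    the starting position of the switching time period, express its length at the
    SCS of the second (target) BWP, and add a TA term for a DL-to-UL switch."""
    if anchor_rule == "end-of-largest-scs":
        ref = first if first.scs_mu >= second.scs_mu else second
        anchor = f"ending symbol of the communication on the mu={ref.scs_mu} BWP"
    elif anchor_rule == "start-of-smallest-scs":
        ref = first if first.scs_mu <= second.scs_mu else second
        anchor = f"starting symbol of the communication on the mu={ref.scs_mu} BWP"
    else:  # "start-of-target"
        anchor = "starting symbol of the communication on the second (target) BWP"
    length = gap_symbols + (ta_symbols if (first.direction, second.direction) == ("DL", "UL") else 0)
    return {"anchor": anchor, "length_symbols": length, "length_scs_mu": second.scs_mu}

# DL (mu=1) to UL (mu=0) switch with a 1-symbol gap and a 1-symbol TA allowance.
print(plan_switch(ActiveBwp(1, "DL"), ActiveBwp(0, "UL"), gap_symbols=1, ta_symbols=1))
```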
The method1300provided in the present disclosure enables a UE to specify a gap in time domain (e.g., the switching time period) between DL reception and UL transmission when multiple SCS configurations are configured. Moreover, the method1300provided in the present disclosure specifies the switching time period in unit of symbol to avoid the ambiguity of scheduling upon NR frame structure. FIG.14is a block diagram illustrating a node1400for wireless according to an example implementation of the present disclosure. As illustrated inFIG.14, a node1400may include a transceiver1420, a processor1428, a memory1434, one or more presentation components1438, and at least one antenna1436. The node1400may also include a radio frequency (RF) spectrum band module, a BS communications module, a network communications module, and a system communications management module, Input/Output (I/O) ports, I/O components, and a power supply (not illustrated inFIG.14). Each of the components may directly or indirectly communicate with each other over one or more buses1440. The node1400may be a UE or a BS that performs various functions disclosed with reference toFIG.13. The transceiver1420has a transmitter1422(e.g., transmitting/transmission circuitry) and a receiver1424(e.g., receiving/reception circuitry) and may be configured to transmit and/or receive time and/or frequency resource partitioning information. The transceiver1420may be configured to transmit in different types of subframes and slots including but not limited to usable, non-usable and flexibly usable subframes and slot formats. The transceiver1420may be configured to receive data and control channels. The node1400may include a variety of computer-readable media. Computer-readable media may be any available media that may be accessed by the node1400and include volatile (and/or non-volatile) media and removable (and/or non-removable) media. The computer-readable media may include computer-storage media and communication media. Computer-storage media may include both volatile (and/or non-volatile media), and removable (and/or non-removable) media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or data. Computer-storage media may include RAM, ROM, EPROM, EEPROM, flash memory (or other memory technology), CD-ROM, Digital Versatile Disks (DVD) (or other optical disk storage), magnetic cassettes, magnetic tape, magnetic disk storage (or other magnetic storage devices), etc. Computer-storage media may not include a propagated data signal. Communication media may typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanisms and include any information delivery media. The term “modulated data signal” may mean a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. Communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the previously listed components should also be included within the scope of computer-readable media. The memory1434may include computer-storage media in the form of volatile and/or non-volatile memory. The memory1434may be removable, non-removable, or a combination thereof. 
Example memory may include solid-state memory, hard drives, optical-disc drives, etc. As illustrated inFIG.14, the memory1434may store a computer-readable and/or computer-executable program1432(e.g., software codes) that are configured to, when executed, cause the processor1428to perform various functions disclosed herein, for example, with reference toFIG.13. Alternatively, the program1432may not be directly executable by the processor1428but may be configured to cause the node1400(e.g., when compiled and executed) to perform various functions disclosed herein. The processor1428(e.g., having processing circuitry) may include an intelligent hardware device, e.g., a Central Processing Unit (CPU), a microcontroller, an ASIC, etc. The processor1428may include memory. The processor1428may process the data1430and the program1432received from the memory1434, and information transmitted and received via the transceiver1420, the base band communications module, and/or the network communications module. The processor1428may also process information to send to the transceiver1420for transmission via the antenna1436to the network communications module for transmission to a CN. One or more presentation components1438may present data indications to a person or another device. Examples of presentation components1438may include a display device, a speaker, a printing component, a vibrating component, etc. In view of the present disclosure, it is obvious that various techniques may be used for implementing the disclosed concepts without departing from the scope of those concepts. Moreover, while the concepts have been disclosed with specific reference to certain implementations, a person of ordinary skill in the art may recognize that changes may be made in form and detail without departing from the scope of those concepts. As such, the disclosed implementations are to be considered in all respects as illustrative and not restrictive. It should also be understood that the present disclosure is not limited to the particular implementations disclosed and many rearrangements, modifications, and substitutions are possible without departing from the scope of the present disclosure.
49,765
11863499
DETAILED DESCRIPTION
Technical solutions in embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure. Obviously, the described embodiments are part of the embodiments of the present disclosure, but not all of the embodiments. Based on the embodiments in the present disclosure, many other embodiments obtained by a person of ordinary skill in the art without creative efforts shall fall within the protection scope of the present disclosure. Reference is made to FIG. 1, which is a flowchart of a method for controlling activation of a bandwidth part (BWP) according to an embodiment of the present disclosure. As shown in FIG. 1, the method includes the following steps 101 to 103. Step 101 includes: receiving and saving BWP configuration information transmitted by a base station, where the BWP configuration information includes BWP identification information. The method for controlling activation of a BWP provided by embodiments of the present disclosure is applied in a user equipment (UE) to manage an activated state of BWP(s) corresponding to each component carrier. In this step, the base station first configures a BWP for a UE to access the base station. When configuring the BWP, the base station transmits the BWP configuration information to the UE, and the UE may receive and save the BWP configuration information, i.e., saving a correspondence between each component carrier and a default BWP. Specifically, in a practical application, each component carrier is configured with a BWP set, and information about one or more BWPs is stored in the BWP set, that is, one component carrier may correspond to one or more BWPs. In an embodiment, all BWPs may be numbered. The BWP identification information may be configured for a UE in an explicit or implicit manner. The explicit manner includes that a dedicated information bit string is configured in the configuration information for each BWP to indicate index information of the BWP. The implicit manner includes that a serial number of each BWP in a BWP list of the configuration information is an index of the BWP. As an example, the first BWP in the list is numbered as 0, and BWPs subsequent to the first BWP are respectively numbered as 1, 2, 3, and so on. Step 102 includes: receiving a BWP activation command transmitted by the base station. The base station may transmit a BWP activation command to the UE through Layer 1 (L1) signaling or Layer 2 (L2) signaling, and the BWP activation command may indicate index information of to-be-activated BWP(s) in an explicit or implicit manner. The explicit manner includes: carrying index information of a target and to-be-activated BWP in an activation signaling. The implicit manner includes: carrying a bitmap in an activation signaling, each bit corresponding to one BWP, indicating to activate the BWP when the corresponding bit takes a first value; and indicating to deactivate the BWP when the corresponding bit takes a second value. A position where each indication bit is located in the bitmap corresponds to an index of a corresponding BWP. As an example, the first bit in the bitmap corresponds to a BWP numbered as 0, the second bit in the bitmap corresponds to a BWP numbered as 1, and so on. In a case that a BWP is in an activated state, no operation is performed when a bit corresponding to the BWP in the received activation signaling takes a first value. 
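The implicit indexing (list position as BWP index) and the bitmap form of the activation command described in steps 101 and 102 can be sketched as follows. The LSB-first bit ordering and the use of Python sets to track state are editorial assumptions.

```python
def implicit_bwp_indices(bwp_list: list[str]) -> dict[str, int]:
    """Implicit numbering: the position of a BWP in the configured list is its
    index, starting from 0."""
    return {bwp: idx for idx, bwp in enumerate(bwp_list)}

def apply_bitmap(bitmap: int, num_bwps: int, currently_active: set[int]) -> set[int]:
    """Bitmap alternative of the activation signaling: bit i (LSB-first here, an
    assumption) set to the first value activates BWP i, the second value
    deactivates it; a bit matching the current state causes no operation."""
    active = set(currently_active)
    for i in range(num_bwps):
        if (bitmap >> i) & 1:
            active.add(i)       # first value: activate (no-op if already active)
        else:
            active.discard(i)   # second value: deactivate (no-op if already inactive)
    return active

cfg = implicit_bwp_indices(["bwp-a", "bwp-b", "bwp-c"])       # {'bwp-a': 0, 'bwp-b': 1, 'bwp-c': 2}
print(apply_bitmap(0b010, num_bwps=3, currently_active={0}))  # {1}: BWP 1 activated, BWP 0 deactivated
```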
Similarly, in a case that a BWP is in a deactivated state, no operation is performed when a bit corresponding to the BWP in the received activation signaling takes a second value. Step 103 includes: performing BWP activation with a BWP identifier indicated by the BWP activation command. Upon receipt of the BWP activation command, a UE may perform BWP activation with the BWP identifier indicated by the BWP activation command, thereby implementing a control of BWP activation. Thus, in the embodiment of the present disclosure, BWP configuration information transmitted by a base station is received and saved, and the BWP configuration information includes BWP identification information; a BWP activation command transmitted by the base station is received; and a BWP is activated based on a BWP identifier indicated by the BWP activation command. BWPs are numbered, the activation command indicates a BWP that needs to be activated, and the BWP is activated by a UE, thereby improving the flexibility of BWP activation control. Reference is further made to FIG. 2, and the above-mentioned BWP configuration information is used to indicate a default BWP corresponding to each component carrier. Subsequent to step 101, the method further includes: step 104, receiving a component carrier activation signaling transmitted by the base station; and step 105, activating, based on the component carrier activation signaling, a first target component carrier and a default BWP corresponding to the first target component carrier. In a case that a component carrier is activated, data can be normally transmitted and received on the component carrier only when there is an activated BWP on the component carrier. The above-mentioned default BWP refers to a BWP that needs to be activated by default if a command indicates to activate a component carrier but does not specify a BWP on the component carrier to be activated. The number of the default BWP corresponding to each component carrier may be set according to actual needs, and may be one or more, which is not specifically limited herein. It should be appreciated that when a target component carrier is in a deactivated state, all BWPs on this component carrier are in a deactivated state. In this step, the base station may transmit a component carrier activation signaling to the UE by a control element (CE) of a medium access control (MAC) layer. The component carrier activation signaling carries a first target component carrier that needs to be activated, and all BWPs on the first target component carrier are in a deactivated state. After receiving the component carrier activation signaling, the UE obtains the first target component carrier that needs to be activated, which is indicated by the component carrier activation signaling, then obtains a default BWP corresponding to the first target component carrier according to the previously stored BWP configuration information, and finally activates the first target component carrier and the default BWP(s) corresponding to the first target component carrier. In this embodiment, a default BWP corresponding to each component carrier is configured in BWP configuration information, and then a first target component carrier and a default BWP corresponding to a first target component carrier are directly activated based on the component carrier activation signaling. 
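A minimal sketch of the step 104/105 behaviour on the UE side: a component carrier activation signaling activates the carrier and, because all of its BWPs are still deactivated, the default BWP saved from the configuration information is activated with it. The data structures are illustrative only.

```python
def activate_carrier(cc_id: int,
                     default_bwp: dict[int, int],
                     active_ccs: set[int],
                     active_bwps: dict[int, set[int]]) -> None:
    """On a component-carrier activation signaling (e.g., a MAC CE), activate the
    carrier and, since all of its BWPs are still deactivated, also activate the
    default BWP stored from the BWP configuration information."""
    active_ccs.add(cc_id)
    active_bwps.setdefault(cc_id, set()).add(default_bwp[cc_id])

active_ccs: set[int] = set()
active_bwps: dict[int, set[int]] = {}
activate_carrier(2, default_bwp={2: 0}, active_ccs=active_ccs, active_bwps=active_bwps)
print(active_ccs, active_bwps)   # {2} {2: {0}} -> one signaling activates carrier and default BWP
```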
Therefore, an activated state of the component carrier and an activated state of a BWP on the component carrier can be controlled through a single signaling, thereby reducing signaling overhead. In addition, since a single signaling is used to simultaneously activate a component carrier and activate a BWP on the component carrier, a transmission delay can be avoided, which is caused by controlling an activated state of the component carrier and an activated state of the BWP on the component carrier through separate signalings. It should be appreciated that a manner where a base station configures BWP configuration information for indicating a default BWP corresponding to each component carrier can be set according to actual needs. For example, in an embodiment, the configuring manner can be implemented in any of the following manners: a first manner including that, for a component carrier configured with a BWP set, one bit is used to indicate whether each BWP in the BWP set is the default BWP; a second manner including that, for a component carrier configured with a BWP set, a BWP ranked first or last in the BWP set is the default BWP; a third manner including that, for a component carrier configured with a BWP set, each BWP has an index value, and a BWP with an initial index value in the BWP set is the default BWP; for example, if the BWP identifier value in a BWP set starts from 0, a BWP with the index value 0 is determined as a default BWP; a fourth manner including that, for a component carrier configured with a BWP set, a BWP with the widest or narrowest bandwidth in BWPs in the BWP set is the default BWP; or a fifth manner including that, for a component carrier configured with a BWP set, a BWP with the lowest or highest starting frequency in BWPs in the BWP set is the default BWP. Further, after the base station transmits the BWP configuration information, the activated BWP may be deactivated based on signaling. Specifically, referring to FIG. 3, subsequent to the above step 101, the method further includes steps 106 and 107. Step 106 includes: receiving a BWP deactivation command transmitted by the base station, where the BWP deactivation command is configured to adjust an activated BWP on a second target component carrier to a deactivated state. In this step, the base station may transmit a BWP deactivation command to a UE through L1 or L2 signaling. The signaling may include an identification indication of a BWP that needs to be deactivated. The UE may obtain the second target component carrier where the BWP to be deactivated is located, by inquiring the previously saved BWP configuration information. Step 107 includes: deactivating the second target component carrier, or deactivating the second target component carrier and a corresponding BWP on the second target component carrier, in a case that an activated BWP to be adjusted on the second target component carrier is the last activated BWP. In this step, when a deactivation operation is performed on a BWP on the second target component carrier, in a case that there are multiple activated BWPs on the second target component carrier, a BWP specified in the BWP deactivation command may be directly deactivated; and in a case that only the last activated BWP exists on the second target component carrier, the second target component carrier may be deactivated, or both the second target component carrier and a corresponding BWP (that is, a BWP specified in the deactivation command) on the second target component carrier may be deactivated. 
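The five configuration manners for the default BWP listed above can each be reduced to a simple selection over the BWP set, as in the sketch below. Only one variant of each manner is shown (first rather than last, widest rather than narrowest, lowest rather than highest starting frequency), and the field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class BwpCfg:
    index: int
    bandwidth_khz: int
    start_freq_khz: int
    default_flag: bool = False   # first manner: explicit one-bit indication

def pick_default_bwp(bwp_set: list[BwpCfg], manner: str) -> BwpCfg:
    """One possible reading of the five configuration manners for the default BWP."""
    if manner == "explicit-bit":
        return next(b for b in bwp_set if b.default_flag)
    if manner == "first-in-set":
        return bwp_set[0]
    if manner == "initial-index":
        return min(bwp_set, key=lambda b: b.index)          # e.g., index value 0
    if manner == "widest-bandwidth":
        return max(bwp_set, key=lambda b: b.bandwidth_khz)
    if manner == "lowest-start-frequency":
        return min(bwp_set, key=lambda b: b.start_freq_khz)
    raise ValueError(manner)

bwps = [BwpCfg(0, 20_000, 3_500_000), BwpCfg(1, 100_000, 3_520_000, default_flag=True)]
print(pick_default_bwp(bwps, "initial-index").index)       # 0
print(pick_default_bwp(bwps, "widest-bandwidth").index)    # 1
```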
In addition, in a case that a BWP specified in the BWP deactivation command includes all activated BWPs on the second target component carrier, which also means performing a deactivation on the last activated BWP on the second target component carrier, the second target component carrier may be deactivated in this case, or both the second target component carrier and a corresponding BWP on the second target component carrier (that is, a BWP specified in the deactivation command) may be deactivated in this case. Since the deactivation of a component carrier can be achieved by only indicating a BWP deactivation during a deactivation process, the signaling overhead is further reduced. Further, after the base station transmits the BWP configuration information, the activated BWP may be deactivated in accordance with a signaling. Specifically, referring toFIG.4, subsequent to the above step101, the method further includes steps108and109. Step108includes: receiving a component carrier deactivation signaling transmitted by the base station. In this step, the base station may transmit a component carrier deactivation signaling to a UE through a control element of a medium access control layer (MAC CE). The component carrier deactivation signaling includes a third target component carrier that needs to be deactivated. The UE may obtain all BWPs on the third target component carrier based on the previously saved BWP configuration information. The third target component carrier may include one or more BWPs in an activated state, and may also include one or more BWPs in a deactivated state. Step109includes: deactivating, based on the component carrier deactivation signaling, a third target component carrier, or a third target component carrier and all activated BWPs on the third target component carrier. In this step, upon receiving the component carrier deactivation signaling, a UE may deactivate the third target component carrier, or may deactivate the third target component carrier and all the activated BWPs on the third target component carrier. Since the deactivation of a BWP can be achieved only based on a component carrier deactivation signaling, the signaling overhead is further reduced. Further, the BWP activation command is configured to adjust a deactivated BWP on a fourth target component carrier to an activated state. The performing BWP activation with the BWP identifier indicated by the BWP activation command includes: activating the fourth target component carrier and a BWP designated to be activated through the BWP activation command, in a case that the fourth target component carrier is in a deactivated state. In an embodiment, the base station may transmit a BWP activation command to a UE through L1 signaling or L2 signaling, and the BWP activation command includes a BWP that needs to be activated. The UE may obtain the fourth target component carrier corresponding to the BWP(s) that needs to be activated by inquiring the previously saved BWP configuration information. The UE determines whether the fourth target component carrier is in an activated state, may directly activate a BWP designated to be activated through the BWP activation command if the fourth target component carrier is currently in an activated state, and may activate the fourth target component carrier and a BWP designated to be activated through the BWP activation command if the fourth target component carrier is currently in a deactivated state. 
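Steps 106 to 109 amount to the bookkeeping sketched below: deactivating the last activated BWP of a carrier pulls the carrier down with it, and a carrier deactivation signaling pulls down all of the carrier's activated BWPs. Each function implements one of the alternative behaviours described above; the state containers are illustrative.

```python
def deactivate_bwp(cc_id: int, bwp_id: int,
                   active_ccs: set[int], active_bwps: dict[int, set[int]]) -> None:
    """BWP deactivation command: if the indicated BWP is the last activated BWP on
    its component carrier, the carrier is deactivated together with that BWP;
    otherwise only the BWP is deactivated."""
    bwps = active_bwps.get(cc_id, set())
    if bwps == {bwp_id}:                 # last activated BWP on this carrier
        active_ccs.discard(cc_id)
    bwps.discard(bwp_id)

def deactivate_carrier(cc_id: int,
                       active_ccs: set[int], active_bwps: dict[int, set[int]]) -> None:
    """Component-carrier deactivation signaling: deactivate the carrier and all of
    its activated BWPs (one of the two behaviours described above)."""
    active_ccs.discard(cc_id)
    active_bwps.pop(cc_id, None)

active_ccs, active_bwps = {2}, {2: {0}}
deactivate_bwp(2, 0, active_ccs, active_bwps)
print(active_ccs, active_bwps)   # set() {2: set()} -> carrier dropped with its last BWP
```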
In this embodiment, a component carrier can be controlled to be activated only through a BWP activation command, thereby further reducing the signaling overhead. It should be noted that in the related art, a UE usually accesses only one base station, but a UE can also access two base stations, where one of the base stations is a primary base station and the other one is a secondary base station. In an embodiment, the foregoing BWP configuration information may include BWP configuration information of the primary base station and BWP configuration information of the secondary base station. In a case that a secondary base station needs to be added for the UE, the primary base station configures BWP configuration information of the secondary base station for the UE. In this case, when the UE receives the BWP configuration information of the secondary base station, the BWP configuration information of the secondary base station is used to indicate a component carrier where a primary cell of the secondary base station is located, and default BWP information corresponding to each component carrier. Subsequent to the above step 101, the above method further includes: actively activating, by the UE, a default BWP corresponding to the component carrier where the primary cell in the secondary base station is located. In this embodiment, after receiving the BWP configuration information of the secondary base station, the UE may activate the default BWP corresponding to the component carrier where the primary cell in the secondary base station is located in accordance with indications, thereby implementing data transmission to the secondary base station. It should be understood that for processes of activating and deactivating a BWP corresponding to a component carrier where a secondary cell in the secondary base station is located and a non-default BWP corresponding to a component carrier where a primary cell in the secondary base station is located, reference can be made to the foregoing embodiments, and details are not described herein again. Further, the base station may also perform handover on the primary cell. Specifically, subsequent to the above step 101, the method further includes: receiving a primary cell handover command transmitted by the base station, where the primary cell handover command is configured to indicate a target primary cell as a handover cell and a default BWP on the target primary cell; and performing a cell handover based on the default BWP on the target primary cell and the target primary cell, for example, performing a random access procedure on the default BWP of the target primary cell. In this embodiment, a base station can control a UE to be switched to a target primary cell through a primary cell handover command, so as to provide a better service to the UE. In order to enable normal data transmission on the target primary cell, a default BWP on the target primary cell is indicated in the primary cell handover command. Therefore, the UE can complete the subsequent target cell handover procedure on the default BWP. Reference is made to FIG. 5, and the present disclosure further provides a method for controlling activation of a bandwidth part (BWP), which includes steps 501 and 502. Step 501 includes: transmitting BWP configuration information to a UE, where the BWP configuration information includes BWP identification information. 
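The fourth-target-carrier rule described earlier in this passage (a BWP activation command implicitly activates a still-deactivated carrier) can be sketched as a single helper; the containers mirror those of the earlier sketches and are editorial assumptions.

```python
def activate_bwp(cc_id: int, bwp_id: int,
                 active_ccs: set[int], active_bwps: dict[int, set[int]]) -> None:
    """BWP activation command: if the carrier carrying the indicated BWP is still
    deactivated, activate the carrier together with the designated BWP; otherwise
    just activate the BWP."""
    if cc_id not in active_ccs:
        active_ccs.add(cc_id)            # single command brings up carrier and BWP
    active_bwps.setdefault(cc_id, set()).add(bwp_id)

active_ccs: set[int] = set()
active_bwps: dict[int, set[int]] = {}
activate_bwp(3, 1, active_ccs, active_bwps)
print(active_ccs, active_bwps)   # {3} {3: {1}}
```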
The method for controlling activation of a BWP provided by embodiments of the present disclosure is applied in a base station to control an activated state of BWP(s) corresponding to each component carrier. In this step, the base station first configures a BWP for a UE to access the base station. When configuring the BWP, the base station transmits the BWP configuration information to the UE, and the UE may receive and save the BWP configuration information, thereby saving a correspondence between each component carrier and a default BWP. Specifically, in a practical application, each component carrier is configured with a BWP set, and information about one or more BWPs is stored in the BWP set, that is, one component carrier may correspond to one or more BWPs. In an embodiment, all the BWPs may be numbered. The BWP identification information may be configured to a UE in an explicit or implicit manner. The explicit manner includes that a dedicated information bit string is configured in the configuration information for each BWP to indicate index information of the BWP. The implicit manner includes that a serial number of each BWP in a BWP list of the configuration information is an index of the BWP. As an example, the first BWP in the list is numbered as 0, and BWPs subsequent to the first BWP are respectively numbered as 1, 2, 3, and so on. Step502includes: transmitting a BWP activation command to the UE, where the activation command is configured for the UE to perform BWP activation with a BWP identifier indicated by the BWP activation command. The base station may transmit a BWP activation command to the UE through L1 signaling or L2 signaling, and the BWP activation command may indicate index information of to-be-activated BWP(s) in an explicit or implicit manner. The explicit manner includes: carrying index information of a target and to-be-activated BWP in an activation signaling. The implicit manner includes: carrying a bitmap in an activation signaling, each bit corresponding to one BWP, indicating to activate the BWP when the corresponding bit takes a first value; and indicating to deactivate the BWP when the corresponding bit takes a second value. A position where each indication bit is located in the bitmap corresponds to an index of a corresponding BWP. As an example, the first bit in the bitmap corresponds to a BWP numbered as 0, the second bit in the bitmap corresponds to a BWP numbered as 1, and so on. In a case that a BWP is in an activated state, no operation is performed when a bit corresponding to the BWP in the received activation signaling takes a first value. Similarly, in a case that a BWP is in a deactivated state, no operation is performed when a bit corresponding to the BWP in the received activation signaling takes a second value. When receiving the BWP activation command, a UE may perform BWP activation with the BWP identifier indicated by the BWP activation command, thereby implementing a control of BWP activation. 
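For the base-station side of the implicit signaling, a counterpart to the earlier UE-side parsing sketch would build the bitmap from the set of BWPs that should be active after the command, again assuming an LSB-first bit order; this is an illustration, not a prescribed encoding.

```python
def build_activation_bitmap(bwps_to_be_active: set[int], num_bwps: int) -> int:
    """Base-station counterpart of the bitmap alternative: bit i is set to the first
    value (1) for every BWP that should be activated and to the second value (0)
    for every BWP that should be deactivated (LSB-first order assumed)."""
    bitmap = 0
    for i in bwps_to_be_active:
        if 0 <= i < num_bwps:
            bitmap |= 1 << i
    return bitmap

print(bin(build_activation_bitmap({1}, num_bwps=3)))   # 0b10 -> activate BWP 1, deactivate BWPs 0 and 2
```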
Thus, in the embodiment of the present disclosure, BWP configuration information transmitted by a base station is received and saved, where the BWP configuration information includes BWP identification information; a BWP activation command transmitted by the base station is received; and BWP activation is performed based on a BWP identifier indicated by the BWP activation command. BWPs are numbered, the activation command indicates a BWP that needs to be activated, and the BWP is activated by a UE, thereby improving the flexibility of BWP activation control. Further, the BWP configuration information is used to indicate a default BWP corresponding to each component carrier. After transmitting the BWP configuration information to the user equipment, the method further includes: transmitting a component carrier activation signaling to the user equipment, where the component carrier activation signaling is used for the user equipment to activate a first target component carrier and a default BWP corresponding to the first target component carrier. When a component carrier is activated, data can be normally transmitted and received on the component carrier only when there is an activated BWP on the component carrier. The above-mentioned default BWP refers to a BWP that needs to be activated by default when a command indicates to activate a component carrier but does not specify which BWP on the component carrier is to be activated. The number of the default BWPs corresponding to each component carrier may be set according to actual demands, and may be one or more, which is not specifically limited herein. It should be appreciated that when a target component carrier is in a deactivated state, all BWPs on the target component carrier are in a deactivated state. In this step, the base station may transmit a component carrier activation signaling to the UE by a MAC CE. The component carrier activation signaling carries a first target component carrier that needs to be activated, and all BWPs on the first target component carrier are in a deactivated state. After receiving the component carrier activation signaling, the UE obtains the first target component carrier that needs to be activated, which is indicated by the component carrier activation signaling, then obtains a default BWP corresponding to the first target component carrier according to the previously stored BWP configuration information, and finally activates the first target component carrier and the default BWP(s) corresponding to the first target component carrier. In this embodiment, a default BWP corresponding to each component carrier is configured in BWP configuration information, and then a first target component carrier and a default BWP corresponding to a first target component carrier are directly activated in accordance with the component carrier activation signaling. Therefore, an activated state of the component carrier and an activated state of a BWP on the component carrier can be controlled through a single signaling, thereby reducing signaling overhead. In addition, since a single signaling is used to activate a component carrier and activate a BWP on the component carrier at the same time, a transmission delay can be avoided, which is caused by controlling an activated state of the component carrier and an activated state of the BWP on the component carrier through separate signalings. 
It should be appreciated that a manner where a base station configures BWP configuration information for indicating a default BWP corresponding to each component carrier can be set according to actual needs. For example, in an embodiment, the configuring manner can be implemented in any of the following manners:a first manner including that, for a component carrier configured with a BWP set, one bit is used to indicate whether each BWP in the BWP set is the default BWP;a second manner including that, for a component carrier configured with a BWP set, a BWP ranked first or last in the BWP set is the default BWP;a third manner including that, for a component carrier configured with a BWP set, each BWP has an index value, and a BWP with an initial index value in the BWP set is the default BWP; for example, if the BWP identifier value in a BWP set starts from 0, a BWP with the index value 0 is determined as a default BWP;for a component carrier configured with a BWP set, a BWP with a maximum bandwidth or a minimum bandwidth in the BWP set is the default BWP; orfor a component carrier configured with a BWP set, a BWP with a maximum starting frequency or a minimum starting frequency in the BWP set is the default BWP. Further, after the base station transmits the BWP configuration information, the activated BWP may be deactivated through signaling. Specifically, subsequent to the above step501, the method further includes: transmitting a BWP deactivation command to the user equipment. The BWP deactivation command is configured to instruct the user equipment to adjust an activated BWP on a second target component carrier to a deactivated state; and the BWP deactivation command is configured to instruct the user equipment to deactivate the second target component carrier, or deactivate the second target component carrier and a corresponding BWP on the second target component carrier, in a case that the activated BWP to be adjusted is a last activated BWP on the second target component carrier. In this step, the base station may transmit a BWP deactivation command to a UE through L1 or L2 signaling, which may include a BWP that needs to be deactivated. The UE may obtain the second target component carrier where the BWP needs to be deactivated is located, by inquiring previously saved BWP configuration information. In an embodiment, when a UE performs deactivation on a BWP on the second target component carrier, in a case that there are multiple activated BWPs on the second target component carrier, the UE may directly deactivate a BWP specified in the BWP deactivation command; and in a case that only the last activated BWP exists on the second target component carrier, the UE may deactivate the second target component carrier, or both the second target component carrier and a corresponding BWP (that is, a BWP specified in the deactivation signaling) on the second target component carrier. Since the deactivation of a component carrier can be achieved by only indicating a BWP deactivation during a deactivation process, the signaling overhead is further reduced. Further, after the base station transmits the BWP configuration information, the activated BWP may be deactivated through signaling. Specifically, subsequent to the above step501, the method further includes: transmitting a component carrier deactivation signaling to the user equipment. 
The component carrier deactivation signaling is configured to instruct the user equipment to deactivate a third target component carrier, or to instruct the user equipment to deactivate a third target component carrier and all activated BWPs on the third target component carrier. In this step, the base station may transmit a component carrier deactivation signaling to a UE through a control element of a medium access control layer (MAC CE). The component carrier deactivation signaling includes a third target component carrier that needs to be deactivated. The UE may obtain all BWPs on the third target component carrier based on the previously saved BWP configuration information. The third target component carrier may include one or more BWPs in an activated state, and may also include one or more BWPs in a deactivated state. In this embodiment, upon receiving the component carrier deactivation signaling, a UE may deactivate the third target component carrier, or may deactivate the third target component carrier and all the activated BWPs on the third target component carrier. Since the deactivation of a BWP can be achieved only based on a component carrier deactivation signaling, the signaling overhead is further reduced. Further, the BWP activation command is configured to adjust a deactivated BWP on a fourth target component carrier to an activated state; and the user equipment activates the fourth target component carrier and a BWP designated to be activated through the BWP activation command, in a case that the fourth target component carrier is in an inactive state. In this step, the base station may transmit a BWP activation command to a UE through L1 signaling or L2 signaling, and the BWP activation command includes a BWP that needs to be activated. The UE may obtain the fourth target component carrier corresponding to the BWP(s) that needs to be activated by inquiring the previously saved BWP configuration information. The UE determines whether the fourth target component carrier is in an activated state; it may directly activate a BWP designated to be activated through the BWP activation command if the fourth target component carrier is currently in an activated state, and may activate the fourth target component carrier and a BWP designated to be activated through the BWP activation command if the fourth target component carrier is currently in a deactivated state. In this embodiment, a component carrier can be controlled to be activated only through a BWP activation command, thereby further reducing the signaling overhead. It should be noted that in the related art, a UE usually accesses only one base station, but a UE can also access two base stations, where one of the base stations is a primary base station and the other one is a secondary base station. In an embodiment, the foregoing BWP configuration information may include BWP configuration information of the primary base station and BWP configuration information of the secondary base station. In a case that a secondary base station needs to be added for the UE, the primary base station configures BWP configuration information of the secondary base station for the UE. The BWP configuration information of the secondary base station is used to indicate a component carrier where a primary cell of the secondary base station is located, and default BWP information corresponding to each component carrier. 
In an embodiment, after receiving the BWP configuration information of the secondary base station, the UE may activate the default BWP corresponding to the component carrier where the primary cell of the secondary base station is located based on an indication, thereby implementing data transmission with the secondary base station. It should be understood that, for processes of activating and deactivating a BWP corresponding to a component carrier where a secondary cell in the secondary base station is located, and a non-default BWP corresponding to a component carrier where a primary cell in the secondary base station is located, reference can be made to the foregoing embodiments, and details are not described herein again. Further, the base station may also perform handover on the primary cell. Specifically, subsequent to the above step 501, the method further includes: transmitting a primary cell handover command to the user equipment, where the primary cell handover command is configured to indicate a target primary cell as a handover cell and a default BWP on the target primary cell. In this embodiment, a base station can control a UE to be switched to a target primary cell through a primary cell handover command, so as to provide a better service to the UE. In order to enable normal data transmission on the target primary cell, a default BWP on the target primary cell is indicated in the primary cell handover command. Therefore, the UE can complete the subsequent target cell handover procedure on the default BWP. Reference is made to FIG. 6, which is a schematic structural diagram of a UE according to an embodiment of the present disclosure, which can implement details of the method for controlling activation of a bandwidth part (BWP) in the foregoing embodiments, and can achieve the same effects. As shown in FIG. 6, the UE includes: a configuration reception module 601, configured to receive and save BWP configuration information transmitted by a base station, where the BWP configuration information includes BWP identification information; a command reception module 602, configured to receive a BWP activation command transmitted by the base station; and a processing module 603, configured to perform BWP activation with a BWP identifier indicated by the BWP activation command. Optionally, the BWP configuration information is used to indicate a default BWP corresponding to each component carrier. The command reception module 602 is further configured to receive a component carrier activation signaling transmitted by the base station. The processing module 603 is further configured to activate, based on the component carrier activation signaling, a first target component carrier and a default BWP corresponding to the first target component carrier. 
Optionally, a manner where the BWP configuration information indicates the default BWP corresponding to each component carrier includes any of the following manners:a manner in which, for a component carrier configured with a BWP set, one bit is used to indicate whether each BWP in the BWP set is the default BWP;a manner in which, for a component carrier configured with a BWP set, a BWP ranked first or last in the BWP set is the default BWP;a manner in which, for a component carrier configured with a BWP set, each BWP has an index value, and a BWP with an initial index value in the BWP set is the default BWP;a manner in which, for a component carrier configured with a BWP set, a BWP with a maximum bandwidth or a minimum bandwidth in the BWP set is the default BWP; ora manner in which, for a component carrier configured with a BWP set, a BWP with a maximum starting frequency or a minimum starting frequency in the BWP set is the default BWP.Optionally, the command reception module602is further configured to receive a BWP deactivation command transmitted by the base station, and the BWP deactivation command is configured to adjust an activated BWP on a second target component carrier to a deactivated state. The processing module603is further configured to deactivate the second target component carrier, or deactivate the second target component carrier and a corresponding BWP on the second target component carrier, in a case that the activated BWP to be adjusted is a last activated BWP on the second target component carrier. Optionally, the command reception module602is further configured to receive a component carrier deactivation signaling transmitted by the base station. The processing module603is further configured to deactivate, based on the component carrier deactivation signaling, a third target component carrier, or a third target component carrier and all activated BWPs on the third target component carrier. Optionally, the BWP activation command is configured to adjust a deactivated BWP on a fourth target component carrier to an activated state. The processing module603is further configured to activate the fourth target component carrier and a BWP designated to be activated through the BWP activation command, in a case that the fourth target component carrier is in an inactive state. Optionally, the command reception module602is further configured to: receive BWP configuration information of a secondary base station, and the BWP configuration information of the secondary base station is used to indicate a component carrier where a primary cell of the secondary base station is located, and default BWP information corresponding to each component carrier. The processing module603is further configured to activate a default BWP corresponding to the component carrier where the primary cell of the secondary base station is located. Optionally, the command reception module602is further configured to receive a primary cell handover command transmitted by the base station, and the primary cell handover command is configured to indicate a target primary cell as a handover cell and a default BWP on the target primary cell. The processing module603is further configured to perform a cell handover based on the default BWP on the target primary cell and the target primary cell. 
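The manners of indicating the default BWP listed at the beginning of this passage can be captured, purely as an illustration, by a small selection helper such as the following; the dictionary keys and the manner labels are hypothetical and not part of the disclosure.

```python
# Hypothetical helper: resolve the default BWP of a carrier from its BWP set,
# following one of the indication manners listed above.

def resolve_default_bwp(bwp_set, manner):
    """bwp_set is a list of dicts with illustrative keys:
    'id', 'index', 'bandwidth', 'start_freq', 'is_default' (one-bit flag)."""
    if manner == "one_bit_flag":
        return next(b for b in bwp_set if b["is_default"])
    if manner == "first_in_set":
        return bwp_set[0]
    if manner == "last_in_set":
        return bwp_set[-1]
    if manner == "initial_index":
        return min(bwp_set, key=lambda b: b["index"])
    if manner == "max_bandwidth":
        return max(bwp_set, key=lambda b: b["bandwidth"])
    if manner == "min_bandwidth":
        return min(bwp_set, key=lambda b: b["bandwidth"])
    if manner == "max_start_freq":
        return max(bwp_set, key=lambda b: b["start_freq"])
    if manner == "min_start_freq":
        return min(bwp_set, key=lambda b: b["start_freq"])
    raise ValueError("unknown indication manner")
```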
Thus, in the embodiments of the present disclosure, BWP configuration information transmitted by a base station is received and saved, where the BWP configuration information includes BWP identification information; a BWP activation command transmitted by the base station is received; and a BWP is activated based on a BWP identifier indicated by the BWP activation command. BWPs are numbered, the activation command indicates a BWP that needs to be activated, and the BWP is activated by a UE, thereby improving the flexibility of BWP activation control. Referring toFIG.7,FIG.7is a schematic structural diagram of a base station according to an embodiment of the present disclosure, which can implement details of the method for controlling activation of a bandwidth part (BWP) in the foregoing embodiments, and can achieve the same effects. As shown inFIG.7, the base station includes:a configuration transmission module701, configured to transmit BWP configuration information to a user equipment, where the BWP configuration information includes BWP identification information; anda command transmission module702, configured to transmit a BWP activation command to the user equipment, where the BWP activation command is configured for the user equipment to perform BWP activation with a BWP identifier indicated by the BWP activation command. Optionally, the BWP configuration information is used to indicate a default BWP corresponding to each component carrier. The command transmission module is further configured to transmit a component carrier activation signaling to the user equipment, where the component carrier activation signaling is used for the user equipment to activate a first target component carrier and a default BWP corresponding to the first target component carrier. Optionally, the BWP configuration information indicates a default BWP corresponding to each component carrier as follows:for a component carrier configured with a BWP set, one bit is used to indicate whether each BWP in the BWP set is the default BWP;for a component carrier configured with a BWP set, a BWP ranked first or last in the BWP set is the default BWP;for a component carrier configured with a BWP set, each BWP has an index value, and a BWP with an initial index value in the BWP set is the default BWP;for a component carrier configured with a BWP set, a BWP with a maximum or minimum bandwidth in the BWP set is the default BWP; orfor a component carrier configured with a BWP set, a BWP with a maximum or minimum starting frequency in the BWP set is the default BWP. Optionally, the command transmission module702is further configured to transmit a BWP deactivation command to the user equipment. The BWP deactivation command is configured to instruct the user equipment to adjust an activated BWP on a second target component carrier to a deactivated state; and the BWP deactivation command is configured to instruct the user equipment to deactivate the second target component carrier, or to deactivate the second target component carrier and a corresponding BWP on the second target component carrier, in a case that the activated BWP to be adjusted is a last activated BWP on the second target component carrier. 
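For illustration, the deactivation behavior described above for the second target component carrier (deactivating the carrier when the BWP being deactivated is its last activated BWP) might look like the following sketch; it reuses the hypothetical state object of the earlier sketches.

```python
# Hypothetical sketch: UE handling of a BWP deactivation command.
# When the BWP being deactivated is the last activated BWP on its carrier,
# the carrier itself (and its remaining BWP state) is deactivated as well.

def on_bwp_deactivation_command(state, bwp_id):
    carrier_id = state.carrier_of(bwp_id)
    activated = state.active_bwps.get(carrier_id, set())
    is_last_activated = activated == {bwp_id}
    activated.discard(bwp_id)
    if is_last_activated:
        # Second-target-carrier case: no activated BWP remains, so deactivate the carrier
        state.active_carriers.discard(carrier_id)
        state.active_bwps.pop(carrier_id, None)
```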
Optionally, the command transmission module702is further configured to transmit a component carrier deactivation signaling to the user equipment, and the component carrier deactivation signaling is configured to instruct the user equipment to deactivate a third target component carrier, or to deactivate a third target component carrier and all activated BWPs on the third target component carrier. Optionally, the BWP activation command is configured to adjust a deactivated BWP on a fourth target component carrier to an activated state; and the BWP activation command is configured to instruct the user equipment to activate the fourth target component carrier and a BWP designated to be activated through the BWP activation command, in a case that the fourth target component carrier is in an inactive state. Optionally, the BWP configuration information includes BWP configuration information of a primary base station and BWP configuration information of a secondary base station; and the BWP configuration information of the secondary base station is used to indicate a component carrier where a primary cell of the secondary base station is located, and default BWP information corresponding to each component carrier. Optionally, the command transmission module702is further configured to transmit a primary cell handover command to the user equipment, and the primary cell handover command is configured to indicate a target primary cell as a handover cell and a default BWP on the target primary cell. In view of the above, in the embodiments of the present disclosure, BWP configuration information transmitted by a base station is received and saved, where the BWP configuration information includes BWP identification information; a BWP activation command transmitted by the base station is received; and a BWP is activated based on a BWP identifier indicated by the BWP activation command. BWPs are numbered, the activation command indicates a BWP that needs to be activated, and the BWP is activated by a UE, thereby improving the flexibility of BWP activation control. Reference is made toFIG.8, which is a schematic structural diagram of a UE according to an embodiment of the present disclosure, which can implement details of the method for controlling activation of a bandwidth part (BWP) in the foregoing embodiments, and can achieve the same effects. As shown inFIG.8, the UE800includes: at least one processor801, a memory802, at least one network interface804, and a user interface803. Various modules in the UE800are coupled together through a bus system805. It is understandable that the bus system805is configured to implement connections and communications between these components. The bus system805includes a power bus, a control bus, and a status signal bus in addition to a data bus. However, for the sake of clarity, various buses are denoted by the bus system805inFIG.8. The user interface803may include a display, a keyboard, or a click device (for example, a mouse, a track ball, a touch pad, or a touch screen). It can be understood that the memory802in the embodiments of the present disclosure may be a volatile memory or a non-volatile memory, or may include both the volatile memory and the non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically EPROM (EEPROM) or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. 
By way of example and without any limitation, many forms of RAMs may be used, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM) and direct Rambus RAM (DRRAM). The memory802of the system and method described in this specification is meant to include, without limitation, these and any other suitable types of memories. In some implementations, the memory802stores the following elements, an executable module or a data structure, or a subset or extension set thereof, such as an operating system8021and an application8022. The operating system8021includes various system programs, such as a framework layer program, a core library layer program and a driver layer program, to implement various fundamental services and process hardware-based tasks. The application8022includes various applications, such as a media player and a browser, to implement a variety of application services. The program implementing the method according to embodiments of the present disclosure may be included in the application8022. In an embodiment of the present disclosure, the UE further includes a computer program stored in the memory802and executable on the processor801, which may be specifically the computer program in the application8022. The computer program is executed by the processor801to implement the following steps:receiving and saving BWP configuration information transmitted by a base station, where the BWP configuration information includes BWP identification information;receiving a BWP activation command transmitted by the base station; andperforming BWP activation with a BWP identifier indicated by the BWP activation command. The methods disclosed in the foregoing embodiments of the present disclosure may be applied in the processor801or implemented by the processor801. The processor801may be an integrated circuit chip with signal processing capabilities. During an implementation process, steps of the methods may be realized in the form of hardware by integrated logical circuits in the processor801, or in the form of software by instructions. The processor801may be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic component, or a discrete hardware assembly, which is capable of implementing or executing the various methods, steps and logic block diagrams disclosed in the embodiments of the present disclosure. The general purpose processor may be a microprocessor, or any conventional processor, etc. The steps of the methods disclosed with reference to the embodiments of the present disclosure may be embodied in hardware in the form of a coding processor, or performed by the hardware in the coding processor and the software modules in combination. The software modules may reside in a storage medium well-established in the art, such as a RAM, a flash memory, a ROM, a PROM, an EEPROM, or a register. The storage medium resides in the memory802. The processor801reads information from the memory802and performs the steps of the methods in combination with its hardware. It is understood that the embodiments described in the present disclosure may be implemented by hardware, software, firmware, middleware, microcode or a combination thereof. 
For hardware implementation, processing units may be implemented in one or more application specific integrated circuits (ASIC), digital signal processor (DSP), DSP device (DSPD), programmable logic device (PLD), field-programmable gate array (FPGA), general purpose processor, controller, microcontroller, microprocessor, other electronic unit configured to perform the function described in this specification or a combination thereof. For software implementation, the technical solutions described in the embodiments of the present disclosure may be implemented by a module (e.g., process, or function, etc.) configured to perform the functions described in the embodiments of the present disclosure. Software codes may be stored in a memory and executed by the processor. The memory may be implemented internal or external to the processor. Optionally, the BWP configuration information is used to indicate a default BWP corresponding to each component carrier, and the computer program is executed by the processor801to further implement the following steps:receiving a component carrier activation signaling transmitted by the base station; andactivating, based on the component carrier activation signaling, a first target component carrier and a default BWP corresponding to the first target component carrier. Optionally, a manner where the BWP configuration information indicates the default BWP corresponding to each component carrier includes any of the following manners:for a component carrier configured with a BWP set, one bit is used to indicate whether each BWP in the BWP set is the default BWP;for a component carrier configured with a BWP set, a BWP ranked first or last in the BWP set is the default BWP;for a component carrier configured with a BWP set, each BWP has an index value, and a BWP with an initial index value in the BWP set is the default BWP;for a component carrier configured with a BWP set, a BWP with a maximum bandwidth or a minimum bandwidth in the BWP set is the default BWP; orfor a component carrier configured with a BWP set, a BWP with a maximum starting frequency or a minimum starting frequency in the BWP set is the default BWP. Optionally, the computer program is executed by the processor801to further implement the following steps:receiving a BWP deactivation command transmitted by the base station, where the BWP deactivation command is configured to adjust an activated BWP on a second target component carrier to a deactivated state; anddeactivating the second target component carrier, or deactivating the second target component carrier and a corresponding BWP on the second target component carrier, in a case that the activated BWP to be adjusted on the second target component carrier is a last activated BWP. Optionally, the computer program is executed by the processor801to further implement the following steps:receiving a component carrier deactivation signaling transmitted by the base station; anddeactivating, based on the component carrier deactivation signaling, a third target component carrier, or a third target component carrier and all activated BWPs on the third target component carrier. Optionally, the BWP activation command is configured to adjust a deactivated BWP on a fourth target component carrier to an activated state; and the fourth target component carrier and a BWP designated to be activated through the BWP activation command is activated, in a case that the fourth target component carrier is in an inactive state. 
Optionally, the BWP configuration information includes BWP configuration information of a primary base station and BWP configuration information of a secondary base station; and when the BWP configuration information of the secondary base station is received, the BWP configuration information of the secondary base station is used to indicate a component carrier where a primary cell of the secondary base station is located, and default BWP information corresponding to each component carrier. The computer program is executed by the processor801to further implement the following steps: activating a default BWP corresponding to the component carrier where the primary cell of the secondary base station is located. Optionally, the computer program is executed by the processor801to further implement the following steps:receiving a primary cell handover command transmitted by the base station, where the primary cell handover command is configured to indicate a target primary cell as a handover cell and a default BWP on the target primary cell; andperforming a cell handover based on the default BWP on the target primary cell and the target primary cell. In view of the above, in the embodiments of the present disclosure, BWP configuration information transmitted by a base station is received and saved, where the BWP configuration information includes BWP identification information; a BWP activation command transmitted by the base station is received; and a BWP is activated based on a BWP identifier indicated by the BWP activation command. BWPs are numbered, the activation command indicates a BWP that needs to be activated, and the BWP is activated by a UE, thereby improving the flexibility of BWP activation control. Referring toFIG.9,FIG.9is a structural diagram of a UE according to an embodiment of the present disclosure, which can implement details of the method for controlling activation of a bandwidth part (BWP) in the foregoing embodiments, and can achieve the same effects. As shown inFIG.9, the UE900includes a radio frequency (RF) circuit910, a memory920, an input unit930, a display unit940, a processor950, an audio circuit960, a communication module970, and a power supply980, and further includes a camera (not shown). The input unit930may be configured to receive numeric or character information inputted by a user, and to generate signal inputs related to user settings and function control of the UE900. Specifically, in an embodiment of the present disclosure, the input unit930may include a touch panel931. The touch panel931, also referred to as a touch screen, may collect touch operations by the user on or near the touch panel (such as an operation performed by the user using any suitable object or accessory such as a finger or a stylus on the touch panel931), and drive a corresponding connection apparatus according to a predetermined program. Optionally, the touch panel931may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus is configured to detect a touch position of the user, detect a signal generated due to the touch operation, and transmit the signal to the touch controller; and the touch controller is configured to receive the touch information from the touch detection apparatus, convert the touch information into contact coordinates, send the contact coordinates to the processor950, and receive and execute commands from the processor950. 
In addition, the touch panel931may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves. In addition to the touch panel931, the input unit930may further include other input devices932. The input devices932may include, but not limited to, one or more of a physical keyboard, a function button (such as a volume control button and a switch buttons), a trackball, a mouse, or a joystick. The display unit940may be configured to display information inputted by the user or information provided to the user and various menu interfaces of the UE900. The display unit940may include a display panel941. Optionally, the display panel941may be configured in the form of a liquid crystal display (LCD) panel or an organic light-emitting diode (OLED). It should be noted that the touch panel931may cover the display panel941to form a touch display screen, and when the touch display screen detects a touch operation on or near it, the touch operation is transmitted to the processor950to determine the type of the touch event, and then the processor950provides a corresponding visual output on the touch display screen based on the type of touch event. The processor950is the control center of the UE900, which connects various parts of the entire mobile phone by using various interfaces and wirings, performs functions of the UE900and process data by running or executing software programs and/or modules stored in a first memory921and invoking data stored in a second memory922, thereby performing overall monitoring on the UE900. Optionally, the processor950may include one or more processing units. In an embodiment of the present disclosure, by calling a software program and/or a module stored in the first memory921, and/or data stored in the second memory922, the computer program is executed by the processor950to perform the following steps:receiving and saving BWP configuration information transmitted by a base station, where the BWP configuration information includes BWP identification information;receiving a BWP activation command transmitted by the base station; andperforming BWP activation with a BWP identifier indicated by the BWP activation command. Optionally, the BWP configuration information is used to indicate a default BWP corresponding to each component carrier; and the computer program is executed by the processor950to further perform the following steps:receiving a component carrier activation signaling transmitted by the base station; andactivating, based on the component carrier activation signaling, a first target component carrier and a default BWP corresponding to the first target component carrier. Optionally, a manner where the BWP configuration information indicates the default BWP corresponding to each component carrier includes any of the following manners:for a component carrier configured with a BWP set, one bit is used to indicate whether each BWP in the BWP set is the default BWP;for a component carrier configured with a BWP set, a BWP ranked first or last in the BWP set is the default BWP;for a component carrier configured with a BWP set, each BWP has an index value, and a BWP with an initial index value in the BWP set is the default BWP;for a component carrier configured with a BWP set, a BWP with a maximum bandwidth or a minimum bandwidth in the BWP set is the default BWP; orfor a component carrier configured with a BWP set, a BWP with a maximum starting frequency or a minimum starting frequency in the BWP set is the default BWP. 
Optionally, the computer program is executed by the processor950to further perform the following steps:receiving a BWP deactivation command transmitted by the base station, where the BWP deactivation command is configured to adjust an activated BWP on a second target component carrier to a deactivated state; anddeactivating the second target component carrier, or deactivating the second target component carrier and a corresponding BWP on the second target component carrier, in a case that the activated BWP to be adjusted on the second target component carrier is a last activated BWP. Optionally, the computer program is executed by the processor950to further perform the following steps:receiving a component carrier deactivation signaling transmitted by the base station; anddeactivating, based on the component carrier deactivation signaling, a third target component carrier, or a third target component carrier and all activated BWPs on the third target component carrier. Optionally, the BWP activation command is configured to adjust a deactivated BWP on a fourth target component carrier to an activated state; and the fourth target component carrier and a BWP designated to be activated through the BWP activation command are activated, in a case that the fourth target component carrier is in an inactive state. Optionally, the BWP configuration information includes BWP configuration information of a primary base station and BWP configuration information of a secondary base station; and when the BWP configuration information of the secondary base station is received, the BWP configuration information of the secondary base station is used to indicate a component carrier where a primary cell of the secondary base station is located, and default BWP information corresponding to each component carrier. The computer program is executed by the processor950to further perform the following steps:activating a default BWP corresponding to the component carrier where the primary cell of the secondary base station is located. Optionally, the computer program is executed by the processor950to further perform the following steps:receiving a primary cell handover command transmitted by the base station, where the primary cell handover command is configured to indicate a target primary cell as a handover cell and a default BWP on the target primary cell; andperforming a cell handover based on the default BWP on the target primary cell and the target primary cell. In view of the above, in the embodiments of the present disclosure, BWP configuration information transmitted by a base station is received and saved, where the BWP configuration information includes BWP identification information; a BWP activation command transmitted by the base station is received; and a BWP is activated based on a BWP identifier indicated by the BWP activation command. BWPs are numbered, the activation command indicates a BWP that needs to be activated, and the BWP is activated by a UE, thereby improving the flexibility of BWP activation control. Referring toFIG.10,FIG.10is a schematic structural diagram of a base station according to an embodiment of the present disclosure, which can implement details of the method for controlling activation of a bandwidth part (BWP) in the foregoing embodiments, and can achieve the same effects. As shown inFIG.10, the base station1000includes: a processor1001, a transceiver1002, a memory1003, a user interface1004, and a bus interface. 
The processor1001is configured to read a program in the memory1003and execute the following processes:transmitting BWP configuration information to a user equipment, where the BWP configuration information includes BWP index information; andtransmitting a BWP activation command to the user equipment, where the BWP activation-related command is configured for the user equipment to perform BWP activation with a BWP index indicated by the BWP activation-related command. InFIG.10, a bus architecture may include any number of interconnected buses and bridges, and may be specifically configured to couple various circuits including one or more processors represented by the processor1001and storages represented by the memory1003. The bus architecture may also couple various other circuits such as peripherals, voltage regulators and power management circuits, which are well known in the art. Therefore, a detailed description thereof is omitted herein. A bus interface provides an interface. The transceiver1002may be multiple elements, i.e., including a transmitter and a receiver, to allow for communication with various other apparatuses on the transmission medium. For different user equipment, the user interface1004may also be an interface capable of externally or internally connecting the required devices, which include, but are not limited to, a keypad, a display, a speaker, a microphone, a joystick, and the like. The processor1001is responsible for the control of the bus architecture and general processing, and the memory1003may store data used by the processor1001in performing operations. Optionally, the BWP configuration information is used to indicate a default BWP corresponding to each component carrier; and the program is executed by the processor1001to further perform the following steps:transmitting a component carrier activation-related signaling to the user equipment, where the component carrier activation-related signaling is used for the user equipment to activate a first target component carrier and a default BWP corresponding to the first target component carrier. Optionally, the BWP configuration information indicates a default BWP corresponding to each component carrier as follows:for a component carrier configured with a BWP set, one bit is used to indicate whether each BWP in the BWP set is the default BWP;for a component carrier configured with a BWP set, a BWP ranked first or last in the BWP set is the default BWP;for a component carrier configured with a BWP set, each BWP has an identification value, and a BWP with an initial identification value in the BWP set is the default BWP;for a component carrier configured with a BWP set, a BWP with a maximum or minimum bandwidth in the BWP set is the default BWP; orfor a component carrier configured with a BWP set, a BWP with a maximum or minimum starting frequency in the BWP set is the default BWP. 
Optionally, the program is executed by the processor1001to further perform the following steps:transmitting a BWP deactivation-related command to the user equipment,where the BWP deactivation-related command is configured to instruct the user equipment to adjust an activated BWP on a second target component carrier to a deactivated state; andthe BWP deactivation-related command is configured to instruct the user equipment to deactivate the second target component carrier, or deactivate the second target component carrier and a corresponding BWP on the second target component carrier, in a case that the activated BWP to be adjusted is a last activated BWP on the second target component carrier. Optionally, the program is executed by the processor1001to further perform the following steps:transmitting a component carrier deactivation-related signaling to the user equipment, where the component carrier deactivation-related signaling is configured to instruct the user equipment to deactivate a third target component carrier, or to deactivate a third target component carrier and all activated BWPs on the third target component carrier. Optionally, the BWP activation-related command is configured to adjust a deactivated BWP on a fourth target component carrier to an activated state; and the BWP activation-related command is configured to instruct the user equipment to activate the fourth target component carrier and a BWP designated to be activated through the BWP activation-related command, in a case that the fourth target component carrier is in an inactive state. Optionally, the BWP configuration information includes BWP configuration information of a primary base station and BWP configuration information of a secondary base station; and the BWP configuration information of the secondary base station is used to indicate a component carrier where a primary cell of the secondary base station is located, and default BWP information corresponding to each component carrier. Optionally, the program is executed by the processor1001to further perform the following steps:transmitting a primary cell handover command to the user equipment, where the primary cell handover command is configured to indicate a target primary cell as a handover cell and a default BWP on the target primary cell. In view of the above, in the embodiments of the present disclosure, BWP configuration information transmitted by a base station is received and saved, where the BWP configuration information includes BWP index information; a BWP activation-related command transmitted by the base station is received; and a BWP is activated based on a BWP index indicated by the BWP activation-related command. BWPs are numbered, the activation-related command indicates a BWP that needs to be activated, and the BWP is activated by a UE, thereby improving the flexibility of BWP activation control. An embodiment of the present disclosure further provides a computer-readable storage medium, having a computer program stored thereon. The computer program is executed by a processor to implement the steps in a method for controlling activation of a bandwidth part (BWP) in any one of the foregoing method embodiments. A person skilled in the art may be aware that the exemplary units and algorithm steps described in connection with the embodiments disclosed in this specification may be implemented by electronic hardware or a combination of computer software and electronic hardware. 
Whether the functions are performed by hardware or software depends on particular applications and design constraint conditions of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of the disclosure. It may be clearly understood by a person skilled in the art that, for ease of description and conciseness, for a detailed working process of the foregoing system, apparatus, and unit, reference may be made to a corresponding process in the foregoing method embodiments, and details are not described herein again. In the several embodiments provided in the present application, it should be understood that the disclosed device and method may be implemented in other manners. For example, the described device embodiment is merely exemplary. For example, the unit division is merely logical function division and may be other division in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be neglected or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces. The indirect couplings or communication connections between the devices or units may be implemented in electric, mechanical, or other forms. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one position, or may be distributed on a plurality of network units. A part or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present disclosure. In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units may be integrated into one unit. If the functions are implemented in a form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, essential part or the part contributing to the prior art of the technical solutions of the present disclosure, or a part of the technical solutions may be implemented in a form of a software product. The software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or a part of the steps of the methods described in the embodiments of the disclosure. The foregoing storage medium includes any medium that may store program code, such as a universal serial bus (USB) flash drive, a mobile hard disk, an ROM, an RAM, a magnetic disk, or an optical disc. The above descriptions are merely specific implementations of the present disclosure, but the scope of the present disclosure is not limited thereto. Any modifications and substitutions easily made by a person of ordinary skill in the art without departing from the technical principle of the present disclosure shall fall within the scope of the present disclosure. Therefore, the scope of the present disclosure shall be determined by the claims.
70,967
11863500
DETAILED DESCRIPTION OF EMBODIMENTS Hereinafter, a communication apparatus, a communications system, and a communication method according to embodiments of the present disclosure will be described with reference to the drawings. Although main components of the communication apparatus and the communications system will be mainly described below, the communication apparatus and the communications system may have components and functions not shown or described. The following description does not exclude the components or functions not shown or described. A communications system according to an embodiment of the present disclosure performs serial communication between two SerDeses.FIG.1is a block diagram showing a schematic configuration of a communications system1including two SerDeses, i.e., a SerDes200(SerDes #1) and a SerDes400(SerDes #2).FIG.1shows an example in which the SerDes200and the SerDes400perform serial communication with each other. The SerDes200and the SerDes400, which are high-speed serial interface devices, are connected to each other by a cable300having a length of several meters to more than ten meters. An ECU100is connected to the SerDes200, and a peripheral device500(Peripheral #1) and a peripheral device600(Peripheral #2) are connected to the SerDes400. The ECU100performs processing of receiving main data such as a video signal and transmitting the received data, and transmits/receives an SPI (Serial Peripheral Interface) signal, an I2C (Inter-Integrated Circuit) signal, a GPIO (general purpose IO) signal, or the like to control the whole system. Meanwhile, the peripheral device500connected to the SerDes400transmits high-speed and large-capacity main data such as a video signal, and transmits/receives a control signal by SPI or GPIO. Further, the peripheral device600connected to the SerDes400transmits/receives a low-speed signal such as observation data and a control signal by I2C or GPIO. The communications system including the two SerDeses200and400shown inFIG.1is provided in various apparatuses such as an in-vehicle camera module. As an interface technology for serial communication between the two SerDeses200and400, FPD-LINK has been known. In addition, in a high-speed serial interface standard organization, Automotive SerDes Alliance (ASA), the standardization work of a high-speed serial interface technology for automobiles is currently in progress. The difference between FPD-LINK and ASA is that ASA uses Time Division Duplex (TDD) while FPD-LINK uses Frequency Division Duplex (FDD) as a method for realizing two-way communication. The packet transmission timing and the frequency band of FDD are illustrated in the lower left ofFIG.1, and the packet transmission timing and the frequency band of ASA are illustrated in the lower right ofFIG.1. In FDD, a Down Link and an UP Link use different frequency bands to transmit/receive packets in parallel during an overlapping period. Meanwhile, in TDD, a Down Link and an UP Link use an overlapping frequency band to transmit/receive packets in time division. FIG.2is a block diagram of the communications system1embodying the internal configuration of the SerDeses200and400.FIG.2shows an example in which an application includes an SPI signal, an I2C signal, and a GPIO signal. 
As shown inFIG.2, the SerDes200includes a PHY unit (PHY block)200-1, a LINK unit (LINK block)200-2, a plurality of encapsulators (Application Stream Encapsulators)200-3, a plurality of de-encapsulators (Application Stream De-encapsulators)200-4, and a control register (Control registers)200-5. The PHY unit200-1includes an UP Link transmission unit (UP Link Tx)200-1-1and a Down Link reception unit (Down Link Rx)200-1-2. The LINK unit200-2includes a frame construction unit (Frame Constructor)200-2-1, a frame deconstruction unit (Frame De-constructor)200-2-2, and an OAM (Operation Administration Maintenance) unit200-2-3. The ECU100generates an SPI signal, an I2C signal, and a GPIO signal, which are control signals, as necessary for processing, and outputs the generated signal to the SerDes200. The plurality of encapsulators200-3in the SerDes200is provided for each application (e.g., each of the SPI signal, the I2C signal, and the GPIO signal). Each of the encapsulators200-3generates a corresponding application packet. The encapsulator200-3for an SPI signal receives an SPI signal from the ECU100and generates an application packet including the SPI signal. The encapsulator200-3for an I2C signal receives an I2C signal from the ECU100and generates an application packet including the I2C signal. The encapsulator200-3for a GPIO signal receives a GPIO signal from the ECU100and generates an application packet including the GPIO signal. The plurality of de-encapsulators200-4in the SerDes200is provided for each application. The de-encapsulator200-4for main data restores main data from the received application packet and transmits the restored main data to the ECU100. The de-encapsulator200-4for an SPI signal restores an SPI signal from the received application packet and transmits the restored SPI signal to the ECU100. The de-encapsulator200-4for an I2C signal restores an I2C signal from the received application packet and transmits the restored I2C signal to the ECU100. The de-encapsulator200-4for a GPIO signal restores a GPIO signal from the received application packet and transmits the restored GPIO signal to the ECU100. FIG.3is a diagram showing a configuration of an application packet generated by the plurality of encapsulators200-3, a link frame generated by the LINK unit200-2, and a transmission symbol to be transmitted by the PHY unit200-1. As shown in Part (3-1) ofFIG.3, the application packet includes a packet header and application packet data. The LINK unit200-2generates a container for each of the plurality of encapsulators200-3, and generates a link frame including a plurality of containers. As shown in Part (3-2) ofFIG.3, the container includes a container header and a container payload. The container header includes address information of a device on the reception side supplied from the control register200-5and address information of the de-encapsulator200-4. The link frame generated by the LINK unit200-2is supplied to the UP Link transmission unit200-1-1of the PHY unit200-1. The UP Link transmission unit200-1-1adds a sync header that is necessary for synchronization processing on the reception side to the link frame ((3-3) inFIG.3) to generate a transmission frame ((3-4) inFIG.3), and then performs modulation processing such as binary transmission (NRZ) and quaternary transmission (PAM4) to convert the transmission frame into a transmission symbol ((3-5) inFIG.3) and outputs the obtained transmission symbol to the cable300. This is the transmission processing on the UP Link side. 
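Purely as an illustration of the layering in FIG.3, the encapsulation steps might be sketched as follows; the byte layouts and field widths are assumptions and do not reflect the actual ASA frame format.

```python
# Hypothetical sketch of the encapsulation layering described above
# (application packet -> container -> link frame -> transmission frame).
# Field names and byte layouts are illustrative only.

def make_application_packet(packet_header: bytes, packet_data: bytes) -> bytes:
    return packet_header + packet_data                # (3-1): packet header + application packet data

def make_container(dest_device_addr: int, dest_deencap_addr: int, app_packet: bytes) -> bytes:
    container_header = bytes([dest_device_addr, dest_deencap_addr])  # addressing information
    return container_header + app_packet              # (3-2): container header + container payload

def make_link_frame(containers: list) -> bytes:
    return b"".join(containers)                       # (3-3): link frame = sequence of containers

def make_transmission_frame(sync_header: bytes, link_frame: bytes) -> bytes:
    return sync_header + link_frame                   # (3-4): sync header prepended for the receiver
```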
One transmission frame ((3-4) inFIG.3) is transmitted within one TDD time slot of a TDD method. In the TDD method, Down Link transmission and UP Link transmission are performed once at different timings within one TDD burst period. The above-mentioned transmission frame is transmitted within, for example, a Down Link transmission period. For example, in the case where the UP Link transmits only a control signal and the Down Link transmits a video signal including a control signal, or the like, the Down Link basically occupies a larger amount of time and the time ratio is 1:several tens. FIG.4is a block diagram showing the internal configuration of the frame construction unit200-2-1in the SerDes200inFIG.2. The frame construction unit200-2-1includes a plurality of container makers (container makers)200-2-1-1corresponding to the plurality of encapsulators200-3, a multiplexer200-2-1-3, and a scheduler200-2-1-2. Each of the encapsulators200-3includes a packet maker (Packet maker)200-3-1and a buffer200-3-3. The application packet generated by the packet maker200-3-1is once stored in the buffer200-3-3and then input to the corresponding container maker200-2-1-1in the frame construction unit200-2-1in accordance with an instruction from the scheduler200-2-1-2. Each of the container makers200-2-1-1receives a corresponding application packet from the corresponding encapsulator200-3or the OAM unit200-2-3to generate a corresponding container. The container generated by each of the container makers200-2-1-1is input to the multiplexer200-2-1-3. The scheduler200-2-1-2outputs a timing adjustment signal indicating at which timing each container is output. The multiplexer200-2-1-3generates a link frame including a plurality of containers on the basis of the timing adjustment signal from the scheduler200-2-1-2. At system startup, a schedule according to the transmission band required by an application to be transmitted by the ECU100is transferred to the control register200-5by some means not shown inFIG.2. The control register200-5supplies the schedule to the scheduler200-2-1-2. Therefore, control is performed such that the container for transmitting wideband information such as a video signal is selected more often by the multiplexer200-2-1-3per unit time and a low-speed signal such as GPIO is selected less often. Similarly, an OAM signal including the schedule generated by the ECU100is supplied also to the scheduler in a frame construction unit400-2-1of the SerDes400via the UP Link. Next, the reception processing on the UP Link side of the SerDes400will be described. As shown inFIG.2, the SerDes400includes a PHY unit (PHY block)400-1, a LINK unit (LINK block)400-2, a plurality of encapsulators (Application Stream Encapsulator)400-3, a plurality of de-encapsulators (Application Stream De-encapsulator)400-4, and a control register (Control registers)400-5. The PHY unit400-1includes a Down Link transmission unit (Down Link Tx)400-1-1and an UP Link reception unit (UP Link Rx)400-1-2. The LINK unit400-2includes a frame construction unit400-2-1, a frame deconstruction unit400-2-2, and an OAM unit400-2-3. The encapsulator400-3for main data receives main data from the peripheral device500and generates an application packet including the main data. The encapsulator400-3for an SPI signal receives an SPI signal from the peripheral device500and generates an application packet including the SPI signal. 
The encapsulator400-3for a GPIO signal receives a GPIO signal from the peripheral device500and generates an application packet including the GPIO signal. The de-encapsulator400-4for an I2C signal restores an I2C signal from the received packet and transmits the restored I2C signal to the peripheral device600. The de-encapsulator400-4for a GPIO signal restores a GPIO signal from the received packet and transmits the restored GPIO signal to the peripheral device600. The SerDes400generates a clock synchronized with the symbol frequency using the sync header added to the top of the transmission symbol ((3-5) inFIG.3) received from the SerDes200to reproduce the transmission frame ((3-4) inFIG.3). The sync header is removed from the reproduced transmission frame ((3-4) inFIG.3) to generate the link frame ((3-3) inFIG.3) and the generated link frame is input to the frame deconstruction unit400-2-2in the LINK unit400-2. The frame deconstruction unit400-2-2divides the link frame ((3-3) inFIG.3) into containers ((3-2) inFIG.3), acquires address information of the de-encapsulator400-4from the container header of each container, and outputs, to the corresponding de-encapsulator400-4, the application packet ((3-1) inFIG.3) included in the container payload of the container. Each of the de-encapsulators400-4reconstructs, on the basis of the packet header in the corresponding application packet, the application packet data in the application packet into the format of each application and outputs the obtained data to the corresponding peripheral device500or600. Since the processing on the Down Link side in which information is transmitted from the peripheral device500or600to the ECU100is similar to the processing on the UP Link side, description thereof is omitted. In the ASA standard of a TDD method, the number of times per unit time and the transmission order for transmitting a container ((3-2) inFIG.3) that stores each application to be transmitted are determined in advance at the time of system design. As a result, the latency for each application becomes substantially constant and problems such as transmission jitter of an application are avoided. This is very convenient in the case where large-capacity data such as a video signal is constantly transmitted. Meanwhile, the transmission band required by each of an SPI signal, an I2C signal, and a GPIO signal for mainly controlling a peripheral device may be narrower than that of the video signal. FIG.5is a diagram showing the transmission timing of the SerDes200inFIG.2.FIG.5shows an example in which the frame construction unit200-2-1on the UP Link side sets the transmission schedule of each application packet with 6 TDD time slots as one period. As shown inFIG.5, the time of one switching between the Down Link and the UP Link is the transmission time unit of TDD (1 TDD burst period=1 TDD time slot). In the example ofFIG.5, one application packet is transmitted for each TDD time slot in the UP Link ((5-1) inFIG.5). In this case, the application packet including an OAM signal is transmitted once every 6 TDD time slots, an application packet including an SPI signal is transmitted four times every 6 TDD time slots, and an application packet including a GPIO signal and an application packet including an I2C signal are transmitted once every 12 TDD time slots ((5-2) inFIG.5). 
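The fixed schedule in this example can be pictured as a simple lookup table; the 12-slot pattern below is only an illustration consistent with the counts given above (OAM once and SPI four times per 6 slots, GPIO and I2C once per 12 slots), and the exact slot positions are hypothetical.

```python
# Illustrative fixed UP Link schedule consistent with the example above.
# Slot numbering and the placement of GPIO/I2C within the period are assumptions.

SCHEDULE_PERIOD = 12
SLOT_ASSIGNMENT = {
    0: "OAM", 1: "SPI", 2: "GPIO", 3: "SPI", 4: "SPI", 5: "SPI",
    6: "OAM", 7: "SPI", 8: "I2C",  9: "SPI", 10: "SPI", 11: "SPI",
}

def application_for_slot(tdd_time_slot: int) -> str:
    """Return which application packet owns a given TDD time slot."""
    return SLOT_ASSIGNMENT[tdd_time_slot % SCHEDULE_PERIOD]
```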
As shown inFIG.2toFIG.4, the SPI signal, the I2C signal, and the GPIO signal to be input to the SerDes200are once input to the encapsulator200-3, converted into a corresponding application packet, and then, buffered and input to the frame construction unit200-2-1in accordance with the read timing of the scheduler200-2-1-2. The read timing determined by the scheduler200-2-1-2coincides with the transmission schedule of each application shown in Parts (5-1) and (5-2) ofFIG.5. For example, application packets SPI #m and #m+1 of an SPI signal are respectively transmitted in TDD time slots #9and #10((5-3) inFIG.5), and application packets GPI #n and #n+1 of a GPIO signal ((5-5) inFIG.5) are respectively transmitted in TDD time slots #14and #26((5-2) inFIG.5). However, since the I2C signal is in an idle state at this time point ((5-8) inFIG.5), there is no need to transmit an application packet of an I2C signal ((5-8) inFIG.5). FIG.6is a diagram showing the transmission timing in the case where an I2C signal has been input from the ECU100to the SerDes200. In the case ofFIG.6, an application packet of an I2C signal is transmitted in a TDD time slot #20. Now, assumption is made that a signal such as an interrupt signal, which requires a small transmission band but whose change in the signal state cannot be predicted in advance, is transmitted using a GPIO signal. As shown inFIG.5, assumption is made that since the band required for transmitting a GPIO signal is small, TDD time slots are assigned such that the GPIO signal is transmitted once every 12 TDD time slots. In this case, the change in the GPI signal ((5-6) inFIG.5) input from the ECU100is sampled at intervals of 12 TDD time slots and converted into packets GPI #n and #n+1 ((5-5) inFIG.5) of a GPIO signal, and the packets are respectively transmitted in TDD time slots #14and #26((5-2) inFIG.5). The change in the GPI signal ((5-6) inFIG.5) input from the ECU100has occurred near TDD time slots #2to #3, but the timing at which this change is packetized and transmitted is a TDD time slot #26, which causes transmission latency. For example, in the case where 1 TDD burst period is approximately 30 usec, transmission latency of approximately 690 usec occurs. In order to reduce the transmission latency of an application, it is only necessary to increase the frequency of assigning a TDD time slot to the application. However, since the application does not constantly transmit a signal and transmits a signal only as necessary, the efficiency of using the TDD time slot deteriorates in the case where the required transmission band is small. In this regard, the communication apparatus and the communications system according to the embodiment of the present disclosure are characterized in that the transmission efficiency is improved while minimizing the transmission latency when transmitting, by a TDD transmission method, a plurality of applications to which a transmission schedule has been assigned in advance. First Embodiment FIG.7is a block diagram of the frame construction unit200-2-1according to a first embodiment of the present disclosure. InFIG.7, components common to those of the frame construction unit200-2-1inFIG.4are denoted by the same reference symbols, and the differences will be mainly described below. Similarly toFIG.4, the frame construction unit200-2-1inFIG.7includes a plurality of container makers (container makers)200-2-1-1corresponding to the plurality of encapsulators200-3, the multiplexer200-2-1-3, and a scheduler200-2-1-4. 
In the frame construction unit200-2-1inFIG.7, the operation of the scheduler200-2-1-4is different from that of the scheduler200-2-1-2inFIG.4. Further, each of the encapsulators200-3shown inFIG.7stores, in the corresponding buffer200-3-3, an application packet generated by the corresponding packet maker200-3-1, and then outputs a data ready signal. The data ready signal is a signal indicating that a valid application packet is stored in the corresponding buffer200-3-3. The data ready signal from each of the encapsulators200-3is input to the scheduler200-2-1-4. The scheduler200-2-1-4controls, on the basis of the data ready signal from each of the encapsulators200-3, the order of causing the container generated by each of the container makers200-2-1-1to be included in the transmission frame. FIG.8is a flowchart showing the processing operation of the scheduler200-2-1-4inFIG.7.FIG.9is a diagram showing the transmission timing in the UP Link according to this embodiment. Hereinafter, the processing operation of the communication apparatus and the communications system according to this embodiment will be described on the basis ofFIG.7toFIG.9. Similarly to the scheduler200-2-1-2inFIG.4, the scheduler200-2-1-4inFIG.7determines in advance, before the communications system starts transmission, which application packet is assigned to which TDD time slot and transmitted, based on the schedule management from the ECU100via the control register200-5. The ECU100or the control register200-5prepares a specific TDD time slot (shared time slot) in the scheduler200-2-1-4, and assigns, to the shared time slot, not one application packet but a plurality of application packets. A signal such as a control signal, which has a relatively small transmission band and a low transmission frequency, is assigned to this application packet transmitted in the specific shared time slot. Further, the ECU100or the control register200-5sets, for the scheduler200-2-1-4, the output priority of the application packet assigned to the shared time slot. How many shared time slots are prepared, which application packet is assigned, and how the priority is set are changed in accordance with the system and the operation situation. The priority may be determined at the time of system design, for example. That is, the order in which the priority is periodically changed may be determined at the time of system design and stored in a memory, a register, or the like (not shown). Alternatively, a user may set the priority or the order in which the priority is changed using an updatable value of a register. In this case, the user can change the priority or the order in which the priority is changed by updating the value of the register at an appropriate timing. The scheduler200-2-1-4determines whether or not the TDD time slot to be scheduled is the shared time slot (Step S1). For example, TDD time slots #2, #8, #14, #20, and #26inFIG.9are determined to be the shared time slots. In the case of the shared time slot, whether or not the application packet having the highest priority is stored in the corresponding buffer200-3-3is determined by the data ready signal (Step S2). In the case where the application packet having the highest priority is stored in the buffer200-3-3, it is determined that the application packet has been prepared, a container is generated by the container maker200-2-1-1corresponding to the application packet stored in the buffer200-3-3, and the generated container is selected by the multiplexer200-2-1-3to form a link frame (Step S3). 
After that, the priority of the shared time slot is changed by one level (Step S4). For example, in the case where there are application packets A, B, and C, the priority of each of five shared time slots #2, #8, #14, #20, and #26inFIG.9is changed as follows. Note that the following is just an example, and the order in which the priority is changed is arbitrary.
Priority of shared time slot #2: A→B→C
Priority of shared time slot #8: B→C→A
Priority of shared time slot #14: C→A→B
Priority of shared time slot #20: A→B→C
Priority of shared time slot #26: B→C→A
When the processing of Step S4inFIG.8is finished, the processing of Step S1and subsequent Steps is repeated. More specifically, for example, inFIG.9, the priority of the TDD time slot #2satisfies the relationship of GPIO>I2C, the priority of the next TDD time slot #8satisfies the relationship of I2C>GPIO, and the priority of the next TDD time slot #14satisfies the relationship of GPIO>I2C similarly to the original. As a result, it is possible to guarantee the transmission band originally assigned to the application packet. As described above, the priority of the shared time slot is switched in order for every one period. When it is determined in Step S2that the application packet is not stored in the buffer200-3-3, whether or not there is an application packet having the next highest priority is determined (Step S5). When it is determined that there is an application packet having the next highest priority, whether or not the determined application packet is stored in the corresponding buffer200-3-3is determined by the data ready signal (Step S6). When it is determined that the determined application packet is stored in the corresponding buffer200-3-3, the processing proceeds to Step S3. For example, although the application packet including an I2C signal has the highest priority in the TDD time slot #8inFIG.9, the I2C signal is null at this time point and the application packet including an I2C signal is not stored in the corresponding buffer200-3-3. For this reason, the determination in Step S2inFIG.8is NO, the processing proceeds to Step S5, and whether or not there is an application packet having the next highest priority is determined. In the TDD time slot #8inFIG.9, the GPIO signal has the next highest priority after the I2C signal. At this time point, the application packet including a GPIO signal is stored in the corresponding buffer200-3-3(GPI #n+1, Part (9-5) ofFIG.9). Accordingly, the container maker200-2-1-1corresponding to this application packet generates a container including this application packet. Meanwhile, in the case where it is determined in Step S5that there is no application packet having the next highest priority, the processing proceeds to Step S4. For example, in the TDD time slot #14inFIG.9, the application packet including a GPIO signal has the highest priority. At this time point, an application packet to be transmitted is not stored in the buffer200-3-3in the encapsulator200-3for a GPIO signal. For this reason, the determination is NO in Step S2inFIG.8, the processing proceeds to Step S5, and whether or not there is an application packet having the next highest priority is determined. In the TDD time slot #14inFIG.9, the I2C signal has the next highest priority. At this time point, the I2C signal is null and a valid application packet is not stored in the corresponding buffer200-3-3. For this reason, the determination is NO in Step S5, and the priority of the shared time slot is switched in Step S4.
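The rotation of Step S4 shown in the example above (A→B→C, B→C→A, C→A→B, and so on) can be expressed as a simple cyclic shift of the priority list; the Python sketch below reproduces the table for shared time slots #2, #8, #14, #20, and #26. Treat it as one possible realisation, since the disclosure notes that the rotation order is arbitrary.

    apps = ["A", "B", "C"]                 # application packets sharing the time slots
    shared_slots = [2, 8, 14, 20, 26]      # shared TDD time slots in one period (FIG.9)

    def priority_for(nth_shared_slot):
        # Advance the starting point of the priority list by one per shared slot (Step S4).
        offset = nth_shared_slot % len(apps)
        return apps[offset:] + apps[:offset]

    for i, slot in enumerate(shared_slots):
        print(f"shared slot #{slot}: {'>'.join(priority_for(i))}")
    # shared slot #2: A>B>C, #8: B>C>A, #14: C>A>B, #20: A>B>C, #26: B>C>A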
When it is determined in Step S1that the TDD time slot to be scheduled is not the shared time slot, the scheduler200-2-1-4selects the designated application packet, a container corresponding to the selected application packet is generated by the container maker200-2-1-1, and the generated container is selected by the multiplexer200-2-1-3to form a link frame (Step S7). When the processing of Step S7is finished, the processing of Step S1and subsequent Steps is repeated. As described above, in the first embodiment, when serial transmission is performed by a TDD method, one period including a plurality of TDD time slots includes a shared time slot capable of transmitting a packet including one of a plurality of types of serial signals. Since a plurality of types of application packets each including an application signal having a low transmission frequency is transmitted in the shared time slot and the priority when transmitting the plurality of types of application packets in the shared time slot is changed in order, it is possible to transmit the plurality of types of application packets with equal transmission latency. Further, by transmitting a plurality of types of application packet each having a low transmission frequency in the shared time slot, it is possible to increase the number of TDD time slots to be assigned to the application packet having a high transmission frequency and further reduce the transmission latency of the application packet having a high transmission frequency. Therefore, in accordance with this embodiment, it is possible to efficiently transmit a plurality of types of serial signals corresponding to a plurality of applications by a TDD method. Second Embodiment A second embodiment is different from the first embodiment in the configuration of the frame construction unit200-2-1in the LINK unit200-2and the surroundings thereof. FIG.10is a block diagram showing a configuration of the frame construction unit200-2-1according to the second embodiment and the surroundings thereof. In the second embodiment, a packet selector200-6is disposed between the plurality of encapsulators200-3and the frame construction unit200-2-1. The packet selector200-6executes part of functions of the scheduler200-2-1-4inFIG.7. Specifically, the packet selector200-6is connected to two or more encapsulators200-3each transmitting an application packet in the shared time slot. Each of the two or more encapsulators200-3connected to the packet selector200-6includes the packet maker200-3-1and the buffer200-3-3. Each of the encapsulators200-3outputs the data ready signal when an application packet is stored in the corresponding buffer200-3-3. The data ready signal is input to the packet selector200-6. The packet selector200-6selects, on the basis of the data ready signal from the two or more encapsulators200-3each transmitting an application packet in the shared time slot, an application packet to be transmitted in the shared time slot. The data ready signal is a signal indicating that the corresponding application packet is stored in the corresponding buffer200-3-3, and is output from the corresponding encapsulator200-3. The application packet transmitted in the shared time slot is input to the frame construction unit200-2-1. Similarly toFIG.7, the frame construction unit200-2-1inFIG.10includes a plurality of container makers200-2-1-1(Container makers) corresponding to the plurality of encapsulators200-3or the OAM unit200-2-3, the multiplexer200-2-1-3, and the scheduler200-2-1-5. 
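A minimal sketch of the role of the packet selector200-6described above is given below; it assumes the same data-ready and priority logic as the first embodiment and simply forwards the chosen packet, tagged with packet information, to a single container maker. The class and method names are illustrative assumptions, not the actual implementation.

    class PacketSelector:
        """Selects, per shared time slot, one ready application packet (sketch)."""

        def __init__(self, buffers, priority_for):
            self.buffers = buffers            # {application name: list of buffered packets}
            self.priority_for = priority_for  # callable: shared-slot index -> priority list

        def on_read_timing(self, nth_shared_slot):
            """Called when the scheduler signals that a shared time slot is due."""
            for app in self.priority_for(nth_shared_slot):
                if self.buffers[app]:                       # data ready signal asserted
                    packet = self.buffers[app].pop(0)
                    # The packet is handed to the single dedicated container maker together
                    # with packet information identifying the originating application.
                    return {"application": app, "packet": packet}
            return None                                     # no valid packet for this shared slot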
The scheduler200-2-1-5inFIG.10is different from the scheduler200-2-1-4inFIG.7. Although the frame construction unit200-2-1inFIG.7includes the same number of container makers200-2-1-1as the plurality of encapsulators200-3, the frame construction unit200-2-1inFIG.10includes fewer container makers200-2-1-1than the plurality of encapsulators200-3. More specifically, one of a plurality of application packets to be transmitted in the shared time slot is selected by the packet selector200-6, and the selected application packet is input to the dedicated container maker200-2-1-1. A read timing signal indicating the timing of the shared time slot is input from the scheduler200-2-1-5to the packet selector200-6. The packet selector200-6performs, when a read timing signal is input, a processing operation similar to that in the flowchart shown inFIG.8. The packet selector200-6transmits, when selecting an application packet to be transmitted in the shared time slot, the application packet to the corresponding container maker200-2-1-1together with packet information indicating which application the application packet corresponds to. The corresponding container maker200-2-1-1generates, on the basis of the received packet information, a container header together with a container payload including the received application packet to complete a container. The scheduler200-2-1-5in the frame construction unit200-2-1selects, on the basis of setting information of the control register200-5, a plurality of containers generated by the plurality of container makers200-2-1-1one by one to generate a link frame. As described above, in the second embodiment, since the packet selector200-6is provided between the plurality of encapsulators200-3and the frame construction unit200-2-1to select an application packet to be transmitted in the shared time slot, it is possible to reduce the number of the container makers200-2-1-1in the frame construction unit200-2-1. Further, since the packet selector200-6performs part of the processing of schedule management of the scheduler200-2-1-5, it is possible to reduce the processing load of the scheduler200-2-1-5and simplify the internal configuration of the frame construction unit200-2-1. Note that the present technology may also take the following configurations. (1) A communication apparatus, including: a communication unit that periodically transmits, with an interval assigned by TDD (Time Division Duplex) being one TDD time slot and a plurality of TDD time slots being one period, a plurality of application packets corresponding to a plurality of serial signals generated by a plurality of applications to a communication partner device; anda transmission control unit that changes, for every one period, a priority of part of application packets corresponding to part of two or more applications of the plurality of applications, the part of application packets being transmitted in at least one specific TDD time slot for transmitting the part of application packets, the plurality of TDD time slots including the at least one specific TDD time slot. (2) The communication apparatus according to (1), in whichthe transmission control unit changes, for every one period, the priority of the part of application packets in a preset order or in accordance with user's designation.
(3) The communication apparatus according to (1), in whichthe transmission control unit changes, for every one period, the priority of the part of application packets to be transmitted in the specific TDD time slot in order. (4) The communication apparatus according to (1), in whichthe transmission control unit preferentially transmits a packet corresponding to an application having a higher priority in the specific TDD time slot. (5) The communication apparatus according to any one of (1) to (4), in whichthe transmission control unit checks whether an application having a higher priority has prepared a packet to be transmitted in the specific TDD time slot, and checks, if not prepared, whether or not an application having the next highest priority has prepared a packet to be transmitted in the specific TDD time slot. (6) The communication apparatus according to (5), in whichthe transmission control unit repeats processing of checking whether or not a packet to be transmitted is prepared in descending order of priority until a packet that can be transmitted in the specific TDD time slot is found. (7) The communication apparatus according to any one of (1) to (6), in whichthe transmission control unit stops, where none of the part of applications have prepared a packet to be transmitted in the specific TDD time slot, transmitting a valid packet in the specific TDD time slot. (8) The communication apparatus according to any one of (1) to (7), in whichthe transmission control unit causes the one period to include a dedicated TDD time slot for transmitting a packet including a serial signal generated by an application designated in advance separately from the specific TDD time slot. (9) The communication apparatus according to (8), in whichthe application designated in advance is an application other than the part of applications of the plurality of applications. (10) The communication apparatus according to (8) or (9), in whichthe transmission control unit increases the number of dedicated TDD time slots included in the plurality of periods to be larger than the number of specific TDD time slots. (11) The communication apparatus according to any one of (8) to (10), in whichthe part of applications include at least one of an application that generates a packet of I2C (Inter-Integrated Circuit) communication or an application that generates a packet of GPIO (General Purpose Input/Output) communication, andthe application designated in advance includes at least one of an application that generates a packet of SPI (Serial Peripheral Interface) communication or an application that generates a packet of OAM (Operation, Administration, Maintenance). (12) The communication apparatus according to any one of (1) to (11), in whichthe transmission control unit changes, for every one period, the priority of the part of applications by one level, and makes, where the priority has reached the lowest or the highest, the priority the highest or the lowest in the next period. (13) The communication apparatus according to any one of (8) to (11), in whichthe transmission control unit sets, on a basis of at least one of a transmission frequency or a signal amount of a serial signal generated by each of the plurality of applications, whether to assign the dedicated TDD time slot to the corresponding application or share the specific TDD time slot with another application. 
(14) The communication apparatus according to any one of (1) to (13), further including:a plurality of encapsulators that is provided for each of the plurality of applications, generates a packet including a serial signal generated by the corresponding application, and outputs a ready signal indicating whether or not the packet has been generated; anda frame construction unit that generates, on a basis of a plurality of packets generated by the plurality of encapsulators, a link frame to be transmitted to the communication partner device in the one period, in whichthe frame construction unit includes a scheduler that manages the priority of the specific TDD time slot and determines, on a basis of two or more ready signals generated by two or more encapsulators corresponding to the part of applications, the application that transmits a packet in the specific TDD time slot. (15) The communication apparatus according to (14), in whichthe frame construction unit includesa plurality of container makers that generates a container including a container payload and a container header, the container payload including a packet generated by each of the plurality of encapsulators, anda multiplexer that selects, under control of the scheduler, a plurality of containers generated by the plurality of container makers one by one to generate the link frame. (16) The communication apparatus according to (15), in whichthe number of the plurality of container makers is the same as the number of TDD time slots in the one period. (17) The communication apparatus according to any one of (1) to (13), further including:a plurality of encapsulators that is provided for each of the plurality of applications, generates a packet including a serial signal generated by the corresponding application, and outputs a ready signal indicating whether or not the packet has been generated;a packet selection unit that manages the priority of the specific TDD time slot and selects, on a basis of two or more ready signals generated by two or more encapsulators corresponding to the part of applications, a packet to be transmitted in the specific TDD time slot from two or more packets generated by the two or more encapsulators; anda frame construction unit that generates, on the basis of the packet selected by the packet selection unit and a packet corresponding to an application other than the part of applications of the plurality of applications, a link frame to be transmitted to the communication partner device in the one period, in whichthe frame construction unit includes a scheduler that manages a packet to be transmitted in the plurality of TDD time slots in the one period. (18) The communication apparatus according to (17), in whichthe frame construction unit includesa plurality of container makers that generates a container including a container payload and a container header corresponding to the container payload, the container payload including a packet selected by the packet selection unit and a packet corresponding to an application other than the part of applications of the plurality of applications, anda multiplexer that selects, under control of the scheduler, a plurality of containers generated by the plurality of container makers one by one to generate the link frame. (19) The communication apparatus according to (18), in whichthe number of the plurality of container makers is less than the number of TDD time slots in the one period. 
(20) A communications system, including:a first communication apparatus that transmits and receives a packet by TDD (Time Division Duplex) via a predetermined communication protocol; anda second communication apparatus, in whichthe first communication apparatus includesa communication unit that periodically transmits, with an interval assigned by TDD (Time Division Duplex) being one TDD time slot and a plurality of TDD time slots being one period, a plurality of application packets corresponding to a plurality of serial signals generated by a plurality of applications to a communication partner device, anda transmission control unit that changes, for every one period, a priority of part of application packets corresponding to part of two or more applications of the plurality of applications, the part of application packets being transmitted in at least one specific TDD time slot for transmitting the part of application packets, the plurality of TDD time slots including the at least one specific TDD time slot, andthe second communication apparatus includes a second communication unit that receives a packet transmitted from the first communication apparatus and periodically transmits a packet to the first communication apparatus with the plurality of TDD time slots being the one period. (21) A communication method, including:periodically transmitting, with an interval assigned by TDD (Time Division Duplex) being one TDD time slot and a plurality of TDD time slots being one period, a plurality of application packets corresponding to a plurality of serial signals generated by a plurality of applications to a communication partner device; andchanging, for every one period, a priority of part of application packets corresponding to part of two or more applications of the plurality of applications, the part of application packets being transmitted in at least one specific TDD time slot for transmitting the part of application packets, the plurality of TDD time slots including the at least one specific TDD time slot. Aspects of the present disclosure are not limited to the above-mentioned individual embodiments and also include various modifications that can be conceived by those skilled in the art, and the effects of the present disclosure are not limited to the above-mentioned content. That is, various additions, changes, and partial deletions can be made without departing from the conceptual idea and essence of the present disclosure derived from the content specified in the claims and the equivalents thereof.
40,644
11863501
DETAILED DESCRIPTION The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well known structures and components are shown in block diagram form in order to avoid obscuring such concepts. Several aspects of telecommunication systems will now be presented with reference to various apparatus and methods. These apparatus and methods will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, components, circuits, processes, algorithms, etc. (collectively referred to as “elements”). These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. By way of example, an element, or any portion of an element, or any combination of elements may be implemented as a “processing system” that includes one or more processors. Examples of processors include microprocessors, microcontrollers, graphics processing units (GPUs), central processing units (CPUs), application processors, digital signal processors (DSPs), reduced instruction set computing (RISC) processors, systems on a chip (SoC), baseband processors, field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in the processing system may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software components, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Accordingly, in one or more example embodiments, the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise a random-access memory (RAM), a read-only memory (ROM), an electrically erasable programmable ROM (EEPROM), optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the aforementioned types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer. FIG.1is a diagram illustrating an example of a wireless communications system and an access network100. 
The wireless communications system (also referred to as a wireless wide area network (WWAN)) includes base stations102, UEs104, and an Evolved Packet Core (EPC)160. The base stations102may include macro cells (high power cellular base station) and/or small cells (low power cellular base station). The macro cells include base stations. The small cells include femtocells, picocells, and microcells. The base stations102(collectively referred to as Evolved Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access Network (E-UTRAN)) interface with the EPC160through backhaul links132(e.g., S1 interface). In addition to other functions, the base stations102may perform one or more of the following functions: transfer of user data, radio channel ciphering and deciphering, integrity protection, header compression, mobility control functions (e.g., handover, dual connectivity), inter-cell interference coordination, connection setup and release, load balancing, distribution for non-access stratum (NAS) messages, NAS node selection, synchronization, radio access network (RAN) sharing, multimedia broadcast multicast service (MBMS), subscriber and equipment trace, RAN information management (RIM), paging, positioning, and delivery of warning messages. The base stations102may communicate directly or indirectly (e.g., through the EPC160) with each other over backhaul links134(e.g., X2 interface). The backhaul links134may be wired or wireless. The base stations102may wirelessly communicate with the UEs104. Each of the base stations102may provide communication coverage for a respective geographic coverage area110. There may be overlapping geographic coverage areas110. For example, the small cell102′ may have a coverage area110′ that overlaps the coverage area110of one or more macro base stations102. A network that includes both small cell and macro cells may be known as a heterogeneous network. A heterogeneous network may also include Home Evolved Node Bs (eNBs) (HeNBs), which may provide service to a restricted group known as a closed subscriber group (CSG). The communication links120between the base stations102and the UEs104may include uplink (UL) (also referred to as reverse link) transmissions from a UE104to a base station102and/or downlink (DL) (also referred to as forward link) transmissions from a base station102to a UE104. The communication links120may use multiple-input and multiple-output (MIMO) antenna technology, including spatial multiplexing, beamforming, and/or transmit diversity. The communication links may be through one or more carriers. The base stations102/UEs104may use spectrum up to Y MHz (e.g., 5, 10, 15, 20, 100 MHz) bandwidth per carrier allocated in a carrier aggregation of up to a total of Yx MHz (x component carriers) used for transmission in each direction. The carriers may or may not be adjacent to each other. Allocation of carriers may be asymmetric with respect to DL and UL (e.g., more or less carriers may be allocated for DL than for UL). The component carriers may include a primary component carrier and one or more secondary component carriers. A primary component carrier may be referred to as a primary cell (PCell) and a secondary component carrier may be referred to as a secondary cell (SCell). Certain UEs104may communicate with each other using device-to-device (D2D) communication link192. The D2D communication link192may use the DL/UL WWAN spectrum. 
The D2D communication link192may use one or more sidelink channels, such as a physical sidelink broadcast channel (PSBCH), a physical sidelink discovery channel (PSDCH), a physical sidelink shared channel (PSSCH), and a physical sidelink control channel (PSCCH). D2D communication may be through a variety of wireless D2D communications systems, such as for example, FlashLinQ, WiMedia, Bluetooth, ZigBee, Wi-Fi based on the IEEE 802.11 standard, LTE, or NR. The wireless communications system may further include a Wi-Fi access point (AP)150in communication with Wi-Fi stations (STAs)152via communication links154in a 5 GHz unlicensed frequency spectrum. When communicating in an unlicensed frequency spectrum, the STAs152/AP150may perform a clear channel assessment (CCA) prior to communicating in order to determine whether the channel is available. The small cell102′ may operate in a licensed and/or an unlicensed frequency spectrum. When operating in an unlicensed frequency spectrum, the small cell102′ may employ NR and use the same 5 GHz unlicensed frequency spectrum as used by the Wi-Fi AP150. The small cell102′, employing NR in an unlicensed frequency spectrum, may boost coverage to and/or increase capacity of the access network. The gNodeB (gNB)180may operate in millimeter wave (mmW) frequencies and/or near mmW frequencies in communication with the UE104. When the gNB180operates in mmW or near mmW frequencies, the gNB180may be referred to as an mmW base station. Extremely high frequency (EHF) is part of the RF in the electromagnetic spectrum. EHF has a range of 30 GHz to 300 GHz and a wavelength between 1 millimeter and 10 millimeters. Radio waves in the band may be referred to as a millimeter wave. Near mmW may extend down to a frequency of 3 GHz with a wavelength of 100 millimeters. The super high frequency (SHF) band extends between 3 GHz and 30 GHz, also referred to as centimeter wave. Communications using the mmW/near mmW radio frequency band has extremely high path loss and a short range. The mmW base station180may utilize beamforming184with the UE104to compensate for the extremely high path loss and short range. The EPC160may include a Mobility Management Entity (MME)162, other MMEs164, a Serving Gateway166, a Multimedia Broadcast Multicast Service (MBMS) Gateway168, a Broadcast Multicast Service Center (BM-SC)170, and a Packet Data Network (PDN) Gateway172. The MME162may be in communication with a Home Subscriber Server (HSS)174. The MME162is the control node that processes the signaling between the UEs104and the EPC160. Generally, the MME162provides bearer and connection management. All user Internet protocol (IP) packets are transferred through the Serving Gateway166, which itself is connected to the PDN Gateway172. The PDN Gateway172provides UE IP address allocation as well as other functions. The PDN Gateway172and the BM-SC170are connected to the IP Services176. The IP Services176may include the Internet, an intranet, an IP Multimedia Subsystem (IMS), a PS Streaming Service, and/or other IP services. The BM-SC170may provide functions for MBMS user service provisioning and delivery. The BM-SC170may serve as an entry point for content provider MBMS transmission, may be used to authorize and initiate MBMS Bearer Services within a public land mobile network (PLMN), and may be used to schedule MBMS transmissions. 
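The frequency and wavelength figures quoted above for EHF and near mmW follow directly from wavelength = c/frequency; the short Python check below is only an arithmetic illustration, not part of the disclosure.

    C_M_PER_S = 299_792_458.0                    # speed of light

    def wavelength_mm(freq_ghz):
        return C_M_PER_S / (freq_ghz * 1e9) * 1e3

    print(round(wavelength_mm(30), 1))           # ~10.0 mm at 30 GHz (lower edge of EHF)
    print(round(wavelength_mm(300), 1))          # ~1.0 mm at 300 GHz (upper edge of EHF)
    print(round(wavelength_mm(3), 1))            # ~99.9 mm at 3 GHz (near mmW, roughly 100 mm)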
The MBMS Gateway168may be used to distribute MBMS traffic to the base stations102belonging to a Multicast Broadcast Single Frequency Network (MBSFN) area broadcasting a particular service, and may be responsible for session management (start/stop) and for collecting eMBMS related charging information. The base station may also be referred to as a gNB, Node B, evolved Node B (eNB), an access point, a base transceiver station, a radio base station, a radio transceiver, a transceiver function, a basic service set (BSS), an extended service set (ESS), or some other suitable terminology. The base station102provides an access point to the EPC160for a UE104. Examples of UEs104include a cellular phone, a smart phone, a session initiation protocol (SIP) phone, a laptop, a personal digital assistant (PDA), a satellite radio, a global positioning system, a multimedia device, a video device, a digital audio player (e.g., MP3 player), a camera, a game console, a tablet, a smart device, a wearable device, a vehicle, an electric meter, a gas pump, a toaster, or any other similar functioning device. Some of the UEs104may be referred to as IoT devices (e.g., parking meter, gas pump, toaster, vehicles, etc.). The UE104may also be referred to as a station, a mobile station, a subscriber station, a mobile unit, a subscriber unit, a wireless unit, a remote unit, a mobile device, a wireless device, a wireless communications device, a remote device, a mobile subscriber station, an access terminal, a mobile terminal, a wireless terminal, a remote terminal, a handset, a user agent, a mobile client, a client, or some other suitable terminology. In LTE, the allocation of physical resources (e.g., PRACH, PUCCH, SRS, SR) is fixed once the TDD frame structure is finalized. Thus, physical resources are always maintained in particular subframe temporal positions in LTE. With regards to LTE eIMTA and LAA, the TDD frame structure can change dynamically. Nevertheless, a least common set of resources is defined in eIMTA for the allocation of physical resources (e.g., ePRACH, PUCCH, SRS, SR) across different frame configurations. In LTE LAA, the TDD frame structure can change dynamically and floats in time; however, certain physical resources have fixed configurations. For example, a short PUCCH (sPUCCH) configuration may be fixed in LTE LAA but allowed to float temporally. However, even in this case, the sPUCCH location starts and ends in subframes subject to a transmit opportunity (TxOP) time limit. For eMTC-Uplink (eMTC-U) and NB-IoT-Uplink (NB-IoT-U), there has been some discussion of having a dynamic type TDD frame structure with a self-contained transmission framework. Nevertheless, the DL-UL transactions in eMTC-Uplink (eMTC-U) and NB-IoT-Uplink (NB-IoT-U) are completed within a TxOP (or a few TxOPs) and the DL transaction is always within one TxOP. As such, TDD frame structures have previously defined minimum guaranteed DL subframes for the allocation of physical DL resources and minimum guaranteed UL subframes for the allocation of physical UL resources. In this disclosure, however, frame structure dependent resource configurations are disclosed that enable a higher scheduling flexibility at a minimal increase in configuration complexity. These solutions even lead to UE power savings depending on configuration settings. Referring again toFIG.1, in certain aspects (see element198), the UE104may be configured to receive information from the base station180.
In the DL, the base station180may provide header compression, ciphering, packet segmentation and reordering, multiplexing between logical and transport channels, and radio resource allocations to the UE. In this example, the information may indicate at least one of a location or a size of a PDCCH search space within a set of subframes of a set of frames. The location and/or the size of the PDCCH search space is a function of a TDD frame structure of the set of frames. In one aspect, the information may indicate the structure of the TDD frames in the set of frames. The UE may use this information to determine the location and size of a PDCCH search space within a set of subframes of the set of frames. For example, the information may indicate the number of DL frames or the number of DL subframes in the set of frames. Based on this information, the UE may determine the location and size of the PDCCH search space within the DL subframes in the set of frames. For instance, the information may be broadcast in a physical DL channel, such as the physical broadcast channel (PBCH), and be provided as information within a master information block (MIB) or a system information block (SIB). The control information for the UE104, such as a common PDCCH or a UE specific PDCCH, may be within a set of DL subframes in the set of frames. The UE104may perform a blind search to find its control information in the DL subframes that make up the PDCCH search space. As such, the UE104determines the PDCCH search space within the set of DL subframes based on the received information indicating the location and/or the size of the PDCCH search space or based on the received information indicating the structure of the TDD frames in the set of frames. To obtain the control information for the UE104, the UE104may perform a blind decoding or blind search of the determined PDCCH search space to obtain the control information. In one example, the UE104may decode PDCCH candidates in the PDCCH search space until the UE104finds a common PDCCH or a UE specific PDCCH. The UE104may obtain the control information from the common PDCCH or the UE specific PDCCH to obtain control information which may include, for example, information regarding physical UL resource allocations. In another aspect of the disclosure, the UE104may be configured to determine a TDD frame structure. The UE104may use the TDD frame structure to determine UL resources for the transmission of non-data information. For example, the UE104may receive information on the TDD frame structure on the common PDCCH or UE specific PDCCH. In one aspect, the UE104may receive information on the TDD frame structure through the PBCH, the MIB, or the SIB. The UE104may then use the information about the TDD frame structure to determine various physical resources available within UL subframes in the set of frames. For example, the UE104may receive information indicating the number of DL frames, the number of DL subframes, the number of UL frames, or the number of UL subframes in the set of frames. Based on this information, the UE104may determine the location and size of the UL subframes in the set of frames. In this manner, the UE104may determine a location of at least one of a PRACH, a PUCCH, SRS, or SR resources based on the determined TDD frame structure. 
The UE104may then transmit at least one of the PRACH, the PUCCH, the SRS, the SR resources, or the measurements of a positioning reference signal (PRS) based on the determined location for the at least one of the PRACH, the PUCCH, the SRS, or the SR resources. In yet another aspect of the disclosure, the UE104may be configured to determine a TDD frame structure. The UE104may use the TDD frame structure to determine DL resources for the measurement of channel quality and UL resources for the transmission of measurement of the channel quality. For example, the UE104may receive information on the TDD frame structure through the common PDCCH or UE specific PDCCH. In one aspect, the UE104may receive information on the TDD frame structure through the PBCH, the MIB, or the SIB. The UE104may then use the information about the TDD frame structure to determine the location and size of DL subframes and UL subframes in the set of frames. The UE104is then configured to determine a number of the DL subframes over which to measure, and to average, a channel quality based on the determined TDD frame structure, and to send, over a number of the UL subframes, a channel quality indicator (CQI) measured over the determined number of subframes. FIG.2Ais a diagram200illustrating an example of a DL frame structure.FIG.2Bis a diagram230illustrating an example of channels within the DL frame structure.FIG.2Cis a diagram250illustrating an example of an UL frame structure.FIG.2Dis a diagram280illustrating an example of channels within the UL frame structure. Other wireless communication technologies may have a different frame structure and/or different channels. A frame (10 ms) may be divided into 10 equally sized subframes. Each subframe may include two consecutive time slots. A resource grid may be used to represent the two time slots, each time slot including one or more time concurrent resource blocks (RBs) (also referred to as physical RBs (PRBs)). The resource grid is divided into multiple resource elements (REs). For a normal cyclic prefix, an RB may contain 12 consecutive subcarriers in the frequency domain and 7 consecutive symbols (for DL, OFDM symbols; for UL, SC-FDMA symbols) in the time domain, for a total of 84 REs. For an extended cyclic prefix, an RB may contain 12 consecutive subcarriers in the frequency domain and 6 consecutive symbols in the time domain, for a total of 72 REs. The number of bits carried by each RE depends on the modulation scheme. As illustrated inFIG.2A, some of the REs carry DL reference (pilot) signals (DL-RS) for channel estimation at the UE. The DL-RS may include cell-specific reference signals (CRS) (also sometimes called common RS), UE-specific reference signals (UE-RS), and channel state information reference signals (CSI-RS).FIG.2Aillustrates CRS for antenna ports 0, 1, 2, and 3 (indicated as R0, R1, R2, and R3, respectively), UE-RS for antenna port 5 (indicated as R5), and CSI-RS for antenna port 15 (indicated as R). FIG.2Billustrates an example of various channels within a DL subframe of a frame. The physical control format indicator channel (PCFICH) is within symbol 0 of slot 0, and carries a control format indicator (CFI) that indicates whether the physical downlink control channel (PDCCH) occupies 1, 2, or 3 symbols (FIG.2Billustrates a PDCCH that occupies 3 symbols). The PDCCH occupies the 1, 2, or 3 symbols at the beginning of each subframe as indicated by the PCFICH. 
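The resource-element counts given above for a resource block follow from multiplying subcarriers by symbols; the snippet below is a simple bookkeeping illustration in Python.

    SUBCARRIERS_PER_RB = 12

    def res_elements_per_rb(symbols_per_slot):
        return SUBCARRIERS_PER_RB * symbols_per_slot

    print(res_elements_per_rb(7))   # normal cyclic prefix: 12 x 7 = 84 REs
    print(res_elements_per_rb(6))   # extended cyclic prefix: 12 x 6 = 72 REs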
The PDCCH carries downlink control information (DCI) within one or more control channel elements (CCEs), each CCE including nine RE groups (REGs) distributed across the first 1, 2, or 3 symbols of each subframe, each REG including four consecutive REs in an OFDM symbol. The number of CCEs in a PDCCH is called an aggregation level, and may be 1, 2, 4, 8, or 16 consecutive CCEs. For example, a PDCCH with an aggregation level of 8 may use a CCE at the first 1, 2, or 3 symbols of each of 8 consecutive subframes. A PDCCH with an aggregation level of n may only start on a boundary of every n subframes. For example, in the example of the PDCCH with an aggregation level of 8, the PDCCH may only start on subframe 0, 8, 16, etc. A UE may be configured with a UE-specific enhanced PDCCH (ePDCCH) that also carries DCI. The ePDCCH may have 2, 4, or 8 RB pairs (FIG.2Bshows two RB pairs, each subset including one RB pair). The physical hybrid automatic repeat request (ARQ) (HARQ) indicator channel (PHICH) is also within symbol 0 of slot 0 and carries the HARQ indicator (HI) that indicates HARQ acknowledgement (ACK)/negative ACK (NACK) feedback based on the physical uplink shared channel (PUSCH). The primary synchronization channel (PSCH) may be within symbol 6 of slot 0 within subframes 0 and 5 of a frame. The PSCH carries a primary synchronization signal (PSS) that is used by a UE104to determine subframe/symbol timing and a physical layer identity. The secondary synchronization channel (SSCH) may be within symbol 5 of slot 0 within subframes 0 and 5 of a frame. The SSCH carries a secondary synchronization signal (SSS) that is used by a UE to determine a physical layer cell identity group number and radio frame timing. Based on the physical layer identity and the physical layer cell identity group number, the UE can determine a physical cell identifier (PCI). Based on the PCI, the UE can determine the locations of the aforementioned DL-RS. The physical broadcast channel (PBCH), which carries a master information block (MIB), may be logically grouped with the PSCH and SSCH to form a synchronization signal (SS) block. The MIB provides a number of RBs in the DL system bandwidth, a PHICH configuration, and a system frame number (SFN). The physical downlink shared channel (PDSCH) carries user data, broadcast system information not transmitted through the PBCH such as system information blocks (SIBs), and paging messages. As illustrated inFIG.2C, some of the REs carry demodulation reference signals (DM-RS) for channel estimation at the base station. The UE may additionally transmit sounding reference signals (SRS) in the last symbol of a subframe. The SRS may have a comb structure, and a UE may transmit SRS on one of the combs. The SRS may be used by a base station for channel quality estimation to enable frequency-dependent scheduling on the UL. FIG.2Dillustrates an example of various channels within an UL subframe of a frame. A physical random access channel (PRACH) may be within one or more subframes within a frame based on the PRACH configuration. The PRACH may include six consecutive RB pairs within a subframe. The PRACH allows the UE to perform initial system access and achieve UL synchronization. A physical uplink control channel (PUCCH) may be located on edges of the UL system bandwidth. The PUCCH carries uplink control information (UCI), such as scheduling requests, a CQI, a precoding matrix indicator (PMI), a rank indicator (RI), and HARQ ACK/NACK feedback. 
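The CCE bookkeeping and the start-boundary rule described above can be illustrated as follows; this is a simplified model of the description (not a complete PDCCH mapping), and the names are illustrative.

    REGS_PER_CCE, RES_PER_REG = 9, 4
    print(REGS_PER_CCE * RES_PER_REG)          # 36 REs per CCE

    def candidate_starts(num_subframes, aggregation_level):
        """Subframe indices where an AL-n PDCCH candidate may begin
        under the 'boundary of every n subframes' rule described above."""
        return [s for s in range(num_subframes) if s % aggregation_level == 0]

    print(candidate_starts(16, 8))             # [0, 8]: matches 'subframe 0, 8, 16, etc.'
    print(candidate_starts(16, 4))             # [0, 4, 8, 12]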
The PUSCH carries data, and may additionally be used to carry a buffer status report (BSR), a power headroom report (PHR), and/or UCI. FIG.3is a block diagram of a base station310in communication with a UE350in an access network. In the DL, IP packets from the EPC160may be provided to a controller/processor375. The controller/processor375implements layer 3 and layer 2 functionality. Layer 3 includes a radio resource control (RRC) layer, and layer 2 includes a packet data convergence protocol (PDCP) layer, a radio link control (RLC) layer, and a medium access control (MAC) layer. The controller/processor375provides RRC layer functionality associated with broadcasting of system information (e.g., MIB, SIBs), RRC connection control (e.g., RRC connection paging, RRC connection establishment, RRC connection modification, and RRC connection release), inter radio access technology (RAT) mobility, and measurement configuration for UE measurement reporting; PDCP layer functionality associated with header compression/decompression, security (ciphering, deciphering, integrity protection, integrity verification), and handover support functions; RLC layer functionality associated with the transfer of upper layer packet data units (PDUs), error correction through ARQ, concatenation, segmentation, and reassembly of RLC service data units (SDUs), re-segmentation of RLC data PDUs, and reordering of RLC data PDUs; and MAC layer functionality associated with mapping between logical channels and transport channels, multiplexing of MAC SDUs onto transport blocks (TBs), demultiplexing of MAC SDUs from TBs, scheduling information reporting, error correction through HARQ, priority handling, and logical channel prioritization. The transmit (TX) processor316and the receive (RX) processor370implement layer 1 functionality associated with various signal processing functions. Layer 1, which includes a physical (PHY) layer, may include error detection on the transport channels, forward error correction (FEC) coding/decoding of the transport channels, interleaving, rate matching, mapping onto physical channels, modulation/demodulation of physical channels, and MIMO antenna processing. The TX processor316handles mapping to signal constellations based on various modulation schemes (e.g., binary phase-shift keying (BPSK), quadrature phase-shift keying (QPSK), M-phase-shift keying (M-PSK), M-quadrature amplitude modulation (M-QAM)). The coded and modulated symbols may then be split into parallel streams. Each stream may then be mapped to an OFDM subcarrier, multiplexed with a reference signal (e.g., pilot) in the time and/or frequency domain, and then combined together using an Inverse Fast Fourier Transform (IFFT) to produce a physical channel carrying a time domain OFDM symbol stream. The OFDM stream is spatially precoded to produce multiple spatial streams. Channel estimates from a channel estimator374may be used to determine the coding and modulation scheme, as well as for spatial processing. The channel estimate may be derived from a reference signal and/or channel condition feedback transmitted by the UE350. Each spatial stream may then be provided to a different antenna320via a separate transmitter318TX. Each transmitter318TX may modulate an RF carrier with a respective spatial stream for transmission. At the UE350, each receiver354RX receives a signal through its respective antenna352. Each receiver354RX recovers information modulated onto an RF carrier and provides the information to the receive (RX) processor356. 
The TX processor368and the RX processor356implement layer 1 functionality associated with various signal processing functions. The RX processor356may perform spatial processing on the information to recover any spatial streams destined for the UE350. If multiple spatial streams are destined for the UE350, they may be combined by the RX processor356into a single OFDM symbol stream. The RX processor356then converts the OFDM symbol stream from the time-domain to the frequency domain using a Fast Fourier Transform (FFT). The frequency domain signal comprises a separate OFDM symbol stream for each subcarrier of the OFDM signal. The symbols on each subcarrier, and the reference signal, are recovered and demodulated by determining the most likely signal constellation points transmitted by the base station310. These soft decisions may be based on channel estimates computed by the channel estimator358. The soft decisions are then decoded and deinterleaved to recover the data and control signals that were originally transmitted by the base station310on the physical channel. The data and control signals are then provided to the controller/processor359, which implements layer 3 and layer 2 functionality. The controller/processor359can be associated with a memory360that stores program codes and data. The memory360may be referred to as a computer-readable medium. In the DL, the controller/processor359provides demultiplexing between transport and logical channels, packet reassembly, deciphering, header decompression, and control signal processing to recover IP packets from the EPC160. The controller/processor359is also responsible for error detection using an ACK and/or NACK protocol to support HARQ operations. Similar to the functionality described in connection with the DL transmission by the base station310, the controller/processor359provides RRC layer functionality associated with system information (e.g., MIB, SIBs) acquisition, RRC connections, and measurement reporting; PDCP layer functionality associated with header compression/decompression, and security (ciphering, deciphering, integrity protection, integrity verification); RLC layer functionality associated with the transfer of upper layer PDUs, error correction through ARQ, concatenation, segmentation, and reassembly of RLC SDUs, re-segmentation of RLC data PDUs, and reordering of RLC data PDUs; and MAC layer functionality associated with mapping between logical channels and transport channels, multiplexing of MAC SDUs onto TBs, demultiplexing of MAC SDUs from TBs, scheduling information reporting, error correction through HARQ, priority handling, and logical channel prioritization. Channel estimates derived by a channel estimator358from a reference signal or feedback transmitted by the base station310may be used by the TX processor368to select the appropriate coding and modulation schemes, and to facilitate spatial processing. The spatial streams generated by the TX processor368may be provided to different antenna352via separate transmitters354TX. Each transmitter354TX may modulate an RF carrier with a respective spatial stream for transmission. The UL transmission is processed at the base station310in a manner similar to that described in connection with the receiver function at the UE350. Each receiver318RX receives a signal through its respective antenna320. Each receiver318RX recovers information modulated onto an RF carrier and provides the information to a RX processor370. 
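As a toy illustration of the FFT step described above, the following NumPy sketch converts one received time-domain OFDM symbol into its per-subcarrier frequency-domain samples; the FFT size and the random samples are assumptions made purely for illustration.

    import numpy as np

    FFT_SIZE = 2048                                        # assumed FFT size
    rng = np.random.default_rng(0)
    time_domain = rng.standard_normal(FFT_SIZE) + 1j * rng.standard_normal(FFT_SIZE)

    freq_domain = np.fft.fft(time_domain)                  # one complex sample per subcarrier
    # Soft decisions would then be formed by comparing each per-subcarrier sample
    # against the modulation constellation, aided by the channel estimates (not shown).
    print(freq_domain.shape)                               # (2048,)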
The controller/processor375can be associated with a memory376that stores program codes and data. The memory376may be referred to as a computer-readable medium. In the UL, the controller/processor375provides demultiplexing between transport and logical channels, packet reassembly, deciphering, header decompression, control signal processing to recover IP packets from the UE350. IP packets from the controller/processor375may be provided to the EPC160. The controller/processor375is also responsible for error detection using an ACK and/or NACK protocol to support HARQ operations. FIG.4is a generalized representation of frame structures400A-400D for a set of frames. In this example, a temporal duration of each of the TDD frame structures400A-400D is 80 ms where each TDD frame structure400A-400D represents a set of frames. Therefore, there are 8 frames in each of the TDD frame structures400A-400D shown inFIG.4. Furthermore, in this example, each of the TDD frame structures400A-400D includes LBT frames structures. Thus, the UE104begins each of the set of frames by performing a CCA or an enhanced CCA (ECCA), which has a temporal duration of approximately 3 ms. The UE104is then configured to listen for a transmission signature from the base station180during the next 2 ms. The UE104uses the transmission signature to detect transmission of the set of frames with one of the TDD frame structures400A-400D. Each of the TDD frame structures400A-400D then has a minimum guaranteed set of subframes402for DL. The minimum guaranteed set of subframes402for DL may be used for a DL reference signal (DRS), radio resource management (RRM), and common control signals. The minimum guaranteed set of subframes402for DL may include at least one common PDCCH candidate and/or one UE specific PDCCH candidate for UEs with the highest coverage extension. However, as explained below, PDCCH candidates may also be allocated by the TDD frame structures400A-400D to other subframes in the set of frames. Additionally, each of the TDD frame structures400A-400D may also include a minimum guaranteed set of subframes404for UL in the last frame of the set of frames. The minimum guaranteed set of subframes404for UL may have a temporal duration of 10 ms, or one frame. The set of subframes404in the last frame with the TDD frame structures400A-400D may be utilized to provide physical UL resources such as the PUCCH, PRACH, ePRACH, ACK/NACK feedback, P-CSI (periodic channel state information), SRS, SR, etc. However, as shown inFIG.4, one of the problems with providing physical UL resources over the minimum guaranteed set of subframes404for the UL in the last frame of the set of frames is when there are multiple UEs that require physical UL resource allocations. For example, when a first UE uses the physical UL resources over the first 5 ms of the last frame to transmit, a second UE may not transmit. Conversely, when the second UE uses the physical UL resources over the final 5 ms of the last frame to transmit, the first UE may not transmit. As such, to provide a frame structure with enhanced scheduling flexibility, the TDD frame structures400A-400D may have different flexible sections406A,406B,406C,406D with different portions for assigning physical resources for the DL and for the UL. The allocation of the physical resources of the frame structures400A-400D between the DL and UL may depend on the physical resources needed for transmission between the UE104and the base station180. 
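One way to picture the 80 ms frame structures400A-400D described above is as a small configuration record; the sketch below uses illustrative field names and represents the flexible section's DL/UL split as a single fraction, which is an assumption for illustration rather than the disclosed encoding, with the per-structure splits detailed in the description that follows.

    from dataclasses import dataclass

    @dataclass
    class TddFrameStructure:
        name: str
        total_ms: int = 80                  # 8 frames of 10 ms per set of frames
        cca_ms: float = 3.0                 # CCA/ECCA at the start of the set of frames
        listen_ms: float = 2.0              # listening for the transmission signature
        min_ul_ms: int = 10                 # minimum guaranteed UL subframes (last frame)
        flexible_dl_fraction: float = 1.0   # share of the flexible section used for DL

    # Rough renderings of the four example structures (fractions are illustrative only).
    structures = [
        TddFrameStructure("400A", flexible_dl_fraction=0.95),   # almost all DL plus a UL/gap
        TddFrameStructure("400B", flexible_dl_fraction=0.8),
        TddFrameStructure("400C", flexible_dl_fraction=0.5),
        TddFrameStructure("400D", flexible_dl_fraction=0.0),    # entire flexible section is UL
    ]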
Each of the flexible sections406A,406B,406C,406D is provided between the minimum guaranteed set of subframes402for the DL and the minimum guaranteed set of subframes404for the UL in the last frame of the set of frames. In the TDD frame structure400A containing the flexible section406A, the entire flexible section406A is used for physical DL resources except for a small gap at the end of the flexible section406A that is utilized as a UL/gap. As such, a subset of the subframes of the set of frames within the flexible section406A may be used to provide PDCCHs and PDSCHs. Thus, a PDCCH search space may extend beyond the minimum guaranteed set of subframes402for DL into the entire flexible section406A except for the UL/gap. In the TDD frame structure400B containing the flexible section406B, the flexible section406B includes a portion408for physical DL resources and a portion410that may be used for physical UL resources. As such, a subset of the subframes of the set of frames within the portion408may be used to provide PDCCHs and PDSCHs. In this example, the portion410includes the last frame in the frames within the flexible section406B. Thus, the portion410may be used to provide physical UL resources such as the PUCCH, PRACH, ePRACH, ACK/NACK feedback, P-CSI, SRS, SR in addition to the minimum guaranteed set of subframes404for the UL. In the TDD frame structure400C containing the flexible section406C, the flexible section406C includes a portion412for physical DL resources and a portion414that may be used for physical UL resources. As such, a subset of the subframes of the set of frames within the portion412may be used to provide PDCCHs and PDSCHs. In this example, the portion414includes the last three frames in the frames within the flexible section406C. Thus, the portion414may be used to provide physical UL resources such as the PUCCH, PRACH, ePRACH, ACK/NACK feedback, P-CSI, SRS, SR in addition to the minimum guaranteed set of subframes404for the UL. In the TDD frame structure400D containing the flexible section406D, the entire flexible section406D is used for physical UL resources. Thus, all five subframes within the flexible portion406D are used for physical UL resources, and the entire flexible section406D may be used to provide physical UL resources such as the PUCCH, PRACH, ePRACH, ACK/NACK feedback, P-CSI, SRS, SR in addition to the minimum guaranteed set of subframes404for the UL. FIG.5illustrates a pair of PDCCH search spaces500,502. The PDCCH search spaces500,502are provided by different TDD frame structures, such as the TDD frame structures400A-400D shown inFIG.4. In this example, the PDCCH search space500has a maximum number of subframes that may contain PDCCH candidates for the UE104to search. For example, the PDCCH search space500may be provided by the TDD frame structure400A inFIG.4with the flexible portion406A. In this case the PDCCH search space500has a total number of 16 subframes, which may be the first 16 DL subframes in, or may be distributed through, the set of frames in a TDD frame structure. With regard to the PDCCH search space502, the PDCCH search space502has less than the maximum number of subframes in the search space500, and may contain PDCCH candidates for the UE to search. In this example, the PDCCH search space502may be provided by the TDD frame structure400B inFIG.4with the flexible portion406B.
The PDCCH search space502has a total number of 8 subframes, which may be the first 8 DL subframes in, or may be distributed through, the set of frames in a TDD frame structure. Both of the PDCCH search spaces500,502may be a function of the TDD frame structures. In particular, the number of PDCCH candidates in the PDCCH search spaces500,502may be a function of the TDD frame structure. As explained above with regards toFIG.1, the information from the base station180to the UE104may indicate at least one of a location or a size of a PDCCH search space within a set of subframes of a set of frames. The location and/or the size of the PDCCH search space are a function of the TDD frame structure of the set of frames. In one aspect, the UE104may determine the location and/or the size of a PDCCH search size from information on the TDD frame structure received from the base station180. For example, with regards to the TDD frame structure400A having the flexible section406A, the base station180may indicate that the number of DL subframes in the set of frames for the PDCCH search space500is 16 DL subframes. As such, the UE104may receive information from the base station180indicating the size of the PDCCH search space500. In one aspect, the UE104may determine the size of the PDCCH search space500from information on the TDD frame structure received from the base station180. The UE104may determine a search strategy over the PDCCH search space500of 16 DL subframes. Given that the PDCCH search space500has 16 DL subframes, the UE104may determine that a maximum aggregation level for the PDCCH search space500is 16. As mentioned, the number of CCEs in a PDCCH is called an aggregation level, and may be 1, 2, 4, 8, or 16 consecutive CCEs at the first 1, 2, or 3 symbols of 1, 2, 4, 8, or 16 consecutive subframes, respectively. The UE104then performs a blind decoding or searching of the determined PDCCH search space500to obtain control information for the UE104. In this example, the UE104performs a blind decoding or searching over the determined PDCCH search space500based on the determined maximum aggregation level of 16. Thus, the determined PDCCH search space500of 16 DL subframes has 1 PDCCH candidate at an aggregation level of 16 (which is the maximum aggregation level for the PDCCH search space500), has 2 PDCCH candidates at an aggregation level of 8, has 4 PDCCH candidates at an aggregation level of 4, and 8 PDCCH candidates at an aggregation level of 2. The PDCCH candidates at an aggregation level of n may only start on a boundary of every n subframes. Using standard decoding techniques, the UE104performs PDCCH decoding of each of these PDCCH candidates in the PDCCH search space500to obtain common and/or UE specific control information. In another example and with regards to the TDD frame structure400B having the flexible section406B, the base station180may indicate that the number of DL subframes in the set of frames for the PDCCH search space502is 8 DL subframes. As such, the UE104may receive information from the base station180indicating the size of the PDCCH search space502. In one aspect, the UE104may determine the size of the PDCCH search space502from information on the TDD frame structure received from the base station180. Thus, the UE104may determine a search strategy over the PDCCH search space502of 8 DL subframes. Given that the PDCCH search space502has 8 DL subframes, the UE104may determine that a maximum aggregation level for the PDCCH search space502is 8. 
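Before continuing with the decoding itself, the candidate pattern just described can be made concrete with a short sketch: one candidate at the maximum aggregation level, doubling as the level halves, with level-n candidates starting only on every n-subframe boundary. The helper names below are illustrative assumptions; the cap of 16 is taken from the example above.

```python
# Illustrative helpers only; names are assumptions. The candidate pattern
# follows the description above: one candidate at the maximum aggregation
# level, doubling as the level halves, with level-n candidates starting
# only on every n-subframe boundary.

def max_aggregation_level(search_space_subframes: int) -> int:
    """Largest aggregation level (capped at 16) that fits the search space."""
    level = 1
    while level * 2 <= min(search_space_subframes, 16):
        level *= 2
    return level

def pdcch_candidates(search_space_subframes: int) -> dict:
    """Map aggregation level -> starting subframe index of each PDCCH candidate."""
    candidates = {}
    level = max_aggregation_level(search_space_subframes)
    while level >= 2:
        candidates[level] = list(range(0, search_space_subframes, level))
        level //= 2
    return candidates

# A 16-subframe search space (such as search space 500) yields
# {16: [0], 8: [0, 8], 4: [0, 4, 8, 12], 2: [0, 2, ..., 14]}, while an
# 8-subframe space (such as search space 502) yields 1/2/4 candidates
# at aggregation levels 8/4/2.
print(pdcch_candidates(16))
print(pdcch_candidates(8))
```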
The UE104then performs a blind decoding of the determined PDCCH search space502to obtain control information for the UE104. In this example, the UE104performs a blind decoding over the determined PDCCH search space502based on the determined maximum aggregation level of 8. Thus, the determined PDCCH search space502of 8 DL subframes has 1 PDCCH candidate at an aggregation level of 8 (which is the maximum aggregation level for the PDCCH search space502), has 2 PDCCH candidates at an aggregation level of 4, and has 4 PDCCH candidates at an aggregation level of 2. Using standard decoding techniques, the UE104performs PDCCH decoding of each of these PDCCH candidates in the PDCCH search space502to obtain common and/or UE specific control information. Analogous processes and techniques may be provided by the UE104and the base station180with regards to the TDD frame structures400C,400D with the flexible sections406C,406D with PDCCH search spaces having a number of DL subframes of 4 and 2, respectively. Within the control information (which may be DCI of a common PDCCH or a UE specific PDCCH), the UE104may also determine the expected TDD frame structure of the next frame or set of frames in addition to the current TDD frame structure. This TDD frame structure may be applicable even if the base station180does not transmit on the next frame due to LBT failure. The UE104may use the control information about the current and next TDD frame structure to determine resources for various physical channels, especially on the UL. FIG.6illustrates a pair of PDCCH search spaces600,602that are related to another technique for determining PDCCH candidates. The PDCCH search spaces600,602are each provided by different TDD frame structures, such as the TDD frame structures400A-400D shown inFIG.4. For example, the PDCCH search space600has a number of subframes which may contain PDCCH candidates. In this case, the PDCCH search space600may be provided by the TDD frame structure400B inFIG.4with the flexible portion406B. In this case, the PDCCH search space600has a total number of 16 subframes, which may be the first 16 DL subframes in, or may be distributed through, the set of frames in the TDD frame structure. With regard to the PDCCH search space602, the PDCCH search space602has a number of subframes which may be provided with PDCCH candidates that is greater than the maximum aggregation level of 16 for the PDCCH. For example, the PDCCH search space602may be provided by the TDD frame structure400A inFIG.4with the flexible portion406A. In this case, the PDCCH search space602has a total number of 32 subframes, which may be the first 32 DL subframes in, or may be distributed through, the set of frames in the TDD frame structure. Each of the PDCCH search spaces600,602is a function of the TDD frame structures. In particular, the size of the PDCCH search space and the number of PDCCH candidates may scale as a function of the number of DL subframes in the TDD frame structure. As explained above with regards toFIG.1, the information from the base station180to the UE104may indicate at least one of a location or a size of a PDCCH search space within a set of subframes of a set of frames. The location and/or the size of the PDCCH search space are a function of the TDD frame structure of the set of frames. In one aspect, the UE104may determine the location and/or the size of a PDCCH search space from information on the TDD frame structure received from the base station180. 
For example, with regards to the TDD frame structure400B having the flexible section406B, the base station180may indicate that the number of DL subframes in the set of frames for the PDCCH search space is 16 DL subframes. As such, the UE104may receive information from the base station180indicating the size of the PDCCH search space. The information indicating the size of a PDCCH search space600may be based on a number of DL subframes in the set of frames. In one aspect, the UE104may determine the size of the PDCCH search space600from information on the TDD frame structure received from the base station180. The UE104may determine a search strategy over the PDCCH search space600of 16 DL subframes. Given that the PDCCH search space600is 16 DL subframes, the UE104may determine a search strategy to search for all PDCCH candidates at all possible aggregation level. In one aspect, to keep the number of blind decoding constant as the size of the PDCCH search space increases, the UE may search a subset of all the PDCCH candidates of the determined PDCCH search space600. The UE104then performs a blind decoding of the determined PDCCH search space600to obtain control information for the UE104. The UE104may perform a blind decoding over the determined PDCCH search space600based on all PDCCH candidates or a subset of the PDCCH candidates. Thus, the determined PDCCH search space600has 1 PDCCH candidate at an aggregation level of 16, has 2 PDCCH candidates at an aggregation level of 8, has 4 PDCCH candidates at an aggregation level of 4, and 8 PDCCH candidates at an aggregation level of 2. Using standard decoding techniques, the UE104may perform PDCCH decoding of each of these PDCCH candidates in the PDCCH search space600to obtain common and/or UE specific control information. In another example and with regards to the TDD frame structure400A having the flexible section406A, the base station180may indicate that the number of DL subframes in the set of frames for the PDCCH search space is 32 DL subframes. As such, the UE104may receive information from the base station180indicating the size of the PDCCH search space. The information indicating the size of a PDCCH search space602may be based on a number of DL subframes in the set of frames, which in the example for the PDCCH search space602is 32. In one aspect, the UE104may determine the size of the PDCCH search space602from information on the TDD frame structure received from the base station180. The UE104may determine a search strategy over the PDCCH search space602of 32 DL subframes. In one aspect, the UE104may use information received from the base station180such as a cell radio network temporary identifier (C-RNTI) or a user identifier (UE-ID) when determining the search strategy. For example, to keep the number of blind decoding constant as the size of the PDCCH search space increases, the UE may search a subset of all the PDCCH candidates of the determined PDCCH search space602as a function of the C-RNTI, UE-ID, slot number, subframe number, frame number, etc. The UE104then performs a blind decoding of the determined PDCCH search space602to obtain control information for the UE104. Given that the PDCCH search space602has 32 DL subframes, the UE104may perform a blind decoding of all possible PDCCH candidates over the determined PDCCH search space602. The maximum aggregation level for the PDCCH is 16 and there are 2 PDCCH candidates over the PDCCH search space of 32 DL subframes. For the aggregation level of 8, there are 4 PDCCH candidates. 
For the aggregation level of 4, there are 8 PDCCH candidates. For the aggregation level of 2, there are 16 PDCCH candidates. Using standard decoding techniques, the UE104may perform PDCCH decoding of each of these PDCCH candidates in the PDCCH search space602to obtain common and/or UE specific control information. In one aspect, the UE104may perform a blind decoding of a subset of the possible PDCCH candidates, subject to a minimum at each aggregation level. The subset of the PDCCH candidates for blind decodes may be determined based on at least one of the C-RNTI or the user identifier UE-ID. Thus, the UE104may perform a blind decoding of only 16 of the 32 DL subframes in the PDCCH search space602at each of the aggregation levels of 16, 8, 4, 2. Which 16 of the 32 DL subframes to perform the blind decoding at each aggregation level may be determined based on at least one of the C-RNTI, the user identifier UE-ID, slot number, subframe number, frame number or some other information about the UE104or the frame structure. For example, the UE104may perform the blind decoding over the determined PDCCH search space602at the maximum aggregation level of 16 over the first 16 of the 32 DL subframes. Thus, 1 PDCCH candidate is decoded at the aggregation level of 16. Additionally, the UE104may perform the blind decoding over the determined PDCCH search space602at the aggregation level of 8 over 16 of the 32 DL subframes. Thus, 2 PDCCH candidates are decoded at the aggregation level of 8. Furthermore, the UE104may perform the blind decoding over the determined PDCCH search space602at the aggregation level of 4 over 16 of the 32 DL subframes. Thus, 4 PDCCH candidates are decoded at the aggregation level of 4. Finally, the UE104may perform the blind decoding over the determined PDCCH search space602at the aggregation level of 2 over 16 of the 32 DL subframes. Thus 8 PDCCH candidates are decoded at the aggregation level of 2. Using standard decoding techniques, the UE104may perform PDCCH decoding of each of these PDCCH candidates in the PDCCH search space602to obtain common and/or UE specific control information. Because the UE104may determine which of the 16 of the 32 subframes to perform blind decoding based on at least one of the C-RNTI, the user identifier UE-ID, slot number, subframe number, frame number or some other information about the UE104or the frame structure, one large grant of common and/or UE specific control information in the PDCCH does not block scheduling other UEs within the same frames. Also, UE multiplexing is beneficial because for higher coverage enhancements, it is more efficient for the UE104to transmit on the UL on narrowband (e.g., 1RB or 2RB) and the rest of the RBs can be used to schedule other UEs. FIG.7illustrates a pair of TDD frame structures700,702. The TDD frame structures700,702may be examples of the TDD frame structures400shown inFIG.4. The TDD frame structure700has a number of UL subframes which may be provided in the last frame and the second to last frame of the set of frames. For example, the TDD frame structure700may be an example of the TDD frame structure400B inFIG.4with the flexible portion406B. In this case, the TDD frame structure700has a total number of 20 UL subframes. With regard to the TDD frame structure702, the TDD frame structure702has a number of UL subframes which may be provided in the last frame of the set of frames. For example, the TDD frame structure702may be an example of the TDD frame structure400A inFIG.4with the flexible portion406A. 
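Stepping back briefly to the subset-based decoding described forFIG.6, one way to sketch it is to choose a UE-specific window of 16 subframes inside the 32-subframe search space as a function of quantities such as the C-RNTI and frame number, and to search the usual candidate pattern only inside that window so that the number of blind decodes stays constant. The hashing rule and all names below are assumptions for illustration, not the specified mapping.

```python
# Sketch of one way to keep the number of blind decodes constant as the
# search space grows: pick a 16-subframe window inside a 32-subframe space
# as a function of UE- and frame-specific values, then decode the usual
# 1/2/4/8 candidates inside that window. The selection rule and names are
# assumptions for illustration only.

def candidate_subset(search_space_subframes: int,
                     window_subframes: int,
                     c_rnti: int,
                     frame_number: int) -> dict:
    """Map aggregation level -> candidate start indices within a UE-specific window."""
    n_windows = search_space_subframes // window_subframes
    # UE-/frame-dependent choice of window; other inputs (UE-ID, slot or
    # subframe number) could be folded in the same way.
    window = (c_rnti + frame_number) % n_windows
    offset = window * window_subframes
    subset = {}
    level = min(window_subframes, 16)
    while level >= 2:
        subset[level] = [offset + s for s in range(0, window_subframes, level)]
        level //= 2
    return subset

# A UE with a hypothetical C-RNTI of 0x4A21 in frame 3, searching 16 of the
# 32 DL subframes: 1, 2, 4 and 8 candidates at aggregation levels 16, 8, 4, 2.
print(candidate_subset(32, 16, 0x4A21, 3))
```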
In this case, the TDD frame structure702has a total number of 10 UL subframes. The UE104is configured to determine the TDD frame structure. For example, if the TDD frame structure is the TDD frame structure700, the base station180may transmit and the UE104may receive information indicating the TDD frame structure700. In one aspect, the UE104may receive information on the TDD frame structure through the PBCH, the MIB, or the SIB. In one case, the control information in the common PDCCH or UE specific PDCCH for the UE104is used as the information that indicates the TDD frame structure700. Similarly, if the TDD frame structure is the TDD frame structure702, the base station180may transmit and the UE104may receive information indicating the TDD frame structure702. Periodically, the base station180may assign physical resources (in particular, physical uplink resources) which the UE104then uses to transmit uplink access, uplink control information, and other non-data information. For example, the UE104may be configured to determine a location of at least one of PRACH, PUCCH, SRS, or SR resources based on the determined TDD frame structure. The PRACH/PUCCH/SRS/SR resources at a certain repetition level may be semi-statically configured to occur at a fixed location in time or frequency. For example, the repetition level for the PRACH/PUCCH/SRS/SR resources in the TDD frame structure700is 10 while the repetition level in the TDD frame structure702is 5. In one aspect, the PRACH/PUCCH/SRS/SR resources may be configured to occur even when the base station180does not clear the medium. If the PRACH/PUCCH/SRS/SR resources are configured to occur on a UL subframe assigned for the PRACH/PUCCH/SRS/SR resources based on the information on the frame structure, then the PRACH/PUCCH/SRS/SR resources are available for transmission over the assigned UL subframe. Otherwise, if the PRACH/PUCCH/SRS/SR resources are configured to occur on a DL subframe or on a UL subframe not assigned for the PRACH/PUCCH/SRS/SR resources, then the PRACH/PUCCH/SRS/SR resources are not available. In one aspect, the UE104may receive information indicating a change in the TDD frame structure for a second set of frames that follow the set of frames. For example, the PRACH/PUCCH/SRS/SR resources may occur only at fixed subframe locations such as in the second to last frame in the TDD frame structure700. However, if these subframe locations are not UL subframes, like in the TDD frame structure702, the PRACH/PUCCH/SRS/SR resources would not be available and would not be provided at all. Thus, if the first set of frames is provided with the TDD frame structure700while a second set of frames is provided with the TDD frame structure702, the UE104may receive information from the base station180indicating a change in the TDD frame structure from the TDD frame structure700for the first set of frames to the TDD frame structure702for a second set of frames. In this case, the UE104would know that the PRACH/PUCCH/SRS/SR resources are not available in the second to last frame within the TDD frame structure702. In one aspect, the PRACH and SR resources may be available dynamically, such as when the PRACH and SR resources are configured to occur at a repetition level, and may be usable by the UE104as they are not scheduled for use by the base station180. In one aspect, the availability of the PUCCH resources, such as ACK/NACK resources, used by the UE104for the UL may be indicated in the DL grant. 
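The availability rule described above, under which a semi-statically configured PRACH/PUCCH/SRS/SR occasion may be used only when it lands on a UL subframe assigned to it by the determined frame structure, might be captured by a sketch such as the following. The frame-structure representation reuses the illustrative "DL"/"UL" labels from the earlier sketch, and all names and example numbers are assumptions.

```python
# Minimal sketch of the availability rule: a semi-statically configured UL
# resource is usable only when its fixed subframe location falls on a UL
# subframe that the determined frame structure assigns to it. Names and
# example values are illustrative assumptions.

def resource_available(configured_subframe: int,
                       frame_structure: list,
                       assigned_ul_subframes: set) -> bool:
    """True if a PRACH/PUCCH/SRS/SR occasion may actually be transmitted."""
    if configured_subframe >= len(frame_structure):
        return False
    is_ul = frame_structure[configured_subframe] == "UL"
    return is_ul and configured_subframe in assigned_ul_subframes

def occasions(first_subframe: int, period_subframes: int, total_subframes: int) -> list:
    """Fixed, semi-statically configured occasions of a resource."""
    return list(range(first_subframe, total_subframes, period_subframes))

# Example: an occasion recurring every 10 subframes is kept only where it
# lands on an assigned UL subframe of the current structure.
structure = ["DL"] * 60 + ["UL"] * 20            # placeholder 80-subframe set
assigned = {65, 75}                              # hypothetical assigned UL occasions
usable = [sf for sf in occasions(5, 10, 80) if resource_available(sf, structure, assigned)]
print(usable)                                    # -> [65, 75]
```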
In one aspect, the availability of the resources for P-CSI transmission is dependent on the availability of the PUCCH resources or PUSCH resources. If the PUCCH or PUSCH resources are available, the P-CSI is transmitted. Otherwise, the transmission of the P-CSI is dropped. In one aspect, with regards to the measurements of the PRS received from the base station180, the number of the subframes over which the PRS is transmitted may be static or may be a function of the frame structure. The UE104may determine from the frame structure the number of subframes over which the PRS is transmitted. The UE104may measure the PRS over the determined number of subframes and may transmit the measured PRS to the base station180using the UL subframes. Given that the UE104is configured to determine the TDD frame structure, the UE104may also use the TDD frame structure to determine resources available for making measurements relevant to the base station180. For example, the UE104may determine a number of subframes over which to measure and to average a channel quality based on the determined TDD frame structure. The UE104may then send a CQI measured over the determined number of subframes and may transmit the CQI over a number of UL subframes, such as using the PUCCH resources when they are available. FIG.8illustrates a call flow800illustrating certain aspects of the disclosure with respect to a UE802and a base station804. At procedure806, the base station804transmits and the UE802may receive information indicating at least one of a location or a size of a PDCCH search space within a set of subframes of a set of frames. The location and/or the size of the PDCCH search space are a function of the TDD frame structure of the set of frames, as explained above with regards toFIG.4-6. In one aspect, the base station804may transmit and the UE802may receive information indicating the TDD frame structure of the set of frames. For example, the information may indicate the number of DL frames or the number of DL subframes in the set of frames. The information may be broadcast in a physical DL channel, such as the physical broadcast channel (PBCH), and be provided as information within a master information block (MIB) or a system information block (SIB). At procedure808, the UE802may determine the PDCCH search space within the set of subframes and a search strategy. The UE802determines the PDCCH search space based on the received information indicating the at least one of the location or the size of the PDCCH search space, or based on the received information indicating the structure of the TDD frames in the set of frames. For example, the base station180may indicate the number of DL subframes in the set of frames for the PDCCH search space. In one aspect, the UE802may determine the number of DL subframes from information on the TDD frame structure, and from the number of DL subframes, the UE802may determine the location and/or size of the PDCCH search space. For the search strategy, the UE802may determine a maximum aggregation level for the PDCCH search space (e.g., SeeFIG.5) based on the number of DL subframes in the set of frames of the PDCCH search space. In one aspect, the UE802may determine a search strategy to search for all PDCCH candidates at all possible aggregation level. 
In one aspect, to keep the number of blind decodes constant as the size of the PDCCH search space increases, the UE802may determine a search strategy to search a subset of all the PDCCH candidates of the determined PDCCH search space at different aggregation levels, subject to a minimum of blind decodes at each aggregation level (e.g., SeeFIG.6). At procedure810, the base station804transmits control information on PDCCHs, a C-RNTI or UE-ID for the UE802, and a PRS to the UE802. The UE802then performs a blind decoding over the determined PDCCH search space to obtain control information for the UE802, as shown in procedure812. The UE802may perform the blind decoding over the determined PDCCH search space502based on the determined maximum aggregation level to search for all PDCCH candidates at all possible aggregation levels according to the search strategy (e.g., SeeFIG.5). In one aspect, the UE802may search a subset of all the PDCCH candidates of the determined PDCCH search space at different aggregation levels according to the search strategy (e.g., SeeFIG.6). In one aspect, the subset of blind decodes may be determined based on at least one of a C-RNTI, a user identifier UE-ID for the UE802, slot number, subframe number, frame number or some other information about the UE802or the frame structure, subject to a minimum of blind decodes at each aggregation level. Using standard decoding techniques, the UE802may perform PDCCH decoding of each of these PDCCH candidates in the PDCCH search space to obtain common and/or UE specific control information. In one aspect, within the control information (which may be DCI of a common PDCCH or a UE specific PDCCH), the base station804can indicate the expected TDD frame structure of the next frame in addition to the current TDD frame structure. The UE802may use the control information about the current and next TDD frame structure to determine resources for various physical channels, especially on the UL. Accordingly, at procedure814, the UE802determines a TDD frame structure, such as the assignment of the UL subframes. The UE802may receive information indicating the TDD frame structure, such as within the control information in the PDCCH (either common or UE specific). In one aspect, the UE802may receive information on the TDD frame structure through the PBCH, the MIB, or the SIB. Periodically, the base station804assigns physical resources (in particular, physical uplink resources) which the UE802then uses to transmit uplink access, uplink control information, and other non-data information. At procedure816, based on the determined TDD frame structure, the UE802determines a location of at least one of PRACH, PUCCH, SRS, or SR resources. For example, as explained inFIG.7, the PRACH/PUCCH/SRS/SR resources at a certain repetition level may be semi-statically configured to occur at a fixed location in time or frequency. If the PRACH/PUCCH/SRS/SR resources are configured to occur on a UL subframe of the frame structure assigned for the PRACH/PUCCH/SRS/SR resources, then the PRACH/PUCCH/SRS/SR resources are available for transmission over the assigned UL subframe. Otherwise, if the PRACH/PUCCH/SRS/SR resources are configured to occur on a DL subframe or on a UL subframe not assigned for the PRACH/PUCCH/SRS/SR resources, then the PRACH/PUCCH/SRS/SR resources are not available for transmission. 
In another example explained inFIG.7, the UE802may receive information indicating a change in the TDD frame structure for a second set of frames that follow the current set of frames. For example, the PRACH/PUCCH/SRS/SR resources may occur only at fixed subframe locations. Thus, if these subframe locations are UL subframes, the PRACH/PUCCH/SRS/SR resources are available. However, if these subframe locations are not UL subframes, the PRACH/PUCCH/SRS/SR resources would not be available and would not be provided at all. Thus, the UE802may receive information from the base station804indicating a change in the TDD frame structure. Given that the UE802is configured to determine the TDD frame structure, the UE802may also use the TDD frame structure to determine resources available for making measurements relevant to the base station804. For example, at procedure818the UE802may determine a number of DL subframes over which to measure and to average a CQI based on the determined TDD frame structure. In one aspect, the UE802may determine a number of DL subframes over which the PRS is received from the base station804based on the determined TDD frame structure for the UE802to measure the PRS. At procedure820, the UE802may then transmit PRACH/PUCCH/SRS/SR/P-CSI using the at least one of the PRACH, the PUCCH, the SRS, or the SR resources based on the determined location of the PRACH, the PUCCH, the SRS, or the SR resources. With regards to the PRS, the UE802may be configured to measure the PRS based on the determined number of DL subframes containing the PRS, which was received at procedure810. The UE802may transmit the PRS measurement to the base station804using one or more UL subframes, such as using the PUCCH resources when they are available. In one aspect, the UE802is configured to measure and to average the CQI based on the determined number of DL subframes over which to make the CQI measurement. At procedure822, the UE802sends the CQI to the base station804. The UE802may transmit the CQI over a number of UL subframes, such as using the PUCCH resources when they are available. FIG.9is a flowchart900of a method of wireless communication. The method may be performed by a UE (e.g.,104,802) to obtain control information that are allocated in a flexible manner into subframes as a function of a TDD frame structure. For example, the UE may determine a PDCCH search space and a search strategy from a TDD frame structure and may perform a blind decoding of the PDCCH search space to obtain downlink control information. At902, the UE receives, from a base station, information indicating a TDD frame structure of a set of frames. For example, the information may indicate the number of DL frames or the number of DL subframes in the set of frames. In one aspect, the information may indicate at least one of a location or a size of a PDCCH search space within a set of subframes in the set of frames. The location and/or the size of the PDCCH search space are a function of the TDD frame structure of the set of frames. The base station may broadcast the information in a physical DL channel, such as the PBCH, and may provide the information within a MIB or a SIB. At904, the UE determines a physical downlink control channel PDCCH search space within a set of subframes in the set of frames based on the received information indicating the TDD frame structure. The PDCCH search space may indicate at least one of a location or a size of the PDCCH search space. 
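As a minimal illustration of the P-CSI and CQI handling described in this call flow, the following sketch drops a P-CSI report when neither PUCCH nor PUSCH resources are available and averages the CQI over a number of DL subframes taken from the frame structure. The averaging rule and all names are assumptions for illustration only.

```python
# Illustrative sketch of the P-CSI and CQI handling described above; the
# averaging rule and all names are assumptions, not the specified behavior.

def maybe_send_p_csi(p_csi, pucch_available: bool, pusch_available: bool):
    """Return the P-CSI report to transmit, or None if it must be dropped."""
    return p_csi if (pucch_available or pusch_available) else None

def cqi_report(per_subframe_quality: list, frame_structure: list) -> float:
    """Average channel quality over the DL subframes of the determined structure."""
    n_dl = sum(1 for label in frame_structure if label == "DL")
    window = per_subframe_quality[:n_dl]
    if not window:
        raise ValueError("no DL subframes available for CQI averaging")
    return sum(window) / len(window)
```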
The location or the size of the PDCCH search space is a function of a TDD frame structure of the set of frames. In one aspect, the received information may indicate the number of DL subframes in the set of frames for the PDCCH search space. In one aspect, the UE may determine the number of DL subframes from information on the TDD frame structure, and from the number of DL subframes, the UE may determine the location and/or size of the PDCCH search space. At906, the UE determines a search strategy including a maximum aggregation level based on the PDCCH search space. For example, the UE may determine a search strategy based on the location and/or size of the PDCCH search space. In one aspect, the UE may determine a search strategy to search for all PDCCH candidates at all possible aggregation levels based on the maximum aggregation level within the PDCCH search space. In one aspect, to keep the number of blind decoding constant as the size of the PDCCH search space increases, the UE may determine a search strategy to search a subset of all the PDCCH candidates of the PDCCH search space at different aggregation levels. At908, the UE performs a blind decoding of the determined PDCCH search space using the search strategy to obtain control information. In one aspect, the UE may perform a blind decoding of all possible PDCCH candidates at all possible aggregation levels based on a maximum aggregation level over the determined PDCCH search space. For example, if the PCCH search space is 32 DL subframes, the UE may search for all 2 PDCCH candidates for the maximum aggregation level of 16, for all 4 PDCCH candidates for the aggregation level of 8, for all 8 PDCCH candidates for the aggregation level of 4, and for all 16 PDCCH candidates for the aggregation level of 2. In one aspect, the UE may perform a blind decoding of a subset of the possible PDCCH candidates, subject to a minimum at each aggregation level. FIG.10is a flowchart1000of a method of wireless communication. The method may be performed by a UE (e.g.,104,802) to determine a plurality of UL subframes that are allocated in a flexible manner as a function of a TDD frame structure. The plurality of UL subframes may be assigned by the base station for transmitting UL resources associated with a number of control signaling. The UE may determine a location of a scheduled UL resource within the TDD frame structure for communicating one of the number of the control signaling. The control signaling may include PRACH, PUCCH, SRS, or SR. At1002, the UE determines a TDD frame structure. The UE may determine the TDD frame structure, such as the assignment of the UL subframes, from the control information in the PDCCH (either common or UE specific). The assignment of the UL subframes may dynamically change as a function of the TDD frame structure. In one aspect, the UE may determine the TDD frame structure through the PBCH, the MIB, or the SIB. The base station may periodically assign physical uplink resources which the UE may use to transmit uplink access, uplink control information, and other non-data information. At1004, the UE determines a location of at least one of a scheduled PRACH, a PUCCH, SRS, or SR resources. For example, the PRACH, PUCCH, SRS, or SR resources at a certain repetition level may be semi-statically configured to occur at a fixed location in time or frequency. 
If the PRACH/PUCCH/SRS/SR resources are configured to occur on a UL subframe of the frame structure assigned for the PRACH/PUCCH/SRS/SR resources, then the PRACH/PUCCH/SRS/SR resources are available for transmission over the assigned UL subframe. Otherwise, if the PRACH/PUCCH/SRS/SR resources are configured to occur on a DL subframe or on a UL subframe not assigned for the PRACH/PUCCH/SRS/SR resources, then the PRACH/PUCCH/SRS/SR resources are not available for transmission. At1006, the UE determines if the scheduled PRACH, PUCCH, SRS, or SR resources are configured to occur on a UL subframe of the TDD frame structure assigned for the PRACH, PUCCH, SRS, or SR resources. For example, the repetition level for the PRACH/PUCCH/SRS/SR resources in the TDD frame structure may be 10 subframes. The UE determines if the repetition level for the PRACH/PUCCH/SRS/SR resources coincides with a UL subframe assigned for the PRACH/PUCCH/SRS/SR resources based on the frame structure. At1008, if the scheduled PRACH, PUCCH, SRS, or SR resources are configured to occur on a UL subframe of the TDD frame structure assigned for the PRACH, PUCCH, SRS, or SR resources, the UE may transmit at least one of the PRACH, PUCCH, SRS, SR, or P-CSI using the scheduled PRACH, PUCCH, SRS, or SR resources over the assigned UL subframe. For example, the PRACH resource may occur only at fixed subframe locations. If one of these subframe locations coincides with a UL subframe assigned for the PRACH resource, the UE may transmit the PRACH resource over the UL subframe. Otherwise, at1012, if the scheduled PRACH, PUCCH, SRS, or SR resources are configured to occur on a DL subframe or on a UL subframe not assigned for the PRACH, PUCCH, SRS, or SR resources, then the UE does not transmit the PRACH, PUCCH, SRS, SR, or P-CSI. For example, the PRACH resource may occur only at fixed subframe locations. If one of these subframe locations coincides with a DL subframe, or a UL subframe not assigned for the PRACH resource, then the UE does not transmit the PRACH resource. In one aspect, at1010, the UE determines a number of DL subframes over which to measure and to average a CQI based on the determined TDD frame structure. The UE may measure and average the CQI based on the determined number of DL subframes. In one aspect, at1010, the UE may determine a number of DL subframes over which the PRS is received from a base station based on the determined TDD frame structure for the UE to measure the PRS. The UE may measure the PRS based on the determined number of DL subframes containing the PRS. At1008, the UE sends the CQI or the PRS measurement using at least one of the scheduled PRACH, PUCCH, SRS, SR resources over one of the UL subframes assigned if the scheduled PRACH, PUCCH, SRS, or SR resources occur on a UL subframe of the TDD frame structure assigned for the PRACH, PUCCH, SRS, or SR resources. For example, the UE may transmit the CQI or the PRS measurement over a number of UL subframes, such as using the PUCCH resources when they are available. If the scheduled PRACH, PUCCH, SRS, or SR resources are configured to occur on a DL subframe or on a UL subframe not assigned for the PRACH, PUCCH, SRS, or SR resources, then the UE does not transmit the CQI or the PRS measurement. FIG.11is a conceptual data flow diagram illustrating the data flow between different modules/means/components in an exemplary apparatus1102. The apparatus1102may be a UE. 
The apparatus1102may include a TDD structure determination component1126, a search space and search strategy determination component1122, a blind decoding component1124, a UL subframes for UL resources availability determination component1128, a CQI/PRS measurement component1132, and a UL resources transmission component1130. The TDD structure determination component1126may be configured to receive, from a base station, via an antenna1150, information about a time division duplex (TDD) frame structure of a plurality of frames and/or to determine the TDD frame structure. For example, the information may indicate the number of DL frames or the number of DL subframes in the set of frames. The TDD structure determination component1126may be configured to pass information on the number of DL frames or the number of DL subframes to the search space and search strategy determination component1122. The search space and search strategy determination component1122may be configured to determine the PDCCH search space within the set of DL subframes and to determine a search strategy. For example, from the information on the number of DL subframes of the TDD frame structure received from the TDD structure determination component1126, the location and/or the size of the PDCCH search space may be determined. For the search strategy, a maximum aggregation level may be determined. The search strategy may determine whether to search all or a subset of the PDCCH candidates in the PDCCH search space based on the number of DL subframes of the PDCCH search space. The search space and search strategy determination component1122may be configured to provide information on the PDCCH search space and the search strategy to the blind decoding component1124. The blind decoding component1124may be configured to perform a blind decoding over the determined PDCCH search space to obtain control information for the apparatus1102. For example, the blind decoding component1124may be configured to perform the blind decoding over the PDCCH search space based on the determined maximum aggregation level to search for all PDCCH candidates at all possible aggregation levels. In one aspect, the blind decoding component1124may be configured to search a subset of all the PDCCH candidates of the PDCCH search space at different aggregation levels according to the search strategy. The blind decoding component1124may be configured to decode each of these PDCCH candidates in the PDCCH search space to obtain common and/or UE specific control information. The control information in the PDCCH about the current TDD frame structure may be used by the apparatus1102to determine resources for UL. The UL subframes for UL resources availability determination component1128may be configured to receive the control information in the PDCCH from the search space and search strategy determination component1122. The UL subframes for UL resources availability determination component1128may also be configured to receive, from the TDD structure determination component1126, information on UL physical resources that may be used to transmit uplink access, uplink control information, and/or other non-data information. The UL subframes for UL resources availability determination component1128may be configured to determine a location of at least one of PRACH, PUCCH, SRS, or SR resources. 
If the PRACH/PUCCH/SRS/SR resources are configured to occur on a UL subframe of the frame structure assigned for the PRACH/PUCCH/SRS/SR resources, then the PRACH/PUCCH/SRS/SR resources are available for transmission over the assigned UL subframe. Otherwise, if the PRACH/PUCCH/SRS/SR resources are configured to occur on a DL subframe or on a UL subframe not assigned for the PRACH/PUCCH/SRS/SR resources, then the PRACH/PUCCH/SRS/SR resources are not available for transmission. The UL subframes for UL resources availability determination component1128may be configured to transmit PRACH/PUCCH/SRS/SR resources that are available for transmission to the UL resources transmission component1130. The UL resources transmission component1130may be configured to communicate PRACH/PUCCH/SRS/SR using the scheduled UL resource over one of the plurality of UL subframes when the location of the scheduled UL resource for communicating PRACH/PUCCH/SRS/SR occurs on one of the plurality of UL subframes assigned for transmitting the scheduled UL resource, as determined by the UL subframes for UL resources availability determination component1128. The UL resources transmission component1130may be configured to communicate the PRACH/PUCCH/SRS/SR to the antenna1150for UL transmission to the base station. The CQI/PRS measurement component1132may be configured to determine a number of DL subframes over which to measure and to average a CQI based on the TDD frame structure received from the TDD structure determination component1126. The CQI/PRS measurement component1132may be configured to measure and to average the CQI based on the determined number of DL subframes. In one aspect, the CQI/PRS measurement component1132may be configured to determine a number of DL subframes over which the PRS is received from a base station based on the TDD frame structure for the apparatus1102to measure the PRS. The CQI/PRS measurement component1132may be configured to measure the PRS based on the number of DL subframes containing the PRS. The CQI/PRS measurement component1132may be configured to send the CQI or the PRS measurement to the UL resources transmission component1130. The UL resources transmission component1130may be configured to use at least one of the scheduled PRACH, PUCCH, SRS, SR resources over one of the UL subframes to communicate the CQI or the PRS measurement, such as using the PUCCH resources when they are available. The UL resources transmission component1130may be configured to communicate the CQI or the PRS measurement to the antenna1150for UL transmission to the base station. FIG.12is a diagram1200illustrating an example of a hardware implementation for an apparatus1202′ employing a processing system1214. The processing system1214may be implemented with a bus architecture, represented generally by the bus1208. The bus1208may include any number of interconnecting buses and bridges depending on the specific application of the processing system1214and the overall design constraints. The bus1208links together various circuits including one or more processors and/or hardware components, represented by the processor1204, the components1122,1124,1126,1128,1130,1132, and the computer-readable medium/memory1206. The bus1208may also link various other circuits such as timing sources, peripherals, voltage regulators, and power management circuits, which are well known in the art, and therefore, will not be described any further. The processing system1214may be coupled to a transceiver1210. 
The transceiver1210is coupled to one or more antennas1220. The transceiver1210provides a means for communicating with various other apparatus over a transmission medium. The transceiver1210receives a signal from the one or more antennas1220, extracts information from the received signal, and provides the extracted information to the processing system1214, specifically the blind decoding component1124. In addition, the transceiver1210receives information from the processing system1214, specifically the UL resources transmission component1130, and based on the received information, generates a signal to be applied to the one or more antennas1220. The processing system1214includes a processor1204coupled to a computer-readable medium/memory1206. The processor1204is responsible for general processing, including the execution of software stored on the computer-readable medium/memory1206. The software, when executed by the processor1204, causes the processing system1214to perform the various functions described supra for any particular apparatus. The computer-readable medium/memory1206may also be used for storing data that is manipulated by the processor1204when executing software. The processing system further includes at least one of the components1122,1124,1126,1128,1130, and1132. The components may be software components running in the processor1204, resident/stored in the computer readable medium/memory1206, one or more hardware components coupled to the processor1204, or some combination thereof. In one configuration, the apparatus1202′ may include means for receiving, from a base station, information about a time division duplex (TDD) frame structure of a plurality of frames and/or means for determining the TDD frame structure. The means to receive information about the TDD frame structure and/or means to determine the TDD frame structure may be implemented by the TDD structure determination component1126. The plurality of frames includes a plurality of subframes. A plurality of UL subframes of the plurality of frames assigned for transmitting a plurality of UL resources associated with a control signaling is a function of the TDD frame structure. The apparatus1202′ may include means for determining a control channel search space within the plurality of subframes based on the information about the TDD frame structure of the plurality of frames. The apparatus1202′ may include means for determining a search strategy based on the control channel search space. The means for determining the control channel search space and the search strategy may be implemented by the search space and search strategy determination component1122. The apparatus1202′ may include means for performing a blind decoding of the control channel search space with the search strategy to obtain control information. The means for the blind decoding may be implemented by the blind decoding component1124. In one configuration, the apparatus1202′ may include means for determining a location of a scheduled UL resource within the TDD frame structure for communicating a type of control signaling. The control signaling may include a random access channel, a uplink control channel, a SRS, or a SR. The apparatus1202′ may include means for determining if the location of the scheduled UL resource for communicating a type of the control signaling occurs on one of the plurality of UL subframes assigned for transmitting the scheduled UL resource. 
The means for determining the availability of the UL subframes for the scheduled UL resources for communicating the control signaling associated with the scheduled UL resources may be implemented by the UL subframes for UL resources availability determination component1128. The apparatus1202′ may include means for communicating a type of the control signaling using the scheduled UL resource over one of the plurality of UL subframes when the location of the scheduled UL resource for communicating the type of the control signaling occurs on one of the plurality of UL subframes assigned for transmitting the scheduled UL resource. The means for communicating the control signaling may be implemented by the UL resources transmission component1130. The apparatus1202′ may include means for determining a number of DL subframes of the plurality of frames over which to measure a CQI or a PRS based on the TDD frame structure and means for measuring the CQI or the PRS over the determined number of DL subframes. The means for the CQI/PRS measurement may be implemented by the CQI/PRS measurement component1132. Any of the aforementioned means may be one or more of the processing system1214of the apparatus1202′ configured to perform the functions recited by the aforementioned means. It is understood that the specific order or hierarchy of blocks in the processes/flowcharts disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes/flowcharts may be rearranged. Further, some blocks may be combined or omitted. The accompanying method claims present elements of the various blocks in a sample order, and are not meant to be limited to the specific order or hierarchy presented. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but is to be accorded the full scope consistent with the language claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects. Unless specifically stated otherwise, the term “some” refers to one or more. Combinations such as “at least one of A, B, or C,” “one or more of A, B, or C,” “at least one of A, B, and C,” “one or more of A, B, and C,” and “A, B, C, or any combination thereof” include any combination of A, B, and/or C, and may include multiples of A, multiples of B, or multiples of C. Specifically, combinations such as “at least one of A, B, or C,” “one or more of A, B, or C,” “at least one of A, B, and C,” “one or more of A, B, and C,” and “A, B, C, or any combination thereof” may be A only, B only, C only, A and B, A and C, B and C, or A and B and C, where any such combinations may contain one or more member or members of A, B, or C. 
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. The words “module,” “mechanism,” “element,” “device,” and the like may not be a substitute for the word “means.” As such, no claim element is to be construed as a means plus function unless the element is expressly recited using the phrase “means for.”
11863502
DETAILED DESCRIPTION OF THE DRAWINGS In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. It will be appreciated, however, by those having skill in the art, that the embodiments of the invention may be practiced without these specific details or with an equivalent arrangement. In other cases, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the embodiments of the invention. FIG.1shows an illustrative user interface for presenting dynamic conversational responses using two-tier machine learning models, in accordance with one or more embodiments. For example,FIG.1shows user interface100. The system (e.g., a mobile application) may generate and respond to user interactions in a user interface (e.g., user interface100) in order to engage in a conversational interaction with the user. The conversational interaction may include a back-and-forth exchange of ideas and information between the system and the user. The conversational interaction may proceed through one or more mediums (e.g., text, video, audio, etc.) In order to maintain the conversational interaction, the system may need to generate response (e.g., conversational response) dynamically and/or in substantially real-time. For example, the system may generate responses within the normal cadence of a conversation. In some embodiments, the system may continually determine a likely intent of the user in order to generate responses (e.g., in the form of prompts, notifications, and/or other communications) to the user. It should be noted that a response may include any step or action (or inaction) taken by the system, including computer processes, which may or may not be perceivable to a user. For example, in response to a user action, which in some embodiments may comprise a user logging onto an application that generates user interface100, inputting a query (e.g., query104) into user interface100, and/or a prior action (or lack thereof) by a user to a prior response generated by the system, the system may take one or more steps to generate dynamic conversational responses. These steps may include retrieving data about the user, retrieving data from other sources, monitoring user actions, and/or other steps in order to generate a feature input (e.g., as discussed below). FIG.2shows an illustrative system for generating dynamic conversational responses using two-tier machine learning models. For example, system200may represent the components used for generating dynamic conversational responses as shown inFIG.1. As shown inFIG.2, system200may include mobile device222and user terminal224. While shown as a smartphone and personal computer, respectively, inFIG.2, it should be noted that mobile device222and user terminal224may be any computing device, including, but not limited to, a laptop computer, a tablet computer, a hand-held computer, other computer equipment (e.g., a server), including “smart,” wireless, wearable, and/or mobile devices.FIG.2also includes cloud components210. Cloud components210may alternatively be any computing device as described above and may include any type of mobile terminal, fixed terminal, or other device. For example, cloud components210may be implemented as a cloud computing system and may feature one or more component devices. It should also be noted that system200is not limited to three devices. 
Users may, for instance, utilize one or more other devices to interact with one another, one or more servers, or other components of system200. It should be noted that, while one or more operations are described herein as being performed by particular components of system200, those operations may, in some embodiments, be performed by other components of system200. As an example, while one or more operations are described herein as being performed by components of mobile device222, those operations may, in some embodiments, be performed by components of cloud components210. In some embodiments, the various computers and systems described herein may include one or more computing devices that are programmed to perform the described functions. Additionally or alternatively, multiple users may interact with system200and/or one or more components of system200. For example, in one embodiment, a first user and a second user may interact with system200using two different components. With respect to the components of mobile device222, user terminal224, and cloud components210, each of these devices may receive content and data via input/output (hereinafter “I/O”) paths. Each of these devices may also include processors and/or control circuitry to send and receive commands, requests, and other suitable data using the I/O paths. The control circuitry may comprise any suitable processing, storage, and/or input/output circuitry. Each of these devices may also include a user input interface and/or user output interface (e.g., a display) for use in receiving and displaying data. For example, as shown inFIG.2, both mobile device222and user terminal224include a display upon which to display data (e.g., based on recommended contact strategies). Additionally, as mobile device222and user terminal224are shown as touchscreen smartphones, these displays also act as user input interfaces. It should be noted that in some embodiments, the devices may have neither user input interface nor displays and may instead receive and display content using another device (e.g., a dedicated display device such as a computer screen and/or a dedicated input device such as a remote control, mouse, voice input, etc.). Additionally, the devices in system200may run an application (or another suitable program). The application may cause the processors and/or control circuitry to perform operations related to generating dynamic conversational responses using two-tier machine learning models. Each of these devices may also include electronic storages. The electronic storages may include non-transitory storage media that electronically stores information. The electronic storage media of the electronic storages may include one or both of (i) system storage that is provided integrally (e.g., substantially non-removable) with servers or client devices or (ii) removable storage that is removably connectable to the servers or client devices via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.). The electronic storages may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. 
The electronic storages may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources). The electronic storages may store software algorithms, information determined by the processors, information obtained from servers, information obtained from client devices, or other information that enables the functionality as described herein. FIG.2also includes communication paths228,230, and232. Communication paths228,230, and232may include the Internet, a mobile phone network, a mobile voice or data network (e.g., a 4G or LTE network), a cable network, a public switched telephone network, or other types of communications networks or combinations of communications networks. Communication paths228,230, and232may separately or together include one or more communications paths, such as a satellite path, a fiber-optic path, a cable path, a path that supports Internet communications (e.g., IPTV), free-space connections (e.g., for broadcast or other wireless signals), or any other suitable wired or wireless communications path or combination of such paths. The computing devices may include additional communication paths linking a plurality of hardware, software, and/or firmware components operating together. For example, the computing devices may be implemented by a cloud of computing platforms operating together as the computing devices. Cloud components210may be a database configured to store user data for a user. For example, the database may include user data that the system has collected about the user through prior transactions. Alternatively, or additionally, the system may act as a clearing house for multiple sources of information about the user. Cloud components210may also include control circuitry configured to perform the various operations needed to generate recommendations. For example, the cloud components210may include cloud-based storage circuitry configured to store a first machine learning model and a second machine learning model. Cloud components210may also include cloud-based control circuitry configured to determine an intent of the user based on a two-tier machine learning model. Cloud components210may also include cloud-based input/output circuitry configured to generate the dynamic conversational response during the conversational interaction. Cloud components210includes machine learning model202. Machine learning model202may take inputs204and provide outputs206. The inputs may include multiple datasets such as a training dataset and a test dataset. Each of the plurality of datasets (e.g., inputs204) may include data subsets related to user data, contact strategies, and results. In some embodiments, outputs206may be fed back to machine learning model202as input to train machine learning model202(e.g., alone or in conjunction with user indications of the accuracy of outputs206, labels associated with the inputs, or with other reference feedback information). In another embodiment, machine learning model202may update its configurations (e.g., weights, biases, or other parameters) based on the assessment of its prediction (e.g., outputs206) and reference feedback information (e.g., user indication of accuracy, reference labels, or other information). In another embodiment, where machine learning model202is a neural network, connection weights may be adjusted to reconcile differences between the neural network's prediction and the reference feedback. 
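As a purely illustrative numerical sketch of this feedback-driven weight adjustment, the snippet below implements a single neural unit that sums its weighted inputs, applies an activation with a threshold that the signal must surpass before it propagates, and adjusts its connection weights in proportion to the error between its prediction and the reference feedback. The names, activation, and learning rule are assumptions and do not reproduce machine learning model202.

```python
# Illustrative only: a single neural unit with summation, a propagation
# threshold, and an error-proportional weight update. Not the patent's model.
import numpy as np

def forward(x, w, b, threshold=0.5):
    z = np.dot(w, x) + b                  # summation over all weighted inputs
    a = 1.0 / (1.0 + np.exp(-z))          # smooth activation in (0, 1)
    return a, bool(a > threshold)         # signal propagates only above the threshold

def feedback_update(x, w, b, reference, lr=0.1):
    a, _ = forward(x, w, b)
    error = a - reference                 # difference between prediction and feedback
    w = w - lr * error * x                # weight change reflects the error magnitude
    b = b - lr * error
    return w, b

# Example: repeated feedback nudges the unit toward firing for this input.
w, b = np.zeros(3), 0.0
for _ in range(100):
    w, b = feedback_update(np.array([1.0, 0.5, -0.2]), w, b, reference=1.0)
```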
In a further use case, one or more neurons (or nodes) of the neural network may require that their respective errors are sent backward through the neural network to facilitate the update process (e.g., backpropagation of error). Updates to the connection weights may, for example, be reflective of the magnitude of error propagated backward after a forward pass has been completed. In this way, for example, the machine learning model202may be trained to generate better predictions. In some embodiments, machine learning model202may include an artificial neural network (e.g., as described inFIG.3below). In such embodiments, machine learning model202may include an input layer and one or more hidden layers. Each neural unit of machine learning model202may be connected with many other neural units of machine learning model202. Such connections can be enforcing or inhibitory in their effect on the activation state of connected neural units. In some embodiments, each individual neural unit may have a summation function which combines the values of all of its inputs together. In some embodiments, each connection (or the neural unit itself) may have a threshold function that the signal must surpass before it propagates to other neural units. Machine learning model202may be self-learning and trained, rather than explicitly programmed, and can perform significantly better in certain areas of problem solving, as compared to traditional computer programs. During training, an output layer of machine learning model202may correspond to a classification of machine learning model202, and an input known to correspond to that classification may be input into an input layer of machine learning model202. During testing, an input without a known classification may be input into the input layer, and a determined classification may be output. In some embodiments, machine learning model202may include multiple layers (e.g., where a signal path traverses from front layers to back layers). In some embodiments, back propagation techniques may be utilized by machine learning model202where forward stimulation is used to reset weights on the “front” neural units. In some embodiments, stimulation and inhibition for machine learning model202may be more free-flowing, with connections interacting in a more chaotic and complex fashion. During testing, an output layer of machine learning model202may indicate whether or not a given input corresponds to a classification of machine learning model202(e.g., whether a first length of time corresponds to lengths of programming time for previously completed stories by contributors without a required skill). FIG.3is an illustrative model architecture of a two-tier machine learning model, in accordance with one or more embodiments. One tier of the multi-tiered machine learning model may include an artificial neural network (e.g., model330) and another tier may include a factorization machine model (e.g., model320). In some embodiments, a first machine learning model (e.g., model320) is a supervised machine learning model and a second machine learning model (e.g., model330) is an unsupervised machine learning model. It should be noted that alternatively, the first machine learning model (e.g., model320) may be either a supervised or unsupervised machine learning model and/or the second machine learning model (e.g., model330) may be a supervised or unsupervised machine learning model. In some embodiments, model300may predict a goal or intent of a user. 
This goal or intent may be selected from a plurality of goals and/or intents stored by the system. Model300may first determine an intent cluster (e.g., a group or category of intents) and then select a specific intent from the intent cluster. In some embodiments, the system may determine the cluster of intents based on similar feature inputs. For example, the system may cluster goals/intents based on similar characteristics of the users. For example, the system may determine that users who ask different questions about payment have similar account information and digital activities. The system may further determine that these users tend to be different from users who have a one-off type of request, such as lost card reports or travel notifications. A multi-tiered approach may be used to capture this behavior. The first layer of the model (e.g., model320) identifies which group of goals is most likely; then, in the subsequent layer, the model (e.g., model330) identifies which specific goals are most likely. The clusters of goals used in the first layer (e.g., model320) are derived based on feature data and the known goal/intent list, which can change as available data changes or expands. In some embodiments, a specific intent may comprise its own intent cluster and/or not every potential specific intent needs to belong to an intent cluster. For example, if the first-layer model (e.g., model320) determines that none of the existing clusters are likely, a default classification model may be used to make a prediction at the goal level to make sure that goals not belonging to any cluster can still be predicted. In some embodiments, the model (e.g., model300) may automatically perform actions based on output340. In some embodiments, the model (e.g., model300) may not perform any actions on a user's account; rather, the output of the model (e.g., model300) may be used only to decide which dynamic conversational responses to display to a user. Model320may be structured as a factorization machine model. Model320may be a non-linear model and/or supervised learning model that can perform both classification and regression. Model320may perform these tasks by measuring interactions between variables within large datasets. In some embodiments, model320may be used to determine intent clusters for a feature input (e.g., feature input310). For example, model320may be a general-purpose supervised learning algorithm that the system uses for both classification and regression tasks. It may be an extension of a linear model that is designed to capture interactions between features within high-dimensional sparse datasets economically. For example, factorization machine models are extensions of linear models that model the interactions of variables. They map the variable interactions to a lower-dimensional space, so the number of parameters grows only linearly with the number of features. Beneficially, model320may estimate parameters under very sparse data and therefore scale to fit large datasets. This is particularly useful for the user account and user action data, as this data may be highly correlated and sparse. Moreover, model320may not need to retain the training data at prediction time (unlike, for example, kernel methods that store support vectors), resulting in more compact models. In some embodiments, the features of the training data (e.g., used for model330) can be derived from model320. Therefore, model320may serve a dual purpose. Additionally, model320(as a factorization machine) may work with any real-valued feature vector, whereas other factorization models may require special input data. 
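For readers unfamiliar with factorization machines, the scoring rule described above can be summarized concretely. The following is a minimal sketch rather than an implementation from the specification: the parameter names, the factor dimension k, and the idea of keeping one set of factorization machine parameters per intent cluster are illustrative assumptions.

```python
import numpy as np

def fm_score(x, w0, w, V):
    """Minimal factorization machine scoring rule (illustrative sketch).

    x  : (n,) real-valued, possibly sparse feature vector
    w0 : global bias
    w  : (n,) linear weights
    V  : (n, k) factor matrix; row i embeds feature i in k dimensions
    """
    linear = w0 + w @ x
    # Pairwise interactions computed in O(n * k) using the standard identity:
    # sum_{i<j} <V_i, V_j> x_i x_j = 0.5 * sum_f ((sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2)
    s = V.T @ x                      # (k,)
    s_sq = (V ** 2).T @ (x ** 2)     # (k,)
    interactions = 0.5 * float(np.sum(s ** 2 - s_sq))
    return linear + interactions

# Example: score one sparse feature vector against each intent cluster
# (one parameter set per cluster is an assumption made only for illustration).
rng = np.random.default_rng(0)
n, k, clusters = 12, 4, 3
x = np.zeros(n)
x[[1, 5, 9]] = 1.0                   # sparse user/account features
params = [(rng.normal(), rng.normal(size=n) * 0.1, rng.normal(size=(n, k)) * 0.1)
          for _ in range(clusters)]
scores = [fm_score(x, *p) for p in params]
print("most likely intent cluster:", int(np.argmax(scores)))
```

The pairwise-interaction identity in the comment is what allows the parameter count and the scoring cost to stay linear in the number of features, which is the property the paragraph above attributes to model320.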
In some embodiments, the feature input may include a vector that describes various information about a user, a user action (which may include user inactions), and/or a current or previous interaction with the user. The system may further select the information for inclusion in the feature input based on a predictive value. The information may be collected actively or passively by the system and compiled into a user profile. In some embodiments, the information (e.g., a user action) may include conversation details such as information about a current session, including a channel or platform (e.g., desktop web, iOS, or mobile), a launch page (e.g., the webpage that the application was launched from), a time of launch, and activities in a current or previous session before launching the application. The system may store this information, and all of the data about a conversational interaction may be available in real-time via HTTP messages and/or through data streaming from one or more sources (e.g., via an API). In some embodiments, the information (e.g., a user action) may include user account information such as types of accounts the user has, other accounts on file such as bank accounts for payment, and information associated with accounts such as credit limit, current balance, due date, recent payments, and recent transactions. The system may obtain this data in real-time for model prediction through enterprise APIs. In some embodiments, the information (e.g., a user action) may include insights about users, provided to the application (e.g., via an API) from one or more sources, such as qualitative or quantitative representations (e.g., a percent) of a given activity (e.g., online spending) in a given time period (e.g., six months), upcoming actions (e.g., travel departure, pay day, leave and/or family event) for a user, information about third parties (e.g., merchants (ranked by the number of transactions) over the last year for the user), etc. Model320may include embedding layers324at which each feature of the vector of feature input310is converted into a dense vector representation. These dense vector representations for each feature are then pooled at layer322to convert the set of embedding vectors into a single vector. The created vector is then used as an input for model330. The output from the first machine learning model may then be input into a second machine learning model (or second tier). For example, the output may comprise the feature input, a determination of an intent cluster, and/or a specific model (or algorithm) for use in the second tier. Model330may be structured as an artificial neural network. Model330may include one or more hidden layers. Model330may be based on a large collection of neural units (or artificial neurons). Model330loosely mimics the manner in which a biological brain works (e.g., via large clusters of biological neurons connected by axons). Each neural unit of model330may be connected with many other neural units of model330. Such connections can be enforcing or inhibitory in their effect on the activation state of connected neural units. In some embodiments, each individual neural unit may have a summation function which combines the values of all of its inputs together. In some embodiments, each connection (or the neural unit itself) may have a threshold function that the signal must surpass before it propagates to other neural units. 
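The embedding, pooling, and hidden-layer structure described above can be pictured with a short sketch. This is an assumption-laden stand-in rather than the patented model: it folds embedding layers324, pooling layer322, and a small feed-forward network standing in for model330into one module, and the layer sizes, vocabulary sizes, feature choices, and intent count are invented for illustration.

```python
import torch
import torch.nn as nn

class TwoTierSecondStage(nn.Module):
    """Illustrative stand-in: embeds categorical features, pools them into one
    vector, and classifies a specific intent with a small feed-forward network."""

    def __init__(self, vocab_sizes, embed_dim=16, hidden=32, num_intents=5):
        super().__init__()
        # One embedding table per categorical feature (embedding layers 324).
        self.embeddings = nn.ModuleList(
            [nn.Embedding(v, embed_dim) for v in vocab_sizes])
        self.classifier = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_intents))

    def forward(self, feature_ids):
        # feature_ids: (batch, num_features) integer-encoded feature input.
        vectors = [emb(feature_ids[:, i]) for i, emb in enumerate(self.embeddings)]
        pooled = torch.stack(vectors, dim=1).mean(dim=1)   # pooling layer 322
        return self.classifier(pooled)                      # logits over specific intents

# Example: three categorical features (e.g., channel, account type, launch page).
model = TwoTierSecondStage(vocab_sizes=[4, 6, 10])
logits = model(torch.tensor([[1, 3, 7]]))
print("predicted specific intent index:", int(logits.argmax(dim=1)))
```

Mean pooling is chosen here only because it is the simplest way to collapse a set of embedding vectors into a single vector; the specification does not prescribe a particular pooling operation.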
Model330may be self-learning and trained, rather than explicitly programmed, and can perform significantly better in certain areas of problem solving, as compared to traditional computer programs. During training, output340may correspond to a classification of model330(e.g., a specific intent) and an input known to correspond to that classification may be input into model330from model320. In some embodiments, model330may include multiple layers (e.g., where a signal path traverses from front layers to back layers). In some embodiments, back propagation techniques may be utilized by model330where forward stimulation is used to reset weights on the “front” neural units. In some embodiments, stimulation and inhibition for model330may be more free-flowing, with connections interacting in a more chaotic and complex fashion. During testing, output340may indicate whether or not a given input corresponds to a classification of model330(e.g., whether or not a given output of model320corresponds to a specific intent). FIG.4shows a flowchart of the steps involved in generating dynamic conversational responses using two-tier machine learning models, in accordance with one or more embodiments. For example, process400may represent the steps taken by one or more devices as shown inFIGS.1-2when generating dynamic conversational responses using two-tier machine learning models (e.g., as shown inFIG.3). At step402, process400(e.g., using one or more components in system200(FIG.2)) receives a user action. For example, the system may receive one or more user inputs to a user interface (e.g., user interface100(FIG.1)). The system may then determine a likely intent of the user in order to generate one or more dynamic conversational responses based on that intent. The user action may take various forms, including speech commands, textual inputs, responses to system queries, and/or other user actions (e.g., logging into a mobile application of the system). In each case, the system may aggregate information about the user action, information about the user, and/or other circumstances related to the user action (e.g., time of day, previous user actions, current account settings, etc.) in order to determine a likely intent of the user. At step404, process400(e.g., using one or more components in system200(FIG.2)) determines an intent of a user based on a two-tier machine learning model. For example, the system may first use a first tier of the model (e.g., model320(FIG.3)) to determine an intent cluster for the user's intent. The system may then use a second tier of the model (e.g., model330(FIG.3)) to determine a specific intent of the user within that cluster. For example, the first machine learning model (or first tier) may be selected based on its ability to generate results with sparse amounts of training data and/or in a supervised manner. For example, the first tier of the machine learning model may comprise a factorization machine model. Using the sparse amount of data, the first machine learning model can be used to determine an intent cluster for the user. For example, the first machine learning model may group the feature input into one of a plurality of categories of specific intents. The second machine learning model may then determine a specific intent based on the output from the first machine learning model. Given the two-tiered structure, the second machine learning model may be individually trained and/or trained on training data specific to the second machine learning model. 
Additionally, the second machine learning model can use an unsupervised learning model (e.g., an artificial neural network). For example, as the initial determination of the intent cluster has been made, the second machine learning model can be trained to optimize the precision of the selection of the specific intent. At step406, process400(e.g., using one or more components in system200(FIG.2)) generates a dynamic conversational response based on the intent of the user. For example, by using the two-tier machine learning model, the system may ensure that at least a conversational response is generated based on an intent in the correct cluster. The system may also increase the likelihood that it determines a correct specific intent of the user. For example, as the initial determination of the intent cluster has been made, the second machine learning model can be trained to optimize the precision of the selection of the specific intent. That is, the output of the second machine learning model, and the response generated based on that output, will be selected only from responses associated with the intent cluster. For example, the system may generate a dynamic conversational response (e.g., response102(FIG.1)) and present the response in a user interface (e.g., user interface100(FIG.1)). The response may appear with one or more likely responses (e.g., as shown inFIG.1). In some embodiments, the system may receive a user action selecting (or not selecting) a response (e.g., response102(FIG.1)) from a user interface. It is contemplated that the steps or descriptions ofFIG.4may be used with any other embodiment of this disclosure. In addition, the steps and descriptions described in relation toFIG.4may be done in alternative orders or in parallel to further the purposes of this disclosure. For example, each of these steps may be performed in any order, in parallel, or simultaneously to reduce lag or increase the speed of the system or method. Furthermore, it should be noted that any of the devices or equipment discussed in relation toFIGS.1-2could be used to perform one or more of the steps inFIG.4. FIG.5shows a flowchart of the steps involved in generating dynamic conversational responses using two-tier machine learning models, in accordance with one or more embodiments. For example, process500may represent the steps taken by one or more devices as shown inFIGS.1-3when generating dynamic conversational responses. At step502, process500(e.g., using one or more components in system200(FIG.2)) receives a user action. For example, the system may receive a first user action during a conversational interaction with a user interface as shown inFIG.1. The conversational interaction may comprise a user inquiry regarding an account of the user and/or may include one or more user actions. At step504, process500(e.g., using one or more components in system200(FIG.2)) determines a feature input based on the user action. For example, the system may determine, using control circuitry, a first feature input based on the first user action in response to receiving the first user action. The system may generate the feature input based on one or more criteria. For example, the system may generate the feature input based on a conversational detail or information from a user account of the user, a time at which the user interface was launched, and/or a webpage from which the user interface was launched. 
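As a hedged illustration of step504, the feature input can be assembled from the signals enumerated above (conversational details, account information, launch time, and launch page). The field names, encodings, and example values below are assumptions made only for illustration and are not defined by the specification.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UserAction:
    """Illustrative container for the signals described for step 504."""
    text: str                 # what the user typed or said, if anything
    channel: str              # e.g., "desktop web", "iOS", "mobile"
    launch_page: str          # webpage the application was launched from
    launched_at: datetime     # time at which the user interface was launched
    account_type: str         # e.g., "credit", "checking"
    recent_payment: bool      # whether a payment posted recently

CHANNELS = ["desktop web", "iOS", "mobile"]
ACCOUNT_TYPES = ["credit", "checking", "savings"]

def build_feature_input(action: UserAction) -> list[float]:
    """Flatten a user action into the numeric feature input for the first model."""
    features = []
    features += [1.0 if action.channel == c else 0.0 for c in CHANNELS]
    features += [1.0 if action.account_type == a else 0.0 for a in ACCOUNT_TYPES]
    features.append(action.launched_at.hour / 23.0)               # time of launch
    features.append(1.0 if "payment" in action.launch_page else 0.0)
    features.append(1.0 if action.recent_payment else 0.0)
    return features

action = UserAction("When is my bill due?", "iOS", "/credit-cards/payment",
                    datetime(2021, 3, 1, 20, 15), "credit", recent_payment=False)
print(build_feature_input(action))
```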
At step506, process500(e.g., using one or more components in system200(FIG.2)) inputs the feature input into a first machine learning model. For example, the system may input, using the control circuitry, the first feature input into a first machine learning model, wherein the first machine learning model is trained to select an intent cluster from a plurality of intent clusters based on the first feature input and the first user action, wherein each intent cluster of the plurality of intent clusters corresponds to a respective intent of a user following the first user action. In some embodiments, the system may receive a first labeled feature input, wherein the first labeled feature input is labeled with a known intent cluster for the first labeled feature input. The system may then train the first machine learning model to classify the first labeled feature input with the known intent cluster. In some embodiments, the system may cluster available specific intents into one or more plurality of intent clusters. For example, the system may group and/or categorize specific intents into intent clusters based on similarities between the specific intents and/or similarities between the feature inputs. For example, two user actions that may appear similar may first be stored into the same intent cluster and then further classified into specific intents. This ensures that the system determines intents with an increased accuracy. At step508, process500(e.g., using one or more components in system200(FIG.2)) receives a first output from the first machine learning model. For example, the system may receive, using the control circuitry, a first output from the first machine learning model. In some embodiments, the first machine learning model may be a supervised machine learning model and/or a factorization machine model. At step510, process500(e.g., using one or more components in system200(FIG.2)) inputs the first output into a second machine learning model. For example, the system may input, using the control circuitry, the first output into a second machine learning model, wherein the second machine learning model is trained to select a specific intent from a plurality of specific intents of the selected intent cluster based on the first output, and wherein each specific intent of the plurality of specific intents corresponds to a respective specific intent of the user following the first user action. In some embodiments, the second machine learning model may be an unsupervised machine learning model and/or an artificial neural network model. In some embodiments, the system may select the second machine learning model, from a plurality of machine learning models, based on the intent cluster selected from the plurality of intent clusters, wherein each intent cluster of the plurality of intent clusters corresponds to a respective machine learning model from the plurality of machine learning models. For example, the system may develop independent models, using different algorithms and/or trained on different data, in order to increase the precision at which a specific intent is determined. For example, the system may receive a second user action during the conversational interaction with the user interface. The system may determine a second feature input for the first machine learning model based on the second user action in response to receiving the second user action. The system may input the second feature input into the first machine learning model. 
The system may receive a different output from the first machine learning model, wherein the different output corresponds to a different intent cluster from the plurality of intent clusters. The system may input the different output into the second machine learning model. At step512, process500(e.g., using one or more components in system200(FIG.2)) receives a second output from the second machine learning model. For example, the system may receive, using the control circuitry, a second output from the second machine learning model. In some embodiments, the system may receive a first labeled output from the first machine learning model, wherein the first labeled output is labeled with a known specific intent. The system may then train the second machine learning model to classify the first labeled output with the known specific intent. At step514, process500(e.g., using one or more components in system200(FIG.2)) selects a dynamic conversational response based on the second output. For example, the system may select, using the control circuitry, a dynamic conversational response from a plurality of dynamic conversational responses based on the second output. For example, the system may have one or more potential responses and select one or more of these responses based on the predicted specific intent of the user. At step516, process500(e.g., using one or more components in system200(FIG.2)) generates the dynamic conversational response. For example, the system may generate, at the user interface, the dynamic conversational response during the conversational interaction (e.g., as shown inFIG.1). It is contemplated that the steps or descriptions ofFIG.5may be used with any other embodiment of this disclosure. In addition, the steps and descriptions described in relation toFIG.5may be done in alternative orders or in parallel to further the purposes of this disclosure. For example, each of these steps may be performed in any order, in parallel, or simultaneously to reduce lag or increase the speed of the system or method. Furthermore, it should be noted that any of the devices or equipment discussed in relation toFIGS.1-2could be used to perform one or more of the steps inFIG.5. The above-described embodiments of the present disclosure are presented for purposes of illustration and not of limitation, and the present disclosure is limited only by the claims which follow. Furthermore, it should be noted that the features and limitations described in any one embodiment may be applied to any other embodiment herein, and flowcharts or examples relating to one embodiment may be combined with any other embodiment in a suitable manner, done in different orders, or done in parallel. In addition, the systems and methods described herein may be performed in real time. It should also be noted that the systems and/or methods described above may be applied to, or used in accordance with, other systems and/or methods. The present techniques will be better understood with reference to the following enumerated embodiments: 1. 
A method for generating dynamic conversational responses using two-tier machine learning models, the method comprising: receiving a first user action during a conversational interaction with a user interface; in response to receiving the first user action, determining a first feature input based on the first user action; inputting the first feature input into a first machine learning model, wherein the first machine learning model is trained to select an intent cluster from a plurality of intent clusters based on the first feature input and the first user action, wherein each intent cluster of the plurality of intent clusters corresponds to a respective intent of a user following the first user action; receiving a first output from the first machine learning model; inputting the first output into a second machine learning model, wherein the second machine learning model is trained to select a specific intent from a plurality of specific intents of the selected intent cluster based on the first output, and wherein each specific intent of the plurality of specific intents corresponds to a respective specific intent of the user following the first user action; receiving a second output from the second machine learning model; selecting a dynamic conversational response from a plurality of dynamic conversational responses based on the second output; and generating, at the user interface, the dynamic conversational response during the conversational interaction. 2. The method of embodiment 1, further comprising selecting the second machine learning model, from a plurality of machine learning models, based on the intent cluster selected from the plurality of intent clusters, wherein each intent cluster of the plurality of intent clusters corresponds to a respective machine learning model from the plurality of machine learning models. 3. The method of any one of embodiments 1-2, further comprising: receiving a second user action during the conversational interaction with the user interface; in response to receiving the second user action, determining a second feature input for the first machine learning model based on the second user action; inputting the second feature input into the first machine learning model; receiving a different output from the first machine learning model, wherein the different output corresponds to a different intent cluster from the plurality of intent clusters; and inputting the different output into the second machine learning model. 4. The method of any one of embodiments 1-3, wherein the first machine learning model is a supervised machine learning model, and wherein the second machine learning model is an unsupervised machine learning model. 5. The method of any one of embodiments 1-4, wherein the first machine learning model is a factorization machine model, and wherein the second machine learning model is an artificial neural network model. 6. The method of any one of embodiments 1-5, further comprising clustering available specific intents into the plurality of intent clusters. 7. The method of any one of embodiments 1-6, further comprising: receiving a first labeled feature input, wherein the first labeled feature input is labeled with a known intent cluster for the first labeled feature input; and training the first machine learning model to classify the first labeled feature input with the known intent cluster. 8. The method of any one of embodiments 1-7, wherein the first feature input is a conversational detail or information from a user account of the user. 9. 
The method of any one of embodiments 1-8, wherein the first feature input indicates a time at which the user interface was launched. 10. The method of any one of embodiments 1-9, wherein the first feature input indicates a webpage from which the user interface was launched. 11. A tangible, non-transitory, machine-readable medium storing instructions that, when executed by a data processing apparatus, cause the data processing apparatus to perform operations comprising those of any of embodiments 1-10. 12. A system comprising: one or more processors; and memory storing instructions that, when executed by the processors, cause the processors to effectuate operations comprising those of any of embodiments 1-10. 13. A system comprising means for performing any of embodiments 1-10.
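To tie the preceding enumerated embodiments together, the following minimal sketch traces the two-tier flow of embodiments 1-10: a first model selects an intent cluster, a cluster-specific second model selects the specific intent, and a dynamic conversational response is chosen for that intent. The placeholder models and the response table are hypothetical and stand in for the trained factorization machine and neural network described above.

```python
from typing import Callable, Dict, Sequence

# Hypothetical response table keyed by specific intent.
RESPONSES: Dict[str, str] = {
    "payment_due_date": "Your next payment is due on the 15th. Want a reminder?",
    "report_lost_card": "I can lock your card right away. Shall I do that?",
}

def generate_response(
    feature_input: Sequence[float],
    first_model: Callable[[Sequence[float]], str],
    second_models: Dict[str, Callable[[Sequence[float]], str]],
) -> str:
    """Two-tier selection: cluster first, then specific intent, then response."""
    cluster = first_model(feature_input)              # first output (step 508)
    second_model = second_models[cluster]             # per-cluster model (step 510)
    specific_intent = second_model(feature_input)     # second output (step 512)
    return RESPONSES[specific_intent]                 # steps 514-516

# Toy callables standing in for the trained first- and second-tier models.
first = lambda x: "payment" if x[0] > 0 else "one_off"
second = {
    "payment": lambda x: "payment_due_date",
    "one_off": lambda x: "report_lost_card",
}
print(generate_response([1.0, 0.2], first, second))
```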
DETAILED DESCRIPTION As used in the specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Ranges may be expressed herein as from “about” one particular value, and/or to “about” another particular value. When such a range is expressed, another configuration includes from the one particular value and/or to the other particular value. Similarly, when values are expressed as approximations, by use of the antecedent “about,” it will be understood that the particular value forms another configuration. It will be further understood that the endpoints of each of the ranges are significant both in relation to the other endpoint, and independently of the other endpoint. “Optional” or “optionally” means that the subsequently described event or circumstance may or may not occur, and that the description includes cases where said event or circumstance occurs and cases where it does not. Throughout the description and claims of this specification, the word “comprise” and variations of the word, such as “comprising” and “comprises,” means “including but not limited to,” and is not intended to exclude, for example, other components, integers or steps. “Exemplary” means “an example of” and is not intended to convey an indication of a preferred or ideal configuration. “Such as” is not used in a restrictive sense, but for explanatory purposes. It is understood that when combinations, subsets, interactions, groups, etc. of components are described that, while specific reference to each various individual and collective combinations and permutations of these may not be explicitly described, each is specifically contemplated and described herein. This applies to all parts of this application including, but not limited to, steps in described methods. Thus, if there are a variety of additional steps that may be performed, it is understood that each of these additional steps may be performed with any specific configuration or combination of configurations of the described methods. As will be appreciated by one skilled in the art, hardware, software, or a combination of software and hardware may be implemented. Furthermore, the methods and systems described herein may take the form of a computer program product on a computer-readable storage medium (e.g., non-transitory) having processor-executable instructions (e.g., computer software) embodied in the storage medium. Any suitable computer-readable storage medium may be utilized including hard disks, CD-ROMs, optical storage devices, magnetic storage devices, memristors, Non-Volatile Random Access Memory (NVRAM), flash memory, or a combination thereof. Throughout this application, reference is made to block diagrams and flowcharts. It will be understood that each block of the block diagrams and flowcharts, and combinations of blocks in the block diagrams and flowcharts, respectively, may be implemented by processor-executable instructions. These processor-executable instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the processor-executable instructions which execute on the computer or other programmable data processing apparatus create a device for implementing the functions specified in the flowchart block or blocks. 
These processor-executable instructions may also be stored in a computer-readable memory that may direct a computer or other programmable data processing apparatus to function in a particular manner, such that the processor-executable instructions stored in the computer-readable memory produce an article of manufacture including processor-executable instructions for implementing the function specified in the flowchart block or blocks. The processor-executable instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the processor-executable instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks. Accordingly, blocks of the block diagrams and flowcharts support combinations of devices for performing the specified functions, combinations of steps for performing the specified functions and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flowcharts, and combinations of blocks in the block diagrams and flowcharts, may be implemented by special purpose hardware-based computer systems that perform the specified functions or steps, or combinations of special purpose hardware and computer instructions. This detailed description may refer to a given entity performing some action. It should be understood that this language may in some cases mean that a system (e.g., a computer) owned and/or controlled by the given entity is actually performing the action. Described herein are methods, systems, and apparatuses for modifying/supplementing a message. A computing device of a communication system, such as a telecommunications system, may be interacted with by a user of a user device to create a message intended for a recipient device. For example, the user device may have attempted to establish a communication session (e.g., a phone call, a video call, etc.) with the recipient device, such as by initiating a voice and/or video call. The communication may fail (e.g., time out) when the recipient device does not indicate to the communication system that the communication is accepted (e.g., by a user answering the phone and/or video call). As a result, the communication system may terminate the request for the communication session after a period of time (e.g., a timeout period) expires. When the communication system terminates the request for the communication session, a Call Forwarded Not Available (“CFNA”) or a Call Forwarding Blocked (“CFB”) request may be sent by the communication system to the computing device. The computing device may receive the CFNA/CFB request and provide an interactive voice response (“IVR”) system that the user of the user device may interact with to create a message to be sent to the recipient device. The message may be an audio voicemail, a video voicemail, and/or the like. The message created by the user of the user device may be sent by the computing device to the recipient device without any supplementation (e.g., added auditory or imagery features). As an example, the message may be supplemented with one or more message options before being sent to the recipient device. 
The one or more message options may include a plurality of auditory or imagery features that may be used to modify/supplement the message, such as sounds, songs, effects, pictures, image filters, etc. The IVR system and/or the computing device may provide the one or more message options to the user of the user device before the message is sent to the recipient device. For example, the IVR system and/or the computing device may analyze the message in real-time (e.g., as the user is speaking and/or gesturing) and provide the one or more message options to the user device via the IVR system. The one or more options may be provided as a plurality of phrases/titles (e.g., “Play a song,” “Add effects,” “Add sounds,” etc.). The one or more message options may be associated with a number such that the user of the user device may select at least one of the one or more message options using a keypad of the user device (e.g., a number is pressed using the keypad). The computing device may select the one or more message options from a plurality of message options stored in a database. For example, the one or more message options ultimately provided to the user of the user device may be based on a context and/or content of the message. The context and/or content of the message may be words, phrases, voice tones, gestures (for video calls), etc., that may be determined by the computing device using natural language processing, machine learning, and/or artificial intelligence methods as described herein. The computing device may use the context and/or content of the message to determine which of the plurality of message options stored in the database are the most relevant to the message (e.g., the message options most likely to be of interest to the user). The computing device may provide a quantity of the most relevant message options to the user as one or more suggested message options (e.g., one or more suggested modifications). The user of the user device may select at least one of the one or more suggested message options to modify/supplement the message using the IVR system. The computing device may receive the user's selection(s) made from the one or more suggested message options to modify/supplement the message accordingly. For example, the message may be modified with a background sound, song, and/or image; a series of sounds and/or images at various points in the message; an effect/filter; one or more thereof, and/or the like. The computing device may send the modified message to the recipient device. FIG.1shows an example system100for modifying/supplementing a message. Those skilled in the art will appreciate that digital equipment and/or analog equipment may be employed. One skilled in the art will appreciate that provided herein is a functional description and that the respective functions may be performed by software, hardware, or a combination of software and hardware. The system100may include a user device102, a computing device104, a recipient device106, and a message device108. The user device102may communicate with the computing device104, the recipient device106, and/or the message device108via a network109. The network109may support communication between the user device102, the computing device104, the recipient device106, and/or the message device108via a short-range communications (e.g., BLUETOOTH®, near-field communication, infrared, Wi-Fi, etc.) and/or via a long-range communications (e.g., Internet, cellular, satellite, and the like). 
For example, the network109may utilize Internet Protocol Version 4 (IPv4) and/or Internet Protocol Version 6 (IPv6). The network109may be a telecommunications network, such as a mobile, landline, and/or Voice over Internet Protocol (VoIP) provider. The user device102may include a communication element110, an address element112, a service element114, communication software116, and an identifier118. The communication element110may be configured to communicate via any network protocol. For example, the communication element110may communicate via a wired network protocol (e.g., Ethernet, LAN, WAN, etc.) on a wired network (e.g., the network109). The communication element110may include a wireless transceiver configured to send and receive wireless communications via a wireless network (e.g., the network109). The wireless network may be a Wi-Fi network. The user device102may communicate with the computing device104, the recipient device106, and/or the message device108via the communication element110. The user device102may be a mobile device, such as a smartphone, or a telephone. The communication element110of the user device102may be configured to communicate via one or more of second generation (2G), third generation (3G), fourth generation (4G), fifth generation (5G), GPRS, EDGE, D2D, M2M, long term evolution (LTE), long term evolution advanced (LTE-A), code division multiple access (CDMA), wideband code division multiple access (WCDMA), universal mobile telecommunications system (UMTS), wireless broadband (WiBro), Voice Over IP (VoIP), and global system for mobile communication (GSM). The communication element110of the user device102may further be configured for communication over a local area network connection through network access points using technologies such as IEEE 802.11. The user device102may include an address element112and a service element114. The address element112may include or provide an internet protocol address, a network address, a media access control (MAC) address, an Internet address (e.g., an IPv4, an IPv6 address, etc.), or the like. The address element112may be used to establish a communication connection between the user device102, the computing device104, the recipient device106, the message device108, and/or other devices and/or networks. The address element112may be an identifier or locator of the user device102. The address element112may be persistent for a particular network (e.g., the network109). The service element114may include an identification of a service provider associated with the user device102and/or with the class of user device102. The class of the user device102may be related to a type of device, capability of device, type of service being provided, and/or a level of service (e.g., business class, service tier, service package, etc.). The service element114may include information relating to or provided by a service provider (e.g., Internet service provider, content service provider, communications service provider, etc.) that may provide or enable data flow such as communication services (e.g., a phone call, a video call, etc.) and/or content services to the user device102. The service element114may include information relating to a preferred service provider for one or more particular services relating to the user device102. The address element112may be used to identify or retrieve data from the service element114, or vice versa. One or more of the address element112and/or the service element114may be stored remotely from the user device102. 
Other information may be represented by the service element114. The user device102may be associated with a user identifier or device identifier118. The device identifier118may be any identifier, token, character, string, or the like, for differentiating one user or user device (e.g., the user device102) from another user or user device. For example, the device identifier118may be or relate to an Internet Protocol (IP) Address, a Media Access Control (MAC) address, an International Mobile Equipment Identity (IMEI) number, an International Mobile Subscriber Identity (IMSI) number, a phone number, a SIM card number, and/or the like. The device identifier118may identify a user or user device as belonging to a particular class of users or user devices. The device identifier118may include information relating to the user device102such as a manufacturer, a model or type of device, a service provider associated with the user device102, a state of the user device102, a locator, and/or a label or classifier. Other information may be represented by the device identifier118. The user device102may include communication software116. The communication software116may be software, firmware, hardware, and/or a combination of software, firmware, and hardware. The communication software116may allow the user device102to communicate with one or more devices. The communication software116may be configured to send and/or receive data, communication services (e.g., a phone call, a video call, etc.), and so forth. For example, the communication software116may be configured to allow the user device102to establish a communication connection and/or a communication session with the recipient device106via the network109. The recipient device106may be a mobile device, such as a smartphone, or a telephone. The recipient device106may be configured to communicate via one or more of second generation (2G), third generation (3G), fourth generation (4G), fifth generation (5G), GPRS, EDGE, D2D, M2M, long term evolution (LTE), long term evolution advanced (LTE-A), code division multiple access (CDMA), wideband code division multiple access (WCDMA), universal mobile telecommunications system (UMTS), wireless broadband (WiBro), Voice Over IP (VoIP), and global system for mobile communication (GSM). The recipient device106may further be configured for communication over a local area network connection through network access points using technologies such as IEEE 802.11. For example, the communication software116may be configured to establish a phone call and/or a video call with the recipient device106. As an example, the communication software116may be configured to communicate with the message device108to leave one or more messages for another device (e.g., the recipient device106) if the communication connection and/or the communication session is not established with the other device. The computing device104may include a database120, a service element122, an address element124, an identifier126, message data128, and message software130. The computing device104may manage the communication between the user device102, the recipient device106, the message device108, and/or a database120for sending and receiving data therebetween. 
The database120may store a plurality of files (e.g., web pages), user identifiers or records, data associated with a plurality of devices, data associated with a plurality of messages, data associated with a plurality of options to modify/supplement the messages, supplemental data, and/or other information. The user device102, the recipient device106, and/or the message device108may request and/or retrieve a file from the database120. The database120may store information relating to the user device102such as the address element112and/or the service element114. The computing device104may obtain the device identifier118from the user device102and retrieve information from the database120. The computing device104may assign the identifier118to the user device102. Any information may be stored in and retrieved from the database120. The database120may be disposed remotely from the computing device104and accessed via direct or indirect connection. The database120may be integrated with the computing device104or some other device or system. The computing device104may have a service element122. The service element122may include an identification of a service provider associated with the computing device104and/or with the class of computing device104. The class of the computing device104may be related to a type of device, capability of device, type of service being provided, and/or a level of service (e.g., business class, service tier, service package, etc.). The service element122may include information relating to or provided by a communication service provider (e.g., Internet service provider, communications service provider, etc.) that is providing or enabling data flow such as communication services to the computing device104. The service element122may include information relating to a preferred service provider for one or more particular services relating to the computing device104. Other information may be represented by the service element122. The address element124may include or provide an internet protocol address, a network address, a media access control (MAC) address, an Internet address, or the like. The address element124may be relied upon to establish a communication session between the computing device104and the user device102, the recipient device106, and/or the message device108, or other devices and/or networks. The address element124may be used as an identifier or locator of the computing device104. The address element124may be persistent for a particular network. The computing device104may have an identifier126. The identifier126may be or relate to an Internet Protocol (IP) Address, a Media Access Control (MAC) address, or the like. The identifier126may be a unique identifier for facilitating wired and/or wireless communications with the user device102, the recipient device106, and/or the message device108. The identifier126may be associated with a physical location of the computing device104. The computing device104may store message data128in the database120. The message data128may include any data associated with a message sent by a device (e.g., the user device102, the computing device104, the recipient device106, and/or the message device108). For example, the message may be a voicemail or a video voicemail, and the message data128may include audio and/or video data associated with the message. 
The message data128may include information associated with a device that sent the message (e.g., the user device102), as well as an intended recipient of the message (e.g., the recipient device106). The message data128may include contextual data associated with the message. For example, the computing device104may analyze language of the message to determine context for one or more words of the message. As further described herein, the computing device may utilize natural language processing to determine the contextual information associated with the message. The contextual information may include at least one of a location of the user, a time associated with the message, a date associated with the message, the intended recipient of the message, a subject of the message, or a context associated with the message. The computing device104may store the contextual information as message data128. The message data128may include a plurality of options to modify/supplement the message. The plurality of options may include a plurality of modifications. For example, the plurality of options may include modification of audio associated with the message, addition of a song to the message, or execution of an action based on the message. The message data128may include historical data that indicates a plurality of previously received messages from a plurality of user devices. For example, the message data128may include information associated with a specific message, whether a user associated with the message supplemented the message with additional information, and the option the user selected to modify/supplement the message with. The computing device104may include message software130. The message software130may determine one or more suggested options (e.g., one or more suggested modifications) of a plurality of options for modifying/supplementing the message. For example, the message software130may rank the plurality of options to indicate the option(s) (e.g., modification(s)) most likely to be selected by a user of the user device102. The one or more suggested options may be determined based on data (e.g., the message data128) associated with a plurality of previously selected options. For example, the message software130may utilize historical data that indicates a plurality of previously received messages from a plurality of user devices, as well as the options that the plurality of user devices selected. The message software130may determine/generate a supplemental message that incorporates a message option selected by the user of the user device102. For example, the message software130may receive data from the user device102that indicates a message option that the user of the user device102would like to modify (e.g., supplement) the message with. The message software130may determine/generate the supplemental message (e.g., modify the message) based on the message option that the user of the user device102selects. For example, the message software130may modify the message sent by the user device102to include supplemental information associated with the message option that the user of the user device102selects, as described further herein. The computing device104may be interacted with by the user of a user device102when creating a message intended for the recipient device106. For example, the user device102may have attempted to establish a communication session (e.g., a phone call, a video call, etc.) with the recipient device106, such as by initiating a voice and/or video call. 
The communication may fail (e.g., time out) when the recipient device106does not indicate to the network109that the communication is accepted (e.g., by a user answering the phone and/or video call). As a result, the network109may terminate the request for the communication session after a period of time (e.g., a timeout period) expires. When the network109terminates the request for the communication session, a Call Forwarded Not Available (“CFNA”) or a Call Forwarding Blocked (“CFB”) request may be sent by the network109to the message device108. The message device108may receive the CFNA/CFB request and provide an interactive voice response (“IVR”) system that the user of the user device102may interact with to create a message to be sent to the recipient device106. The message may be an audio voicemail, a video voicemail, and/or the like. The message created by the user of the user device102may be sent via the message device108and/or the computing device104to the recipient device106without any supplementation (e.g., added auditory or imagery features). As an example, the message software130may analyze the message to determine contextual information associated with the message and one or more message options for modifying/supplementing the message. The message may be supplemented with the one or more message options before being sent to the recipient device106. The one or more message options may include a plurality of auditory or imagery features that may be used to modify/supplement the message, such as sounds, songs, effects, pictures, image filters, etc. The message device108and/or the computing device104may provide the one or more message options to the user of the user device102before the message is sent to the recipient device106. For example, the message device108and/or the computing device104may analyze the message in real-time (e.g., as the user is speaking and/or gesturing) and provide the one or more message options to the user device102via the message device108. The one or more message options may be provided as a plurality of phrases/titles (e.g., “Play a song,” “Add effects,” “Add sounds,” etc.). The one or more message options may be associated with a number such that the user of the user device102may select at least one of the one or more message options using a keypad of the user device (e.g., a number is pressed using the keypad). The computing device104may select the one or more message options from a plurality of message options stored in the database120. For example, the one or more message options ultimately provided to the user of the user device102may be based on a context and/or content of the message. The context and/or content of the message may be words, phrases, voice tones, gestures (for video calls), etc., that may be determined by the message device108and/or the computing device104using natural language processing, machine learning, and/or artificial intelligence methods as described herein. For example, the message from the user device102may include the phrase “Happy Birthday!” The message software130may determine one or more options to modify/supplement the message based on the phrase “Happy Birthday!” occurring within the message. For example, the message software130may be used by the computing device104to determine one or more sounds that may be appropriate to modify/supplement the message with, such as the “Happy Birthday” song, a party sound, a sound of confetti popping, and so forth. 
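Before continuing with how these options are delivered, a minimal sketch of the suggestion step just described may help: detected words or phrases in the transcribed message are matched against a stored option table, and the best-scoring options are offered as numbered IVR choices. The keyword table, scoring rule, and option names are illustrative assumptions rather than content from the specification.

```python
# Illustrative keyword table mapping stored message options to trigger phrases.
OPTION_KEYWORDS = {
    "Play the 'Happy Birthday' song": ["happy birthday", "birthday"],
    "Add a party sound": ["party", "celebrate", "congratulations"],
    "Add a confetti-popping sound": ["happy birthday", "congratulations"],
    "Add a rain sound effect": ["sorry", "miss you"],
}

def suggest_options(transcript: str, max_options: int = 3) -> list[str]:
    """Rank stored message options by how many of their keywords appear."""
    text = transcript.lower()
    scored = []
    for option, keywords in OPTION_KEYWORDS.items():
        hits = sum(1 for kw in keywords if kw in text)
        if hits:
            scored.append((hits, option))
    scored.sort(reverse=True)
    return [option for _, option in scored[:max_options]]

suggestions = suggest_options("Hey, happy birthday! Hope you celebrate big tonight.")
for number, option in enumerate(suggestions, start=1):
    # Numbered so the caller can pick an option with the keypad, as described above.
    print(f"Press {number}: {option}")
```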
The message software130may be used by the computing device104to send the one or more options (e.g., the aforementioned sounds) to the user device102to allow the user of the user device102to determine whether the user of the user device102would desire to modify/supplement the message. The message software130may receive a selection of one of the options from the user device102. For example, the user of the user device102may indicate that the user of the user device102desires the song “Happy Birthday” to be played with the message. The song may be played before, during, or after the message is played at the recipient device106. The message software130may modify the message to include supplemental data and/or information (e.g., the song “Happy Birthday”). The message software130may send the modified message to the recipient device106. After receiving the supplemented message, or otherwise accessing the supplemented message, the recipient device106may playback the supplemented message to a user of the recipient device106. The recipient device106may include a user device102. Accordingly, the recipient device106may include the same capabilities as the user device102. The recipient device106may include characteristics132. The characteristics132may indicate information associated with the recipient device106. For example, the characteristics132may include a type of the recipient device106, a manufacturer of the recipient device106, hardware capabilities of the recipient device106, an account associated with the recipient device106, and/or a user associated with the recipient device106. The supplemented message sent to the recipient device106, or otherwise accessed by the recipient device106, may be a rich communication. The rich communication may provide additional information as compared to a standard communication. For example, the rich communication may provide information associated with context of the message such as a location associated with the message (e.g., the location of the user device), a date and/or time associated with the message, and so forth. For example, the rich communication may include the message as-modified with a background sound, song, and/or image; a series of sounds and/or images at various points in the message; an effect/filter; one or more thereof, and/or the like. The recipient device106may receive the rich communication and may respond to the rich communication. The recipient device106may be associated with a user identifier or device identifier134. The device identifier134may be any identifier, token, character, string, or the like, for differentiating one user or computing device (e.g., the recipient device106) from another user or computing device. For example, the device identifier134may be or relate to an Internet Protocol (IP) Address, a Media Access Control (MAC) address, an International Mobile Equipment Identity (IMEI) number, an International Mobile Subscriber Identity (IMSI) number, a phone number, a SIM card number, and/or the like. The device identifier134may identify a user or computing device as belonging to a particular class of users or computing devices. The device identifier134may include information relating to the recipient device106such as a manufacturer, a model or type of device, a service provider associated with the recipient device106, a state of the recipient device106, a locator, and/or a label or classifier. Other information may be represented by the device identifier134. 
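One way to picture the supplemented message and the rich communication described above is as a small structured payload that bundles a reference to the recorded message, the selected supplement, and the contextual information (location, date/time, intended recipient). The sketch below assumes a hypothetical JSON wire format; the field names and example values are invented for illustration and are not defined by the specification.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime
import json

@dataclass
class SupplementedMessage:
    """Illustrative payload for the rich communication sent to the recipient device."""
    audio_ref: str                      # reference to the recorded voicemail audio
    supplement: str                     # selected option, e.g., a song or sound effect
    placement: str = "before"           # play the supplement before, during, or after
    context: dict = field(default_factory=dict)

def build_rich_communication(audio_ref, supplement, sender_location, recipient_id):
    msg = SupplementedMessage(
        audio_ref=audio_ref,
        supplement=supplement,
        context={
            "location": sender_location,                  # location of the user device
            "timestamp": datetime.now().isoformat(),      # date/time of the message
            "recipient": recipient_id,                    # intended recipient device
        },
    )
    return json.dumps(asdict(msg))

payload = build_rich_communication(
    "voicemail/msg-0001.wav", "song:happy_birthday", "Philadelphia, PA", "recipient-106")
print(payload)
```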
The message device108may be a voicemail device and/or a video voicemail device configured to record voicemails and/or video voicemails from the user devices102. The message device108may be a local component of the user device102, the recipient device106, and/or the computing device104, or the message device108may be a separate component/device independent of the user device102, the recipient device106, and the computing device104. The message device108may include message data136and message software138. The message data136may include all of the same information, data, and/or capabilities of the message data128. The message software138may establish a communication connection with the user device102. After the user device102establishes the communication connection with the message software138, the user device102may send a message to the message software138. For example, the user device102may attempt to establish a communication session and/or a communication connection with the recipient device106, but the user device102may have failed to establish the communications because the recipient device106did not respond. The user device102may provide the message software138with a message that the user of the user device102desires to deliver to the recipient device106. The message software138may send (e.g., provide) the message to the recipient device106. The message software138may send the message to the computing device104. As described herein, the computing device104may analyze the message created by/sent by the user device102to determine contextual information associated with the message. The computing device104may use the context and/or content of the message to determine which of a plurality of message options stored in the database120are the most relevant to the message (e.g., the message options most likely to be of interest to the user of the user device102). The computing device104may provide a quantity of the most relevant message options to the user of the user device102as one or more suggested message options (e.g., one or more suggested modifications). For example, the computing device104may send an indication and/or a notification to the message software138to determine if the user of the user device102desires to modify/supplement the message. The message software138may send the indication and/or notification received from the computing device104to the user device102. For example, the message software138may send the one or more suggested message options to modify/supplement the message to the user device102. The one or more suggested message options to modify/supplement the message may be based on historical data. For example, the one or more suggested message options may be a ranked list that is ordered based on the message options that are determined as being the most likely to be selected by the user of the user device102based on the content and/or context of the message. Selection by the computing device104using the message software130of the one or more suggested message options is described further herein with respect toFIG.3. The message software138may receive an indication of a selection of at least one of the one or more suggested message options to modify/supplement the message from the user of the user device102. For example, after receiving the one or more options from the message software138, the user of the user device102may indicate (e.g., via an input) that the user would like to modify/supplement the message with at least one of the one or more suggested message options.
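A minimal sketch of ranking candidate options by historical selections is shown below; the history table, context labels, and option titles are hypothetical and merely stand in for the trained model and historical data described above.

```python
# Sketch (hypothetical data): rank candidate options by how often users have
# historically selected them for messages with similar content/context.
from collections import Counter

# Historical selections keyed by a detected context label.
HISTORY = {
    "birthday": Counter({"Play the happy birthday song": 42, "Add a party sound": 17, "Add a confetti-popping sound": 5}),
    "new_year": Counter({"Add fireworks sounds": 30, "Add cheering": 12}),
}

def ranked_suggestions(context_label: str, limit: int = 3) -> list[str]:
    """Return up to `limit` options, most frequently selected first."""
    counts = HISTORY.get(context_label, Counter())
    return [option for option, _ in counts.most_common(limit)]

print(ranked_suggestions("birthday"))
# ['Play the happy birthday song', 'Add a party sound', 'Add a confetti-popping sound']
```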
The user of the user device102may select at least one of the one or more suggested message options to modify/supplement the message using the IVR system provided by the messaging device108. The computing device104may receive the user's selection(s) made from the one or more suggested message options to modify/supplement the message accordingly. The message software138may send the indication of the selection of the at least one of the one or more suggested message options to modify/supplement the message to the computing device104. The computing device104may perform the selected suggested message option(s) to modify/supplement the message, and the computing device104may send the supplemented message to the message software138. The message software138may provide the supplemented message to the user device102, or otherwise provide access thereto. The message software138may receive an indication as to whether or not the user of the user device102accepts the supplemented message from the user device102. If the user device102does not accept the supplemented message, the message software138may indicate that to the computing device104. For example, the message software138may send data, a message, a notification, and so forth to the computing device104to indicate the user device102did not accept the supplemented message. The message software138may receive one or more additional options for modifying/supplementing the message from the computing device104, and the message software138may provide the one or more additional options to the user of the user device102. If the user of the user device102accepts the supplemented message, the message software138may send an indication to the computing device104that the user of the user device102accepted the supplemented message. The message software138may receive the supplemented message from the computing device104. The message software138may send the supplemented message to the recipient device106, or otherwise provide access thereto. After receiving the supplemented message, or otherwise accessing the supplemented message, the recipient device106may playback the supplemented message to a user of the recipient device106. FIG.2shows an example system200for machine learning. The system200may include a plurality of user devices102that may provide data to the computing device104. For example, users of each of the user devices102may desire to modify/supplement a message, and each of the user devices102may be known devices so that a training data set202may be created based on the selections made by the users of the user devices102when supplementing a message. Each of the user devices102may have one or more characteristics and/or labels associated with the user devices102. Each of the user devices102may be associated with a message generated using the user devices102and stored at the database120. Each message may have been sent (e.g., by the computing device104) to a respective recipient device (e.g., the recipient device106). Additionally, information (e.g., data) of a user associated with each of the user devices102may be known (hereinafter, “user information”). For example, the user information may include demographic information, location information, and/or one or more selections of an option to modify/supplement a message. The user information may be determined by the computing device104using message data associated with each message generated using the user devices102.
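The training records described above might be represented, for illustration, as follows; the field names are assumptions rather than the patent's schema.

```python
# Sketch of assembling training records like those described for the training
# data set202: message data plus user information, labeled with the option the
# user actually selected. Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class TrainingRecord:
    transcript: str          # words spoken in the message
    area_code: str           # example user-information attribute
    hour_of_day: int         # time the message was left
    selected_option: str     # label: the modification the user chose

training_data_set = [
    TrainingRecord("Happy Birthday! See you tonight.", "415", 18, "Play the happy birthday song"),
    TrainingRecord("Happy New Year everyone!", "212", 23, "Add fireworks sounds"),
    TrainingRecord("Congrats on the new job!", "617", 9, "Add applause"),
]
```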
The user information may be used to determine a probability that a user of an unknown user device102will select a specific option (e.g., a modification) of a plurality of options (e.g., a plurality of modifications) to modify/supplement a new message. The probability may be based on message data associated with the new message and one or more association rules, as further described herein. The probability may have one or more coefficients associated with the probability. The coefficients may be added to a vector associated with each known user device102, as well as any characteristics associated with each message, known user, and/or known user device102. For example, the user devices102may be known user devices because each user device102is associated with an existing message that a user had previously requested to modify/supplement, and the option that each user of each user device102selected regarding supplementing the message was previously known. Accordingly, the training data set202has a plurality of characteristics associated with a plurality of vectors for the plurality of known user devices102a-N. The training data set202may be utilized in a first stage of machine learning to produce a trained model204. FIG.3is a block diagram depicting an example view of the messaging software130of the computing device104. The messaging software130of the computing device104may include one or more of a crawler module104A, a search module104B, an association module104C, a first analysis module104D, a second analysis module104E, and a search engine104F. The computing device104may receive data associated with a plurality of training messages associated with the one or more user devices102. For example, the plurality of training messages may include message data and user data for each message. The message data may include one or more words included within the message. The user data may include demographic information, location information, and/or one or more selections of an option to modify/supplement a message (hereinafter, “message options”). The crawler module104A of the messaging software130of the computing device104may determine/generate the training data set202using the plurality of training messages. The computing device104may determine one or more coefficients associated with attributes of the message data and/or the user information with respect to one or more of the message options. The coefficients may indicate a probability that a user associated with a new message generated using a user device102is to select a specific option of the message options to modify/supplement the new message. The crawler module104A may retrieve and analyze the plurality of training messages. The crawler module104A may analyze the message data and/or user information associated with each message to determine how to index the message (e.g., based on an option(s) of the message options selected). The message data for a given training message may contain message data and/or user information (e.g., attributes associated with one or more users, such as a caller and/or a callee). The user information may include, but is not limited to, address, city, state, area code, time, a combination thereof and the like. The crawler module104A may index the plurality of training messages based on the user information. The first analysis module104D may be used for natural language processing, contextual analysis, etc.
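For illustration, a crawler-style index over such training messages could be built as sketched below; the records, attribute names, and option titles are hypothetical.

```python
# Sketch of crawler-style indexing (hypothetical records): group training
# messages by a user-information attribute (area code) and by the option the
# user selected, so they can be retrieved quickly during later analysis.
from collections import defaultdict

training_messages = [
    {"transcript": "Happy Birthday! See you tonight.", "area_code": "415", "selected_option": "Play the happy birthday song"},
    {"transcript": "Happy New Year everyone!", "area_code": "212", "selected_option": "Add fireworks sounds"},
    {"transcript": "Congrats on the new job!", "area_code": "415", "selected_option": "Add applause"},
]

by_area_code = defaultdict(list)
by_option = defaultdict(list)
for record in training_messages:
    by_area_code[record["area_code"]].append(record)
    by_option[record["selected_option"]].append(record)

# e.g., every indexed training message whose user chose the birthday song:
print(by_option["Play the happy birthday song"])
```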
The first analysis module104D may receive the plurality of training messages and analyze the message data associated with each training message. For example, the first analysis module104D may determine one or more words (e.g., spoken by the user who generated the message) indicated by the message data. The first analysis module104D may convert the one or more words of each training message into textual information. The textual information may be input into the first analysis module104D, and the first analysis module104D may determine/generate a cognitive model of each training message. In other words, a training message may include message data indicative of natural language that may be parsed into a representation format of first-order logic and naive semantics. The first analysis module104D may use a naive semantic system that incorporates modules for text processing based upon parsing, formal semantics, and discourse coherence, as well as relying on a naive semantic lexicon that stores word meanings in terms of a hierarchical semantic network. The first analysis module104D may use a high recall statistical retrieval module (not shown) using unspecified statistical techniques to produce a list of words and a relevance reasoning module (not shown) which may use first-order theorem proving and human-like reasoning to determine which message option(s) should be suggested to a user given a particular usage of a certain word or words. The textual information may be based on sentence structure, for example, based on a word-by-word analysis, and/or a whole sentence analysis. The first analysis module104D may determine word frequencies for some or all words contained in the textual information. The first analysis module104D may be configured to disambiguate and resolve homograph issues to accurately identify words and their frequencies. The second analysis module104E may be used for natural language processing, contextual analysis, etc. For example, the second analysis module104E may be configured for performing a concept-based method for searching text information (e.g., contained within the plurality of training messages) based on an ontology. The second analysis module104E may interact with the first analysis module104D to transform natural language into predicate structures representing logical relationships between words in the natural language. The second analysis module104E may include one or more ontologies and/or thesauri containing lexical semantic information about words and may be configured for ranking a set of matching natural language query predicate structures and equivalent textual information predicate structures. The second analysis module104E may provide a logical representation and/or a semantic representation for all of the message data associated with a training message. In an aspect, such a logical representation and/or a semantic representation may be referred to herein as a data profile. A thesaurus may be a structured controlled vocabulary. The thesaurus may provide information about each term and its relationships to other terms within the same thesaurus. In addition to specifying which terms may be used as synonyms, the thesaurus also indicates which terms are more specific (e.g., narrower terms), which are broader, and which are related terms. An ontology is set of concepts with attributes and relationships between the various concepts that contain various meanings, all to define a domain of knowledge, and is expressed in a format that is machine-readable. 
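A minimal sketch of the word-frequency step is shown below; the stop-word list and regular-expression tokenization are simplifications of the parsing and disambiguation described above.

```python
# Sketch of the word-frequency step: convert a transcript to lowercase tokens
# and count how often each word occurs, ignoring a few very common stop words.
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "to", "of", "you"}

def word_frequencies(transcript: str) -> Counter:
    tokens = re.findall(r"[a-z']+", transcript.lower())
    return Counter(t for t in tokens if t not in STOP_WORDS)

print(word_frequencies("Happy Birthday! Hope the birthday party is great."))
# Counter({'birthday': 2, 'happy': 1, 'hope': 1, 'party': 1, 'is': 1, 'great': 1})
```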
Certain applications of ontologies, as used in artificial intelligence, may define a domain of knowledge through terms and relationships. The second analysis module104E may determine/generate one or more data profiles, optionally in conjunction with the first analysis module104D. A data profile may include a list of concepts and/or terms and their associated relevance weights with respect to a message option(s) selected. A weight may indicate an importance of a concept/term with regard to other concepts/terms and a message option. The weights may represent, for example, the frequency with which the concepts occur in textual information, the specificity of the concepts, statistical characteristics of each concept, and the like. Statistical characteristics of concepts may include, without limitation, the specificity, the sensitivity, the number of alternatives occurring in the textual information, the textual similarity, and the like. The second analysis module104E and/or the first analysis module104D may determine a weight for a concept/term in the plurality of training messages by calculating a number of occurrences (e.g., a frequency) of all concepts/terms (e.g., words, phrases, etc.). A correction algorithm may reduce the weight of concepts/terms that occur in many training messages. For example, if a training message is indexed, a very generic term like “the” will not be very informative while a term like “birthday” is very specific and more informative for determining a weight of a message option to associate with the word. Therefore, if the frequency of the term “the” in a document is higher than the frequency of the term “birthday,” then the term “birthday” would have higher weight after correction. The second analysis module104E and/or the first analysis module104D may determine/generate a data profile based on a training message and/or one or more association rules. The resulting data profile may be used to identify one or more of the message options based on a comparison between a message's data profile and data profiles of potential message options. For example, an amount of overlap between the message's data profile and the data profiles of potential message options may identify relevant message options to suggest. A similarity score may be generated that reflects a similarity between a message's data profile and the data profiles of potential message options. Determining a similarity score amongst a plurality of data profiles may include performing a matching algorithm. Performing a matching algorithm may include storing each data profile as a vector (e.g., training data set202) and performing a vector matching algorithm. For example, a data profile may be stored mathematically as a vector with values between 0 and 1. The matching of a message's data profile with a stored data profile may be accomplished via vector matching. A variety of algorithms may be used to calculate the distance between the vectors. In an example, the various algorithms for determining the distance between vectors may include, but are not limited to, Vector algorithm, Portal algorithm, Quadsum algorithm, Jaccard algorithm, Dice algorithm, Basic algorithm, Weighted algorithm, Orion algorithm, Weighted Overlap algorithm, and the like. It is contemplated that one or more of these algorithms may be used concurrently. The analysis performed by the second analysis module104E and/or the first analysis module104D using the training messages (e.g., the weights, data profiles, etc.) 
may be provided to the association module104C. The association module104C may be configured to generate one or more association rules based on the analysis performed by the second analysis module104E and/or the first analysis module104D. For example, the association module104C may determine/generate an association rule for a given pairing of a word (e.g., birthday), a message option (e.g., supplementing a message with the Happy Birthday song), and/or a user information attribute (e.g., time of message). The association rule may be used to determine a probability that a user associated with a new message may select one or more of the message options (e.g., based on the message data associated with the new message and one or more association rules). The association module104C may use the one or more association rules to train the trained model204for analysis of one or more new messages. The trained model204may be a classifier model (e.g., a Support Vector Machine (SVM), a logistic regression, a decision tree, a random forest, a neural network, etc.). A separate classifier may be trained for each characteristic to be determined for the user device102and/or the user of the user device102. As an example, a unified multi-task classifier (e.g., a multiple layer perceptron with hidden layers and multiple output variables) may be trained to predict all these characteristics and/or labels simultaneously. Any type of classifier may be used (e.g., a neural network with more hidden layers, a linear classifier, a random forest, etc.). Any suitable standard machine learning algorithm may be used. The classifier's parameters may be optimized (e.g., finding parameter values that will give accurate predictions). The association module104C may use the one or more association rules and one or more of the classifiers and/or classifier models discussed above to train the trained model204. The trained model204may determine a probability of selecting a specific option of the message options for a new message not associated with any of the user devices102(e.g., an unknown user device) based on one or more of the message data associated with the new message, data associated with the unknown user device, and/or the user information determined based on the message data. The trained model204may receive the message data for the new message as input and may output a probability (or probabilities) that may indicate one or more selections of the message options the user associated with the new message is likely to select when generating the new message (e.g., recording via an IVR system). For example, the new message may be determined to include the words “Happy New Year.” The trained model204may use the search module104B when determining one or more suggested message options (e.g., one or more suggested modifications) based on the message data associated with the new message. The search module104B may be configured to perform one or more types of searches based on the message data associated with the new message. The message data may be used by the search module104B to return one or more message options the user of the unknown user device is likely to select to modify/supplement the new message. The search module104B may use the search engine104F to perform the one or more searches. The search engine104F may include a database listing comprising, for example, each of the message options that are available to select, referred to herein as search results.
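For illustration, the sketch below trains a simple text classifier with scikit-learn to predict a message option from message text; the TF-IDF weighting down-weights terms that appear in many training messages, similar in spirit to the correction for common terms described above. The toy data, option labels, and choice of logistic regression are assumptions, not the patent's implementation.

```python
# Sketch, assuming scikit-learn is available: train a simple classifier that
# predicts the message option a user is likely to select from message text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Happy Birthday! See you tonight.",
    "Happy Birthday, hope the party is fun.",
    "Happy New Year everyone!",
    "Congrats on the new job!",
]
selected_options = [
    "Play the happy birthday song",
    "Play the happy birthday song",
    "Add fireworks sounds",
    "Add applause",
]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, selected_options)

# Probability per option for a new, unseen message:
probs = model.predict_proba(["Just wanted to say happy birthday!"])[0]
for option, p in zip(model.classes_, probs):
    print(f"{option}: {p:.2f}")
```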
The search engine104F may be configured to maintain a listing of data profiles and/or the one or more association rules. Searching by the search engine104F may utilize metadata. For example, the metadata may include performing a Boolean search based on the message data (e.g., one or more spoken words, speech pattern(s), etc.). Searching by metadata may include performing a search by determining a deviation of a metadata value from a specified value and expressing the deviation in a relevance score. Searching by vector matching may include performing a vector matching algorithm as described herein. Searching by metadata and by vector matching may be performed simultaneously or sequentially. The one or more searches may also include a keyword, a phrase, a name, combinations thereof, and/or the like. The search module104B may be configured to perform a keyword search and/or a semantic search. A keyword search is a type of search that looks for matching vectors and/or association rules that contain one or more words specified by the message data. A semantic search seeks to improve search accuracy by understanding contextual meaning of terms as they appear in the message data to generate more relevant results. For example, a semantic search technique may be used to build a semantic model from a set of vectors and/or association rules, and to find the set of vectors and/or association rules that best relate to that query. An inverted index of all words in a vector and/or association rule across all vectors and/or association rules may be built, and then using various relevancy metrics, the words of the search may be compared against the inverted index, and a ranked set of vectors and/or association rules may be identified that are “closest” to the search terms. The search module104B may interact with one or more of the first analysis module104D and/or the second analysis module104E to effect a semantic search. For example, the search module104B may parse the new message and use the first analysis module104D and/or the second analysis module104E to develop a list of other related terms, concepts, and/or contexts that may correlate to one or more vectors and/or association rules. The search module104B may determine/generate related terms and/or concepts that relate to a search type using, for example, an ontology. The related terms and/or concepts may be used to expand the search to identify vectors and/or association rules that are relevant to the search (e.g., relevant to the words indicated by the message data). One or more suggested message options (e.g., one or more suggested modifications) that the user of the unknown user device may be likely to select to modify/supplement the new message may be returned by the search module104B based on the one or more searches conducted. The search module104B may provide the one or more suggested message options to the trained model204. The trained model204may determine based on the one or more suggested message options that the user of the unknown user device may desire to modify/supplement the new message with sounds that are associated with “Happy New Year.” For example, the trained model204may determine one or more suggested message options, such as adding sounds of fireworks, a song, “Happy New Year,” a group of people cheering “Happy New Year,” and/or other celebratory sounds associated with New Years to the background and/or the foreground of the new message. 
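A minimal sketch of an inverted-index keyword search with a simple Jaccard-style overlap score is shown below; the option keyword sets are hypothetical and stand in for the data profiles and association rules described above.

```python
# Sketch of an inverted index over option keyword sets plus a simple Jaccard
# overlap score used to rank matching options for a set of query words.
OPTION_KEYWORDS = {
    "Play the happy birthday song": {"happy", "birthday", "song"},
    "Add fireworks sounds": {"new", "year", "fireworks", "celebrate"},
    "Add applause": {"congratulations", "congrats", "well", "done"},
}

# Build the inverted index: word -> options whose keyword set contains it.
inverted_index: dict[str, set[str]] = {}
for option, words in OPTION_KEYWORDS.items():
    for word in words:
        inverted_index.setdefault(word, set()).add(option)

def search(query_words: set[str]) -> list[tuple[str, float]]:
    """Rank options touched by any query word using Jaccard similarity."""
    candidates = set()
    for word in query_words:
        candidates |= inverted_index.get(word, set())
    scored = []
    for option in candidates:
        words = OPTION_KEYWORDS[option]
        score = len(words & query_words) / len(words | query_words)
        scored.append((option, score))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

print(search({"happy", "birthday", "tonight"}))
```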
The trained model204may send (e.g., provide by the IVR system) the one or more suggested message options to the user of the unknown user device102so that the user may determine whether he or she desires to modify/supplement the message with any of the one or more suggested message options. The trained model204may be optimized based on a selection of the one or more suggested message options made by the user of the user device102. For example, the user of the user device102may receive the one or more suggested message options at the user device102and select at least one of the suggested options. The user device102may provide the computing device104with an indication of the at least one suggested option. The association module104C may use the indication of the at least one suggested option to optimize one or more of the association rules that are associated with the at least the suggested option. For example, the at least one suggested option may be to play a song. The song may be associated with one or more of the association rules, which may be optimized based on the message data (e.g., words spoken) associated with the message. In this way, the one or more association rules may be optimized each time the trained model204is used to analyze a new message. While the computing device104is shown as being separate from the trained model204, the computing device104may include the capabilities of the trained model204. Stated differently, the computing device104may be configured to use the machine learning described above. FIG.4shows an example sequence400for modifying/supplementing a message. The sequence400may include the user device102, the computing device104, and the recipient device106. At step402, the user device102may send a request to the computing device104. The request may be sent after an attempt is made by the user device102to establish a communication session and/or a communication connection with the recipient device106. The user device102may have failed to establish the communications because the recipient device106did not respond (e.g., no answer). The request may be to establish a session with an IVR system of the computing device104. The user of the user device102may interact with the computing device104using the IVR system to provide the computing device104with a message that the user of the user device102would desire to deliver to the recipient device106. At step404, the computing device104may send an indication and/or a notification to the user device102to determine if the user of the user device102desires to modify/supplement the message. The indication and/or the notification may include one or more suggested message options to modify/supplement the message. For example, the computing device104may analyze the message sent by the user device102to determine contextual information associated with the message. The message from the user device102may include the phrase “Happy Birthday!” The computing device104may determine one or more suggested message options to modify/supplement the message based on the phrase “Happy Birthday!” occurring within the message. For example, the computing device104may determine one or more sounds that may be appropriate to modify/supplement the message with, such as the happy birthday song, a party sound, a sound of confetti popping, and so forth. At step406, the user device102may send an indication to the computing device104of a selection of at least one of the one or more suggested message options to modify/supplement the message. 
For example, after receiving the one or more suggested message options from the computing device104, the user of the user device102may indicate (e.g., via an input) that the user would like to modify/supplement the message with at least one of the one or more suggested message options. At step408, the computing device104may perform the selected option to modify/supplement the message. The computing device104may send the supplemented message to the user device102, or the computing device104may otherwise provide the user device102with access to the message (e.g., sending the user device102a link to the supplemented message). For example, the computing device104may receive the indication from the user device102, and the computing device104may modify the message based on the indication. The computing device104may add one or more sounds to the message based on the indication from the user device102. For example, the indication may indicate that the user device would like to add the song “Happy Birthday” to the message, and the computing device104may add the song “Happy Birthday” to the message. At step408, the computing device104may provide the supplemented message to the user device102, or the computing device104may otherwise provide the user device102with access to the message (e.g., sending the user device102a link to the supplemented message). The user device102may receive the supplemented message, or otherwise access the supplemented message, and playback the supplemented message for the user of the user device102. At step410, the user device102may send an indication of whether or not the user of the user device102accepts the supplemented message to the computing device104. If the user of the user device102does not accept the supplemented message, the computing device104may return to step404to provide the user device102with one or more additional options for modifying/supplementing the message. If the user of the user device102accepts the supplemented message, the computing device104may send the supplemented message to the recipient device106, or otherwise provide access thereto, at step412. After receiving the supplemented message, or otherwise accessing the supplemented message, the recipient device106may playback the supplemented message to a user of the recipient device106. FIG.5shows an example500for modifying/supplementing a message. The example500may include the user device102, the computing device104, the recipient device106, and the message device108. At step502, the user device102establishes a communication connection with the message device108. The message device108may be a voicemail device and/or a video voicemail device configured to record voicemails and/or video voicemails from user devices102. The user device102may establish the communication connection with the message device108following a failed attempt by the user device102to establish a communication session with the recipient device106, because the recipient device106did not respond (e.g., no answer) to a request to initiate the communication session. The request may be to establish a session with an IVR system of the message device108. After the user device102establishes the communication connection with the message device108, the user of the user device102may interact with the message device108using the IVR system to provide the message device108with a message that the user of the user device102would desire to deliver to the recipient device106. 
The user device102may provide the message device108with the message that the user of the user device102desires to deliver to the recipient device106. At step504, the message device108may send the message to the computing device104. The computing device104may analyze the message sent by the user device102to determine contextual information associated with the message. The message from the user device102may include the phrase “Happy Birthday!” The computing device104may determine one or more suggested message options (e.g., one or more suggested modifications) to modify/supplement the message based on the phrase “Happy Birthday!” occurring within the message. For example, the computing device104may determine one or more sounds that may be appropriate to modify/supplement the message with, such as the happy birthday song, a party sound, a sound of confetti popping, and so forth. At step506, the computing device104may send an indication and/or a notification to the message device108to determine if the user of the user device102desires to modify/supplement the message. At step508, the message device108may send the indication and/or notification to the user device102. For example, the message device108may send the one or more suggested message options to modify/supplement the message to the user device102. At step510, the user device102may send an indication of a selection of at least one of the one or more suggested message options to modify/supplement the message to the message device108. For example, after receiving the one or more suggested message options from the message device108, the user of the user device102may indicate (e.g., via an input) that the user would like to modify/supplement the message with at least one of the one or more suggested message options. At step512, the message device108may send the indication to the computing device104of the selection of the at least one of the one or more suggested message options to modify/supplement the message. At step514, the computing device104may perform the selected suggested message option to modify/supplement the message. The computing device104may send the supplemented message to the message device108, or otherwise provide access thereto. For example, the computing device104may receive the indication from the message device108, and may modify the message based on the indication. The computing device104may add one or more sounds to the message based on the indication from the user device102. For example, the indication may indicate that the user device would like to add the song “Happy Birthday” to the message, and the computing device104may add the song “Happy Birthday” to the message. At step516, the message device108may provide the supplemented message to the user device102, or may otherwise provide access thereto. The user device102may receive the supplemented message, or otherwise access the supplemented message, and playback the supplemented message for the user of the user device102. At step518, the user device102may send an indication of whether or not the user of the user device102accepts the supplemented message to the message device108. If the user device102does not accept the supplemented message, the message device108may return to step504to indicate to the computing device104to provide the user device102with one or more additional suggested message options for modifying/supplementing the message. 
If the user of the user device102accepts the supplemented message, the message device108may send an indication that the user device102accepted the supplemental message to the computing device104. At step522, the computing device may send the supplemented message to the message device108, or otherwise provide access thereto. At step524, the message device108may send the supplemented message to the recipient device106, or otherwise provide access thereto. After receiving the supplemented message, or otherwise accessing the supplemented message, the recipient device106may playback the supplemented message to a user of the recipient device106. FIG.6shows a flowchart of an example method600for modifying/supplementing a message. At step610, contextual information associated with a message may be determined. The contextual information associated with the message may be determined by a computing device (e.g., the computing device104, the recipient device106, and/or the message device108ofFIGS.1,2,3,4, &5). The computing device may receive the message from a user device (e.g., the user device102ofFIGS.1,2,3,4, &5) via an interactive voice response (“IVR”) system. The message may be intended for another user device (e.g., the recipient device106ofFIGS.1,2,3,4, &5) associated with an intended recipient of the message. At step620, one or more suggested message options of a plurality of message options to modify/supplement the message may be determined. The plurality of message options may include a plurality of modifications to be applied to the message. For example, the plurality of modifications may include modifying audio associated with the message, adding a song to the message, or an action to execute based on the message. The computing device may determine the one or more suggested message options (e.g., one or more suggested modifications) of the plurality of message options. The computing device may determine the one or more suggested message options based on a plurality message options to modify/supplement the message. For example, the computing device may rank the plurality of message options to indicate the message option(s) most likely to be selected by the user device. The computing device may determine the one or more suggested message options based on the contextual information. For example, the computing device may analyze language of the message to determine context for one or more words of the message, and the computing device may sort the plurality of message options based on the context of the message. For example, the computing device may utilize natural language processing to determine the contextual information associated with the message. The contextual information may include at least one of a location of the user, a time associated with the message, a date associated with the message, the intended recipient of the message, a subject of the message, or an intent associated with the message. The one or more suggested message options may be determined based on data associated with a plurality of previously selected options. For example, the computing device may utilize historical data that indicates a plurality of previously received messages from a plurality of user devices, as well as the options that the plurality of user devices selected. The one or more suggested message options may be sorted based on a probability that a user of a user device associated with the message will select the one or more suggested message options. 
The probability may be determined using a trained model, such as the trained model204. At step630, information that indicates the one or more suggested message options may be sent. The computing device may send the information that indicates the one or more suggested message options. For example, the computing device may send data, a message, a notification, and so forth, to the user device. The one or more suggested message options may be sent based on a ranking of the one or more suggested message options which indicates a likelihood that a user of the user device will select the one or more suggested message options. The computing device may send the one or more suggested message options based on receiving a request for the one or more suggested message options. The request may be received from the user device. At step640, an indication of a selection of a first message option of the one or more suggested message options may be received. The computing device may receive the indication of the selection of the first message option. For example, the user device may send data, a message, a notification, and so forth, to the computing device to indicate the selection of the first message option. At step650, the message may be modified. The message may be modified based on the first message option. For example, the computing device may determine/generate a supplemental message that incorporates the first message option. The computing device may determine/generate the supplemental message based on the selection of the first message option. The computing device may determine/generate the supplemental message based on the message. For example, the computing device may modify the message sent by the user device to include supplemental information associated with the first message option. The computing device may determine/generate a notification associated with the supplemental message. For example, the computing device may determine/generate a notification based on one or more characteristics of the recipient device. The one or more characteristics of the recipient device include a type of the recipient device, a manufacturer of the recipient device, hardware capabilities of the recipient device, or an account associated with the recipient device. The notification may be a rich notification. The computing device may send the notification associated with the supplemental message to the recipient device. FIG.7shows a flowchart of an example method700for modifying/supplementing a message. At step710, a request for one or more suggested message options may be received. The one or more suggested message options may be one or more suggested modifications to be applied to the message. The request for the one or more suggested message options may be received by a computing device (e.g., the computing device104, the recipient device106, and/or the message device108ofFIGS.1,2,3,4, &5). The request for the one or more suggested message options may be sent by another device (e.g., the user device102and/or the message device108ofFIGS.1,2,3,4, &5). The request may be based on contextual information associated with a message. The message may be intended for a user device associated with an intended recipient of the message.
The contextual information may include at least one of a location of the user, a time associated with the message, a date associated with the message, the intended recipient of the message, a subject of the message, or an intent associated with the message. At step720, the one or more suggested message options are determined based on a plurality of previously received requests. The computing device may determine the one or more suggested message options. The one or more suggested message options may include modifying audio associated with the message, adding a song to the message, or an action to execute based on the message. The one or more suggested message options may be sorted based on a probability that a user of a user device associated with the message will select the one or more suggested message options. The one or more message options may be sorted based on the contextual information associated with the message. At step730, information that indicates the one or more suggested message options may be sent. The computing device may send the information to the other device or the message device. The information may include supplemental material to be added to the message. At step740, an indication of a selection of a first message option of the one or more suggested message options may be received. The computing device may receive the indication of the selection of the first message option. The computing device may modify the data associated with the plurality of previously received requests. The computing device may modify the data to indicate the selection of the first message option based on the contextual information. For example, the computing device may modify the data to increase the likelihood that the first message option will be selected since the user device did select the first message option. At step750, data associated with the first message option may be sent. The computing device may send the data associated with the first message option. The computing device may modify the message based on the first message option. For example, the computing device may determine/generate a supplemental message that incorporates the first message option. The computing device may determine/generate the supplemental message based on the selection of the first message option. The computing device may determine/generate the supplemental message based on the message. For example, the computing device may modify the message sent by the user device to include supplemental information associated with the first option. The computing device may determine (e.g., generate) a notification associated with the supplemental message. For example, the computing device may determine a notification based on one or more characteristics of the recipient device. The one or more characteristics of the recipient device include a type of the recipient device, a manufacturer of the recipient device, hardware capabilities of the recipient device, or an account associated with the recipient device. The notification may be a rich notification. The computing device may send the notification associated with the supplemental message to the recipient device. FIG.8shows a flowchart of an example method800for modifying/supplementing a message. At step810, contextual information associated with a message is determined. The contextual information associated with the message may be determined by a computing device (e.g., the computing device104, the recipient device106, and/or the message device108ofFIGS.1,2,3,4, &5).
The message may indicate a recipient device of the message. The computing device may determine one or more suggested message options of a plurality of message options based on the contextual information. The plurality of message options may include a plurality of modifications, and the one or more suggested message options may be one or more suggested modifications of the plurality of modifications. The one or more suggested message options (e.g., the one or more suggested modifications) may be applied to the message. For example, the computing device may analyze content of the message to determine context for one or more words of the message, and the computing device may sort the plurality of message options based on the context of the message. For example, the computing device may utilize natural language processing to determine the contextual information associated with the message. The contextual information may include at least one of a location of the user, a time associated with the message, a date associated with the message, the intended recipient of the message, a subject of the message, or an intent associated with the message. At step820, one or more suggested message options of the plurality of message options to modify/supplement the message may be sent. The computing device may send the one or more suggested message options. The computing device may send the one or more suggested message options to the user device. As described herein, the plurality of message options may include the plurality of modifications, and the one or more suggested message options may be the one or more suggested modifications. For example, the one or more suggested message modifications may be sent to the user device. The computing device may determine the one or more suggested message options. The one or more suggested message options may include modifying audio associated with the message, adding a song to the message, or an action to execute based on the message. The one or more suggested message options may be sorted based on a probability that a user of a user device associated with the message will select the one or more suggested message options. The one or more message options may be sorted based on the contextual information associated with the message. At step830, an indication of a selection of a first message option of the one or more suggested message options may be received. The indication of the selection may be received from the user device. The computing device may receive the indication from the user device. The computing device may receive the indication of the selection of the first option. The computing device may modify the data associated with the plurality of previously received requests. The computing device may modify the data to indicate the selection of the first message option based on the contextual information. For example, the computing device may modify the data to increase the likelihood that the first message option will be selected since the user of the user device selected the first message option. The computing device may modify the message based on the first message option. For example, the computing device may determine/generate a supplemental message that incorporates the first message option. The computing device may determine/generate the supplemental message based on the selection of the first message option. The computing device may determine/generate the supplemental message based on the message.
For example, the computing device may modify the message sent by the user device to include supplemental information associated with the first option. At step840, a notification associated with the supplemental message may be generated. The notification may be a rich notification. The computing device may determine/generate a notification associated with the supplemental message. For example, the computing device may determine/generate a notification based on one or more characteristics of the recipient device. The one or more characteristics of the recipient device may include a type of the recipient device, a manufacturer of the recipient device, hardware capabilities of the recipient device, or an account associated with the recipient device. At step850, the notification associated with the supplemental message may be sent. The computing device may send the notification associated with the supplemental message to the recipient device. FIG.9shows an example system900for modifying/supplementing a message. The user device102, the computing device104, the recipient device106, and/or the message device108ofFIGS.1,2,3,4, &5may be a computer901as shown inFIG.9. The computer901may include one or more processors903, a system memory912, and a bus913that couples various system components including the one or more processors903to the system memory912. In the case of multiple processors903, the computer901may utilize parallel computing. The bus913is one or more of several possible types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, or a local bus using any of a variety of bus architectures. The computer901may operate on and/or include a variety of computer readable media (e.g., non-transitory). The readable media may be any available media that is accessible by the computer901and may include both volatile and non-volatile media, removable and non-removable media. The system memory912has computer readable media in the form of volatile memory, such as random access memory (RAM), and/or non-volatile memory, such as read only memory (ROM). The system memory912may store data such as the message data907and/or program modules such as the operating system905and the message software906that are accessible to and/or are operated on by the one or more processors903. The computer901may also have other removable/non-removable, volatile/non-volatile computer storage media.FIG.9shows the mass storage device904which may provide non-volatile storage of computer code, computer readable instructions, data structures, program modules, and other data for the computer901. The mass storage device904may be a hard disk, a removable magnetic disk, a removable optical disk, magnetic cassettes or other magnetic storage devices, flash memory cards, CD-ROM, digital versatile disks (DVD) or other optical storage, random access memories (RAM), read only memories (ROM), electrically erasable programmable read-only memory (EEPROM), and the like. Any quantity of program modules may be stored on the mass storage device904, such as the operating system905and the message software906. Each of the operating system905and the message software906(or some combination thereof) may include elements of the program modules and the message software906. The message data907may also be stored on the mass storage device904. The message data907may be stored in any of one or more databases known in the art.
Such databases may be DB2®, Microsoft® Access, Microsoft® SQL Server, Oracle®, MySQL, PostgreSQL, and the like. The databases may be centralized or distributed across locations within the network915. A user may enter commands and information into the computer901via an input device (not shown). Examples of such input devices include, but are not limited to, a keyboard, pointing device (e.g., a computer mouse, remote control), a microphone, a joystick, a scanner, tactile input devices such as gloves and other body coverings, a motion sensor, and the like. These and other input devices may be connected to the one or more processors903via a human machine interface902that is coupled to the bus913, but may be connected by other interface and bus structures, such as a parallel port, game port, an IEEE 1394 Port (also known as a Firewire port), a serial port, network adapter908, and/or a universal serial bus (USB). The display device911may also be connected to the bus913via an interface, such as the display adapter909. It is contemplated that the computer901may include more than one display adapter909and the computer901may include more than one display device911. The display device911may be a monitor, an LCD (Liquid Crystal Display), light emitting diode (LED) display, television, smart lens, smart glass, and/or a projector. In addition to the display device911, other output peripheral devices may be components such as speakers (not shown) and a printer (not shown) which may be connected to the computer901via the Input/Output Interface910. Any step and/or result of the methods may be output (or caused to be output) in any form to an output device. Such output may be any form of visual representation, including, but not limited to, textual, graphical, animation, audio, tactile, and the like. The display device911and computer901may be part of one device, or separate devices. The computer901may operate in a networked environment using logical connections to one or more remote computing devices914a,b,c. A remote computing device may be a personal computer, computing station (e.g., workstation), portable computer (e.g., laptop, mobile phone, tablet device), smart device (e.g., smartphone, smart watch, activity tracker, smart apparel, smart accessory), security and/or monitoring device, a server, a router, a network computer, a peer device, edge device, and so on. Logical connections between the computer901and a remote computing device914a,b,cmay be made via a network915, such as a local area network (LAN) and/or a general wide area network (WAN). Such network connections may be through the network adapter908. The network adapter908may be implemented in both wired and wireless environments. Such networking environments are conventional and commonplace in dwellings, offices, enterprise-wide computer networks, intranets, and the Internet. Application programs and other executable program components such as the operating system905are shown herein as discrete blocks, although it is recognized that such programs and components reside at various times in different storage components of the computing device901, and are executed by the one or more processors903of the computer. An implementation of the message software906may be stored on or sent across some form of computer readable media. Any of the described methods may be performed by processor-executable instructions embodied on computer readable media.
While specific configurations have been described, it is not intended that the scope be limited to the particular configurations set forth, as the configurations herein are intended in all respects to be possible configurations rather than restrictive. Unless otherwise expressly stated, it is in no way intended that any method set forth herein be construed as requiring that its steps be performed in a specific order. Accordingly, where a method claim does not actually recite an order to be followed by its steps or it is not otherwise specifically stated in the claims or descriptions that the steps are to be limited to a specific order, it is in no way intended that an order be inferred, in any respect. This holds for any possible non-express basis for interpretation, including: matters of logic with respect to arrangement of steps or operational flow; plain meaning derived from grammatical organization or punctuation; the number or type of configurations described in the specification. It will be apparent to those skilled in the art that various modifications and variations may be made without departing from the scope or spirit. Other configurations will be apparent to those skilled in the art from consideration of the specification and practice described herein. It is intended that the specification and described configurations be considered as exemplary only, with a true scope and spirit being indicated by the following claims.
11863504
DETAILED DESCRIPTION Subject matter will now be described more fully hereinafter with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific example embodiments. This description is not intended as an extensive or detailed discussion of known concepts. Details that are known generally to those of ordinary skill in the relevant art may have been omitted, or may be handled in summary fashion. The following subject matter may be embodied in a variety of different forms, such as methods, devices, components, and/or systems. Accordingly, this subject matter is not intended to be construed as limited to any example embodiments set forth herein. Rather, example embodiments are provided merely to be illustrative. Such embodiments may, for example, take the form of hardware, software, firmware or any combination thereof. 1. Computing Scenario The following provides a discussion of some types of computing scenarios in which the disclosed subject matter may be utilized and/or implemented. 1.1. Networking FIG.1is an interaction diagram of a scenario100illustrating a service102provided by a set of servers104to a set of client devices110via various types of networks. The servers104and/or client devices110may be capable of transmitting, receiving, processing, and/or storing many types of signals, such as in memory as physical memory states. The servers104of the service102may be internally connected via a local area network106(LAN), such as a wired network where network adapters on the respective servers104are interconnected via cables (e.g., coaxial and/or fiber optic cabling), and may be connected in various topologies (e.g., buses, token rings, meshes, and/or trees). The servers104may be interconnected directly, or through one or more other networking devices, such as routers, switches, and/or repeaters. The servers104may utilize a variety of physical networking protocols (e.g., Ethernet and/or Fibre Channel) and/or logical networking protocols (e.g., variants of an Internet Protocol (IP), a Transmission Control Protocol (TCP), and/or a User Datagram Protocol (UDP)). The local area network106may include, e.g., analog telephone lines, such as a twisted wire pair, a coaxial cable, full or fractional digital lines including T1, T2, T3, or T4 type lines, Integrated Services Digital Networks (ISDNs), Digital Subscriber Lines (DSLs), wireless links including satellite links, or other communication links or channels, such as may be known to those skilled in the art. The local area network106may be organized according to one or more network architectures, such as server/client, peer-to-peer, and/or mesh architectures, and/or a variety of roles, such as administrative servers, authentication servers, security monitor servers, data stores for objects such as files and databases, business logic servers, time synchronization servers, and/or front-end servers providing a user-facing interface for the service102. Likewise, the local area network106may comprise one or more sub-networks, such as may employ differing architectures, may be compliant or compatible with differing protocols, and/or may interoperate within the local area network106. Additionally, a variety of local area networks106may be interconnected; e.g., a router may provide a link between otherwise separate and independent local area networks106.
In the scenario100ofFIG.1, the local area network106of the service102is connected to a wide area network108(WAN) that allows the service102to exchange data with other services102and/or client devices110. The wide area network108may encompass various combinations of devices with varying levels of distribution and exposure, such as a public wide-area network (e.g., the Internet) and/or a private network (e.g., a virtual private network (VPN) of a distributed enterprise). In the scenario100ofFIG.1, the service102may be accessed via the wide area network108by a user112of one or more client devices110, such as a portable media player (e.g., an electronic text reader, an audio device, or a portable gaming, exercise, or navigation device); a portable communication device (e.g., a camera, a phone, a wearable or a text chatting device); a workstation; and/or a laptop form factor computer. The respective client devices110may communicate with the service102via various connections to the wide area network108. As a first such example, one or more client devices110may comprise a cellular communicator and may communicate with the service102by connecting to the wide area network108via a wireless local area network106provided by a cellular provider. As a second such example, one or more client devices110may communicate with the service102by connecting to the wide area network108via a wireless local area network106provided by a location such as the user's home or workplace (e.g., a WiFi (Institute of Electrical and Electronics Engineers (IEEE) Standard 802.11) network or a Bluetooth (IEEE Standard 802.15.1) personal area network). In this manner, the servers104and the client devices110may communicate over various types of networks. Other types of networks that may be accessed by the servers104and/or client devices110include mass storage, such as network attached storage (NAS), a storage area network (SAN), or other forms of computer or machine readable media. 1.2. Server Configuration FIG.2presents a schematic architecture diagram200of a server104that may utilize at least a portion of the techniques provided herein. Such a server104may vary widely in configuration or capabilities, alone or in conjunction with other servers, in order to provide a service such as the service102. The server104may comprise one or more processors210that process instructions. The one or more processors210may optionally include a plurality of cores; one or more coprocessors, such as a mathematics coprocessor or an integrated graphical processing unit (GPU); and/or one or more layers of local cache memory. The server104may comprise memory202storing various forms of applications, such as an operating system204; one or more server applications206, such as a hypertext transport protocol (HTTP) server, a file transfer protocol (FTP) server, or a simple mail transport protocol (SMTP) server; and/or various forms of data, such as a database208or a file system. The server104may comprise a variety of peripheral components, such as a wired and/or wireless network adapter214connectible to a local area network and/or wide area network; one or more storage components216, such as a hard disk drive, a solid-state storage device (SSD), a flash memory device, and/or a magnetic and/or optical disk reader. 
The server104may comprise a mainboard featuring one or more communication buses212that interconnect the processor210, the memory202, and various peripherals, using a variety of bus technologies, such as a variant of a serial or parallel AT Attachment (ATA) bus protocol; a Universal Serial Bus (USB) protocol; and/or a Small Computer System Interface (SCSI) bus protocol. In a multibus scenario, a communication bus212may interconnect the server104with at least one other server. Other components that may optionally be included with the server104(though not shown in the schematic diagram200ofFIG.2) include a display; a display adapter, such as a graphical processing unit (GPU); input peripherals, such as a keyboard and/or mouse; and a flash memory device that may store a basic input/output system (BIOS) routine that facilitates booting the server104to a state of readiness. The server104may operate in various physical enclosures, such as a desktop or tower, and/or may be integrated with a display as an “all-in-one” device. The server104may be mounted horizontally and/or in a cabinet or rack, and/or may simply comprise an interconnected set of components. The server104may comprise a dedicated and/or shared power supply218that supplies and/or regulates power for the other components. The server104may provide power to and/or receive power from another server and/or other devices. The server104may comprise a shared and/or dedicated climate control unit220that regulates climate properties, such as temperature, humidity, and/or airflow. Many such servers104may be configured and/or adapted to utilize at least a portion of the techniques presented herein. 1.3. Client Device Configuration FIG.3presents a schematic architecture diagram300of a client device110whereupon at least a portion of the techniques presented herein may be implemented. Such a client device110may vary widely in configuration or capabilities, in order to provide a variety of functionality to a user such as the user112. The client device110may be provided in a variety of form factors, such as a desktop or tower workstation; an “all-in-one” device integrated with a display308; a laptop, tablet, convertible tablet, or palmtop device; a wearable device mountable in a headset, eyeglass, earpiece, and/or wristwatch, and/or integrated with an article of clothing; and/or a component of a piece of furniture, such as a tabletop, and/or of another device, such as a vehicle or residence. The client device110may serve the user in a variety of roles, such as a workstation, kiosk, media player, gaming device, and/or appliance. The client device110may comprise one or more processors310that process instructions. The one or more processors310may optionally include a plurality of cores; one or more coprocessors, such as a mathematics coprocessor or an integrated graphical processing unit (GPU); and/or one or more layers of local cache memory. The client device110may comprise memory301storing various forms of applications, such as an operating system303; one or more user applications302, such as document applications, media applications, file and/or data access applications, communication applications such as web browsers and/or email clients, utilities, and/or games; and/or drivers for various peripherals.
The client device110may comprise a variety of peripheral components, such as a wired and/or wireless network adapter306connectible to a local area network and/or wide area network; one or more output components, such as a display308coupled with a display adapter (optionally including a graphical processing unit (GPU)), a sound adapter coupled with a speaker, and/or a printer; input devices for receiving input from the user, such as a keyboard311, a mouse, a microphone, a camera, and/or a touch-sensitive component of the display308; and/or environmental sensors, such as a global positioning system (GPS) receiver319that detects the location, velocity, and/or acceleration of the client device110, a compass, accelerometer, and/or gyroscope that detects a physical orientation of the client device110. Other components that may optionally be included with the client device110(though not shown in the schematic architecture diagram300ofFIG.3) include one or more storage components, such as a hard disk drive, a solid-state storage device (SSD), a flash memory device, and/or a magnetic and/or optical disk reader; and/or a flash memory device that may store a basic input/output system (BIOS) routine that facilitates booting the client device110to a state of readiness; and a climate control unit that regulates climate properties, such as temperature, humidity, and airflow. The client device110may comprise a mainboard featuring one or more communication buses312that interconnect the processor310, the memory301, and various peripherals, using a variety of bus technologies, such as a variant of a serial or parallel AT Attachment (ATA) bus protocol; the Universal Serial Bus (USB) protocol; and/or the Small Computer System Interface (SCSI) bus protocol. The client device110may comprise a dedicated and/or shared power supply318that supplies and/or regulates power for other components, and/or a battery304that stores power for use while the client device110is not connected to a power source via the power supply318. The client device110may provide power to and/or receive power from other client devices. In some scenarios, as a user112interacts with a software application on a client device110(e.g., an instant messenger and/or electronic mail application), descriptive content in the form of signals or stored physical states within memory (e.g., an email address, instant messenger identifier, phone number, postal address, message content, date, and/or time) may be identified. Descriptive content may be stored, typically along with contextual content. For example, the source of a phone number (e.g., a communication received from another user via an instant messenger application) may be stored as contextual content associated with the phone number. Contextual content, therefore, may identify circumstances surrounding receipt of a phone number (e.g., the date or time that the phone number was received), and may be associated with descriptive content. Contextual content may, for example, be used to subsequently search for associated descriptive content. For example, a search for phone numbers received from specific individuals, received via an instant messenger application or at a given date or time, may be initiated. The client device110may include one or more servers that may locally serve the client device110and/or other client devices of the user112and/or other individuals. For example, a locally installed webserver may provide web content in response to locally submitted web requests.
Many such client devices110may be configured and/or adapted to utilize at least a portion of the techniques presented herein. 2. Presented Techniques One or more computing devices and/or techniques for facilitating communications with service providers using disposable email addresses (DEAs) are provided. For example, a user may want one or more services to be performed (e.g., the user may want plumbing services for repairing a pipe, the user may want electrician services to have a lighting system installed, the user may want to modify an internet service such as an internet speed, etc.) which may require the user to seek service providers that can perform the one or more services. For example, the user may want to remodel a kitchen. The user may want to find a plurality of service providers related to kitchen remodeling and/or may want to be provided with service information (e.g., a quote, availability information associated with a service provider, capabilities of a service provider, etc.) from the plurality of service providers. Finding and/or contacting the plurality of service providers may be a difficult and/or time-consuming process for the user (e.g., the user may need to search through a phone book, the user may need to perform searches using a search engine and/or navigate through web pages, etc.). Alternatively and/or additionally, the user may be required to provide personal information (e.g., email address, phone number, mailing address, etc.) to the plurality of service providers in order to receive the service information. Personal information associated with the user may be misused and/or used in ways the user does not approve (e.g., the personal information may be disclosed to entities without the user's permission, the personal information may be collected and/or used for directing promotional content to the user that the user does not have an interest in, an email account associated with the user may be subscribed to one or more subscription services without the user's permission, etc.). For example, the plurality of service providers may send emails to the email account associated with the user (e.g., using the email address) for an extended period of time. In accordance with one or more of the techniques presented herein, a first email may be received from the email account associated with the user. For example, a requested service (e.g., home improvement, kitchen remodeling, plumbing service, etc.) may be determined based upon the first email. A set of service providers may (automatically) be determined based upon the requested service (and/or a location associated with the user). A DEA, corresponding to the email account may be generated. For example, the DEA may be used by the user and/or the set of service providers for email correspondences between the user and the set of service providers, where the user is not required to disclose the email address associated with the email account to the set of service providers. For example, a second email may be generated based upon the first email. A sender address of the second email may comprise an indication of the DEA (where the email address of the email account may not be comprised within the second email). The second email may be transmitted to a set of email accounts associated with the requested service. Emails received from the set of email accounts that are addressed to the DEA may be transmitted to the first email account. 
Alternatively and/or additionally, sender address fields of emails composed using the email account and/or transmitted to email accounts of the set of email accounts may comprise indications of the DEA (rather than the email address of the email account). Responsive to receiving a request to deactivate the DEA from a device associated with the user and/or responsive to determining that the requested service is completed, the DEA may be deactivated. An embodiment of facilitating communications with service providers using DEAs is illustrated by an example method400ofFIG.4. A first user, such as user Jill, (e.g., and/or a first client device associated with the first user) may access and/or interact with a communication system (and/or an email system, messaging system, etc.) for sending and/or receiving emails and/or performing communications via messaging, voice calls, video calls, etc. For example, a first email account (and/or a different type of user account) of the first user with the communication system may be accessed and/or interacted with via a first email interface, such as an email client, a web email interface accessed via a browser, an email application, etc. on the first client device. In some examples, the communication system may be associated with an email service provider. For example, the communication system may provide a service where emails (and/or other types of messages), associated with services that users want performed (e.g., catering services, internet services, home improvement and/or repair services, electrical services, etc.), may be transmitted to the communication system. The communication system may determine requested services associated with the emails, select service providers related to each requested service and/or may provide for communications between users and the service providers using DEAs such that users may communicate with the service providers without disclosing personal information (such as email addresses). At402, a first email may be received from the first email account. The first email may comprise one or more indications of a first email address (e.g., “[email protected]”) associated with the first email account. For example, the one or more indications of the first email address may be comprised within a first email header of the first email. For example, the first email header of the first email may comprise a plurality of email header fields, such as a first sender address field, a first subject field, a first date field, a first recipient address field, a first return-path field, a first delivery date field, etc. For example, the first sender address field may be indicative of a sender of the first email and/or may comprise the first email address and/or a sender name (e.g., the first sender email address field may comprise “From: Jill Higgins <[email protected]>”). Alternatively and/or additionally, the first return-path field may be indicative of an email address for return mail (e.g., “Reply-To:”) and/or may comprise the first email address (and/or a different email address) (e.g., the first return-path field may comprise “Return-Path: <[email protected]>”). In some examples, the first email may comprise a first email body composed and/or drafted (by the first user) using the first email interface. For example, the first email body may comprise content (e.g., text, one or more images, etc.) related to a first requested service associated with the first email. 
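By way of a non-limiting illustration, the header fields discussed at 402 may be represented as in the following Python sketch; the class name, field names, and example addresses are assumptions introduced here for illustration and are not part of the described embodiment.

```python
# A minimal sketch of the email-header fields discussed above, assuming a plain
# dataclass representation; field names and the example addresses are illustrative.
from dataclasses import dataclass, field
from email.utils import parseaddr
from typing import Dict, List


@dataclass
class InboundEmail:
    header: Dict[str, str]          # e.g. {"From": "...", "Subject": "...", "To": "..."}
    body: str
    attachments: List[bytes] = field(default_factory=list)

    @property
    def sender_address(self) -> str:
        # parseaddr splits "Jill Higgins <jill@example.com>" into (name, address).
        return parseaddr(self.header.get("From", ""))[1]

    @property
    def return_path(self) -> str:
        return parseaddr(self.header.get("Return-Path", self.header.get("From", "")))[1]


# Example: a first email of the kind described at step 402.
first_email = InboundEmail(
    header={
        "From": "Jill Higgins <jill@example.com>",
        "To": "homeimprovement@provider.example",
        "Subject": "Kitchen Remodeling",
        "Return-Path": "<jill@example.com>",
    },
    body="I live on Mountain View Rd. and I want to remodel my kitchen",
)
print(first_email.sender_address)   # jill@example.com
```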
For example, the first email body may comprise text, drafted using the first email interface, which may comprise a description of the first requested service. Alternatively and/or additionally, the first email body may comprise one or more images associated with the first requested service. In a first example, the first user may want to have a plumber perform one or more plumbing services (e.g., fix a burst pipe, relocate a radiator, etc.). For example, the first email body may comprise text comprising a description of a burst pipe and/or a request to quote a price for one or more services to fix the burst pipe. Alternatively and/or additionally, the first email body may comprise one or more images of the burst pipe. In a second example, the first user may want to have a caterer provide food service to an event venue for an event. For example, the first email body may comprise text comprising a description of desired foods and/or a request to quote a price for the desired foods. In some examples, rather than receiving the first email, a service request message may be received from the first client device. For example, a service request interface may be displayed using the first client device. For example, the service request interface may be accessed via an app and/or the service request interface may be a web interface (accessed via a browser, for example). In some examples, the service request interface may comprise a first text field corresponding to an email address. For example, the first email address may be entered into the first text field. In some examples, it may be required that an email address entered into the first text field be associated with the communication system and/or the email service provider. Alternatively and/or additionally, it may not be required that an email address entered into the first text field be associated with the communication system and/or the email service provider. Alternatively and/or additionally, the service request interface may comprise a second text field corresponding to text associated with the first requested service (e.g., a description of the first requested service). For example, text may be entered into the second text field. Alternatively and/or additionally, the service request interface may comprise a selectable list of services, wherein the first requested service may be selected from the selectable list of services. In some examples, the service request message, comprising the first email address and/or the text, may be received from the first client device responsive to a selectable input of the service request interface being selected. At404, the first requested service associated with the first email may be determined. For example, the first requested service may be determined by analyzing the first email. For example, the first subject field, the first email body, etc. may be analyzed to determine the first requested service. For example, text of the email body and/or the first subject field may be compared with a plurality of services to determine whether the text of the email body and/or the first subject field comprises one or more words corresponding to a service of the plurality of services. For example, responsive to a determination that one or more words of the first subject field and/or the email body match a service of the plurality of services, the service may be determined to be the first requested service.
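The keyword comparison described at 404 might be sketched as follows; the service table, trigger words, and function name are placeholder assumptions rather than part of the described embodiment.

```python
# A rough illustration of comparing subject/body text against a plurality of services.
from typing import Optional

SERVICES = {
    "plumbing service": {"plumber", "plumbing", "pipe", "radiator"},
    "kitchen remodeling service": {"kitchen", "remodel", "remodeling"},
    "catering service": {"caterer", "catering", "food service"},
    "electrician service": {"electrician", "wiring", "light bulb"},
}


def determine_requested_service(subject: str, body: str) -> Optional[str]:
    """Return the first service whose trigger words appear in the subject or body."""
    text = f"{subject} {body}".lower()
    for service, keywords in SERVICES.items():
        if any(keyword in text for keyword in keywords):
            return service
    return None


print(determine_requested_service("Kitchen Remodeling",
                                  "I want to remodel my kitchen"))
# -> kitchen remodeling service
```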
Alternatively and/or additionally, the communication system may be associated with a plurality of service email addresses, wherein each email address of the plurality of service email addresses may correspond to a service topic of a plurality of service topics associated with the communication system. For example, the plurality of service email addresses may comprise a first service email address (e.g., “[email protected]”) corresponding to a first service topic “electrician services”, a second service email address (e.g., “[email protected]”) corresponding to a second service topic “internet services”, a third service email address (e.g., “[email protected]”) corresponding to a third service topic “plumbing services”, a fourth service email address (e.g., “[email protected]”) corresponding to a fourth service topic “home improvement services”, etc. Alternatively and/or additionally, the communication system may be associated with a plurality of sets of service sub-topics. For example, each set of service sub-topics of the plurality of sets of service sub-topics may correspond to a service topic of the plurality of service topics and/or an email address of the plurality of service email addresses. For example, a first set of service sub-topics (e.g., light bulb installation, computer cabling, broken switch repair, etc.) may correspond to the first service topic “electrician services”. In some examples, the first email may be transmitted by the first email account to a service email address of the plurality of service email addresses. For example, the first email may be received by the communication system via a service email address of the plurality of service email addresses. The first requested service may be determined based upon a service email address that the first email is addressed to. For example, if the first email is addressed to the first service email address (e.g., if the first email comprises the first service email address within the first recipient address field of the first email header) corresponding to the first service topic “electrician services”, then the first requested service may be determined to be an electrician service. In an example, the first email may be addressed to the fourth service email address “[email protected]” (e.g., the first recipient address field of the first email header may comprise “[email protected]”). The first requested service may be determined to be associated with the fourth service topic “home improvement services” based upon the first email being addressed to the fourth service email address. For example, the first requested service may be determined to be a home improvement service based upon the first email being addressed to the fourth service email address. The first email body (e.g., “I live on Mountain View Rd. and I want to remodel my kitchen”) and/or the first subject field (e.g., “Kitchen Remodeling”) may be analyzed to identify a service sub-topic of the fourth service topic that is associated with the first requested service. For example, the first email body may be analyzed to determine that the first requested service is associated with a first service sub-topic “kitchen remodeling services” of a set of service sub-topics associated with the fourth service topic. For example, the first requested service may be determined to be a kitchen remodeling service based upon the first email body and/or the first subject field. At406, a set of service providers may be determined based upon the first requested service. 
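The routing by service email address described above might be sketched as follows; the recipient address selects the service topic, and the subject/body are scanned for a sub-topic. The addresses, topic tables, and function name are illustrative assumptions.

```python
# A sketch of topic selection by recipient address plus sub-topic detection.
from typing import Optional, Tuple

SERVICE_ADDRESSES = {
    "electricians@provider.example": "electrician services",
    "internet@provider.example": "internet services",
    "plumbers@provider.example": "plumbing services",
    "homeimprovement@provider.example": "home improvement services",
}

SUB_TOPICS = {
    "electrician services": ["light bulb installation", "computer cabling", "broken switch repair"],
    "home improvement services": ["kitchen remodeling", "bathroom remodeling", "flooring"],
}


def classify(recipient: str, subject: str, body: str) -> Tuple[Optional[str], Optional[str]]:
    topic = SERVICE_ADDRESSES.get(recipient.lower())
    if topic is None:
        return None, None
    text = f"{subject} {body}".lower()
    # Pick the first sub-topic whose leading word appears in the text.
    sub_topic = next((s for s in SUB_TOPICS.get(topic, []) if s.split()[0] in text), None)
    return topic, sub_topic


print(classify("homeimprovement@provider.example",
               "Kitchen Remodeling",
               "I live on Mountain View Rd. and I want to remodel my kitchen"))
# -> ('home improvement services', 'kitchen remodeling')
```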
For example, the set of service providers may be selected from a database of service providers associated with the communication system. The database of service providers may comprise indications of a plurality of service providers. Alternatively and/or additionally, the database of service providers may comprise a plurality of sets of service provider information corresponding to the plurality of service providers. Each set of service provider information may correspond to a service provider of the plurality of service providers. For example, a set of service provider information may comprise one or more of a list of services provided by a service provider of the plurality of service providers, one or more ratings (e.g., customer ratings) associated with the service provider, a name of the service provider, an address of the service provider, an email address associated with the service provider, a location associated with the service provider (e.g., a geolocation associated with the service provider comprising a set of coordinates (e.g., longitude and/or latitude coordinates) corresponding to the service provider), a service region in which the service provider provides services, a telephone number associated with the service provider, a website associated with the service provider, company information associated with the service provider, etc. In some examples, the set of service providers may be selected from the plurality of service providers based upon one or more client locations associated with the first user and/or the first email account. For example, the one or more client locations may correspond to a first geolocation associated with the first client device. For example, the one or more client locations may be determined based upon location information associated with the first client device received from a wireless network (e.g., a WiFi network, a hotspot, a wireless access point (WAP), a network associated with a base station, etc.) that the first client device is connected to. The location information may comprise received signal strength indicators (RSSIs) associated with communications between the first client device and the wireless network. Alternatively and/or additionally, the location information may comprise angle of arrival (AoA) information. One or more RSSI localization techniques and/or one or more trilateration techniques may be performed using the RSSIs and/or the AoA information to determine the one or more client locations of the first client device. Alternatively and/or additionally, the location information may comprise satellite navigation information comprising longitude measurements, latitude measurements and/or altitude measurements associated with locations of the first client device. The satellite navigation information may be received from a satellite navigation system, such as a global navigation satellite system (GNSS) (e.g., Global Positioning System (GPS), Global Navigation Satellite System (GLONASS), Galileo, etc.). In some examples, the one or more client locations of the first client device (and/or the first user) may be determined based upon merely the satellite navigation information. Alternatively and/or additionally, the one or more client locations may be determined based upon a combination of the satellite navigation information, the AoA information and/or the RSSIs. Alternatively and/or additionally, the one or more client locations may be determined based upon email activity of the first email account. 
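As one simple stand-in for the RSSI-based localization options mentioned above, a weighted-centroid estimate could be used, where access points with stronger signals pull the estimate closer; the exact RSSI, AoA, and trilateration methods are left open by the description, and the access-point data below is invented for illustration.

```python
# A sketch of a weighted-centroid client-location estimate from RSSI observations.
from typing import List, Tuple


def weighted_centroid(observations: List[Tuple[float, float, float]]) -> Tuple[float, float]:
    """observations: (latitude, longitude, rssi_dbm) per visible access point."""
    # Convert dBm (negative values; closer to 0 means stronger) into positive weights.
    weights = [10 ** (rssi / 10.0) for _, _, rssi in observations]
    total = sum(weights)
    lat = sum(w * obs[0] for w, obs in zip(weights, observations)) / total
    lon = sum(w * obs[1] for w, obs in zip(weights, observations)) / total
    return lat, lon


# Example: three hypothetical access points with known positions.
print(weighted_centroid([(40.001, -75.002, -40.0),
                         (40.003, -75.000, -60.0),
                         (40.000, -75.004, -70.0)]))
```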
For example, emails of the first email account may be analyzed to determine a home location, a work location, etc. associated with the first user and/or the first email account. Alternatively and/or additionally, the one or more client locations may be determined based upon search activity associated with the first client device. For example, queries used to perform searches (using a search engine) may be analyzed to determine one or more locations associated with the queries. Alternatively and/or additionally, the one or more client locations may be determined based upon social media activity associated with the first email account and/or the first client device. For example, social media posts associated with the social media activity and/or a social media profile associated with the first email account may be analyzed to determine the home location, the work location, etc. In some examples, the one or more client locations may be stored in a user profile associated with the first email account. Alternatively and/or additionally, the user profile may comprise historical data corresponding to previous emails transmitted to the communication system associated with requested services. For example, the historical data may comprise indications of the requested services and/or indications of service providers that performed the requested services. Alternatively and/or additionally, the user profile may comprise a user telephone number associated with the first user, a home address associated with the first user, etc. In some examples, a service provider may be selected from the plurality of service providers for inclusion in the set of service providers based upon a determination that a location of the service provider is within a threshold distance (e.g., 10 miles, 100 miles, etc.) from a location of the one or more client locations (e.g., the home location, the work location, a location of the first client device, etc.). For example, the location of the service provider may be determined based upon a set of service provider information associated with the service provider (e.g., an address of the service provider and/or a geolocation associated with the service provider). Alternatively and/or additionally, a service provider may be selected from the plurality of service providers for inclusion in the set of service providers based upon a determination that a location of the one or more client locations is within a service region associated with the service provider. For example, the service region may correspond to a region in which the service provider provides services. For example, the service region may comprise an indication of one or more of one or more cities, one or more zip codes, one or more states, etc. The service provider may be selected from the plurality of service providers for inclusion in the set of service providers based upon a determination that a location of the one or more client locations is within one or more of the one or more cities, the one or more zip codes, the one or more states, etc., which may be determined based upon the home address associated with the first user. Alternatively and/or additionally, the service region may comprise a geometrical representation of geographical boundaries of a region in which the service provider provides services.
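The distance and service-region checks described above might be sketched as follows; the ServiceProvider fields and the 10-mile threshold mirror the examples in the text, while the haversine helper and data values are assumptions for illustration.

```python
# A sketch of selecting providers by distance threshold or service region membership.
from dataclasses import dataclass, field
from math import asin, cos, radians, sin, sqrt
from typing import Set


@dataclass
class ServiceProvider:
    name: str
    location: tuple                         # (latitude, longitude)
    service_region_zips: Set[str] = field(default_factory=set)


def miles_between(a: tuple, b: tuple) -> float:
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3958.8 * 2 * asin(sqrt(h))       # Earth radius in miles


def in_range(provider: ServiceProvider, client_location: tuple,
             client_zip: str, threshold_miles: float = 10.0) -> bool:
    close_enough = miles_between(provider.location, client_location) <= threshold_miles
    in_region = client_zip in provider.service_region_zips
    return close_enough or in_region


provider = ServiceProvider("JJ's Plumbing Services", (40.01, -75.00), {"19010"})
print(in_range(provider, client_location=(40.02, -75.01), client_zip="19010"))  # True
```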
For example, the service provider may be selected from the plurality of service providers for inclusion in the set of service providers based upon a determination that a location of the one or more client locations is within the geographical boundaries of the service region. Alternatively and/or additionally, a service provider may be selected from the plurality of service providers for inclusion in the set of service providers based upon a determination that an area code of a telephone number associated with the service provider matches an area code of the user telephone number associated with the first user. Alternatively and/or additionally, the plurality of sets of service provider information may be analyzed based upon the first requested service. For example, lists of services associated with the plurality of service providers may be analyzed based upon the first requested service. The first requested service may be compared with the lists of services associated with the plurality of service providers to identify lists of services associated with service providers that comprise a service matching (and/or related to) the first requested service. For example, a service provider may be selected for inclusion in the set of service providers responsive to a determination that a list of services associated with the service provider comprises one or more services matching (and/or related to) the first requested service. In an example, the first requested service may be a kitchen remodeling service. An exemplary service provider may be selected for inclusion into the set of service providers responsive to a determination that a list of services associated with the exemplary service provider comprises “home improvement services” (e.g., the exemplary service provider may be selected based upon a determination that the kitchen remodeling service is related to home improvement services). Alternatively and/or additionally, an exemplary service provider may be selected for inclusion into the set of service providers responsive to a determination that a list of services associated with the exemplary service provider comprises “kitchen remodeling services” (e.g., the exemplary service provider may be selected based upon a determination that the kitchen remodeling service matches kitchen remodeling services). In some examples, names of service providers (e.g., company names, store names, etc.) associated with the plurality of service providers may be analyzed based upon the first requested service. The first requested service may be compared with the names of service providers associated with the plurality of service providers to identify service providers of the plurality of service providers having names that match (and/or are related to) the first requested service. For example, a service provider may be selected for inclusion in the set of service providers responsive to a determination that a name associated with the service provider matches (and/or is related to) the first requested service. In an example, the first requested service may be a plumbing service. An exemplary service provider may be selected for inclusion in the set of service providers responsive to a determination that a service provider name of the exemplary service provider is “JJ's Plumbing Services” (e.g., the exemplary service provider may be selected based upon a determination that “JJ's Plumbing Services” is related to the plumbing service).
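The service-list and provider-name matching described above might be sketched as follows; the "related services" table is a placeholder for whatever relatedness measure an implementation might use, and the function name is an assumption.

```python
# A sketch of matching a requested service against offered-service lists and provider names.
from typing import List

RELATED: dict = {
    "kitchen remodeling services": {"home improvement services"},
    "plumbing service": {"plumbing services"},
}


def provider_matches(requested: str, offered_services: List[str], provider_name: str) -> bool:
    offered = {s.lower() for s in offered_services}
    requested = requested.lower()
    # Direct match, a related-service match, or a name that mentions the request.
    if requested in offered or offered & RELATED.get(requested, set()):
        return True
    return any(word in provider_name.lower() for word in requested.split() if len(word) > 3)


print(provider_matches("kitchen remodeling services",
                       ["home improvement services"], "Acme Contractors"))      # True
print(provider_matches("plumbing service", [], "JJ's Plumbing Services"))       # True
```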
In some examples, service providers may be selected for inclusion in the set of service providers based upon ratings (e.g., customer ratings) associated with the plurality of service providers. For example, a service provider may be selected for inclusion in the set of service providers responsive to a determination that a rating associated with the service provider is higher than a threshold rating. At408, a first DEA corresponding to the first email account may be generated in association with the first requested service. For example, the first DEA may be generated based upon the first email address. For example, a portion of the first email address may be replaced with one or more characters (e.g., the first email address may be “[email protected]” and/or the first DEA may be “[email protected]”, “[email protected]”, [email protected]”, etc.). Alternatively and/or additionally, one or more characters may be added to the first email address (e.g., the first email address may be “[email protected]” and/or the first DEA may be “[email protected]”, “[email protected]”, etc.). Alternatively and/or additionally, the first DEA may comprise a (random) sequence of characters (e.g., letters, words and/or symbols) (e.g., the first DEA may be “[email protected]”). In some examples, the first DEA may be connected to the first email account via Internet Message Access Protocol (IMAP)-In. At410, a second email may be generated based upon the first email and/or the first DEA. For example, the second email may comprise an indication of the first DEA. In some examples, the first email may be modified to generate the second email. For example, the first email header of the first email may be modified to generate a second email header of the second email (e.g., the second email header may be different than the first email header). For example, the first sender address field (comprising the first email address) of the first email header may be modified to generate a second sender address field, of the second email header, comprising the first DEA. Alternatively and/or additionally, the first return-path field (comprising the first email address) of the first email header may be modified to generate a second return-path field, of the second email header, comprising the first DEA. In some examples, a second subject field of the second email header may be similar (e.g., the same as) the first subject field of the first email header. Alternatively and/or additionally, a second email body of the second email may be generated based upon the first email body of the first email. For example, the second email body of the second email may be generated based upon content of the first email body. The second email body of the second email may comprise the content of the first email body. At412, the second email may be transmitted to a set of email accounts. Each email account of the set of email accounts may be associated with a service provider of the set of service providers associated with the first requested service. In some examples, the second email may be transmitted to the set of email accounts using email addresses associated with the set of service providers. The email addresses may be determined based upon sets of service provider information, in the database of service providers, associated with the set of service providers. 
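Steps 408 and 410 might be sketched as follows: generating a disposable address for the account and rewriting the sender and return-path fields of the outgoing copy. The address format and helper names are assumptions; the description only requires that the DEA map back to the account and hide the real address.

```python
# A sketch of DEA generation and second-email header rewriting.
import secrets
from typing import Dict


def generate_dea(email_address: str) -> str:
    local, domain = email_address.split("@", 1)
    # e.g. "jill@example.com" -> "jill.3f9a2c1b@example.com"
    return f"{local}.{secrets.token_hex(4)}@{domain}"


def make_second_email(first_header: Dict[str, str], first_body: str, dea: str) -> Dict[str, object]:
    second_header = dict(first_header)
    second_header["From"] = dea                 # the real address never leaves the system
    second_header["Return-Path"] = f"<{dea}>"
    return {"header": second_header, "body": first_body}


dea = generate_dea("jill@example.com")
second = make_second_email({"From": "Jill Higgins <jill@example.com>",
                            "Subject": "Kitchen Remodeling"},
                           "I want to remodel my kitchen", dea)
print(second["header"]["From"])    # e.g. jill.3f9a2c1b@example.com
```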
At414, a third email, addressed to the first DEA, may be received from a second email account associated with a first service provider of the set of service providers associated with the first requested service. For example, the third email may comprise a third email header. The third email header of the third email may comprise a third recipient address field comprising an indication of the first DEA. At416, the third email may be transmitted to the first email account. For example, the third email may be transmitted to the first email account responsive to a determination that the first DEA corresponds to the first email account. For example, a DEA database may be analyzed based upon the first DEA to identify the first email account. For example, the DEA database may comprise a plurality of DEAs. Each DEA of the plurality of DEAs may be associated with an email account of a plurality of email accounts associated with the communication system. For example, each DEA of the plurality of DEAs may be tagged with an indication of an email account of the plurality of email accounts. The DEA database may be analyzed based upon the first DEA. The third email may be transmitted to the first email account responsive to identifying the first DEA and/or determining that the first DEA is tagged with an indication of the first email account. Alternatively and/or additionally, each DEA of the plurality of DEAs may be tagged with a status tag. For example, a status tag may be indicative of a status of a corresponding DEA. For example, a status tag may be indicative of a DEA being associated with an active status where emails addressed to the DEA are transmitted to an email account corresponding to the DEA. Alternatively and/or additionally, a status tag may be indicative of a DEA being associated with a deactivated status where emails addressed to the DEA are not automatically transmitted to an email account corresponding to the DEA. For example, the third email may be transmitted to the first email account responsive to identifying the first DEA and/or determining that a first status tag associated with the first DEA is indicative of the first DEA being associated with an active status. In some examples, a set of instructions (e.g., machine-readable instructions) may be transmitted to the first client device and/or to the first email account. For example, the set of instructions may be transmitted to the first email account via the third email (e.g., the set of instructions may be comprised within the third email). Alternatively and/or additionally, the set of instructions may be transmitted to the first client device and/or to the first email account separately from the third email. The set of instructions may comprise instructions associated with emails that are addressed to the first DEA (e.g., emails having email headers comprising the first DEA within recipient address fields, such as the third email). For example, the set of instructions may indicate that response emails drafted and/or transmitted using the first email account, in response to emails that are addressed to the first DEA, shall include the first DEA within email headers of the response emails (e.g., an email header of a response email may comprise the first DEA within a sender address field of the email header and/or the first DEA within a return-path field of the email header). 
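The DEA database and the inbound check at 414-416 might be sketched as follows: an email addressed to a DEA is forwarded only if the DEA exists and its status tag indicates the active status. The table layout and function name are illustrative.

```python
# A sketch of a DEA database keyed by DEA, with owning account and status tag.
from typing import Dict, Optional

# dea -> {"account": owning email account, "status": "active" | "deactivated" | "permanently_deactivated"}
DEA_DB: Dict[str, Dict[str, str]] = {
    "jill.3f9a2c1b@example.com": {"account": "jill@example.com", "status": "active"},
}


def route_inbound(recipient_dea: str) -> Optional[str]:
    """Return the account to forward to, or None if the DEA is unknown or not active."""
    entry = DEA_DB.get(recipient_dea)
    if entry is None or entry["status"] != "active":
        return None
    return entry["account"]


print(route_inbound("jill.3f9a2c1b@example.com"))   # jill@example.com
print(route_inbound("unknown@example.com"))          # None
```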
Alternatively and/or additionally, the set of instructions may indicate that response emails drafted and/or transmitted using the first email account, in response to emails that are addressed to the first DEA, shall not include the first email address within email headers of the response emails. Alternatively and/or additionally, the set of instructions may indicate that emails, addressed to email accounts of the set of email accounts, shall include the first DEA within email headers of the emails (e.g., the set of instructions may comprise indications of the set of email accounts). Alternatively and/or additionally, the set of instructions may indicate that emails, addressed to one or more email accounts of the set of email accounts, shall not include the first email address within email headers of the emails. For example, the third email may be displayed using the first email interface associated with the first email account. The first email interface may comprise a reply selectable input corresponding to composing a fourth email that is a response to the third email (e.g., the fourth email may be a response email associated with the third email). Responsive to a selection of the reply selectable input, one or more portions of a fourth email header associated with the fourth email may (automatically) be configured in accordance with the set of instructions (e.g., the one or more portions of the fourth email header may be populated in accordance with the set of instructions). For example, a fourth sender address field and/or a fourth return-path field of the fourth email header may be populated using the first DEA (in accordance with the set of instructions) (e.g., the first DEA may be entered into the fourth sender address field and/or the fourth return-path field of the fourth email header). Alternatively and/or additionally, rather than configuring the one or more portions of the fourth email header (e.g., populating the one or more portions of the fourth email header) responsive to the selection of the reply selectable input, the one or more portions of the fourth email header may be configured (e.g., populated), in accordance with the set of instructions, responsive to a selection of a transmit selectable input. For example, the transmit selectable input may correspond to transmitting the fourth email to the second email address. In some examples, responsive to the selection of the transmit selectable input, the first email address may be removed from the fourth sender address field and/or the fourth return-path field of the fourth email header. Alternatively and/or additionally, responsive to the selection of the transmit selectable input, the fourth sender address field and/or the fourth return-path field of the fourth email header may be populated using the first DEA (e.g., the first DEA may be entered into the fourth sender address field and/or the fourth return-path field of the fourth email header). In some examples, the set of instructions may not be transmitted to the first email account and/or to the first client device. Rather than using the first client device and/or the first email interface to modify the fourth email header and/or enter the first DEA into the fourth sender address field and/or the fourth return-path field of the fourth email header (and/or remove the first email address from the fourth email header), the fourth email may be modified to generate a fifth email using a server associated with the communication system (and/or the email service provider). 
In some examples, the fourth email may be received (by the server associated with the communication system and/or the email service provider) responsive to a selection of the transmit selectable input. The fourth email may be analyzed to determine whether the fourth email comprises an indication of the first email address. For example, it may be determined that the fourth sender address field and/or the fourth return-path field of the fourth email header (of the fourth email) comprises the first email address. Responsive to determining that the fourth email comprises the first email address (within the fourth email header) the fourth email may be modified, based upon the first DEA, to generate the fifth email. For example, a fifth email body of the fifth email may be similar to a fourth email body of the fourth email (e.g., the fourth email body may comprise content of the fifth email body). Alternatively and/or additionally, a fifth email header of the fifth email may be different than the fourth email header of the fourth email. For example, rather than a fifth sender address field and/or a fifth return-path field of the fifth email header comprising the first email address, the fifth sender address field and/or the fifth return-path field may comprise the first DEA. In some examples, the fifth email may be transmitted to the second email account. Alternatively and/or additionally, responsive to determining that the fourth email comprises the first DEA (within the fourth email header) and/or does not comprise the first email address, the fourth email may be transmitted to the second email account (and/or the fifth email may not be generated). At418, a request to deactivate the first DEA may be received from the first client device associated with the first email account (e.g., and/or a different client device associated with the first client device). For example, the first client device may be used to display a deactivation interface (e.g., the deactivation interface may be a web page associated with the communication system, the deactivation interface may be comprised within a notification and/or an email, transmitted by the communication system, to the first email account, etc.). The deactivation interface may comprise a deactivate selectable input corresponding to requesting deactivation of the first DEA. For example, the request to deactivate the first DEA may be received responsive to a selection of the deactivate selectable input. In some examples, one or more emails addressed to the first DEA and/or transmitted by email accounts of the set of email accounts may be analyzed to determine whether the first requested service is completed. For example, it may be determined that the first requested service is completed by identifying an email comprising a payment receipt associated with a payment by the first user in exchange for completion of the first requested service by a service provider of the set of service providers. Alternatively and/or additionally, it may be determined that the first requested service is completed by identifying an email comprising a confirmation of completion of the first requested service. In some examples, responsive to determining that the first requested service is completed by a service provider of the set of service providers, the communication system (e.g., a server associated with the communication system) may transmit a first notification to the first client device. 
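The server-side rewrite described above might be sketched as follows: if an outgoing reply still carries the real address in its header, the server substitutes the DEA before forwarding the message to the service provider. Field names and example addresses are assumptions.

```python
# A sketch of rewriting an outgoing reply (the fourth email) into the fifth email.
from typing import Dict


def rewrite_outbound(header: Dict[str, str], body: str,
                     real_address: str, dea: str) -> Dict[str, object]:
    new_header = dict(header)
    for name in ("From", "Return-Path"):
        if real_address in new_header.get(name, ""):
            new_header[name] = new_header[name].replace(real_address, dea)
    return {"header": new_header, "body": body}


fourth = {"From": "jill@example.com", "To": "jjplumbing@provider.example",
          "Return-Path": "<jill@example.com>"}
fifth = rewrite_outbound(fourth, "Thanks for the quote!",
                         "jill@example.com", "jill.3f9a2c1b@example.com")
print(fifth["header"]["From"])   # jill.3f9a2c1b@example.com
```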
For example, the first notification may be a sixth email transmitted to the first email account. For example, the first notification may comprise a first deactivate selectable input. Responsive to a selection of the first deactivate selectable input, the request to deactivate the first DEA may be received (by the communication system). Alternatively and/or additionally, a second notification may be transmitted to the first client device responsive to a determination that a first duration of time since a time that the first DEA was generated is greater than a first threshold duration of time (e.g., one week, one month, etc.). For example, the second notification may be a seventh email transmitted to the first email account. The second notification may be indicative of the first duration of time being greater than the first threshold duration of time (e.g., the second notification may comprise “The DEA for the plumbing services you requested was generated one month ago. Do you want to deactivate the DEA?”). Alternatively and/or additionally, the second notification may comprise a second deactivate selectable input. Responsive to a selection of the second deactivate selectable input, the request to deactivate the first DEA may be received. Alternatively and/or additionally, a third notification may be transmitted to the first client device responsive to a determination that a second duration of time of email inactivity associated with the first DEA is greater than a second threshold duration of time. For example, the third notification may be an eighth email transmitted to the first email account. The second duration of time of email inactivity may correspond to a time in which an email addressed to the first DEA is not received by the first email account and/or the communication system (e.g., 0 emails addressed to the first DEA are received by the first email account and/or the communication system during the second duration of time of email inactivity). Alternatively and/or additionally, the second duration of time of email inactivity may correspond to a time in which an email is not transmitted to the set of email accounts by the first email account (e.g., 0 emails are transmitted by the first email account to the set of email accounts during the second duration of time of email inactivity). For example, the third notification may be indicative of the second duration of time being greater than the second threshold duration of time (e.g., the third notification may comprise “You haven't received any emails addressed to the DEA for plumbing services and you haven't sent any emails to any of the service providers in over a month. Do you want to deactivate the DEA?”). The third notification may comprise a third deactivate selectable input. Responsive to a selection of the third deactivate selectable input, the request to deactivate the first DEA may be received. Alternatively and/or additionally, a fourth notification may be transmitted to the first client device responsive to identifying one or more malicious emails addressed to the first DEA. For example, the fourth notification may be a ninth email transmitted to the first email account. The one or more malicious emails may be determined to be malicious based upon a determination that the one or more malicious emails match one or more emails stored in a database of malicious emails comprising emails previously marked as being malicious.
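The three prompting conditions described above (DEA age, email inactivity, and suspected malicious mail) might be sketched as follows; the thresholds mirror the one-month examples in the text, and everything else is an illustrative assumption.

```python
# A sketch of deciding which deactivation prompts to send for a DEA.
from datetime import datetime, timedelta
from typing import List


def deactivation_prompts(created_at: datetime,
                         last_activity_at: datetime,
                         malicious_email_count: int,
                         now: datetime,
                         age_threshold: timedelta = timedelta(days=30),
                         inactivity_threshold: timedelta = timedelta(days=30)) -> List[str]:
    prompts = []
    if now - created_at > age_threshold:
        prompts.append("DEA was generated more than a month ago")
    if now - last_activity_at > inactivity_threshold:
        prompts.append("no email activity on the DEA in over a month")
    if malicious_email_count > 0:
        prompts.append("malicious emails addressed to the DEA were identified")
    return prompts


now = datetime(2024, 6, 1)
print(deactivation_prompts(created_at=datetime(2024, 4, 1),
                           last_activity_at=datetime(2024, 5, 30),
                           malicious_email_count=0, now=now))
# -> ['DEA was generated more than a month ago']
```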
Alternatively and/or additionally, the one or more malicious emails may be determined to be malicious based upon a determination that the one or more malicious emails comprise links to unsecure and/or malicious web pages (e.g., blacklisted web pages). For example, the fourth notification may be indicative of the one or more malicious emails (e.g., the ninth email may comprise “Malicious emails addressed to the DEA for plumbing services have been identified. It seems the DEA may be targeted by malicious entities. Do you want to deactivate the DEA?”). The fourth notification may comprise a fourth deactivate selectable input. Responsive to a selection of the fourth deactivate selectable input, the request to deactivate the first DEA may be received. At420, responsive to receiving the request to deactivate the first DEA, the first DEA may be deactivated. Alternatively and/or additionally, the first DEA may be deactivated (automatically) responsive to determining that the first requested service is completed (e.g., based upon one or more emails addressed to the first DEA). Alternatively and/or additionally, the first DEA may be deactivated (automatically) responsive to the determination that the first duration of time since the time that the first DEA was generated is greater than the first threshold duration of time. Alternatively and/or additionally, the first DEA may be deactivated (automatically) responsive to the determination that the second duration of time of email inactivity is greater than the second threshold duration of time. Alternatively and/or additionally, the first DEA may be deactivated (automatically) responsive to identifying the one or more malicious emails addressed to the first DEA. In some examples, deactivating the first DEA may be associated with changing a status of the first DEA from active to deactivated. For example, the first status tag associated with the first DEA may be modified such that rather than the first status tag being indicative of the first DEA being associated with the active status, the first status tag may be indicative of the first DEA being associated with a deactivated status. Alternatively and/or additionally, deactivating the first DEA may be associated with removing the first DEA from the DEA database. After deactivating the first DEA, a tenth email addressed to the first DEA may be received (by the communication system) from a third email account (e.g., the third email account may be an email account of the set of email accounts). In some examples, the tenth email may be discarded responsive to a determination that the first DEA is deactivated. For example, the DEA database may be analyzed to identify the first status tag, indicative of the first DEA having the deactivated status. Alternatively and/or additionally, the DEA database may be analyzed and/or it may be determined that the first DEA is deactivated based upon a determination that the first DEA is not comprised within the DEA database. Alternatively and/or additionally, responsive to the determination that the first DEA is deactivated, a fifth notification may be generated and/or transmitted to the first client device. For example, the fifth notification may be an eleventh email transmitted to the first email account. The fifth notification may comprise an indication of the third email account (associated with the tenth email). The fifth notification may comprise a first selectable input corresponding to a request to be provided with the tenth email. 
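Step 420 and the follow-up behavior might be sketched as follows: once the DEA is deactivated, later emails addressed to it are discarded and the account owner may be notified with a choice to retrieve them. The data structures continue the earlier sketches and remain assumptions.

```python
# A sketch of handling an inbound email addressed to a deactivated DEA.
from typing import Dict


def handle_inbound(dea: str, email: Dict[str, str],
                   dea_db: Dict[str, Dict[str, str]]) -> str:
    entry = dea_db.get(dea)
    if entry is None or entry["status"] == "permanently_deactivated":
        return "discard"
    if entry["status"] == "deactivated":
        # e.g. send the "fifth notification" offering to deliver or discard the email
        return f"notify {entry['account']} about mail from {email.get('From', 'unknown')}"
    return f"forward to {entry['account']}"


db = {"jill.3f9a2c1b@example.com": {"account": "jill@example.com", "status": "deactivated"}}
print(handle_inbound("jill.3f9a2c1b@example.com",
                     {"From": "jjplumbing@provider.example"}, db))
```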
Alternatively and/or additionally, the fifth notification may comprise a second selectable input corresponding to a request to not be provided with the tenth email. In some examples, responsive to a selection of the first selectable input (corresponding to the request to be provided with the tenth email), the tenth email may be transmitted to the first email account. Alternatively and/or additionally, responsive to the selection of the first selectable input, the first DEA may be activated. For example, the first status tag associated with the first DEA may be modified such that rather than the first status tag being indicative of the first DEA having the deactivated status, the first status tag may be indicative of the first DEA having the active status. In some examples, after receiving the selection of the first selectable input, a twelfth email addressed to the first DEA may be received (by the communication system) from a fourth email account (e.g., the fourth email account may be an email account of the set of email accounts). In some examples, the twelfth email may be (automatically) transmitted to the first email account based upon the selection of the first selectable input of the fifth notification. For example, the twelfth email may be transmitted to the first email account responsive to a determination that the first DEA is active (e.g., the first status tag is indicative of the first DEA being associated with the active status). Alternatively and/or additionally, responsive to a selection of the second selectable input of the fifth notification (corresponding to the request to not be provided with the tenth email), the tenth email may be discarded. For example, the status of the first DEA may remain deactivated. Alternatively and/or additionally, the status of the first DEA may be changed from deactivated to permanently deactivated. For example, the first status tag associated with the first DEA may be modified such that rather than the first status tag being indicative of the first DEA being associated with the deactivated status, the first status tag may be indicative of the first DEA being associated with a permanently deactivated status. In some examples, after receiving the selection of the second selectable input, a thirteenth email addressed to the first DEA may be received (by the communication system) from a fifth email account (e.g., the fifth email account may be an email account of the set of email accounts). In some examples, the thirteenth email may be discarded based upon the selection of the second selectable input of the fifth notification. Alternatively and/or additionally, the thirteenth email may be discarded responsive to a determination that the first DEA is permanently deactivated (e.g., that the first status tag is indicative of the first DEA being associated with the permanently deactivated status). It may be appreciated that one or more of the techniques presented herein may be implemented using a communication platform different than an email platform (e.g., text messaging, messaging platforms, social media platforms, etc.). For example, using one or more of the techniques presented herein, a requested service may be determined based upon a first message (e.g., a text message, an instant message, etc.) received from a client device. A set of service providers may be determined based upon the requested service. A DEA may be generated corresponding to the client device, in association with the requested service. 
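The active / deactivated / permanently deactivated handling described above can be summarized as a small state machine. The sketch below is illustrative only; the callables passed in stand for whatever delivery, notification, and discard mechanisms the communication system actually uses.

```python
from dataclasses import dataclass
from typing import Callable

ACTIVE = "active"
DEACTIVATED = "deactivated"
PERMANENTLY_DEACTIVATED = "permanently deactivated"


@dataclass
class DeaState:
    dea: str
    owner_account: str
    status: str = ACTIVE


def handle_inbound(state: DeaState, sender: str,
                   deliver: Callable[[str], None],
                   notify_owner: Callable[[str, str], None],
                   discard: Callable[[], None]) -> None:
    """Route an email addressed to the DEA according to its status tag."""
    if state.status == ACTIVE:
        deliver(state.owner_account)               # forwarded to the first email account
    elif state.status == DEACTIVATED:
        discard()                                  # the description also allows holding the email for later delivery
        notify_owner(state.owner_account, sender)  # fifth-notification style prompt with two selectable inputs
    else:                                          # permanently deactivated: silently discarded
        discard()


def on_notification_choice(state: DeaState, wants_email: bool) -> None:
    """Apply the owner's selection from the notification about the withheld email."""
    if wants_email:
        state.status = ACTIVE                      # reactivated; later emails are delivered again
    else:
        state.status = PERMANENTLY_DEACTIVATED
```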
A first email may be generated based upon the first message (e.g., content of the first email may comprise content of the first message). The first email may be transmitted to a set of email accounts. Each email account of the set of email accounts may be associated with a service provider of the set of service providers associated with the requested service. In some examples, one or more emails, addressed to the DEA, may be received from the set of email accounts (by the communication system). For example, one or more messages corresponding to the one or more emails may be generated (e.g., each message of the one or more messages may comprise content of an email of the one or more emails). In some examples, the one or more messages may be transmitted to the client device. For example, a second email, addressed to the DEA, may be received from a first email account of the set of email accounts (e.g., the first email account of the set of email accounts may be associated with a first service provider of the set of service providers). A second message, corresponding to the second email, may be generated based upon the second email (e.g., the second message may comprise content of the second email). The second message may be transmitted to the client device. In some examples, the second message may be a text message (e.g., associated with short message service (SMS), multimedia messaging service (MMS), etc.) and/or may be associated with a telephone number. For example, the telephone number may be a return number of the second message. For example, the second message may be displayed using the client device. The second message may be displayed using a text messaging interface and/or the text messaging interface may be indicative of the telephone number associated with the second message. In some examples, the client device may be used to transmit a third message to the telephone number. For example, the third message may be a text message addressed to the telephone number. In some examples, the communication system may receive the third message. For example, the telephone number may be identified and/or may be compared with a telephone number database comprising a plurality of telephone numbers and/or a plurality of email accounts associated with the plurality of telephone numbers. For example, it may be determined that the telephone number is associated with the first email account (associated with the first service provider). A third email may be generated based upon the third message. The third email may be transmitted to the first email account. In some examples, the one or more emails addressed to the DEA may be analyzed to determine whether the requested service is completed. For example, responsive to a determination that the requested service is completed, the DEA may be deactivated. Alternatively and/or additionally, a notification may be transmitted to the client device. For example, the notification may comprise a link to a web page comprising a deactivate interface. For example, the deactivate interface may comprise a deactivate selectable input corresponding to requesting deactivation of the DEA. For example, responsive to a selection of the deactivate selectable input, a request to deactivate the DEA may be received (by the communication system) and/or the DEA may be deactivated. FIGS.5A-5Iillustrate examples of a system501for facilitating communications with service providers using DEAs. 
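A minimal sketch of the messaging-platform variant just described, in which emails addressed to the DEA are surfaced to the client device as text messages and replies to a return telephone number are mapped back to the originating provider's email account, might look as follows. The class, the return-number format, and the telephone number database layout are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple


@dataclass
class MessagingBridge:
    dea: str                                                         # DEA generated for the client device
    phone_to_account: Dict[str, str] = field(default_factory=dict)   # telephone number database (illustrative)
    _next_number: int = 0

    def email_to_message(self, provider_account: str, body: str) -> Tuple[str, str]:
        """Turn an email addressed to the DEA into a text message plus its return telephone number."""
        return_number = f"+1-555-0{self._next_number:03d}"           # placeholder numbering scheme
        self._next_number += 1
        self.phone_to_account[return_number] = provider_account
        return return_number, body

    def message_to_email(self, dialed_number: str, body: str) -> Tuple[str, str, str]:
        """Turn a text sent to a return number into an email from the DEA to the matching provider."""
        provider_account = self.phone_to_account[dialed_number]      # lookup in the telephone number database
        return self.dea, provider_account, body                      # (sender, recipient, content)
```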
A first user, such as user Sam and/or a first client device550(illustrated inFIG.5D) associated with the first user may access and/or interact with a communication system (and/or an email system, messaging system, etc.) for sending and/or receiving emails and/or performing communications via messaging, voice calls, video calls, etc. For example, a first email account (and/or a different type of user account) of the first user with the communication system may be accessed and/or interacted with via a first email interface, such as an email client, a web email interface accessed via a browser, an email application, etc. on the first client device550. In some examples, the communication system may be associated with an email service provider. FIG.5Aillustrates a first email502being received from the first email account. For example, the first email502may comprise a first email header504and/or a first email body506. The first email header504may comprise a plurality of email header fields. For example, the first email header504may comprise a first sender address field508comprising a sender name “SAM B” associated with the first email account and/or a first email address “[email protected]” associated with the first email account. Alternatively and/or additionally, the first email header504may comprise a first subject field510comprising a first subject “Kitchen Remodeling” of the first email502. Alternatively and/or additionally, the first email header504may comprise a first recipient address field512comprising a second email address516“[email protected]” associated with the communication system. Alternatively and/or additionally, the first email header504may comprise a first return-path field514comprising the first email address. In some examples, the first email body506may comprise content (e.g., text, one or more images, etc.) related to a first requested service associated with the first email502. For example, the first email body may comprise text, composed using the first email interface, which may comprise a description of the first requested service. The first email may be transmitted to the second email address516. For example, the second email address516may be associated with a first service topic “home improvement services”. For example, the first requested service may be determined to be a home improvement service based upon the first email being transmitted to the second email address516. Alternatively and/or additionally, the first email body506and/or the first subject field510may be analyzed to determine the first requested service. For example, the first requested service may be determined to be “kitchen remodeling service” based upon the first email body506and/or the first subject field510. In some examples, a set of service providers may be determined based upon the first requested service. For example, each service provider of the set of service providers may provide kitchen remodeling services. The set of service providers may be determined based upon a client location associated with the first user and/or the first email account. For example, each service provider of the set of service providers may be associated with a location that is within a threshold distance from the client location. FIG.5Billustrates a second email520being generated based upon the first email502and/or a first DEA corresponding to the first email account. In some examples, the first DEA “[email protected]” may be generated in association with the first requested service. 
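By way of non-limiting example, the determination of the requested service from the topic-specific recipient address and/or the subject, and the selection of service providers within a threshold distance of the client location, could be sketched as below. The topic address, the provider fields, and the flat distance metric are placeholders rather than details taken from the disclosure.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical mapping from topic-specific inbound addresses to service topics.
TOPIC_ADDRESSES = {"home-improvement@communication-system.example": "home improvement services"}


@dataclass
class ServiceProvider:
    name: str
    email_account: str
    services: List[str]
    location: Tuple[float, float]


def _distance(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    # Stand-in metric; a real system would use geographic distance.
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5


def match_providers(recipient: str, subject: str, client_location: Tuple[float, float],
                    providers: List[ServiceProvider], max_distance: float) -> List[ServiceProvider]:
    """Pick providers offering the requested service within the threshold distance of the client."""
    requested = (subject or TOPIC_ADDRESSES.get(recipient, "")).lower()   # e.g. "kitchen remodeling"
    if not requested:
        return []
    matches = []
    for provider in providers:
        offers = any(requested in s.lower() or s.lower() in requested for s in provider.services)
        if offers and _distance(provider.location, client_location) <= max_distance:
            matches.append(provider)
    return matches
```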
In some examples, the first email502may be modified to generate the second email520. For example, the first email header504may be modified to generate a second email header530of the second email520. In some examples, the first sender address field508(comprising the first email address) may be modified to generate a second sender address field532, of the second email header530, comprising the first DEA. Alternatively and/or additionally, the first return-path field514(comprising the first email address) may be modified to generate a second return-path field534, of the second email header530, comprising the first DEA. In some examples, the second email520may be transmitted to a set of email accounts528. Each email account of the set of email accounts528may be associated with a service provider of the set of service providers associated with the first requested service. For example, the second email520may be transmitted to a second email account526of the set of email accounts528. For example, the second email account526may be associated with a first service provider “SERVICE PROVIDER 1” of the set of service providers. FIG.5Cillustrates a third email542being transmitted to the first email account. For example, the third email542may be transmitted by the second email account526. The third email542may be addressed to the first DEA (e.g., a third email header of the third email542may comprise a third recipient address field comprising an indication of the first DEA). In some examples, the third email542may be received by a server544associated with the communication system and/or the email service provider. For example, the third email542may be transmitted to the first email account by the communication system and/or the email service provider (e.g., by the server544) responsive to a determination that the first DEA corresponds to the first email account. FIG.5Dillustrates a graphical user interface of the first client device550being controlled to display the first email interface. For example, the first email interface may display a list of emails. The list of emails may correspond to an inbox of the first email account. The list of emails may comprise the third email542. For example, a selection of the third email542may be received via the first email interface. FIG.5Eillustrates the graphical user interface of the first client device550being controlled to display the third email542. For example, the third email542may be displayed responsive to the selection of the third email542from the list of emails. In some examples, the first email interface may comprise a reply selectable input558corresponding to composing a fourth email564(illustrated inFIG.5F) that is a response to the third email542. Responsive to a selection of the reply selectable input558, an email drafting interface may be displayed. For example, the fourth email564may be drafted using the email drafting interface. Alternatively and/or additionally, the fourth email564may be transmitted to the communication system responsive to a selection of a transmit selectable input of the email drafting interface. For example, the fourth email564may be received from the first email account and/or the first client device550responsive to a selection of the transmit selectable input. FIG.5Fillustrates a fifth email566being generated based upon the fourth email564and/or the first DEA corresponding to the first email account. In some examples, the fourth email564may be modified to generate the fifth email566. 
For example, a fourth email header of the fourth email564may be modified to generate a fifth email header of the fifth email566. In some examples, a fourth sender address field (comprising the first email address) of the fourth email header may be modified to generate a fifth sender address field, of the fifth email header, comprising the first DEA. Alternatively and/or additionally, a fourth return-path field (comprising the first email address) of the fourth email header may be modified to generate a fifth return-path field, of the fifth email header, comprising the first DEA. The fifth email566may be transmitted to the second email account526(by the communication system). FIG.5Gillustrates a sixth email574being transmitted to the first email account. For example, the sixth email574may be transmitted by the second email account526. The sixth email574may be addressed to the first DEA (e.g., a sixth email header of the sixth email574may comprise a sixth recipient address field comprising an indication of the first DEA). In some examples, the sixth email574may be received by the server544(and/or a different server) associated with the communication system and/or the email service provider. For example, the sixth email574may be transmitted to the first email account (by the communication system and/or the email service provider) responsive to a determination that the first DEA corresponds to the first email account. In some examples, the sixth email574may be analyzed to determine whether the first requested service is completed. For example, it may be determined that the first requested service is completed by identifying that the sixth email574comprises a payment receipt associated with a payment by the first user in exchange for completion of the first requested service by the first service provider. In some examples, responsive to determining that the first requested service is completed by the first service provider, the communication system (e.g., a server associated with the communication system) may transmit a seventh email582to the first email account. FIG.5Hillustrates the graphical user interface of the first client device550being controlled to display the first email interface comprising a second list of emails. The second list of emails may correspond to the inbox of the first email account. The second list of emails may comprise the sixth email574. Alternatively and/or additionally, the second list of emails may comprise the seventh email582. For example, a selection of the seventh email582may be received via the first email interface. FIG.5Iillustrates the graphical user interface of the first client device550being controlled to display the seventh email582. For example, the seventh email582may be displayed responsive to the selection of the seventh email582from the second list of emails. In some examples, the seventh email582may comprise a deactivate selectable input592corresponding to a request to deactivate the first DEA. Alternatively and/or additionally, the seventh email582may comprise an activate selectable input594corresponding to a request to not deactivate the first DEA. In some examples, the deactivate selectable input592may be selected. For example, responsive to the deactivate selectable input592being selected, a request to deactivate the first DEA may be received. For example, the first DEA may be deactivated responsive to receiving the request to deactivate the first DEA. 
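The header rewriting shown in FIGS. 5B and 5F, in which the sender address field and the return-path field of the client's email are replaced with the DEA before relaying, and the reverse lookup that forwards provider replies addressed to the DEA back to the owning account, could be sketched as follows. The addresses and the in-memory DEA table are made-up placeholders, and the Return-Path header is set here only to mirror the described field; in practice that header is normally added during delivery.

```python
from email.message import EmailMessage
from typing import Optional

# Hypothetical in-memory stand-in for the DEA database: DEA -> owning email account.
DEA_DATABASE = {"dea-1234@system.example": "owner@mail.example"}


def rewrite_outbound(original: EmailMessage, dea: str, provider_address: str) -> EmailMessage:
    """Replace the client's identity with the DEA before relaying the email to a provider."""
    relayed = EmailMessage()
    relayed.set_content(original.get_content())
    relayed["Subject"] = original.get("Subject", "")
    relayed["From"] = dea            # sender address field now carries the DEA
    relayed["Return-Path"] = dea     # return-path field now carries the DEA (illustrative only)
    relayed["To"] = provider_address
    return relayed


def resolve_inbound(reply: EmailMessage) -> Optional[str]:
    """Map a reply addressed to a DEA back to the owner's account, assuming a bare address in To."""
    return DEA_DATABASE.get(reply["To"])


# Usage sketch mirroring FIG. 5B: the client's email is relayed under the DEA.
client_email = EmailMessage()
client_email["Subject"] = "Kitchen Remodeling"
client_email.set_content("Looking for quotes on a kitchen remodel.")
relayed = rewrite_outbound(client_email, "dea-1234@system.example", "provider-1@mail.example")
```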
It may be appreciated that the disclosed subject matter may assist a user (e.g., and/or a client device associated with the user) in receiving service information, associated with one or more services of interest to the user, from multiple service providers. Implementation of at least some of the disclosed subject matter may lead to benefits including, but not limited to, a reduction in screen space and/or an improved usability of a display (of the client device) (e.g., as a result of automatically identifying a set of service providers associated with a requested service of interest to the user, wherein the user may not need to open a separate application and/or a separate window in order to find service providers associated with the requested service, wherein the user may not need to use search engines and/or navigate through internet content in order to search for service providers associated with the requested service, etc.). Alternatively and/or additionally, implementation of at least some of the disclosed subject matter may lead to benefits including a reduction in screen space and/or an improved usability of the display (e.g., as a result of receiving a first email from a first email account associated with the user, as a result of transmitting an email related to the first email to email accounts associated with the set of service providers, wherein the user may not need to send an email to each email account separately, wherein the user may not need to find email addresses corresponding to each service provider of the set of service providers, wherein the user may not need to enter each email address into an email interface for transmission of the first email, etc.). Alternatively and/or additionally, implementation of at least some of the disclosed subject matter may lead to benefits including a reduction in screen space and/or an improved usability of the display (e.g., as a result of generating a DEA corresponding to the first email account, as a result of using the DEA for email correspondences between the user and the set of service providers, wherein the user may not be required to disclose a first email address associated with the first email account, wherein service providers of the set of service providers may be prevented from sending unwanted and/or undesirable emails to the first email account by deactivating the DEA, wherein the user may not need to scroll through unwanted and/or undesirable emails to consume (desirable) emails, etc.). Alternatively and/or additionally, implementation of at least some of the disclosed subject matter may lead to benefits including a reduction in bandwidth (e.g., as a result of reducing a need for the user to open a separate application and/or a separate window in order to search throughout the internet and/or navigate through internet content to find service providers associated with the requested service). Alternatively and/or additionally, implementation of at least some of the disclosed subject matter may lead to benefits including a reduction in bandwidth (e.g., as a result of preventing the set of service providers from sending unwanted and/or undesirable emails to the first email account by deactivating the DEA, such that the unwanted and/or undesirable emails are not downloaded to the client device). 
Alternatively and/or additionally, implementation of at least some of the disclosed subject matter may lead to benefits including more accurate and precise transmission of content to intended users (e.g., as a result of preventing the set of service providers from sending unwanted and/or undesirable emails to the first email account by deactivating the DEA, such that the unwanted and/or undesirable emails are not downloaded to the client device and/or merely wanted and/or desirable emails may be sent to the first email account and/or downloaded to the client device). Alternatively and/or additionally, implementation of at least some of the disclosed subject matter may lead to benefits including a faster loading of content on a receiving device. For example, by reducing undesirable emails transmitted to the first email account and/or by reducing undesirable content associated with the undesirable emails downloaded to the client device, as provided for herein, content may be downloaded to the client device at an increased speed, and thus delay between a determination to transmit content and completion of transmission of the content and/or presenting of the content can be reduced. Alternatively and/or additionally, implementation of at least some of the disclosed subject matter may lead to benefits including protecting user privacy and/or preventing unauthorized access to personal information associated with the user (e.g., as a result of enabling the user to receive service information associated with the requested service and/or the set of service providers without being required to provide personal information to the set of service providers, etc.). Alternatively and/or additionally, implementation of at least some of the disclosed subject matter may lead to benefits including decreasing security resources needed to protect the personal information from unauthorized access. In some examples, at least some of the disclosed subject matter may be implemented on a client device, and in some examples, at least some of the disclosed subject matter may be implemented on a server (e.g., hosting a service accessible via a network, such as the Internet). FIG.6is an illustration of a scenario600involving an example non-transitory machine readable medium602. The non-transitory machine readable medium602may comprise processor-executable instructions612that when executed by a processor616cause performance (e.g., by the processor616) of at least some of the provisions herein (e.g., embodiment614). The non-transitory machine readable medium602may comprise a memory semiconductor (e.g., a semiconductor utilizing static random access memory (SRAM), dynamic random access memory (DRAM), and/or synchronous dynamic random access memory (SDRAM) technologies), a platter of a hard disk drive, a flash memory device, or a magnetic or optical disc (such as a compact disc (CD), digital versatile disc (DVD), or floppy disk). The example non-transitory machine readable medium602stores computer-readable data604that, when subjected to reading606by a reader610of a device608(e.g., a read head of a hard disk drive, or a read operation invoked on a solid-state storage device), express the processor-executable instructions612. In some embodiments, the processor-executable instructions612, when executed, cause performance of operations, such as at least some of the example method400ofFIG.4, for example. 
In some embodiments, the processor-executable instructions612are configured to cause implementation of a system, such as at least some of the example system501ofFIGS.5A-5I, for example. 3. Usage of Terms As used in this application, “component,” “module,” “system”, “interface”, and/or the like are generally intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. Unless specified otherwise, “first,” “second,” and/or the like are not intended to imply a temporal aspect, a spatial aspect, an ordering, etc. Rather, such terms are merely used as identifiers, names, etc. for features, elements, items, etc. For example, a first object and a second object generally correspond to object A and object B or two different or two identical objects or the same object. Moreover, “example” is used herein to mean serving as an instance, illustration, etc., and not necessarily as advantageous. As used herein, “or” is intended to mean an inclusive “or” rather than an exclusive “or”. In addition, “a” and “an” as used in this application are generally to be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Also, at least one of A and B and/or the like generally means A or B or both A and B. Furthermore, to the extent that “includes”, “having”, “has”, “with”, and/or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising”. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing at least some of the claims. Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. Of course, many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter. Various operations of embodiments are provided herein. In an embodiment, one or more of the operations described may constitute computer readable instructions stored on one or more computer and/or machine readable media, which if executed will cause the operations to be performed. The order in which some or all of the operations are described should not be construed as to imply that these operations are necessarily order dependent. 
Alternative ordering will be appreciated by one skilled in the art having the benefit of this description. Further, it will be understood that not all operations are necessarily present in each embodiment provided herein. Also, it will be understood that not all operations are necessary in some embodiments. Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The disclosure includes all such modifications and alterations and is limited only by the scope of the following claims. In particular regard to the various functions performed by the above described components (e.g., elements, resources, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application.
87,985
11863505
DETAILED DESCRIPTION The disclosed technology is configured to allow Instant Messaging (IM) users to select multiple items in a chat thread, group, label, annotate, comment, and reply to these comments or other postings together in an arrangement in which IM messages are easily grouped by topic and by a plurality of users. The technology permits expanding the plurality of users in a conversation thread so as to include more than two users. The chat items can be categorized and grouped using indicia, such as, by way of non-limiting example, text labels or colors. The disclosed technology treats chat items, user-assigned labels or colors, and associated comments as a chat unit to which a receiver can also reply. In one configuration, a user can generate such groupings of chat items and comments as annotations and save them for the user's own future reference instead of sending them to other individuals. The disclosed technology provides a software component to be integrated with the host IM application and enables the implementation of related functions in the host application. The described approach is configured to enhance the IM user experience and leads to improved productivity. The disclosed technology discloses a method to allow users to group various chat items in a communication thread and categorize the chat items using indicia, such as, by way of non-limiting example, text labels or colors. The user can further comment on these categories. As the receiver of such a message views the groups of chat items along with their labels and comments as a unit of chat item, the receiver of the message can comment on this entire unit. Another configuration of the disclosed technology allows a user to store these groups of chat items, labels, and comments in the form of annotation for the user's own future reference instead of sending them to other users. The disclosed technology provides a method, system, and software component to be integrated with a host IM application and is configured to enable the implementation of related functions in the host application. Different configurations of the disclosed technology are configured to allow the user to select multiple items within a conversation thread, order, group, annotate, comment, and respond to these comments as part of a joint on-line chat. The present disclosure is directed to a message communication system which provides shared communication with a plurality of users to enhance an instant messaging experience by enabling collective comments on multiple items in a communication thread. The communication system includes a host IM application with a routine configured to allow the users to group chat or items in individual communication threads and categorize the individual communication threads into categories by using indicia, thereby allowing each user to select multiple items, order the multiple items, group the multiple items, annotate the multiple items and comment on the multiple items together so as to integrate with the host IM application and enable the implementation of related controls in the host IM application. 
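By way of non-limiting example, the chat unit described above, i.e., a group of selected chat items together with its user-assigned label or color and the associated comments, to which a receiver can also reply, might be modeled as in the following sketch; the class and field names are illustrative rather than part of the disclosed component.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ChatItem:
    item_id: int
    content: str                      # message text or an attachment name such as "Item_1.xlsx"


@dataclass
class ChatUnit:
    """Selected chat items plus indicia and comments, treated and transmitted as one unit."""
    items: List[ChatItem]
    label: Optional[str] = None       # textual indicia, e.g. "Salaries"
    color: Optional[str] = None       # visual indicia, e.g. "blue"
    comments: List[str] = field(default_factory=list)   # sender comments on the unit
    replies: List[str] = field(default_factory=list)    # receivers reply to the unit as a whole

    def add_comment(self, text: str) -> None:
        self.comments.append(text)

    def add_reply(self, text: str) -> None:
        self.replies.append(text)
```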
The communication system also includes a grouping routine configured to allow each user to select multiple items as a selected set, order the multiple items, group the multiple items, annotate the multiple items and comment on the multiple items together, wherein the grouping routine includes selection of a set of names for different open chat sessions of an instant messenger and providing a single responsive posting to the selected set. A particular embodiment of the communication system includes a routine configured to allow each user to annotate the single responsive posting in a chat log to indicate the names in the selected set having received the single responsive posting, and allows the user to select multiple items, order the multiple items, group the multiple items, annotate the multiple items and comment on the multiple items together in the selected set. A routine is configured to permit the user to further comment on the categories, such that a receiver of such a message views the communication threads along with their labels and comments as a unit of chat items, thereby permitting commenting on the communication thread as at least part of the selected set. A routine is configured to allow each user to store the communication threads in groups with labels, and comments in the form of annotation for the user's future reference without requiring sending the communication threads with the annotation to other users. And, a routine is configured to provide a capability of expanding the plurality of users in a conversation thread so as to include more than two users. In a particular embodiment, a routine is configured to refer to the conversation items and comment on the multiple items, using assigned item numbers or designators jointly with other users, whereby the user uses the labels assigned to a communication thread identified with the conversation items to compose a response jointly with other users. A further routine is configured to allow a user to annotate conversation items for display of the annotation to that user without changing a display of the annotation to other users. In another embodiment, a routine is configured to allow each user to select one or more messages corresponding to at least one of the annotations, and provide responses to the messages via the host IM application, corresponding to those annotations. The present disclosure is also directed to a method for managing communications in a message communication system implemented through a host IM application. 
The method includes the steps of: providing shared communication with a plurality of users and enhancing the instant messaging experience by enabling collective comments on multiple items in a communication thread; using a host IM application to allow the users to group chat or items in individual communication threads and categorize the individual communication threads into categories by using indicia, thereby allowing each user to select multiple items, order the multiple items, group the multiple items, annotate the multiple items and comment on the multiple items together so as to integrate with the host IM application and enable the implementation of related controls in the host IM application; providing a routine to allow each user to select multiple items as a selected set by using a grouping routine to order the multiple items, group the multiple items, annotate the multiple items and comment on the multiple items together, wherein the grouping routine includes selection of a set of names for different open chat sessions of an instant messenger and providing a single responsive posting to the selected set; providing a routine to allow each user to annotate the single responsive posting in a chat log to indicate the names in the selected set having received the single responsive posting, and allows the user to select multiple items, order the multiple items, group the multiple items, annotate the multiple items and comment on the multiple items together in the selected set; providing a routine to permit the user to further comment on the categories, such that a receiver of such a message views the communication threads along with their labels and comments as a unit of chat items, thereby permitting commenting on the communication thread as at least part of the selected set; providing a routine that allows each user to store the communication threads in groups with labels, and comments in the form of annotation for the user's future reference without requiring sending the communication threads with the annotation to other users; and providing a routine to provide a capability of expanding the plurality of users in a conversation thread so as to include more than two users. The method of the present disclosure also includes one or more of the following steps: providing a routine to permit the user to refer to the conversation items and comment on the multiple items, using assigned item numbers or designators jointly with other users, whereby the user uses the labels assigned to a communication thread identified with the conversation items to compose a response jointly with other users; providing a routine to allow each user to annotate conversation items for display of the annotation to that user without changing a display of the annotation to other users; and, providing a routine to allow each user to select one or more messages corresponding to at least one of the annotations, and provide responses to the messages via the host IM application, corresponding to those annotations. The present disclosure is also directed to a message communication system which provides shared communication with a plurality of users and enhances the instant messaging experience by enabling collective comments on multiple items in a communication thread. 
The message communication system includes means for providing shared communication with a plurality of users and enhancing the instant messaging experience by enabling collective comments on multiple items in a communication thread; host IM application means, the host IM application means allowing the users to group chat or items in individual communication threads and categorize the individual communication threads into categories by using indicia, thereby allowing each user to select multiple items, order the multiple items, group the multiple items, annotate the multiple items and comment on the multiple items together so as to integrate with the host IM application means and enable the implementation of related controls in the host IM application means; means for providing a routine to allow each user to select multiple items as a selected set by using a grouping routine to order the multiple items, group the multiple items, annotate the multiple items and comment on the multiple items together, wherein the grouping routine includes selection of a set of names for different open chat sessions of an instant messenger and providing a single responsive posting to the selected set; means for providing a routine to allow each user to annotate the single responsive posting in a chat log to indicate the names in the selected set having received the single responsive posting, and allows the user to select multiple items, order the multiple items, group the multiple items, annotate the multiple items and comment on the multiple items together in the selected set; means for providing a routine to permit the user to further comment on the categories, such that a receiver of such a message views the communication threads along with their labels and comments as a unit of chat items, thereby permitting commenting on the communication thread as at least part of the selected set; means for providing a routine that allows each user to store the communication threads in groups with labels, and comments in the form of annotation for the user's future reference without requiring sending the communication threads with the annotation to other users; and means for providing a routine to provide a capability of expanding the plurality of users in a conversation thread so as to include more than two users. Turning now to the figures,FIG.1is a schematic diagram showing a configuration in which a user can select multiple chat items in a conversation and number the multiple chat items sequentially. In this configuration, a user can select a plurality of IM chat items101and leave some items in the chat thread unselected103. In this example, the first message selected is automatically assigned sequence number 1105, the second message 2107, and the third message 3109. It is understood that the messages do not have to be selected in the order shown inFIG.1. FIG.2is a schematic diagram showing a configuration in which a user can select multiple items in a conversation and categorize the multiple items using different labels. This example shows a process of categorizing and assigning user-defined labels to the selected messages. A user has selected a plurality of IM messages201, while some other messages in the chat thread may be left unselected203. The user has defined a custom label “Salaries” and assigned it to three chat items “Item_1.xlsx”205, “Item_3.jpeg”207, and “Item_6.docx”209. 
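The selection and sequential numbering of FIG.1 and the label assignment of FIG.2 amount to two small bookkeeping operations; a possible sketch, with illustrative function names, is shown below.

```python
from typing import Dict, List


def number_selection(selected_item_ids: List[int]) -> Dict[int, int]:
    """Assign sequence numbers 1, 2, 3, ... to the selected items in the order they were selected."""
    return {item_id: seq for seq, item_id in enumerate(selected_item_ids, start=1)}


def assign_indicia(groups: Dict[str, List[int]], item_id: int, indicia: str) -> None:
    """Place a selected item under a user-defined label (or a color; both are just group keys here)."""
    groups.setdefault(indicia, []).append(item_id)


# Example mirroring FIG.2: three chat items grouped under the custom label "Salaries".
groups: Dict[str, List[int]] = {}
for item_id in (1, 3, 6):
    assign_indicia(groups, item_id, "Salaries")
```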
FIG.3is a schematic diagram showing a configuration in which the user can select multiple items in a conversation and categorize the multiple items using different colors. This implements a process for visual categorization of chat items instead of textual categorization shown in the configuration ofFIGS.1and2. A user may select a plurality of chat items301and may leave some chat items unselected303in a chat thread. The user can group/categorize the selected messages, using different colors or other indicia. By way of non-limiting example, the horizontal, vertical, and diagonal lining patterns represent blue, red, and green colors, respectively. The figure shows the user has labeled the chat item “Proposal.docx” with blue color305, “Design.docx” with red307, and “Accounts.xslx” with green309. FIG.4is a schematic diagram showing a configuration that provides means to refer to the conversation items numbered in the configuration ofFIG.1and comment on the conversation items using the assigned item numbers. This implementation enables a user to use the labels assigned in the configuration ofFIG.1to compose a response to the plurality of messages jointly with other users. Referring toFIG.4, a plurality of messages401have been selected by a user, while some messages403may have been left unselected. The user exploits the assigned sequential labels405to compose a message “For 1 and 2 do merge, For 3 OK”407. FIG.5is a schematic diagram showing a configuration that provides means to refer to the conversation items categorized using labels in the previously-described configurations and comment on them using these labels. This configuration implements user-assigned labels for a plurality of messages to make comments on these labels jointly with other users. The figure shows a user has selected a plurality of messages501while some chat items might be left unselected503. The user has assigned three labels to the selected messages, namely “Salaries”505, “Material”507, and “Expenses”509. The user has commented on the “Salaries” label with “To be Paid”511and the “Expenses” label with “Add the Expenses of February”513. The figure shows the label “Material”515selected by the user, and the system is ready to accept the comment517from the user. FIG.6is a schematic diagram showing a configuration in which the user can refer to the conversation items categorized using colors in the previously-described configurations and comment on them using the assigned colors. The configuration ofFIG.6shows a plurality of chat items601selected by a user, and some items may be left unselected603. The user has categorized the selected chat items with blue605, red607, and green609colors. The user has already entered comments for blue “To be Paid”611and green “Add the Expenses of February”613. The user has selected the red color615, for which the system is ready to accept the user comment617. FIG.7is a schematic diagram showing a configuration in which the user can use the configuration ofFIG.5to annotate a labeled group of messages by entering comments that can be used later for the user's own reference. Although the previously-described configurations are useful for communication among individuals, users sometimes need to annotate chat items for the user's own future reference instead of sending comments to others. For example, they can serve as to-do action items for the future. Other uses involve marking important items for easy retrieval in the future or simply grouping the related items with descriptive comments. 
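A personal annotation store of the kind motivated above, kept for the user's own future reference rather than transmitted to other users, might be sketched as follows; the JSON file and its layout are assumptions made only for illustration.

```python
import json
from typing import Dict, List


def annotate_group(store_path: str, label: str, item_ids: List[int], note: str) -> None:
    """Save a labeled group of chat items and its annotation locally, without sending it to anyone."""
    try:
        with open(store_path) as f:
            annotations: Dict[str, dict] = json.load(f)
    except FileNotFoundError:
        annotations = {}
    annotations[label] = {"items": item_ids, "note": note}
    with open(store_path, "w") as f:
        json.dump(annotations, f, indent=2)


# Example mirroring FIG.7: the "Salaries" group annotated as "Needs review".
annotate_group("my_annotations.json", "Salaries", [1, 3, 6], "Needs review")
```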
The diagram shows a configuration of the disclosed technology to enable a user to annotate a plurality of chat items categorized into user-defined labels. The user can select a plurality of chat items701, while some chat items may be left unselected703. The example shows three labels “Salaries”705, “Material”707, and “Expenses”709. The “Salaries” label713has been annotated711as “Needs review”714, while the “Expenses” label715has been annotated with “Discuss with accountant”716. The system is ready to accept the annotation comment717from the user for the “Material” label719. FIG.8is a schematic diagram showing a configuration in which the user can use the configuration ofFIG.6to annotate a group of messages categorized by colors and give comments that can be used later for the user's own reference. The diagram shows a configuration of the disclosed technology to enable a user to annotate a plurality of chat items categorized using colors. The user can select a plurality of chat items801, while some chat items may be left unselected803. The example shows three colors, blue805, red807, and green809. The blue label813has been annotated as “Needs review”714, while the green label815has been annotated with “Discuss with accountant”816. The system is ready to accept the annotation comment from the user817for the red label819. FIG.9is a schematic diagram showing a configuration in which another configuration shows how the receiver can view the grouped messages and the sender's comments in a labeled group, along with the comments. This shows an example of the receiver side of the comments on categorized and labeled groups of chat items using the configuration ofFIG.5. In this example, a user has assigned three labels to a plurality of messages and commented on these labels. The receiver of this message can view the comment “To be Paid”901on the category “Salaries”903and the associated chat items “Item_1.xslx”, “Item_3.xsl”, and “Item_6.html”905as a unit. Similarly, the comment “Add the expenses of February”907can be viewed for the category labeled “Expenses”909associated with the chat item “Item_4.pptx”911, and the comment “We need to discuss”913for the category labeled “Material”915and the associated chat items “Item_2.wpd” and “Item_5.asp”917. FIG.10is a schematic diagram showing a configuration in which another configuration shows how the receiver can view the grouped messages and the sender's comments in a color-labeled group, along with the comments. This configuration identifies receiver of the categorization by colors and associated comments sent by a sender, as shown in configuration of inFIG.6. The receiver of this message can view the comment “To be Paid”1001on the associated chat items “Item_1.xslx”, “Item_3.xsl”, and “Item_6.html” highlighted in the blue color1003. Similarly, the comment “Add the expenses of February”1005can be viewed for the associated chat item “Item_4.pptx” highlighted in red color1007, and the comment “We need to discuss”1011for chat items “Item_2.wpd” and “Item_5.asp” highlighted in green color1009. FIG.11is a schematic diagram showing a configuration in which the receiver can respond to a comment on a previously created group of messages. The figure shows the process to enable the receiver to reply to the categorized messages with associated comments, as described in configuration depicted inFIG.11. 
In this example, a user has selected the “Salaries” label with the associated chat items and the comment1101to compose a reply to this communication “Forwarding to accounts”1103. FIG.12is a schematic diagram showing a configuration in which the user can view the annotations previously added to a labeled or colored group of messages. The figure shows the annotations generated by the user in the configuration given above. The figure shows three annotations1201given by the user. The user has selected the annotation “Needs review”1203. The system shows the label for this annotation “Salaries”1205, the associated items “Item_1.xslx”, “Item_3.xsl”, and “Item_6.html”1207, and the complete annotation “Needs review by the accountant”1209. CLOSING STATEMENT It will be understood that many additional changes in the details, materials, steps and arrangement of parts, which have been herein described and illustrated to explain the nature of the subject matter, may be made by those skilled in the art within the principle and scope of the invention as expressed in the appended claims.
20,448
11863506
DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS Exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. These exemplary embodiments will be described in detail for those skilled in the art in order to practice the present invention. It should be appreciated that various exemplary embodiments of the present invention are different but do not have to be exclusive. For example, specific shapes, configurations, and characteristics described in an exemplary embodiment of the present invention may be implemented in another exemplary embodiment without departing from the spirit and the scope of the present invention. In addition, it should be understood that position and arrangement of individual components in each disclosed exemplary embodiment may be changed without departing from the spirit and the scope of the present invention. Therefore, a detailed description described below should not be construed as being restrictive. In addition, the scope of the present invention is defined only by the accompanying claims and their equivalents if appropriate. Similar reference numerals will be used to describe the same or similar functions throughout the accompanying drawings. It will be understood that for the purposes of this disclosure, “at least one of X, Y, and Z” can be construed as X only, Y only, Z only, or any combination of two or more items X, Y, and Z (e.g., XYZ, XYY, YZ, ZZ). The terminology used herein is for the purpose of describing exemplary embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising”, when used in this specification, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Hereinafter, exemplary embodiments of the present invention are described in detail with reference to the accompanying drawings. It will be understood that when an element is referred to as being “connected to” another element, it can be directly connected to the other element, or intervening elements may be present. A messenger-linked service system and method may enable data of a messenger-linked application to be transmitted to or shared with a friend or a group in a messenger, using a social graph of a messenger platform installed on a smart device. FIG.1illustrates a messenger110and a messenger-linked application120according to exemplary embodiments. As shown inFIG.1, the messenger-linked application120may include, for example, a card application, a camera application, a schedule application, a game application, a photo album application, a calendar application, a social Network Service (SNS) application, a contact application, and the like. Data associated with each application or each application itself may be either transmitted or requested to be shared. For example, messenger110can be used as follows. The card application may be used to send a card to a messenger friend. The camera application may be used to transmit and share photos captured by the camera application or stored photos. A user may transmit, share, or request a schedule stored in the schedule application. 
The game application may be recommended to a messenger friend, or a request to join a game may be sent to the message friend. A photo album stored in the photo album application may be shared. The calendar application may be used to transmit an important anniversary or a schedule of a user. The SNS application may be used to register a messenger friend as an SNS friend, or to share data. Contact information and personal information stored in the contact application may be transmitted, or a transmission request may be sent to a messenger friend. In other words, a messenger-linked service may enable a large amount of data to be transmitted and shared through the messenger110. A social graph illustrates interconnections among people, groups and organizations in a social network. The term refers to both the social network itself and a diagram representing the network. Individuals and organizations, called actors or friends, are nodes on the graph. Inter-dependencies, called ties or relationships, can be multiple and diverse, including such characteristics or concepts as age, gender, race, genealogy, chain of command, ideas, financial transactions, trade relationships, political affiliations, club memberships, occupation, education, economic status, human relationships, friend relationships and social relationships. A social graph of a particular user consists of the set of nodes and ties connected, directly or indirectly, to that actor. FIG.2illustrates a messenger-linked service system200according to exemplary embodiments. The messenger-linked service system200may use a social graph based on a human relationship of a messenger platform, and may perform user authentication between a messenger and a messenger-linked application. The messenger-linked service system200ofFIG.2may include a relationship extraction unit210, a selection unit220, and an execution unit230. The relationship extraction unit210may extract a social graph between a user of a messenger and a friend. The selection unit220may select data that is either transmitted to or shared with the friend in the messenger-linked application. The execution unit230may transmit the selected data to the friend, or may execute a sharing request. The user authentication in the messenger-linked service system200may be automatically processed by obtaining a user's consent. The user consent may be obtained when user authentication is completed in the messenger. A messenger-linked service system may service a user's consent. The relationship extraction unit210may extract a social graph of a friend relationship, and a social graph of a group relationship, a chat history of the group relationship and a chat history of the friend relationship. Accordingly, a potential target to which the user transmits data or sends a sharing request may be a group, a friend, or a chat room, without any limitation to a friend list of the user. The selection unit220may select the messenger-linked application, or data associated with the messenger-linked application. The data associated with the messenger-linked application may be transmitted to a friend or shared with the friend. The data associated with the messenger-linked application may be extracted from data stored in the messenger-linked application, or may be written as new data associated with the messenger-linked application. The selected data or the selected messenger-linked application may be transmitted to a target extracted by the relationship extraction unit210, or may be requested to be shared. 
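By way of non-limiting example, the relationship extraction unit's call into the messenger platform for friends, groups, and chat histories might look like the sketch below; the messenger_api object and its methods are stand-ins for whatever interface the host messenger actually exposes.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class SocialGraph:
    friends: List[str] = field(default_factory=list)
    groups: Dict[str, List[str]] = field(default_factory=dict)      # group name -> members
    chat_rooms: Dict[str, List[str]] = field(default_factory=dict)  # chat room -> chat history


def extract_relationships(messenger_api) -> SocialGraph:
    """Relationship extraction: gather friend, group, and chat-history targets from the platform."""
    return SocialGraph(
        friends=messenger_api.list_friends(),
        groups={g: messenger_api.group_members(g) for g in messenger_api.list_groups()},
        chat_rooms={c: messenger_api.chat_history(c) for c in messenger_api.list_chats()},
    )


class _StubMessengerApi:
    """Tiny stand-in so the sketch runs without a real messenger platform."""
    def list_friends(self): return ["friend_a", "friend_b"]
    def list_groups(self): return ["Group 1"]
    def group_members(self, group): return ["friend_a", "friend_c"]
    def list_chats(self): return ["Group 1 chat"]
    def chat_history(self, chat): return ["hello", "meeting in October?"]


graph = extract_relationships(_StubMessengerApi())
```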
The execution unit230may be linked to a messenger-linked service and may be invoked during execution of the messenger-linked application. The execution unit230may call a social graph or a chat history of each of a friend relationship and a group relationship. The execution unit230may select a friend, a group or a chat history to which data is to be transmitted or shared, and transmit data or execute a sharing request. In this instance, there is no limitation to a number of selected targets or a type of targets. The messenger-linked service system200may transfer information regarding data transmission and execution of the sharing request through a push alert, a notification alert, or a chat message based on a setting of the user. Examples of use of a messenger-linked service will be described with reference toFIGS.3through5.FIG.3illustrates a sender transmitting data using a messenger-linked service.FIGS.4and5illustrate a receiver receiving data from a sender. In accordance with an illustrative example, the term “sender” may be used interchangeably with a “user.” The sender may determine how the receiver receives an alert. The sender may select an alert from among a push alert, a notification alert, or an alert via a chat room, and may enable the receiver to receive the selected alert. Referring toFIG.3, a user may write new data and transmit the new data using a messenger schedule application. The new data may be used to vote for or against available schedules and locations of an event that the user desires to share. For example, as shown inFIG.3, when the user taps on a screen to add a member to whom a sharing request is to be sent a next screen may appear. A friend list ofFIG.3may be a friend list of friends based on a messenger platform, and a target may be selected from among a friend tab, a group tab and a chat tab. For example, a list of groups or meetings to which the sender belongs may be selected from the group tab; and either a chat room shared between or in common with the sender and a friend or a chat room of each group may be selected from the chat tab. However, there is no limitation to the selected target and a number of selected targets. When selection of the target is completed, data or a sharing request may be automatically transmitted to the receiver. In another exemplary embodiment, data or a sharing request may be transmitted to the receiver after selection of the target is completed and confirmation to transmit is received. InFIGS.3and5, a schedule for “Company meeting in October” may be requested to be shared, and a vote on the schedule may be performed. Additionally, “Group 1” may be used as a target for the vote. When the sender transmits a request, a system may perform an alert that is selected in advance by the sender, so that a receiver may instantly verify the request. As shown inFIGS.4and5, the receiver may receive data from the sender ofFIG.3and check the received data, and the sender may send a request to share a vote on a schedule and location of “Company meeting in October.” In an example, when the sender selects, in advance, the alert via the chat room, the receiver may: check information sent by the user via a chat room of a messenger, tap information received via the chat room, and participate in the vote, as illustrated inFIG.4. In another example, when the sender selects the push alert or the notification alert ofFIG.3and sends the request, an alert window may appear on top of an idle screen of the receiver, as shown inFIG.5. 
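The execution unit's final step, sending the selected data or sharing request to each chosen target and raising whichever alert the sender picked (push alert, notification alert, or chat message), might be sketched as follows; the transport object and its methods are hypothetical stand-ins rather than an actual platform API.

```python
from typing import Iterable

ALERT_KINDS = ("push", "notification", "chat")


def execute_share(targets: Iterable[str], payload: dict, alert_kind: str, transport) -> None:
    """Send the selected data (or a sharing request) to every target, then raise the chosen alert."""
    if alert_kind not in ALERT_KINDS:
        raise ValueError(f"unknown alert setting: {alert_kind}")
    for target in targets:                      # friends, groups, or chat rooms; no limit on count or type
        transport.send(target, payload)         # the data itself or the sharing request
        if alert_kind == "push":
            transport.push_alert(target, payload.get("title", ""))
        elif alert_kind == "notification":
            transport.notification_alert(target, payload.get("title", ""))
        else:
            transport.chat_message(target, payload.get("title", ""))
```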
Additionally, when a button “details” ofFIG.5is tapped, the receiver may check information sent by the user and may participate in the vote. According to exemplary embodiments, a messenger-linked service system may use a social graph based on a human relationship of a messenger platform. The messenger-linked service system may be operated by executing a messenger-linked application. The messenger-linked service system may include a selection unit, a relationship extraction unit, and an execution unit. The selection unit may select data that is to be transmitted to a friend or shared with the friend in the messenger-linked application. The relationship extraction unit may extract a social graph between a user of a messenger and the friend. The execution unit may transmit the selected data to the friend, or may execute a sharing request. A messenger-linked service system may perform a messenger-linked service using a method in which user authentication is not performed. The messenger-linked service system may be performed by linking a human relationship of a messenger platform to a messenger-linked application. When a messenger is operated, a social graph or a chat history of each of a friend relationship and a group relationship may be automatically called. In some embodiments, the term automatically can be defined as “without data entry by a user.” An alert via a chat room may be selected and transmitted to a chat room of the messenger. A push alert or a notification alert may be selected. A chat history or a target to be shared may be selected. Selected data may be transmitted or a sharing request may be executed. Examples associated with the messenger-linked service system will be described with reference toFIGS.6through8. As described above, a messenger-linked service may be performed by executing the messenger-linked application. FIG.6illustrates a screen of a sender using a messenger-linked service with a card application. The messenger-linked service may be performed by executing a messenger-linked application. The sender writes a card by executing the card application, and may determine a receiver of the card using the messenger-linked service. The sender may operate a messenger without user authentication, and may automatically call a social graph or a chat history of each of a group relationship and a friend relationship. In some embodiments, the term automatically can be defined as “without data entry by a user.” Similar to the example ofFIG.3, a target may be selected from among a friend tab, a group tab, and a chat tab. For example, a list of groups or meetings to which the sender belongs may be selected from the group tab, and a chat room between the sender and a friend, or a chat room of each group may be selected from the chat tab. FIGS.7and8illustrate a receiver checking a card received from a sender via a messenger-linked service. When the messenger-linked service is performed, information regarding data transmission and execution of a sharing request may be transferred through a push alert, a notification alert, or a chat message based on a setting of the sender. When a written card is transferred to the receiver through a chat message, a card alert may be displayed as a chat window on a chat room of a messenger, as shown inFIG.7, so that the receiver may check the card. 
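Both delivery paths described in these examples, the chat-room message and the push or notification alert, follow the same choice made in advance by the sender. A hedged sketch of that dispatch follows; the function and object interfaces are hypothetical and are not taken from the disclosure.

# Hypothetical dispatch of a sharing request according to the sender's alert setting.
def notify_receiver(messenger, receiver, request, alert_setting):
    """Deliver a sharing request using the alert type the sender selected in advance."""
    if alert_setting == "chat":
        # The request appears as a message in a chat room of the messenger;
        # tapping the message lets the receiver check the data or join the vote.
        room = messenger.find_or_create_room(request.sender, receiver)
        messenger.post_message(room, request.summary, payload=request)
    elif alert_setting in ("push", "notification"):
        # An alert window appears on top of the receiver's idle screen; tapping
        # "details" shows the shared data without executing the messenger itself.
        messenger.send_alert(receiver, kind=alert_setting, title=request.summary, payload=request)
    else:
        raise ValueError("unknown alert setting: " + alert_setting)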
In another example, as shown inFIG.8, when the card is transferred from the sender to the receiver through the push alert and the notification alert, an alert window may appear on a top of an idle screen of the receiver. In this example, the receiver may tap on “details” of the alert window, and may check the card without a need to execute a messenger. FIG.9illustrates a messenger-linked service method according to exemplary embodiments. The messenger-linked service method may use a social graph based on a human relationship of a messenger platform. The method may be performed by performing user authentication between a messenger and a messenger-linked application. Operations of the messenger-linked service method ofFIG.9may be performed based on the messenger-linked service system200ofFIG.2. The messenger-linked service system200may perform a step910of extracting a social graph between a user of a messenger and a friend, a step920of selecting data that is to be transmitted to the friend or shared with the friend in the messenger-linked application, and a step930of transmitting the selected data to the friend or executing a sharing request. In the messenger-linked service method ofFIG.9, the user authentication may be automatically processed by obtaining a user's consent when the messenger-linked application is used. The messenger-linked service method ofFIG.9may be performed with respect to an authenticated user. In step910, the messenger-linked service system200may extract a social graph of a group relationship of the messenger, a social graph of a friend relationship of the messenger, a chat history of the group relationship and a chat history of the friend relationship. The extracted social graphs and chat histories may be transmitted, and requested to be shared. In step920, the messenger-linked service system200may select the messenger-linked application or data associated with the messenger-linked application. The data associated with the messenger-linked application may be transmitted to a friend or shared with the friend. The data associated with the messenger-linked application may either be extracted from data stored in the messenger-linked application or may be written as new data associated with the messenger-linked application. For example, in step920a camera, a schedule, a game, a photo album, a calendar, an SNS, or contacts may be selected. In some embodiments, data associated with a messenger-linked application, such as, a card application, a camera application, a schedule application, a game application, a photo album application, a calendar application, an SNS application, or a contact application, may be selected. In step930, the messenger-linked service system200may be linked to a messenger-linked service and executed within the messenger-linked application. Additionally, a social graph or a chat history of each of a friend relationship and a group relationship may be called from the messenger-linked application, a friend or a group to which data is to be transmitted or shared, or a chat history may be selected, and the data selected in step920may be transmitted or the sharing request may be executed. In the messenger-linked service method ofFIG.9, the selected data may be transferred through a push alert, a notification alert, or a chat message, based on a setting of the user. FIG.10illustrates a messenger-linked service method according to exemplary embodiments. 
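Before turning to the method ofFIG.10, the flow of steps910through930can be summarized in a short sketch. The consent check and the helper objects below are assumptions made for illustration, not a definitive implementation.

# Sketch of the messenger-linked service method of FIG. 9 (steps 910-930).
# The 'system' object and its members are placeholders for the components of FIG. 2.
def messenger_linked_service(system, user, app):
    # User authentication is processed automatically once the user's consent is obtained.
    if not system.has_consent(user):
        system.request_consent(user)
    # Step 910: extract social graphs and chat histories of friend and group relationships.
    graph = system.relationship_extraction_unit.extract(user)
    # Step 920: select the linked application or data associated with it
    # (e.g. a card, schedule, game, photo album, calendar, SNS, or contact entry).
    data = system.selection_unit.select(app)
    # Step 930: choose targets (friends, groups, or chat rooms) and transmit or share,
    # delivering the alert as a push alert, notification alert, or chat message.
    targets = system.choose_targets(graph)
    system.execution_unit.transmit(user, targets, data)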
The messenger-linked service method ofFIG.10may be provided to a user who does not pass through user authentication. Operations of the messenger-linked service method ofFIG.10may be performed based on the messenger-linked service system200ofFIG.2. The messenger-linked service method ofFIG.10may be performed by executing a messenger-linked application, and may use a social graph based on a human relationship of a messenger platform. The messenger-linked service method ofFIG.10may include a step1010of selecting data that is to be transmitted to a friend or shared with the friend in the messenger-linked application, a step1020of extracting a social graph between a user of a messenger and the friend, and a step1030of transmitting the selected data to the friend or executing a sharing request. In step1020, the human relationship of the messenger platform may be linked to the messenger-linked application, and may be extracted. When the messenger is operated, a social graph or a chat history of each of a friend relationship and a group relationship may be automatically called, and a target including a chat history may be selected for transmission or sharing through the messenger. Subsequently, the selected data may be transmitted or the sharing request may be executed. The messenger may use the messenger-linked service and may be executed to call a target to which data is to be transmitted, or a target to which a sharing request is to be sent. According to embodiments of the present invention, data of an application linked to a messenger may be transmitted or shared through a friend, a group, or a chat room in the messenger, and thus it is possible to provide a more convenient linked service, and to diversify a type of the data. According to embodiments of the present invention, it is possible to link a messenger to a messenger-linked application, without a need to individually recommend friends or groups in the messenger, thereby solving conventional inconveniences. A computer-readable medium may include an instruction to control a computer system to perform a messenger-linked service method using a social graph based on a human relationship of a messenger platform by performing user authentication between a messenger and a messenger-linked application. The instruction may be recorded in the computer-readable medium to control the computer system to perform the messenger-linked service method that includes extracting a social graph between a user of the messenger and a friend, selecting data that is to be transmitted to the friend or shared with the friend in the messenger-linked application, and transmitting the selected data to the friend or executing a sharing request. Additionally, a computer-readable medium may include an instruction to control a computer system to perform a messenger-linked service method using a social graph based on a human relationship of a messenger platform by executing a messenger-linked application. The instruction may be recorded in the computer-readable medium to control the computer system to perform the messenger-linked service method that includes selecting data that is to be transmitted to a friend or shared with the friend in the messenger-linked application, extracting a social graph between a user of a messenger and the friend, and transmitting the selected data to the friend or executing a sharing request. The media may also include, alone or in combination with the program instructions, data files, data structures, and the like.
The media and program instructions may be those specially designed and constructed for the purposes of the present invention, or they may be of the kind well-known and available to those having skill in the computer software arts. It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.
21,591
11863507
DETAILED DESCRIPTION The inventor has conceived, and reduced to practice, a system and method for automated customer response testing that queries automated response systems at a client contact center, receives responses from the automated response systems, and analyzes the responses to determine whether the automated response systems are functioning properly. The system adds complexity to its queries using a “conversation multiplier” by generating queries based on “personas” that introduce variations that mimic real-world customer interactions. Further, the system can evaluate text and audio communications using a real-time conversation engine by assessing the context, meaning, and level of formality of the communication and, where necessary, generate responses customized to the style of the original text and audio communications, even additional cues such as environmental noises. One or more different aspects may be described in the present application. Further, for one or more of the aspects described herein, numerous alternative arrangements may be described; it should be appreciated that these are presented for illustrative purposes only and are not limiting of the aspects contained herein or the claims presented herein in any way. One or more of the arrangements may be widely applicable to numerous aspects, as may be readily apparent from the disclosure. In general, arrangements are described in sufficient detail to enable those skilled in the art to practice one or more of the aspects, and it should be appreciated that other arrangements may be utilized and that structural, logical, software, electrical and other changes may be made without departing from the scope of the particular aspects. Particular features of one or more of the aspects described herein may be described with reference to one or more particular aspects or figures that form a part of the present disclosure, and in which are shown, by way of illustration, specific arrangements of one or more of the aspects. It should be appreciated, however, that such features are not limited to usage in the one or more particular aspects or figures with reference to which they are described. The present disclosure is neither a literal description of all arrangements of one or more of the aspects nor a listing of features of one or more of the aspects that must be present in all arrangements. Headings of sections provided in this patent application and the title of this patent application are for convenience only, and are not to be taken as limiting the disclosure in any way. Devices that are in communication with each other need not be in continuous communication with each other, unless expressly specified otherwise. In addition, devices that are in communication with each other may communicate directly or indirectly through one or more communication means or intermediaries, logical or physical. A description of an aspect with several components in communication with each other does not imply that all such components are required. To the contrary, a variety of optional components may be described to illustrate a wide variety of possible aspects and in order to more fully illustrate one or more aspects. Similarly, although process steps, method steps, algorithms or the like may be described in a sequential order, such processes, methods and algorithms may generally be configured to work in alternate orders, unless specifically stated to the contrary. 
In other words, any sequence or order of steps that may be described in this patent application does not, in and of itself, indicate a requirement that the steps be performed in that order. The steps of described processes may be performed in any order practical. Further, some steps may be performed simultaneously despite being described or implied as occurring non-simultaneously (e.g., because one step is described after the other step). Moreover, the illustration of a process by its depiction in a drawing does not imply that the illustrated process is exclusive of other variations and modifications thereto, does not imply that the illustrated process or any of its steps are necessary to one or more of the aspects, and does not imply that the illustrated process is preferred. Also, steps are generally described once per aspect, but this does not mean they must occur once, or that they may only occur once each time a process, method, or algorithm is carried out or executed. Some steps may be omitted in some aspects or some occurrences, or some steps may be executed more than once in a given aspect or occurrence. When a single device or article is described herein, it will be readily apparent that more than one device or article may be used in place of a single device or article. Similarly, where more than one device or article is described herein, it will be readily apparent that a single device or article may be used in place of the more than one device or article. The functionality or the features of a device may be alternatively embodied by one or more other devices that are not explicitly described as having such functionality or features. Thus, other aspects need not include the device itself. Techniques and mechanisms described or referenced herein will sometimes be described in singular form for clarity. However, it should be appreciated that particular aspects may include multiple iterations of a technique or multiple instantiations of a mechanism unless noted otherwise. Process descriptions or blocks in figures should be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process. Alternate implementations are included within the scope of various aspects in which, for example, functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those having ordinary skill in the art.
Definitions
The term “dialect” as used herein means a regional linguistic accent, vocabulary, phraseology, style, or character, whether in writing or verbal speech. The term “environment” as used herein means the environment in which a communication has been made, and includes, but is not limited to, the audio environment in which spoken communication occurs and the platform on which a text communication is made (e.g. written communications typed on a computer tend to have fewer mistakes and abbreviations than those typed on a mobile phone). The phrase “level of formality” as used herein means the formality with which a communication is made. For example, communications may be formal, such as in professional writing, formal social invitations, educational writing, and the like. Communications may be informal, such as in casual writing, and notes or letters between friends or close acquaintances.
Communications may be very informal, such as in the use of abbreviations or slang, Short Message Service (SMS) codes and substitutions, emojis, and the like. The level of formality may provide indirect indications about the communicator, such as level of education, closeness between communicators, emotional content, etc.
Conceptual Architecture
FIG.1(PRIOR ART) is a typical system architecture diagram of a contact center100known to the art. A contact center is similar to a call center, but a contact center has more features. While a call center may communicate mainly by voice, a contact center may communicate via email; text chat, such as, but not limited to, instant messaging, social media posts, and SMS interaction; and web interfaces in addition to voice communication in order to facilitate communications between a customer endpoint110and a resource endpoint120. Resource120may include, but is not limited to, agents, sales representatives, service representatives, or collection agents handling communications with customers110on behalf of an enterprise. Resources120may be in-house within contact center100, or may be remote, such as out-sourcing to a third party, or agents working from home. Contact center100may be independently operated or networked with additional centers, and may often be linked to a corporate computer network. Contact center100may further comprise network interface130, text channels140, multimedia channels145, and contact center components150. Text channels140may be communications conducted mainly through text, and may comprise social media141, email142, short message service (SMS)143, or instant messaging (IM)144, and would communicate through their counterparts within contact center components150, each respectively being social server159, email server157, SMS server160, and IM server158. Multimedia channels145may be communications conducted through a variety of mediums, and may comprise a media server146, private branch exchange (PBX)147, interactive voice response (IVR)148, and bots149. Text channels140and multimedia channels145may act as third parties to engage with outside social media services and so a social server159may be required to interact with the third-party social media141. Multimedia channels145are typically present in an enterprise's datacenter, but could be hosted in a remote facility, in a cloud facility, or in a multifunction service facility. Contact center components150may comprise a routing server151, a session initiation protocol (SIP) server152, an outbound server153, a computer telephony integration (CTI) server154, a state and statistics (STAT) server155, an automated call distribution facility (ACD)156, an email server157, an IM server158, a social server159, a SMS server160, a routing database170, a historical database172, and a campaign database171. It is possible that other servers and databases may exist within a contact center, but in this example the referenced components are used. Contact center components150, including servers, databases, and other key modules that may be present in a typical contact center, may work in a black box environment, may be used collectively in one location, or may be spread over a plurality of locations. Contact center components150may even be cloud-based, and more than one of each component shown may be present in a single location.
Customers110may communicate by use of any form of communication known in the art, be it by a telephone111, a mobile smartphone112, a tablet113, a laptop114, or a desktop computer115, to name a few examples. Similarly, resources120may communicate by use of any form of communication known in the art, be it by a telephone121, a mobile smartphone122, a tablet123, a laptop124, or a desktop computer125, to name a few examples. Communication may be conducted through a network interface130by way of at least one channel, such as a text channel140or a multimedia channel145, which communicates with a plurality of contact center components150. Available network interfaces130may include, but are not limited to, a public switched telephone network (PSTN)131, an internet network132, a wide area network (WAN)133, or a local area network (LAN)134. To provide a few example cases, a customer calling on telephone handset111may connect through PSTN131and terminate on PBX147; a video call originating from tablet123may connect through internet connection132and terminate on media server146; or a customer device such as a smartphone112may connect via WAN133, and terminate on IVR148, such as in the case of a customer calling a customer support line for a bank or a utility service. In another example, an email server157would be owned by the contact center100and would be used to communicate with a third-party email channel142. The number of communication possibilities is vast, given the number of possible devices of resources120, devices of customers110, networks130, text channels140, multimedia channels145, and contact center components150, hence the system diagram ofFIG.1indicates connections between delineated groups rather than individual connections for clarity. Continuing from the examples given above, in some conditions where a single medium (such as ordinary telephone calls) is used for interactions that require routing, media server146may be more specifically PBX147, ACD156, or a similar media-specific switching system. Generally, when interactions arrive at media server146, a route request, or a variation of a route request (for example, a SIP invite message), is sent to SIP server152or to an equivalent system such as CTI server154. A route request may be a data message sent from a media-handling device, such as media server146, to a signaling system, such as SIP server152. The message may comprise a request for one or more target destinations to which to send (or route, or deliver) the specific interaction with regard to which the route request was sent. SIP server152or its equivalent may, in some cases, carry out any required routing logic itself, or it may forward the route request message to routing server151. Routing server151executes, using statistical data from STAT server155and, optionally, data from routing database170, a routing script in response to the route request message and sends a response to media server146directing it to route the interaction to a specific target in resources120. In another case, routing server151uses historical information from historical database172, or real-time information from campaign database171, or both, as well as configuration information (generally available from a distributed configuration system, not shown for convenience) and information from routing database170.
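The route-request exchange just described can be reduced to a minimal sketch. The message fields and component interfaces below are assumptions made for illustration only; this is not an actual SIP implementation.

# Sketch of the route-request flow: media server -> signaling system -> routing server.
def handle_interaction(media_server, sip_server, routing_server, interaction):
    # The media-handling device asks the signaling system where to send the interaction.
    route_request = {"interaction_id": interaction.id, "medium": interaction.medium}
    media_server.send(sip_server, route_request)
    # The SIP server (or an equivalent such as a CTI server) may carry out the routing
    # logic itself, or it may forward the route request to the routing server.
    if sip_server.can_route(route_request):
        target = sip_server.route(route_request)
    else:
        # The routing server executes a routing script using statistics, routing data,
        # and optionally historical, campaign, and configuration information.
        target = routing_server.execute_routing_script(route_request)
    # The response directs the media server to connect the customer to the selected resource.
    media_server.connect(interaction, target)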
STAT server155receives event notifications from media server146, SIP server152, or both regarding events pertaining to a plurality of specific interactions handled by media server146, SIP server152, or both, and STAT server155computes one or more statistics for use in routing based on the received event notifications. Routing database170may comprise multiple distinct databases, either stored in one database management system or in separate database management systems. Examples of data that may normally be found in routing database170may include, but are not limited to: customer relationship management (CRM) data; data pertaining to one or more social networks, including, but not limited to, network graphs capturing social relationships within relevant social networks, or media updates made by members of relevant social networks; skills data pertaining to members of resources120, which may be human agents, automated software agents, interactive voice response scripts, and so forth; data extracted from third party data sources including cloud-based data sources such as CRM and other data from SALESFORCE.COM™, credit data from EXPERIAN™, consumer data from DATA.COM™; or any other data that may be useful in making routing decisions. It will be appreciated by one having ordinary skill in the art that there are many means of data integration known in the art, any of which may be used to obtain data from premise-based, single machine-based, cloud-based, public or private data sources as needed, without departing from the scope of the invention. Using information obtained from one or more of STAT server155, routing database170, campaign database171, historical database172, and any associated configuration systems, routing server151selects a routing target from among a plurality of available resource devices120, and routing server151then instructs SIP server152to route the interaction in question to the selected resource120, and SIP server152in turn directs media server146to establish an appropriate connection between customer110and target resource120. In this case, the routing script comprises at least the steps of generating a list of all possible routing targets for the interaction regardless of the real-time state of the routing targets using at least an interaction identifier and a plurality of data elements pertaining to the interaction, removing a subset of routing targets from the generated list based on the subset of routing targets being logged out to obtain a modified list, computing a plurality of fitness parameters for each routing target in the modified list, sorting the modified list based on one or more of the fitness parameters using a sorting rule to obtain a sorted target list, and using a target selection rule to consider a plurality of routing targets starting at the beginning of the sorted target list until a routing target is selected. It should be noted that customers110are generally, but not necessarily, associated with human customers or users. Nevertheless, it should be understood that routing of other work or interaction types is possible, although in any case, it is limited to act or change without input from a management team. FIG.2is a diagram of an exemplary application of a customer response testing system implementation, showing the customer response testing system in relation to the contact center under test.
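Before turning to the testing system ofFIG.2, the routing script enumerated above maps directly onto a short procedure. This is a sketch only: the fitness functions, sorting rule, and selection rule are stand-ins for whatever rules a deployment configures, and the target objects are assumed to expose a logged_out flag.

# Sketch of the routing script: enumerate targets, drop logged-out ones, score, sort, select.
def routing_script(interaction_id, data_elements, all_targets, fitness_fns, sorting_key, selection_rule):
    # 1. Generate a list of all possible routing targets, regardless of real-time state.
    candidates = list(all_targets(interaction_id, data_elements))
    # 2. Remove the subset of targets that are logged out to obtain a modified list.
    candidates = [t for t in candidates if not t.logged_out]
    # 3. Compute a plurality of fitness parameters for each remaining target.
    scored = [(t, {name: fn(t, data_elements) for name, fn in fitness_fns.items()})
              for t in candidates]
    # 4. Sort the modified list using a sorting rule over one or more fitness parameters.
    scored.sort(key=lambda pair: sorting_key(pair[1]))
    # 5. Consider targets from the beginning of the sorted list until the selection rule picks one.
    for target, fitness in scored:
        if selection_rule(target, fitness):
            return target
    return None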
A customer response testing system210exists which communicates over at least one, and possibly a plurality of, networks220, to a variety of servers used in the contact center under test230,231,232,233,234. Networks220may include a Public Switched Telephone Network (PSTN), the Internet, a Wide Area Network (WAN), or a Local Area Network (LAN), or any other network common for telecommunications as is common in the art. In this way, a customer response testing system210may be able to send and receive emails from an email server231, send and receive SMS messages from an SMS server232, send and receive other text communications from a chat server233, and send and receive voice data from a voice server234, or some combination of these, such as sending an email and receiving an SMS response, or receiving an email as part of a voice server234query response, such as confirming a login into a user account over the phone, or two-factor authentication systems. The customer response testing system210generates queries for each type of communication under test, initiates a communication session, makes the query, receives a response to the query, and analyzes the response received. For instance, an email query may be sent from a customer response testing system210, through a network or networks220, to an email server in a contact center's infrastructure231, which the contact center's automated email customer response system processes and formulates a reply being sent back through the appropriate server such as an email server231, to be relayed back to the customer response testing system210. Upon receipt of the automated response from the email server, the customer response testing system runs a series of tests to determine the quality of the response, including such things as how quickly the response was received, whether the response to the query makes sense in context, whether the response answers the question posed by the query, etc. The analysis helps to determine whether the automated customer response system received the query, properly understood the query, and generated an appropriate response.FIG.3is a diagram showing the overall system architecture of an exemplary customer response testing system. A query generator310retrieves a test case from a test case database305and retrieves a persona from a persona database315. The test case database305contains data on the format and content of completed tests for the contact center under test, including some or a plurality of: initial query sent to the contact center, a response from the contact center, a secondary query sent to the contact center, and a further response from the contact center. Such queries and responses may be of the same sort (email, SMS, etc.) or may be of different types. The persona database315contains data on simulated personas to use in the generation of a contact center query. The persona is data representing a set of attributes for a simulated (hypothetical) customer who might interact with a contact center. For instance, an initial query from a test case in a test case database305may be modified to fit the persona of a person of a particular age, from a particular location, with certain applicable account or personal information which may be used in such an initial query, the confluence of the test case and the persona being used to generate a full query by the query generator310. 
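A minimal sketch of how a test case and a persona might be combined into a full query follows. The record layouts, field names, and example values are invented for this illustration; they are not the actual schemas of the test case database305or the persona database315.

from dataclasses import dataclass

@dataclass
class TestCase:
    channel: str           # "email", "sms", "chat", or "voice"
    initial_query: str     # template for the first query sent to the contact center
    expected_reply: str    # what the automated response system is expected to answer

@dataclass
class Persona:
    name: str
    age: int
    region: str
    account: str

def generate_ideal_query(test_case: TestCase, persona: Persona) -> str:
    """Merge the test case template with persona attributes into an 'ideal' query."""
    return test_case.initial_query.format(name=persona.name, account=persona.account)

# Example with invented values.
case = TestCase("email",
                "Hello, my name is {name} and my account {account} is locked.",
                "We have unlocked your account.")
persona = Persona("Alex", 34, "Midwest US", "ACCT-1042")
print(generate_ideal_query(case, persona))
# -> Hello, my name is Alex and my account ACCT-1042 is locked.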
The query generator generates an ideal query (i.e., a direct and clear query without typographical errors, grammatical mistakes, idioms, and the like, which may occur in real-world queries) which is sent to a conversation multiplier320, which produces both data for text and voice bot testing, and which multiplies a query by using alternative wording, mistakes, idiosyncrasies, neologisms, typos, and colloquialisms, or some combination or permutation of these, with the intent of testing whether these variations and alterations in phrasings of a query will be accepted by a contact center's automated response systems. The conversation multiplier320may additionally use input from the persona to generate queries that mimic the persona of a particular simulated person. For example, a particular persona may be a simulation of a person with a specific regional dialect who often rides a bus, in which case the queries produced by the conversation multiplier320for that persona will modify the ideal query to use the specific regional dialect with typographical errors introduced to simulate inaccuracies in type from riding on a moving bus. Further, in some embodiments, the conversation multiplier will obtain additional enhancements from a real-time conversation engine330, whose purpose is to analyze communications for context, dialect, level of formality, etc., and either introduce variations based on those analyses into queries or determine the appropriateness of responses to queries. Query text may be sent directly from the conversation multiplier320to the real-time conversation engine330, which will send back query text enhanced with contextual cues, regional or dialectical variants, or formality cues. After the queries are generated by the conversation multiplier, queries intended for text-based testing (e.g. email, chat, SMS) may be sent through the appropriate networks220to the contact center under test230. For queries intended for audio-based testing (i.e., voice communications), the generated text queries are first fed into a text-to-speech engine325, where the text of the query becomes converted to audio data corresponding to speech. In some embodiments, this audio may be sent to the real-time conversation engine330for enhancement (for example, to add environmental sounds such as transportation noises simulating a particular persona riding on a bus). The text-to-speech audio is then sent via an appropriate network220to the contact center under test230. Text responses from the contact center under test230are sent to a response analyzer340, which compares the response with the original query to determine the quality and appropriateness of the response. In some embodiments, responses may be sent to the real-time conversation engine for further analysis, to determine whether the context, dialect, level of formality, etc., of the response matches the context, dialect, level of formality, etc., of the query. Audio responses from the contact center under test230may first be sent to a non-speech sound filter or keyword spotter335for analysis. The non-speech sound filter335attempts to clarify the received audio by filtering out any non-word or non-speech, or unimportant, audio data. The keyword spotter335attempts to identify key words and phrases in the speech audio. Keyword spotting is faster than full speech-to-text conversion and filtering. 
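The behavior of the conversation multiplier320can be illustrated with a toy sketch. The substitution table, the random typo model, and the example query below are invented for the illustration; a real multiplier would draw on persona data and the real-time conversation engine330as described above.

import random

# Toy substitutions standing in for colloquialisms, neologisms, and abbreviations.
COLLOQUIAL = {"hello": "hey", "account": "acct", "locked": "locked out"}

def add_typos(text, rate=0.05, seed=7):
    """Randomly drop characters, e.g. to mimic typing on a moving bus."""
    rng = random.Random(seed)
    return "".join(ch for ch in text if rng.random() > rate)

def colloquialize(text):
    """Swap formal words for informal equivalents."""
    for formal, informal in COLLOQUIAL.items():
        text = text.replace(formal, informal)
    return text

def multiply(ideal_query):
    """Produce real-world-style variants of one ideal query."""
    return [
        ideal_query,                            # the ideal query itself
        colloquialize(ideal_query),             # informal wording
        add_typos(ideal_query),                 # typographical errors
        add_typos(colloquialize(ideal_query)),  # both at once
    ]

for variant in multiply("hello, my account ACCT-1042 is locked."):
    print(variant)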
This filtered and/or searched data is then sent to a speech-to-text engine345before being forwarded to the response analyzer340for analysis in the same manner as for text-based communications. In some embodiments, the audio response may also be sent to the response analyzer340to use in conjunction with the real-time conversation engine to determine whether the context, dialect, level of formality, etc., of the response matches the context, dialect, level of formality, etc., of the query. In some embodiments, the responses and queries will be sent to a contact center system mapper341to map the contact center's response system (e.g. on a voice call, mapping the DTMF tones associated with voice prompts in the system). FIG.4is a block diagram showing an aspect of the customer response testing system, the real time conversation engine330. As text is received, a text analyzer410uses a natural language understanding (NLU) engine411to analyze groupings of words or sentence fragments, punctuation, and individual words in order to understand language meaning for a given textual input. Simultaneously, a keyword spotter (KWS)412may locate individual high-value words such as nouns in a sentence faster than full analysis from an NLU engine411allowing for faster or real-time processing of conversational data to take place. As speech (audio) is received, an audio analyzer420uses a real time speech-to-text converter421to detect and convert audio speech data into text. A real time voice quality analyzer422collects and analyzes performance metrics for audio and voice quality. A real time classifier423classifies the audio it is receiving into a plurality of audio classes such as silence, speech, music, ring, comfort noise, earcons, etc. The real-time classifier423may operate in conjunction with the speech-to-text converter421to send only detected speech to the speech-to-text converter421to speed up operations and increase accuracy of conversion. Data from both the NLU engine411and KWS412is sent to a conversation manager430, which performs analyses on the text to determine the context, dialect, level of formality, etc., of the communication. A context analyzer431may use word and phrase associations in both the query and response to determine the context in which the speech is taking place. A dialect analyzer432may use dictionaries of regional dialects and slang to determine the dialect that the writer or speaker of a particular communication is using. A formality analyzer433may use compilations of speech from persons of different educations, backgrounds, and occupations, as well as compilations of speech from persons in different settings (e.g., informal gatherings, office environments, weddings, etc.) as well as dictionaries of proper grammar and usage, slang, and the like, to determine a level of formality that the writer or speaker of the communication is using. In some embodiments, the real time conversation engine330will further comprise an output generator440, which will use the information from the analyses from the conversation manager430to generate an outgoing communication that is appropriate in terms of context, dialect/slang, and level of formality to the incoming communication that was analyzed. 
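Taken together, the components ofFIG.4amount to a pipeline: audio is classified and quality-checked, speech portions are transcribed, and the resulting text is analyzed for context, dialect, and level of formality. The sketch below is illustrative only; the marker lists are toy stand-ins for the dictionaries and speech compilations described above, and the classifier, quality scorer, and speech-to-text converter are hypothetical callables.

# Toy stand-ins for the dialect dictionaries, formality compilations, and context keywords.
DIALECT_MARKERS = {"y'all": "Southern US", "wicked": "New England", "innit": "British"}
INFORMAL_MARKERS = {"lol", "gonna", "thx", "u", "btw"}
CONTEXT_KEYWORDS = {"billing": {"invoice", "charge", "refund"},
                    "technical support": {"error", "outage", "reset"}}

def analyze_text(text):
    """Rough context, dialect, and formality analysis of one utterance."""
    words = set(text.lower().split())
    context = next((topic for topic, keys in CONTEXT_KEYWORDS.items() if words & keys), "general")
    dialect = next((region for marker, region in DIALECT_MARKERS.items() if marker in words), "unmarked")
    formality = "informal" if words & INFORMAL_MARKERS else "formal"
    return {"context": context, "dialect": dialect, "formality": formality}

def analyze_audio(segments, classify, quality_score, speech_to_text, min_quality=0.6):
    """Transcribe only the portions classified as speech and meeting a quality threshold."""
    results = []
    for segment in segments:
        if classify(segment) != "speech":      # e.g. silence, music, ring, comfort noise, earcon
            continue
        if quality_score(segment) < min_quality:
            continue
        results.append(analyze_text(speech_to_text(segment)))
    return results

print(analyze_text("y'all gonna refund this charge or what"))
# -> {'context': 'billing', 'dialect': 'Southern US', 'formality': 'informal'}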
For example, an incoming text query may contain informal slang speech from a particular dialect (often indicating that the writer is from a particular region), which the dialect analyzer432would identify as being a particular dialect and the formality analyzer433would recognize as being informal speech from that dialect. The outgoing communication would be generated by a natural language generation (NLG) engine441, matching the dialect, slang, and level of formality of the customer. In an embodiment, outgoing communication from the NLG would match the dialect, slang, and level of formality of a virtual customer as defined by the persona and test case associated with the virtual customer. Where the outgoing communication is audio, the text may be converted to speech using a real-time text-to-speech engine442. In some embodiments, environmental cues may be introduced into the outgoing communication by an environment simulator443to make the communication seem more natural or “real.” For example, where the context of the audio is an office environment, background noises from an office environment may be introduced into the audio, or where the writer of a text communication is riding on a bus, typographical errors may be introduced to simulate the environment (i.e., motion) of the bus. FIG.5is a method diagram illustrating exemplary functionality of the customer response testing system. First it retrieves a test case from a test case database510, the test case comprising data on the entire expected interaction between the customer response testing system and the contact center under test. For instance, it may include data on an email query to be sent, an email reply to be received, and a final email response to be sent to the contact center. The testing system then retrieves a persona from a persona database520which includes either manually entered or automatically generated data to simulate an actual customer, such as a fake name and false personal information, to test the interaction of the contact center's communications with an actual customer depending on the requisite personal information required. The testing system then generates an ideal simulated customer query based on the test case as modified by the persona530, before creating variations of the ideal simulated customer query using a conversation multiplier, each variation reflecting a likely real-world variant of the ideal simulated customer query540, for instance using typos due to a simulated customer being on a bus during communication, or using neologisms or colloquial speech for a geographic region or customer persona as necessary, to test the contact center's responses to such variations in a customer query. After the queries are generated and multiplied, the testing system then connects to a customer response system at a contact center550, whether through an email server, SMS server, chat server, or voice system, and transmits each variation of the ideal simulated customer query to a customer response system at a contact center560as appropriate. It then waits to receive responses to each variation sent from the customer response system at the contact center570, on the expected channels as specified by the test case data, before analyzing each response received to determine whether the response is appropriate to the variation sent580and producing a result of the analysis590, indicating how long the interaction took, whether responses given were the expected responses, and any errors or anomalies during the test execution.
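The steps ofFIG.5, taken in order, suggest the following end-to-end test loop. The 'system' object and its members below stand in for the subsystems ofFIG.3and are assumptions made for illustration, not a definitive implementation.

# End-to-end sketch of the test flow of FIG. 5 (steps 510-590).
def run_test(system, test_case_id, persona_id):
    test_case = system.test_case_db.get(test_case_id)                   # step 510
    persona = system.persona_db.get(persona_id)                         # step 520
    ideal = system.query_generator.generate(test_case, persona)         # step 530
    variants = system.conversation_multiplier.multiply(ideal, persona)  # step 540
    session = system.connect(test_case.channel)                         # step 550: email, SMS, chat, or voice
    analyses = []
    for variant in variants:
        session.send(variant)                                           # step 560
        response = session.receive()                                    # step 570
        analyses.append(system.response_analyzer.analyze(variant, response))  # step 580
    # Step 590: report duration, whether the expected responses were received,
    # and any errors or anomalies observed during the test execution.
    return system.report(analyses)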
FIG.6is a method diagram illustrating exemplary functionality of the real time conversation engine. In a first step, incoming text is received601, and the text is analyzed using a natural language understanding (NLU) engine to analyze groupings of words or sentence fragments, punctuation, and individual words in order to understand language meaning for a given textual input603. Simultaneously, the text is processed through a keyword spotter (KWS), which may locate individual high-value words such as nouns in a sentence faster than full analysis from an NLU engine allowing for faster or real-time processing of conversational data to take place602. In a related step, incoming speech (audio) is received604, and the audio is simultaneously processed to analyze performance metrics for audio and voice quality605and to classify portions of the audio606into a plurality of audio classes such as silence, speech, music, ring, comfort noise, earcons, etc. Portions of the audio that both meet a certain quality level and are classified as speech are then converted to text607, which is then sent for further textual analysis as in steps601, et seq. Text that has been processed through an NLU engine603and subjected to keyword spotting602is then sent for contextual608, dialectic609, and formality analysis610. At the context analysis stage608, a context analyzer may use word and phrase associations in both the query and response to determine the context in which the speech is taking place. At the dialectic analysis stage609, a dialect analyzer may use dictionaries of regional dialects and slang to determine the dialect that the writer or speaker of a particular communication is using. At the formality analysis stage610, a formality analyzer may use compilations of speech from persons of different educations, backgrounds, and occupations, as well as compilations of speech from persons in different settings (e.g., informal gatherings, office environments, weddings, etc.) as well as dictionaries of proper grammar and usage, slang, and the like, to determine a level of formality that the writer or speaker of the communication is using. In some embodiments, the incoming communication (text or audio) may be passed through a response analyzer, which compares the response with the original query to determine the quality and appropriateness of the response611, although this step may also be performed outside of the real-time conversation engine. In some embodiments, the real time conversation engine will use the information from the contextual, dialectic, and formality analyses to generate an outgoing communication that is appropriate in terms of context, dialect/slang, and level of formality to the incoming communication that was analyzed612. For example, an incoming text query may contain informal slang speech from a particular dialect (often indicating that the writer is from a particular region), which dialectic analysis would identify as being a particular dialect and the formality analysis would recognize as being informal speech from that dialect. The outgoing communication would be generated by a natural language generation (NLG) engine using the same dialect and a similar level of informality. Where the outgoing communication is audio, the text may be converted to speech using a real-time text-to-speech engine (not shown). In some embodiments, environmental cues may be introduced into the outgoing communication by an environment simulator to make the communication seem more “real” or natural613. 
For example, where the context of the audio is an office environment, background noises from an office environment may be introduced into the audio, or where the writer of a text communication is riding on a bus, typographical errors may be introduced to simulate the environment (i.e., motion) of the bus. Hardware Architecture Generally, the techniques disclosed herein may be implemented on hardware or a combination of software and hardware. For example, they may be implemented in an operating system kernel, in a separate user process, in a library package bound into network applications, on a specially constructed machine, on an application-specific integrated circuit (ASIC), or on a network interface card. Software/hardware hybrid implementations of at least some of the aspects disclosed herein may be implemented on a programmable network-resident machine (which should be understood to include intermittently connected network-aware machines) selectively activated or reconfigured by a computer program stored in memory. Such network devices may have multiple network interfaces that may be configured or designed to utilize different types of network communication protocols. A general architecture for some of these machines may be described herein in order to illustrate one or more exemplary means by which a given unit of functionality may be implemented. According to specific aspects, at least some of the features or functionalities of the various aspects disclosed herein may be implemented on one or more general-purpose computers associated with one or more networks, such as for example an end-user computer system, a client computer, a network server or other server system, a mobile computing device (e.g., tablet computing device, mobile phone, smartphone, laptop, or other appropriate computing device), a consumer electronic device, a music player, or any other suitable electronic device, router, switch, or other suitable device, or any combination thereof. In at least some aspects, at least some of the features or functionalities of the various aspects disclosed herein may be implemented in one or more virtualized computing environments (e.g., network computing clouds, virtual machines hosted on one or more physical computing machines, or other appropriate virtual environments). Referring now toFIG.7, there is shown a block diagram depicting an exemplary computing device10suitable for implementing at least a portion of the features or functionalities disclosed herein. Computing device10may be, for example, any one of the computing machines listed in the previous paragraph, or indeed any other electronic device capable of executing software- or hardware-based instructions according to one or more programs stored in memory. Computing device10may be configured to communicate with a plurality of other computing devices, such as clients or servers, over communications networks such as a wide area network, a metropolitan area network, a local area network, a wireless network, the Internet, or any other network, using known protocols for such communication, whether wireless or wired. In one aspect, computing device10includes one or more central processing units (CPU)12, one or more interfaces15, and one or more busses14(such as a peripheral component interconnect (PCI) bus). When acting under the control of appropriate software or firmware, CPU12may be responsible for implementing specific functions associated with the functions of a specifically configured computing device or machine. 
For example, in at least one aspect, a computing device10may be configured or designed to function as a server system utilizing CPU12, local memory11and/or remote memory16, and interface(s)15. In at least one aspect, CPU12may be caused to perform one or more of the different types of functions and/or operations under the control of software modules or components, which for example, may include an operating system and any appropriate applications software, drivers, and the like. CPU12may include one or more processors13such as, for example, a processor from one of the Intel, ARM, Qualcomm, and AMD families of microprocessors. In some aspects, processors13may include specially designed hardware such as application-specific integrated circuits (ASICs), electrically erasable programmable read-only memories (EEPROMs), field-programmable gate arrays (FPGAs), and so forth, for controlling operations of computing device10. In a particular aspect, a local memory11(such as non-volatile random access memory (RAM) and/or read-only memory (ROM), including for example one or more levels of cached memory) may also form part of CPU12. However, there are many different ways in which memory may be coupled to system10. Memory11may be used for a variety of purposes such as, for example, caching and/or storing data, programming instructions, and the like. It should be further appreciated that CPU12may be one of a variety of system-on-a-chip (SOC) type hardware that may include additional hardware such as memory or graphics processing chips, such as a QUALCOMM SNAPDRAGON™ or SAMSUNG EXYNOS™ CPU as are becoming increasingly common in the art, such as for use in mobile devices or integrated devices. As used herein, the term “processor” is not limited merely to those integrated circuits referred to in the art as a processor, a mobile processor, or a microprocessor, but broadly refers to a microcontroller, a microcomputer, a programmable logic controller, an application-specific integrated circuit, and any other programmable circuit. In one aspect, interfaces15are provided as network interface cards (NICs). Generally, NICs control the sending and receiving of data packets over a computer network; other types of interfaces15may for example support other peripherals used with computing device10. Among the interfaces that may be provided are Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, graphics interfaces, and the like. In addition, various types of interfaces may be provided such as, for example, universal serial bus (USB), Serial, Ethernet, FIREWIRE™, THUNDERBOLT™, PCI, parallel, radio frequency (RF), BLUETOOTH™, near-field communications (e.g., using near-field magnetics), 802.11 (WiFi), frame relay, TCP/IP, ISDN, fast Ethernet interfaces, Gigabit Ethernet interfaces, Serial ATA (SATA) or external SATA (eSATA) interfaces, high-definition multimedia interface (HDMI), digital visual interface (DVI), analog or digital audio interfaces, asynchronous transfer mode (ATM) interfaces, high-speed serial interface (HSSI) interfaces, Point of Sale (POS) interfaces, fiber data distributed interfaces (FDDIs), and the like. Generally, such interfaces15may include physical ports appropriate for communication with appropriate media. 
In some cases, they may also include an independent processor (such as a dedicated audio or video processor, as is common in the art for high-fidelity AN hardware interfaces) and, in some instances, volatile and/or non-volatile memory (e.g., RAM). Although the system shown inFIG.7illustrates one specific architecture for a computing device10for implementing one or more of the aspects described herein, it is by no means the only device architecture on which at least a portion of the features and techniques described herein may be implemented. For example, architectures having one or any number of processors13may be used, and such processors13may be present in a single device or distributed among any number of devices. In one aspect, a single processor13handles communications as well as routing computations, while in other aspects a separate dedicated communications processor may be provided. In various aspects, different types of features or functionalities may be implemented in a system according to the aspect that includes a client device (such as a tablet device or smartphone running client software) and server systems (such as a server system described in more detail below). Regardless of network device configuration, the system of an aspect may employ one or more memories or memory modules (such as, for example, remote memory block16and local memory11) configured to store data, program instructions for the general-purpose network operations, or other information relating to the functionality of the aspects described herein (or any combinations of the above). Program instructions may control execution of or comprise an operating system and/or one or more applications, for example. Memory16or memories11,16may also be configured to store data structures, configuration data, encryption data, historical system operations information, or any other specific or generic non-program information described herein. Because such information and program instructions may be employed to implement one or more systems or methods described herein, at least some network device aspects may include nontransitory machine-readable storage media, which, for example, may be configured or designed to store program instructions, state information, and the like for performing various operations described herein. Examples of such nontransitory machine-readable storage media include, but are not limited to, magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks; magneto-optical media such as optical disks, and hardware devices that are specially configured to store and perform program instructions, such as read-only memory devices (ROM), flash memory (as is common in mobile devices and integrated systems), solid state drives (SSD) and “hybrid SSD” storage drives that may combine physical components of solid state and hard disk drives in a single hardware device (as are becoming increasingly common in the art with regard to personal computers), memristor memory, random access memory (RAM), and the like. 
It should be appreciated that such storage means may be integral and non-removable (such as RAM hardware modules that may be soldered onto a motherboard or otherwise integrated into an electronic device), or they may be removable such as swappable flash memory modules (such as “thumb drives” or other removable media designed for rapidly exchanging physical storage devices), “hot-swappable” hard disk drives or solid state drives, removable optical storage discs, or other such removable media, and that such integral and removable storage media may be utilized interchangeably. Examples of program instructions include both object code, such as may be produced by a compiler, machine code, such as may be produced by an assembler or a linker, byte code, such as may be generated by for example a JAVA™ compiler and may be executed using a Java virtual machine or equivalent, or files containing higher level code that may be executed by the computer using an interpreter (for example, scripts written in Python, Perl, Ruby, Groovy, or any other scripting language). In some aspects, systems may be implemented on a standalone computing system. Referring now toFIG.8, there is shown a block diagram depicting a typical exemplary architecture of one or more aspects or components thereof on a standalone computing system. Computing device20includes processors21that may run software that carry out one or more functions or applications of aspects, such as for example a client application24. Processors21may carry out computing instructions under control of an operating system22such as, for example, a version of MICROSOFT WINDOWS™ operating system, APPLE macOS™ or iOS™ operating systems, some variety of the Linux operating system, ANDROID™ operating system, or the like. In many cases, one or more shared services23may be operable in system20, and may be useful for providing common services to client applications24. Services23may for example be WINDOWS™ services, user-space common services in a Linux environment, or any other type of common service architecture used with operating system21. Input devices28may be of any type suitable for receiving user input, including for example a keyboard, touchscreen, microphone (for example, for voice input), mouse, touchpad, trackball, or any combination thereof. Output devices27may be of any type suitable for providing output to one or more users, whether remote or local to system20, and may include for example one or more screens for visual output, speakers, printers, or any combination thereof. Memory25may be random-access memory having any structure and architecture known in the art, for use by processors21, for example to run software. Storage devices26may be any magnetic, optical, mechanical, memristor, or electrical storage device for storage of data in digital form (such as those described above, referring toFIG.7). Examples of storage devices26include flash memory, magnetic hard drive, CD-ROM, and/or the like. In some aspects, systems may be implemented on a distributed computing network, such as one having any number of clients and/or servers. Referring now toFIG.9, there is shown a block diagram depicting an exemplary architecture30for implementing at least a portion of a system according to one aspect on a distributed computing network. According to the aspect, any number of clients33may be provided. Each client33may run software for implementing client-side portions of a system; clients may comprise a system20such as that illustrated inFIG.8. 
In addition, any number of servers32may be provided for handling requests received from one or more clients33. Clients33and servers32may communicate with one another via one or more electronic networks31, which may be in various aspects any of the Internet, a wide area network, a mobile telephony network (such as CDMA or GSM cellular networks), a wireless network (such as WiFi, WiMAX, LTE, and so forth), or a local area network (or indeed any network topology known in the art; the aspect does not prefer any one network topology over any other). Networks31may be implemented using any known network protocols, including for example wired and/or wireless protocols. In addition, in some aspects, servers32may call external services37when needed to obtain additional information, or to refer to additional data concerning a particular call. Communications with external services37may take place, for example, via one or more networks31. In various aspects, external services37may comprise web-enabled services or functionality related to or installed on the hardware device itself. For example, in one aspect where client applications24are implemented on a smartphone or other electronic device, client applications24may obtain information stored in a server system32in the cloud or on an external service37deployed on one or more of a particular enterprise's or user's premises. In some aspects, clients33or servers32(or both) may make use of one or more specialized services or appliances that may be deployed locally or remotely across one or more networks31. For example, one or more databases34may be used or referred to by one or more aspects. It should be understood by one having ordinary skill in the art that databases34may be arranged in a wide variety of architectures and using a wide variety of data access and manipulation means. For example, in various aspects one or more databases34may comprise a relational database system using a structured query language (SQL), while others may comprise an alternative data storage technology such as those referred to in the art as “NoSQL” (for example, HADOOP CASSANDRA™, GOOGLE BIGTABLE™, and so forth). In some aspects, variant database architectures such as column-oriented databases, in-memory databases, clustered databases, distributed databases, or even flat file data repositories may be used according to the aspect. It will be appreciated by one having ordinary skill in the art that any combination of known or future database technologies may be used as appropriate, unless a specific database technology or a specific arrangement of components is specified for a particular aspect described herein. Moreover, it should be appreciated that the term “database” as used herein may refer to a physical database machine, a cluster of machines acting as a single database system, or a logical database within an overall database management system. Unless a specific meaning is specified for a given use of the term “database”, it should be construed to mean any of these senses of the word, all of which are understood as a plain meaning of the term “database” by those having ordinary skill in the art. Similarly, some aspects may make use of one or more security systems36and configuration systems35. Security and configuration management are common information technology (IT) and web functions, and some amount of each are generally associated with any IT or web systems. 
It should be understood by one having ordinary skill in the art that any configuration or security subsystems known in the art now or in the future may be used in conjunction with aspects without limitation, unless a specific security36or configuration system35or approach is specifically required by the description of any specific aspect. FIG.10shows an exemplary overview of a computer system40as may be used in any of the various locations throughout the system. It is exemplary of any computer that may execute code to process data. Various modifications and changes may be made to computer system40without departing from the broader scope of the system and method disclosed herein. Central processor unit (CPU)41is connected to bus42, to which bus is also connected memory43, nonvolatile memory44, display47, input/output (I/O) unit48, and network interface card (NIC)53. I/O unit48may, typically, be connected to keyboard49, pointing device50, hard disk52, and real-time clock51. NIC53connects to network54, which may be the Internet or a local network, which local network may or may not have connections to the Internet. Also shown as part of system40is power supply unit45connected, in this example, to a main alternating current (AC) supply46. Not shown are batteries that could be present, and many other devices and modifications that are well known but are not applicable to the specific novel functions of the current system and method disclosed herein. It should be appreciated that some or all components illustrated may be combined, such as in various integrated applications, for example Qualcomm or Samsung system-on-a-chip (SOC) devices, or whenever it may be appropriate to combine multiple capabilities or functions into a single hardware device (for instance, in mobile devices such as smartphones, video game consoles, in-vehicle computer systems such as navigation or multimedia systems in automobiles, or other integrated hardware devices). In various aspects, functionality for implementing systems or methods of various aspects may be distributed among any number of client and/or server components. For example, various software modules may be implemented for performing various functions in connection with the system of any particular aspect, and such modules may be variously implemented to run on server and/or client components. The skilled person will be aware of a range of possible modifications of the various aspects described above. Accordingly, the present invention is defined by the claims and their equivalents.
11863508
DETAILED DESCRIPTION The description that follows includes systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative embodiments of the disclosure. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art, that embodiments of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail. Aspects of the present disclosure include systems, methods, techniques, instruction sequences, and computing machine program products that provide for progressive insertion of additional content into a sequence of content based on input from a user. For example, in some aspects, an advertisement may be inserted into a sequence of content being viewed by the user. If the advertisement is of particular interest to the user, the user may indicate a request for additional information. For example, the user may perform a “swipe up” gesture in response to the advertisement to indicate their interest in more information. Upon receiving the request, the social messaging system may present additional content to the user providing further information. In some aspects, this additional content may take the form of a long form video, which describes more detailed information on a particular topic to the user. In response to learning more about the subject area from the additional content, the user may decide to perform additional actions. For example, the additional content may convey benefits associated with installation of a particular third party application on the user's device. If the user desires the benefits described by the additional content, the user may agree to proceed with an installation of the third party application. Alternatively, the additional content may describe benefits that may be obtained via a web interface. Upon agreement by the user, the social networking system may facilitate a visit to the web interface for the user. For example, the social network system may open the web interface within a browser that is pre-installed on the user's mobile device. The web interface may provide a number of features useful to the user. In some aspects, a user may select to view a sequence of content. The sequence may be defined, in some aspects, by a chronological order in which the content was added to a messaging system. In other aspects, the sequence may be defined by a user. For example, a first user may arrange content into a particular sequence, such that when a second user views the content, the content is presented to the second user in the sequence as arranged by the first user. As the second user is viewing the content, additional content may be inserted into the sequence. For example, in some aspects, the additional content may be inserted periodically. In some aspects, the additional content may provide a brief description of a subject area. In some aspects, the additional content may include a short video. The video may provide a brief introduction to a subject area. In some aspects, the additional content may be displayed within a user interface that can accept at least two types of input. A first type of input may request that the user be returned to the sequence of content. 
For example, a “swipe down” input may indicate that the user requests that the display of the additional content be stopped, and the user returned to the sequence of content. A second type of input may indicate the user requests second additional content relating to the first additional content. For example, in some aspects, a “swipe up” gesture may signal a request for second additional content. In response to the second type of input, further information may be displayed. In some aspects that utilize a video for the additional content, a longer video may be provided as the second additional information. The second additional information may be displayed in a user interface that also accepts at least two types of input. Similar to the first user interface described above, the second user interface may also accept input requesting a return to the sequence of content. A second type of input may request third additional information. For example, the third additional information may include an installation dialog, enabling the user to install software, or the third additional information may be a web link, enabling the user to link to web content in a browser application. FIG.1is a block diagram showing an example messaging system100for exchanging data (e.g., messages and associated content) over a network. The messaging system100includes multiple client devices102, each of which hosts a number of applications including a messaging client application104. Each messaging client application104is communicatively coupled to other instances of the messaging client application104and a messaging server system108via a network106(e.g., the Internet). As used herein, the term “client device” may refer to any machine that interfaces with a communications network (such as the network106) to obtain resources from one or more server systems or other client devices. A client device may be, but is not limited to, a mobile phone, desktop computer, laptop, portable digital assistant (PDA), smart phone, tablet, ultra book, netbook, laptop, multi-processor system, microprocessor-based or programmable consumer electronics system, game console, set-top box, or any other communication device that a user may use to access a network. In the example shown inFIG.1, each messaging client application104is able to communicate and exchange data with another messaging client application104and with the messaging server system108via the network106. The data exchanged between the messaging client applications104, and between a messaging client application104and the messaging server system108, includes functions (e.g., commands to invoke functions) as well as payload data (e.g., text, audio, video, or other multimedia data). The network106may include, or operate in conjunction with, an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. 
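As a rough illustration of the two-input model described above (before the discussion of FIG.1), the following sketch models an inserted-content screen in which a “swipe down” returns the viewer to the sequence and a “swipe up” requests the second additional content. The class and method names are invented for the example and are not taken from the disclosure.

```python
from enum import Enum, auto


class Gesture(Enum):
    SWIPE_UP = auto()
    SWIPE_DOWN = auto()


class InsertedContentScreen:
    """Hypothetical model of the first additional-content user interface."""

    def __init__(self, sequence, resume_index, second_content):
        self.sequence = sequence            # the original sequence of content
        self.resume_index = resume_index    # where viewing resumes in the sequence
        self.second_content = second_content

    def on_gesture(self, gesture: Gesture):
        if gesture is Gesture.SWIPE_DOWN:
            # Return the viewer to the next item of the original sequence.
            return ("sequence", self.sequence[self.resume_index])
        if gesture is Gesture.SWIPE_UP:
            # Present second additional content (e.g., a long form video).
            return ("second_content", self.second_content)
        return ("ignored", None)


# Usage: a swipe down resumes the sequence; a swipe up opens the long form video.
screen = InsertedContentScreen(["clip_a", "clip_b", "clip_c"], 1, "long_form_video")
print(screen.on_gesture(Gesture.SWIPE_DOWN))   # ('sequence', 'clip_b')
print(screen.on_gesture(Gesture.SWIPE_UP))     # ('second_content', 'long_form_video')
```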
For example, the network106or a portion of the network106may include a wireless or cellular network and the connection to the network106may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third-Generation Partnership Project (3GPP) including 3G, fourth-generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High-Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long-Term Evolution (LTE) standard, or others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology. The messaging server system108provides server-side functionality via the network106to a particular messaging client application104. While certain functions of the messaging system100are described herein as being performed by either a messaging client application104or by the messaging server system108, it will be appreciated that the location of certain functionality either within the messaging client application104or the messaging server system108is a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the messaging server system108, but to later migrate this technology and functionality to the messaging client application104where a client device102has a sufficient processing capacity. The messaging server system108supports various services and operations that are provided to the messaging client application104. Such operations include transmitting data to, receiving data from, and processing data generated by the messaging client application104. This data may include message content, client device information, geolocation information, media annotation and overlays, message content persistence conditions, social network information, and live event information, as examples. Data exchanges within the messaging system100are invoked and controlled through functions available via user interfaces (UIs) of the messaging client application104. Turning now specifically to the messaging server system108, an Application Programming Interface (API) server110is coupled to, and provides a programmatic interface to, an application server112. The application server112is communicatively coupled to a database server118, which facilitates access to a database120in which is stored data associated with messages processed by the application server112. The API server110receives and transmits message data (e.g., commands and message payloads) between the client device102and the application server112. Specifically, the API server110provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the messaging client application104in order to invoke functionality of the application server112. 
The API server110exposes various functions supported by the application server112, including account registration; login functionality; the sending of messages, via the application server112, from a particular messaging client application104to another messaging client application104; the sending of media files (e.g., images or video) from a messaging client application104to the application server112, for possible access by another messaging client application104; the setting of a collection of media data (e.g., story); the retrieval of a list of friends of a user of a client device102; the retrieval of such collections; the retrieval of messages and content; the adding and deletion of friends to and from a social graph; the location of friends within a social graph; and the detecting of an application event (e.g., relating to the messaging client application104). The application server112hosts a number of applications and subsystems, including a messaging server application114and a social network system116. The messaging server application114implements a number of message processing technologies and functions, particularly related to the aggregation and other processing of content (e.g., textual and multimedia content) included in messages received from multiple instances of the messaging client application104. As will be described in further detail, the text and media content from multiple sources may be aggregated into collections of content (e.g., called stories or galleries). These collections are then made available, by the messaging server application114, to the messaging client application104. Other processor- and memory-intensive processing of data may also be performed server-side by the messaging server application114, in view of the hardware requirements for such processing. The social network system116supports various social networking functions and services, and makes these functions and services available to the messaging server application114. To this end, the social network system116maintains and accesses an entity graph within the database120. Examples of functions and services supported by the social network system116include the identification of other users of the messaging system100with whom a particular user has relationships or whom the user is “following,” and also the identification of other entities and interests of a particular user. The disclosed methods and systems may utilize the messaging system100to provide for progressive presentation of content on one or more client devices102, as explained in more detail below. FIG.2is block diagram illustrating further details regarding the messaging system100, according to exemplary embodiments. Specifically, the messaging system100is shown to comprise the messaging client application104and the application server112, which in turn embody a number of subsystems, namely an ephemeral timer system202, a collection management system204, an annotation system206, and a progressive attachment system208. The ephemeral timer system202is responsible for enforcing the temporary access to content permitted by the messaging client application104and the messaging server application114. To this end, the ephemeral timer system202incorporates a number of timers that, based on duration and display parameters associated with a message, or collection of messages (e.g., a story, such as the story component404discussed below), selectively display and enable access to messages and associated content via the messaging client application104. 
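The selective-access behavior of the ephemeral timer system202 can be approximated with the small sketch below: each message or collection carries a display duration, and access is permitted only while that duration has not elapsed. The class name, the suspend/resume interface, and the use of a monotonic clock are illustrative assumptions rather than the disclosure's implementation.

```python
import time


class EphemeralTimer:
    """Hypothetical timer gating access to a message or story for a limited duration."""

    def __init__(self, duration_seconds: float):
        self.started_at = time.monotonic()
        self.duration = duration_seconds
        self.suspended_at = None  # set while the timer is paused

    def suspend(self):
        if self.suspended_at is None:
            self.suspended_at = time.monotonic()

    def resume(self):
        if self.suspended_at is not None:
            # Push the start time forward by the suspended interval so the
            # remaining display time is preserved.
            self.started_at += time.monotonic() - self.suspended_at
            self.suspended_at = None

    def is_accessible(self) -> bool:
        reference = self.suspended_at if self.suspended_at is not None else time.monotonic()
        return (reference - self.started_at) < self.duration


# A message displayed for 3 hours remains accessible until its timer elapses.
timer = EphemeralTimer(duration_seconds=3 * 60 * 60)
print(timer.is_accessible())  # True immediately after creation
```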
Further details regarding the operation of the ephemeral timer system202are provided below. The collection management system204is responsible for managing collections of media (e.g., collections of text, image, video, and audio data). In some examples, a collection of content (e.g., messages, including images, video, text, and audio) may be organized into an “event gallery” or an “event story.” Such a collection may be made available for a specified time period, such as the duration of an event to which the content relates. For example, content relating to a music concert may be made available as a “story” for the duration of that music concert. The collection management system204may also be responsible for publishing an icon that provides notification of the existence of a particular collection to the user interface of the messaging client application104. The annotation system206provides various functions that enable a user to annotate or otherwise modify or edit media content associated with a message. For example, the annotation system206provides functions related to the generation and publishing of media overlays for messages processed by the messaging system100. For example, the annotation system206operatively supplies a media overlay (e.g., a filter) to the messaging client application104based on a geolocation of the client device102. In another example, the annotation system206operatively supplies a media overlay to the messaging client application104based on other information, such as social network information of the user of the client device102. A media overlay may include audio and visual content and visual effects. Examples of audio and visual content include pictures, texts, logos, animations, and sound effects. An example of a visual effect includes color overlaying. The audio and visual content or the visual effects can be applied to a media content item (e.g., a photo) at the client device102. For example, the media overlay may include text that can be overlaid on top of a photograph generated by the client device102. In another example, the media overlay includes an identification of a location (e.g., Venice Beach), a name of a live event, or a name of a merchant (e.g., Beach Coffee House). In another example, the annotation system206uses the geolocation of the client device102to identify a media overlay that includes the name of a merchant at the geolocation of the client device102. The media overlay may include other indicia associated with the merchant. The media overlays may be stored in the database120and accessed through the database server118. In one exemplary embodiment, the annotation system206provides a user-based publication platform that enables users to select a geolocation on a map, and upload content associated with the selected geolocation. The user may also specify circumstances under which a particular media overlay should be offered to other users. The annotation system206generates a media overlay that includes the uploaded content and associates the uploaded content with the selected geolocation. In another exemplary embodiment, the annotation system206provides a merchant-based publication platform that enables merchants to select a particular media overlay associated with a geolocation via a bidding process. For example, the annotation system206associates the media overlay of a highest-bidding merchant with a corresponding geolocation for a predefined amount of time. 
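One way to picture the geolocation-based overlay selection and the merchant bidding described above is the sketch below, which returns the highest-bidding overlay whose geofence contains the device location. The data model, the crude distance approximation, and the example coordinates are assumptions made purely for illustration.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class MediaOverlay:
    name: str
    latitude: float
    longitude: float
    radius_km: float
    bid: float = 0.0  # nonzero for merchant-supplied overlays


def _distance_km(lat1, lon1, lat2, lon2):
    # Very rough flat-earth approximation; adequate for a small-radius geofence demo.
    return (((lat1 - lat2) * 111.0) ** 2 + ((lon1 - lon2) * 85.0) ** 2) ** 0.5


def select_overlay(overlays, device_lat, device_lon) -> Optional[MediaOverlay]:
    """Return the highest-bidding overlay whose geofence contains the device."""
    nearby = [
        o for o in overlays
        if _distance_km(o.latitude, o.longitude, device_lat, device_lon) <= o.radius_km
    ]
    return max(nearby, key=lambda o: o.bid, default=None)


overlays = [
    MediaOverlay("Beach Coffee House", 33.985, -118.469, 1.0, bid=5.0),
    MediaOverlay("Venice Beach", 33.985, -118.472, 2.0, bid=1.0),
]
print(select_overlay(overlays, 33.986, -118.470))  # the highest bid within range wins
```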
The progressive attachment system208may provide for the insertion of content into a sequence of other content. For example, in some aspects, the progressive attachment system208may insert content into a sequence of the other content periodically, for example, after a period of time elapses. In some aspects, the progressive attachment system may provide for a user to control an amount of content inserted into the sequence of content. For example, user input may indicate that no further content is to be inserted into the sequence of content after presentation of a first piece of content. Alternatively, the user input may indicate a request to insert additional content into the sequence of content after the user has viewed and considered the first piece of content. FIG.3is a schematic diagram300illustrating data which may be stored in the database120of the messaging server system108, according to certain exemplary embodiments. While the content of the database120is shown to comprise a number of tables, it will be appreciated that the data could be stored in other types of data structures (e.g., as an object-oriented database). The database120includes message data stored within a message table314. An entity table302stores entity data, including an entity graph304. Entities for which records are maintained within the entity table302may include individuals, corporate entities, organizations, objects, places, events, etc. Regardless of type, any entity regarding which the messaging server system108stores data may be a recognized entity. Each entity is provided with a unique identifier, as well as an entity type identifier (not shown). The entity graph304furthermore stores information regarding relationships and associations between or among entities. Such relationships may be social, professional (e.g., work at a common corporation or organization), interest-based, or activity-based, merely for example. The database120also stores annotation data, in the example form of filters, in an annotation table312. Filters for which data is stored within the annotation table312are associated with and applied to videos (for which data is stored in a video table310) and/or images (for which data is stored in an image table308). Filters, in one example, are overlays that are displayed as overlaid on an image or video during presentation to a recipient user. Filters may be of various types, including user-selected filters from a gallery of filters presented to a sending user by the messaging client application104when the sending user is composing a message. Other types of filters include geolocation filters (also known as geo-filters), which may be presented to a sending user based on geographic location. For example, geolocation filters specific to a neighborhood or special location may be presented within a user interface by the messaging client application104, based on geolocation information determined by a Global Positioning System (GPS) unit of the client device102. Another type of filter is a data filter, which may be selectively presented to a sending user by the messaging client application104, based on other inputs or information gathered by the client device102during the message creation process. Examples of data filters include a current temperature at a specific location, a current speed at which a sending user is traveling, a battery life for a client device102, or the current time. Other annotation data that may be stored within the image table308is so-called “lens” data. 
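To make the table relationships of FIG.3 easier to follow, the sketch below restates them as a minimal set of record types; the field names are illustrative guesses and not the disclosure's actual schema. The “lens” annotation type mentioned above is discussed further in the text that follows.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Entity:                     # entity table 302
    entity_id: int
    entity_type: str              # individual, organization, place, event, ...
    related_entity_ids: List[int] = field(default_factory=list)  # entity graph 304 edges


@dataclass
class Annotation:                 # annotation table 312 (filters, overlays, "lens" data)
    annotation_id: int
    kind: str                     # e.g. "geofilter", "data_filter", "lens"


@dataclass
class Message:                    # message table 314: one row per message
    message_id: int
    sender_entity_id: int
    image_id: Optional[int] = None        # row in image table 308
    video_id: Optional[int] = None        # row in video table 310
    annotation_ids: List[int] = field(default_factory=list)
    deletion_time: Optional[float] = None  # consulted by the ephemeral timer system 202


# A message carrying one image and one geolocation filter annotation.
msg = Message(message_id=1, sender_entity_id=42, image_id=7, annotation_ids=[3])
print(msg)
```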
A “lens” may be a real-time special effect and sound that may be added to an image or a video. As mentioned above, the video table310stores video data which, in one embodiment, is associated with messages for which records are maintained within the message table314. Similarly, the image table308stores image data associated with messages for which message data is stored in the entity table302. The entity table302may associate various annotations from the annotation table312with various images and videos stored in the image table308and the video table310. A story table306stores data regarding collections of messages and associated image, video, or audio data, which are compiled into a collection (e.g., a story or a gallery). The creation of a particular collection may be initiated by a particular user (e.g., a user for whom a record is maintained in the entity table302). A user may create a “personal story” in the form of a collection of content that has been created and sent/broadcast by that user. To this end, the user interface of the messaging client application104may include an icon that is user-selectable to enable a sending user to add specific content to his or her personal story. A collection may also constitute a “live story,” which is a collection of content from multiple users that is created manually, automatically, or using a combination of manual and automatic techniques. For example, a “live story” may constitute a curated stream of user-submitted content from various locations and events. Users whose client devices have location services enabled and who are at a common location or event at a particular time may, for example, be presented with an option, via a user interface of the messaging client application104, to contribute content to a particular live story. The live story may be identified to the user by the messaging client application104, based on his or her location. The end result is a “live story” told from a community perspective. A further type of content collection is known as a “location story,” which enables a user whose client device102is located within a specific geographic location (e.g., on a college or university campus) to contribute to a particular collection. In some embodiments, a contribution to a location story may require a second degree of authentication to verify that the end user belongs to a specific organization or other entity (e.g., is a student on the university campus). A content collection may define a sequence of content. For example, the sequence of content may be defined by an order in which the content was inserted into the collection by a user. Alternatively, the sequence may be user defined. For example, an initial sequence may be defined based on the insertion sequence. This insertion sequence may then be subsequently modified via input by the user. For example, the user may be able to drag and drop content within a user interface to define changes to the sequence of content defined by the collection. The methods and systems disclosed herein may insert further content into the sequence of content defined by the collection. The message table314may be a relational database table in some aspects, with a row of the table representing a single message. In some aspects, each row in the message table may store content for the message, and a deletion time for the message. As discussed above, the ephemeral timer system202may delete messages according to a time associated with the message. 
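The content-collection sequencing described above (an initial insertion order that a user may later rearrange, for example by drag and drop) can be pictured with the minimal sketch below. The class name and methods are invented for the example and do not describe the story table306 itself.

```python
class Story:
    """Hypothetical content collection whose sequence starts as insertion order
    and can later be rearranged by the owning user."""

    def __init__(self):
        self._items = []  # media identifiers, in presentation order

    def add(self, media_id):
        # Default sequence: chronological order of insertion into the collection.
        self._items.append(media_id)

    def move(self, media_id, new_position):
        # Analogue of drag-and-drop reordering in the client user interface.
        self._items.remove(media_id)
        self._items.insert(new_position, media_id)

    def sequence(self):
        return list(self._items)


story = Story()
for media in ("clip_1", "clip_2", "clip_3"):
    story.add(media)
story.move("clip_3", 0)          # the user promotes the newest clip to the front
print(story.sequence())          # ['clip_3', 'clip_1', 'clip_2']
```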
For example, when a user creates a message, they may specify a maximum lifetime of the message, such as by providing an expiration date/time of the message or an amount of time the message is to remain (e.g., 3 hours). This time information may be stored in the message table314. As discussed below, in some aspects, the time information may be adjusted based on when certain content may be viewed by a user. Additionally, time remaining for particular content/messages may affect an order in which content is viewed and/or whether additional content is inserted into a sequence of content. FIG.4is a block diagram illustrating functional components of the progressive attachment system208that forms part of the messaging system100, according to some example embodiments. To avoid obscuring the inventive subject matter with unnecessary detail, various functional components (e.g., modules, engines, and databases) that are not germane to conveying an understanding of the inventive subject matter have been omitted fromFIG.4. However, a skilled artisan will readily recognize that various additional functional components may be supported by the progressive attachment system208to facilitate additional functionality that is not specifically described herein. As shown, the progressive attachment system208includes a content sequencing component402, an insertion component404, an input control component406, and a presentation component408. The content sequencing component402identifies a sequence of media that may be presented on a display screen of a client device102. The content sequencing component402may interface with the collection management system204to obtain the sequence of media. For example, the sequence of media may originate from an event story or event gallery as discussed above. The content sequencing component402may retrieve the sequence of media from the collection management system204in some aspects. The insertion component404may be responsible for determining when to insert one or more additional media into the sequence of media. For example, in some aspects, the insertion component may determine an amount of elapsed time since a previous insertion of additional media, and determine a next time for insertion based on the elapsed time. In some aspects, the insertion component may also determine an amount of time since a user registered with the messaging system100, and may inhibit insertions until the amount of time reaches a threshold. The input control component406may receive input from a user. For example, the input control component may receive inputs indicating a “swipe up” or a “swipe down,” or other inputs that may be provided via a touch device, such as a touch screen display. The presentation component408may present media on an electronic display of a client device102. In some aspects, the presented media is an image file. In other aspects, the presented media may be a video file. In some aspects, the presented media may be an installation dialog, such as a dialog enabling a user to install additional software on the client device. In some aspects, the presented media may be a web dialog. The above-referenced functional components of the progressive attachment system208are configured to communicate with each other (e.g., via a bus, shared memory, a switch, or APIs). Collectively, these components facilitate selective presentation of content to users. 
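Read together, the four components of FIG.4 can be pictured as thin, cooperating objects, as in the sketch below. The interfaces, thresholds, and gesture classification are assumptions made for illustration; only the component names and reference numerals come from the description above.

```python
import time


class ContentSequencingComponent:          # 402: obtains the sequence of media
    def get_sequence(self, collection):
        return list(collection)


class InsertionComponent:                  # 404: decides when to insert additional media
    def __init__(self, min_gap_s=120.0, min_account_age_s=86400.0):
        self.min_gap_s = min_gap_s                   # required gap since the last insertion
        self.min_account_age_s = min_account_age_s   # inhibit insertions for new accounts
        self.last_insert_at = float("-inf")

    def should_insert(self, now, account_age_s):
        return (account_age_s >= self.min_account_age_s
                and now - self.last_insert_at >= self.min_gap_s)


class InputControlComponent:               # 406: interprets touch input
    def classify(self, delta_y):
        return "swipe_up" if delta_y < 0 else "swipe_down"


class PresentationComponent:               # 408: renders media on the display
    def present(self, media_id):
        print(f"presenting {media_id}")


# An account older than the threshold, with no recent insertion, is eligible.
insertion = InsertionComponent()
print(insertion.should_insert(now=time.monotonic(), account_age_s=7 * 86400))  # True
```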
As is understood by skilled artisans in the relevant computer and Internet-related arts, each functional component illustrated inFIG.4may be implemented using hardware (e.g., a processor of a machine) or a combination of logic (e.g., executable software instructions) and hardware (e.g., memory and the processor of a machine) for executing the logic. For example, any component included as part of the progressive attachment system208may physically include an arrangement of one or more processors410(e.g., a subset of or among one or more processors of a machine) configured to perform the operations described herein for that component. As another example, any component of the progressive attachment system208may include software, hardware, or both, that configure an arrangement of the one or more processors410to perform the operations described herein for that component. Accordingly, different components of the progressive attachment system208may include and configure different arrangements of such processors410or a single arrangement of such processors410at different points in time. Furthermore, the various functional components depicted inFIG.4may reside on a single machine (e.g., a client device or a server) or may be distributed across several machines in various arrangements such as cloud-based architectures. Moreover, any two or more of these components may be combined into a single component, and the functions described herein for a single component may be subdivided among multiple components. FIG.5shows two exemplary sequences of content display. The display sequence500includes media502a-d. Media502a-dmay originate from an event story or an event gallery, for example, via the collection management system204. In some aspects, one or more of media502a-dmay be an ephemeral message. The ephemeral message may be defined to exist within the social network system116for a limited period of time. After the period of time elapses, the ephemeral message may be deleted by the social network system116. The media502a-dmay include any form of media. For example, in some aspects, the media502a-dmay be images. In other aspects, the media502a-dmay be videos. In other aspects, the media502a-dmay be a mix of media types, including one or more of images, video, audio, or other forms of media. The media502a-dmay be part of a predefined sequence. For example, in some aspects, the sequence may be defined by an order of the media within an event story or an event gallery. In some aspects, the sequence of media may be defined when the media502a-dare added to the event gallery or story. For example, in some aspects, the sequence may be a chronological sequence with respect to times at which the media was added to the event gallery or story. In some aspects, the sequence may be a chronological sequence with respect to a creation time of the media itself, which may be different than a time when the media was added to the event gallery or story. Sequence525includes the sequence500of media502a-d, but also includes an additional media504. The additional media504may have been inserted between two of the media502band502cof the sequence500(502a-d). In some aspects, media504may be a different type of media than the media502a-d. For example, while media502a-dmay be video media in some aspects, media504may be a fixed image, such as a photo, in some aspects. In other aspects, the media502a-dand504may be the same type of media. 
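The step from sequence500 to sequence525 amounts to a single positional insertion, roughly as sketched below; the function name and the string identifiers are invented for the example.

```python
def insert_additional_media(sequence, additional, after_item):
    """Return a new sequence with `additional` placed immediately after `after_item`,
    mirroring how media 504 is placed between media 502b and 502c."""
    position = sequence.index(after_item) + 1
    return sequence[:position] + [additional] + sequence[position:]


sequence_500 = ["502a", "502b", "502c", "502d"]
sequence_525 = insert_additional_media(sequence_500, "504", after_item="502b")
print(sequence_525)  # ['502a', '502b', '504', '502c', '502d']
```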
The additional media504may be inserted into the sequence500by the insertion component404to form the sequence525. In some aspects, additional media504may provide information on a particular subject. In some aspects, the media502a-dmay be media that are included as part of an event gallery or story, for example, as defined by a first user. A second user may then view the first user's story, and view the media502a-din an order defined by the sequence502a-d. The insertion component404may determine, based on one or more criteria, that additional media is to be inserted at some point in the sequence500. The exemplary sequence525shows the media504inserted between the media502band502cin the sequence. In some aspects, a decision by the insertion component404on whether to insert additional media within the sequence500may be based on an amount of time remaining in any one or more ephemeral messages included in the content502a-d. For example, if the insertion component determines that one or more of the content502a-dmay be deleted before a user completes viewing the sequence500, the insertion component404may determine that additional content is to be inserted, in some aspects, to replace or augment that ephemeral content which is scheduled to be deleted within a threshold period of time. In some aspects, a view rate of content included in the sequence500may be determined. For example, a number of content viewed over a period of time may be used to determine the view rate. From this information, the insertion component404may estimate a view time of each content in the sequence500that has not yet been viewed. The estimated view time may then be compared to a content deletion time of any yet unviewed content within the sequence500. If the estimated view time for particular content is after the content's deletion time, the insertion component may, in some aspects, change an order of the content for viewing such that the ephemeral content is more likely to be viewed before it is deleted. In some other aspects, new content may be inserted before the ephemeral content to augment the sequence of content and compensate for the loss of the ephemeral content before the user is likely to view it. FIG.6shows a user interface sequence600for displaying the media502a-dofFIG.5. The user interface sequence600includes four user interfaces602a-d, each interface602a-ddisplaying media502a-drespectively. The sequence600also illustrates that user inputs, shown as exemplary left swipes610a-d, may be used to advance the user through the sequence of media502a-das shown by the user interfaces602a-drespectively. The sequence600also shows the insertion of an additional user interface620, which displays media504. The user interface620is configured to receive at least two types of input. A first type of input630is shown as an exemplary “swipe up”. A second type of input632is shown as an exemplary “swipe down.” Upon receiving the input632, the sequence600may move from user interface620, displaying the media504, to the user interface602c, which is shown displaying the media502c. In some aspects, upon receiving the input630, the sequence600is shown moving from user interface620to user interface640. The user interface640displays media645. In some aspects, media645may be a long form video, which may present information on a similar subject as media504, but may be a longer video for example, and thus may explain the subject in more depth than media504in some aspects. The user interface640may accept at least two types of user input. 
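A rough, illustrative version of the view-rate estimate described earlier in this passage is shown below: the view rate yields an estimated view time for each unviewed item, and items that would expire before being reached are moved toward the front of the queue. The data shapes, rate units, and the particular reordering policy are assumptions, not the disclosure's implementation.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class QueuedMedia:
    media_id: str
    deletion_time: Optional[float] = None  # None for non-ephemeral content


def reorder_for_expiry(remaining, now, view_rate_items_per_s):
    """Estimate when each unviewed item will be reached and move items that would
    otherwise expire before viewing toward the front of the queue."""
    seconds_per_item = 1.0 / view_rate_items_per_s
    at_risk, safe = [], []
    for position, item in enumerate(remaining):
        estimated_view_time = now + position * seconds_per_item
        expires_first = (item.deletion_time is not None
                         and item.deletion_time < estimated_view_time)
        (at_risk if expires_first else safe).append(item)
    # One possible policy: show soon-to-expire content first, keep relative order otherwise.
    return at_risk + safe


queue = [QueuedMedia("502c"), QueuedMedia("502d", deletion_time=100.0)]
print([m.media_id for m in reorder_for_expiry(queue, now=95.0, view_rate_items_per_s=0.1)])
# ['502d', '502c'] — 502d would expire before it is reached at the current view rate
```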
A first type of input646may be a “swipe up” gesture. A second type of input648may be a “swipe down” gesture. In response to the input648, the sequence600may transition from user interface640back to user interface620. Alternatively, in some aspects, in response to the input648, the sequence600may transition from user interface640to user interface602c, which displays media502c. In response to the input646, the sequence600may transition from user interface640to user interface660. If the media645is a video, the video may pause at a pause point when the sequence600transitions from the user interface640to the user interface660. If the sequence600returns to the user interface640, the video may resume from the pause point. The user interface660may enable a user to install an additional software application on the device. Alternatively, the user interface660may be a web interface. The user interface660may receive at least two forms of input. A first form of input671may be a “swipe left” gesture in some aspects. The input671may trigger additional actions, such as installation of another software application, or opening of a web-based interface. In aspects that provide a web interface implementation of user interface660, loading of user interface660may be initiated in response to the user interface640being displayed. By initiating loading of the user interface660upon presentation of user interface640, delays in displaying the user interface660are reduced relative to implementations that would wait to load user interface660until it was explicitly requested by the user. A second type of input received by user interface660may be a “swipe down” gesture. In response to receiving the input672, the sequence600may transition from user interface660back to user interface640. Alternatively, in some aspects, the sequence600may transition from user interface660to user interface602cin response to input672. In some aspects, a swipe up630, such as that illustrated with respect to content620, may suspend ephemeral timers for any of the content502a-d. Thus, any estimated deletion times for this content may be moved forward in time while the ephemeral timer(s) are suspended. Upon receiving the swipe down input632, the ephemeral timer(s) for content within the sequence502a-dmay be resumed. Thus, between the time of a first input (e.g.630) and a second input (e.g.632), with respect to a first content of a sequence of content, one or more ephemeral timers for other content of the sequence of content may be suspended. FIG.7is an exemplary embodiment of the user interface640ofFIG.6. The user interface640ofFIG.7shows a fixed image705. As discussed above, the user interface640may receive at least two types of input. A first type of input may request a return to the user interface620or602c. A second type of input may request additional information, such as that provided by the user interface660. A prompt710may prompt the user for the second type of input. FIG.8is another exemplary embodiment of the user interface640ofFIG.6. The user interface640ofFIG.8shows a video805. The user interface640ofFIG.8also shows a progress bar810for the video805. A pause prompt815is also shown. As discussed above, the user interface640may accept at least two input types. Prompt820prompts the user for the second type of input, which may indicate a request for the information provided by the user interface660, as discussed above with respect toFIG.6. FIG.9is a flowchart for an exemplary method of selecting content. 
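One way to picture the preloading behavior described above — beginning to load the web interface as soon as the prior screen is shown, so that a later swipe up displays it with little delay — is the background-loading sketch below. The class name, the fetch callable, and the placeholder URL are assumptions made for illustration.

```python
import threading


class WebInterfacePreloader:
    """Hypothetical preloader that starts fetching the web content for a later
    screen (user interface 660) as soon as the prior screen (640) is shown."""

    def __init__(self, fetch):
        self._fetch = fetch            # callable that downloads the web content
        self._result = None
        self._thread = None

    def start(self, url):
        def worker():
            self._result = self._fetch(url)
        self._thread = threading.Thread(target=worker, daemon=True)
        self._thread.start()

    def get(self, timeout=5.0):
        # Called when the user swipes up; usually the content is already loaded.
        if self._thread is not None:
            self._thread.join(timeout)
        return self._result


# Placeholder fetch; a real client would issue an HTTP request here.
preloader = WebInterfacePreloader(fetch=lambda url: f"<html>content of {url}</html>")
preloader.start("https://example.invalid/offer")   # when user interface 640 is displayed
print(preloader.get())                              # when the user swipes up to 660
```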
One or more of the functions discussed below with respect to process900andFIG.9may be performed by an electronic hardware processor. For example, instructions stored in an electronic hardware memory may configure the electronic hardware processor to perform one or more of the functions discussed below. For example, in some aspects, instructions stored in the messaging client application104, and/or one or more of the content sequencing component402, insertion component404, input control component406, and/or presentation component408, may configure a hardware processor, such as the processing unit1154ofFIG.11or the processor1204ofFIG.12to perform one or more of the functions discussed below. In block910, a sequence of media to present on an electronic display is determined. In some aspects, the sequence of media is presented to a user on a touchscreen of an electronic device, such as a mobile device. In some aspects, the determination may be in response to a user interface selection input, selecting a source of the sequence of media. For example, a user may select an event gallery or an event story. The selected event gallery or event story may be the source for the sequence of media. The sequence of content may be defined by the event gallery or event story. For example, the sequence may be defined based on a sequence in which the media included in the gallery or story were added to the event gallery or event story. Alternatively, the sequence may be defined by a chronological order in which the media was created, edited, or captured. The sequence of media may include two or more media. The media may be any combination of videos, gifs, photos, documents, images, or any media type. In block920, a determination is made to present second media between two media of the sequence of media. In some aspects, the determination may be based on an elapsed time since a previous insertion of media into the sequence has been performed. In some aspects, the determination to insert the second media may be based on a content consumption rate of the user. For example, if the user is consuming content at a rate below a rate threshold, and an elapsed time since a previous insertion is above a time threshold, then a determination to insert the second media may be made, and process900may move to block930. Otherwise, the insertion may not be performed, and process900may transition via off-page reference B to block960. In block930, the second media is presented between the two media. As shown above with respect toFIG.6, in some aspects, a user interface, such as user interface620may present the second media (e.g.504). The user interface620may be configured to accept two or more types of input in some aspects. In block940, a first input is received. As discussed above, the user interface620may be configured to receive at least two types of input. The input may be, in some aspects, a gesture entered on a touch screen display, such as that used by a smartphone. Decision block950determines whether the input of block940requests additional media or requests to return to the sequence of media. If the input requests to return to the sequence of media, process900transitions through off-page reference B to present further media in the sequence, as explained below. If the input requests the presentation of additional media, process900moves from block950to block960. 
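The block920 decision described above — insert only when the user's content consumption rate is below a rate threshold and the time since the previous insertion exceeds a time threshold — might be expressed as follows. The threshold values and rate units are illustrative assumptions.

```python
def should_insert_second_media(items_viewed, viewing_seconds,
                               seconds_since_last_insert,
                               rate_threshold=0.2, time_threshold=60.0):
    """Illustrative version of the block 920 decision: insert only if the user's
    consumption rate is below a threshold and enough time has elapsed since the
    previous insertion."""
    if viewing_seconds <= 0:
        return False
    consumption_rate = items_viewed / viewing_seconds   # items per second
    return (consumption_rate < rate_threshold
            and seconds_since_last_insert > time_threshold)


print(should_insert_second_media(items_viewed=3, viewing_seconds=30,
                                 seconds_since_last_insert=90))   # True: slow viewer
print(should_insert_second_media(items_viewed=30, viewing_seconds=30,
                                 seconds_since_last_insert=90))   # False: rapid viewer
```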
As discussed above, the example sequence600may transition from the user interface620to the user interface640upon receiving a particular input, such as the exemplary “swipe up” gesture shown as input630inFIG.6. Once presented, the user interface640may provide for at least two further inputs. The user interface640may be configured to, for example, receive a first input indicating that a return to user interface620is requested. A second input may indicate a transition to user interface660is requested. After the additional media is presented in block960, process900transitions via off-page reference A to block965ofFIG.10. In block965, a second input is received. The second input may be, in some aspects, a gesture on a touchscreen. For example, the second input may correspond to a “swipe left” or “swipe down” gesture in some aspects on the user interface640. Decision block970determines whether the second input requests further media be displayed, or a return to the sequence of media is requested. If a return to the sequence is requested, process900moves from block970to block980. Block980may present a next media in the sequence after the first media. For example, as shown with respect toFIGS.5-6, after media502bin the sequence of502a-dis presented, media504is inserted. After media504, the sequence returns by presentation of media502c, which is immediately subsequent to media502bin the sequence of media502a-d. If further media is requested by the second input, process900moves from block970to block975, which presents the further media. In some aspects, the further media may be presented in a user interface such as user interface660, discussed above with respect toFIG.6. Software Architecture FIG.11is a block diagram illustrating an example software architecture1106, which may be used in conjunction with various hardware architectures herein described.FIG.11is a non-limiting example of a software architecture and it will be appreciated that many other architectures may be implemented to facilitate the functionality described herein. The software architecture1106may execute on hardware such as a machine1200ofFIG.12that includes, among other things, processors1204, memory/storage1206, and I/O components1218. A representative hardware layer1152is illustrated and can represent, for example, the machine1200ofFIG.12. The representative hardware layer1152includes a processing unit1154having associated executable instructions1104. The executable instructions1104represent the executable instructions of the software architecture1106, including implementation of the methods, components, and so forth described herein. For example, the instructions1104may configure the processing unit1154to perform one or more of the functions of process900, discussed above with respect toFIGS.9and10. The hardware layer1152also includes memory and/or storage1156, which also have the executable instructions1104. The hardware layer1152may also comprise other hardware1158. As used herein, the term “component” may refer to a device, a physical entity, or logic having boundaries defined by function or subroutine calls, branch points, APIs, and/or other technologies that provide for the partitioning or modularization of particular processing or control functions. Components may be combined via their interfaces with other components to carry out a machine process. 
A component may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components. A “hardware component” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various exemplary embodiments, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware component that operates to perform certain operations as described herein. A hardware component may also be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware component may be a special-purpose processor, such as a Field-Programmable Gate Array (FPGA) or an Application-Specific Integrated Circuit (ASIC). A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware component may include software executed by a general-purpose processor or other programmable processor. Once configured by such software, hardware components become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. It will be appreciated that the decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations. A processor may be, or include, any circuit or virtual circuit (a physical circuit emulated by logic executing on an actual processor) that manipulates data values according to control signals (e.g., “commands,” “op codes,” “machine code,” etc.) and that produces corresponding output signals that are applied to operate a machine. A processor may, for example, be a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an ASIC, a Radio-Frequency Integrated Circuit (RFIC), or any combination thereof. A processor may further be a multi-core processor having two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. Accordingly, the phrase “hardware component” (or “hardware-implemented component”) should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one instance in time. 
For example, where a hardware component comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware components) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware component at one instance of time and to constitute a different hardware component at a different instance of time. Hardware components can provide information to, and receive information from, other hardware components. Accordingly, the described hardware components may be regarded as being communicatively coupled. Where multiple hardware components exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware components. In embodiments in which multiple hardware components are configured or instantiated at different times, communications between or among such hardware components may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware components have access. For example, one hardware component may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented components that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented component” refers to a hardware component implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented components. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an API). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some exemplary embodiments, the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other exemplary embodiments, the processors or processor-implemented components may be distributed across a number of geographic locations. 
In the exemplary architecture ofFIG.11, the software architecture1106may be conceptualized as a stack of layers where each layer provides particular functionality. For example, the software architecture1106may include layers such as an operating system1102, libraries1120, frameworks/middleware1118, applications1116, and a presentation layer1114. Operationally, the applications1116and/or other components within the layers may invoke API calls1108through the software stack and receive a response as messages1110. The layers illustrated are representative in nature and not all software architectures have all layers. For example, some mobile or special-purpose operating systems may not provide a frameworks/middleware1118layer, while others may provide such a layer. Other software architectures may include additional or different layers. The operating system1102may manage hardware resources and provide common services. The operating system1102may include, for example, a kernel1122, services1124, and drivers1126. The kernel1122may act as an abstraction layer between the hardware and the other software layers. For example, the kernel1122may be responsible for memory management, processor management (e.g., scheduling), component management, networking, security settings, and so on. The services1124may provide other common services for the other software layers. The drivers1126are responsible for controlling or interfacing with the underlying hardware. For instance, the drivers1126include display drivers, camera drivers, Bluetooth® drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), Wi-Fi® drivers, audio drivers, power management drivers, and so forth depending on the hardware configuration. The libraries1120provide a common infrastructure that is used by the applications1116and/or other components and/or layers. The libraries1120provide functionality that allows other software components to perform tasks in an easier fashion than by interfacing directly with the underlying operating system1102functionality (e.g., kernel1122, services1124, and/or drivers1126). The libraries1120may include system libraries1144(e.g., C standard library) that may provide functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like. In addition, the libraries1120may include API libraries1146such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG), graphics libraries (e.g., an OpenGL framework that may be used to render 2D and 3D graphic content on a display), database libraries (e.g., SQLite that may provide various relational database functions), web libraries (e.g., WebKit that may provide web browsing functionality), and the like. The libraries1120may also include a wide variety of other libraries1148to provide many other APIs to the applications1116and other software components/modules. The frameworks/middleware1118provide a higher-level common infrastructure that may be used by the applications1116and/or other software components/modules. For example, the frameworks/middleware1118may provide various graphic user interface (GUI) functions, high-level resource management, high-level location services, and so forth. 
The frameworks/middleware1118may provide a broad spectrum of other APIs that may be utilized by the applications1116and/or other software components/modules, some of which may be specific to a particular operating system1102or platform. The applications1116include built-in applications1138and/or third-party applications1140. Examples of representative built-in applications1138may include, but are not limited to, a contacts application, a browser application, a book reader application, a location application, a media application, a messaging application, and/or a game application. The third-party applications1140may include an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform, and may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or other mobile operating systems. The third-party applications1140may invoke the API calls1108provided by the mobile operating system (such as the operating system1102) to facilitate functionality described herein. The applications1116may use built-in operating system functions (e.g., kernel1122, services1124, and/or drivers1126), libraries1120, and frameworks/middleware1118to create user interfaces to interact with users of the system. Alternatively, or additionally, in some systems interactions with a user may occur through a presentation layer, such as the presentation layer1114. In these systems, the application/component “logic” can be separated from the aspects of the application/component that interact with a user. The applications1116may include instructions1104that implement the methods discussed herein, such as those discussed above with respect toFIGS.9and/or10. Exemplary Machine FIG.12is a block diagram illustrating exemplary components (also referred to herein as “modules”) of a machine1200. In some aspects, the machine is configured to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically,FIG.12shows a diagrammatic representation of the machine1200in the example form of a computer system, within which instructions1210(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine1200to perform any one or more of the methodologies discussed herein may be executed. As such, the instructions1210may be used to implement modules or components described herein. For example, the instructions1210may implement the content selection system208in some aspects, which may include, in some of these aspects, one or more of the functions discussed above with respect toFIGS.9and10. The instructions1210transform the general, non-programmed machine1200into a particular machine1200programmed to carry out the described and illustrated functions in the manner described. In alternative embodiments, the machine1200operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine1200may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. 
The machine1200may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions1210, sequentially or otherwise, that specify actions to be taken by machine1200. Further, while only a single machine1200is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions1210to perform any one or more of the methodologies discussed herein. The machine1200may include processors1204, memory/storage1206, and I/O components1218, which may be configured to communicate with each other such as via a bus1202. The memory/storage1206may include a memory1214, such as a main memory, or other memory storage, and a storage unit1216, both accessible to the processors1204such as via the bus1202. The storage unit1216and memory1214store the instructions1210embodying any one or more of the methodologies or functions described herein. The instructions1210may also reside, completely or partially, within the memory1214, within the storage unit1216, within at least one of the processors1204(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine1200. Accordingly, the memory1214, the storage unit1216, and the memory of the processors1204are examples of machine-readable media. As used herein, the term “machine-readable medium,” “computer-readable medium,” or the like may refer to any component, device, or other tangible medium able to store instructions and data temporarily or permanently. Examples of such media may include, but are not limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, optical media, magnetic media, cache memory, other types of storage (e.g., Electrically Erasable Programmable Read-Only Memory (EEPROM)), and/or any suitable combination thereof. The term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions. The term “machine-readable medium” may also be taken to include any medium, or combination of multiple media, that is capable of storing instructions (e.g., code) for execution by a machine, such that the instructions, when executed by one or more processors of the machine, cause the machine to perform any one or more of the methodologies described herein. Accordingly, a “machine-readable medium” may refer to a single storage apparatus or device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” excludes transitory signals per se. The I/O components1218may include a wide variety of components to provide a user interface for receiving input, providing output, producing output, transmitting information, exchanging information, capturing measurements, and so on. The specific I/O components1218that are included in the user interface of a particular machine1200will depend on the type of machine. 
For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components1218may include many other components that are not shown inFIG.12. The I/O components1218are grouped according to functionality merely for simplifying the following discussion and the grouping is in no way limiting. In various exemplary embodiments, the I/O components1218may include output components1226and input components1228. The output components1226may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The input components1228may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instruments), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. The input components1228may also include one or more image-capturing devices, such as a digital camera for generating digital images and/or video. In further exemplary embodiments, the I/O components1218may include biometric components1230, motion components1234, environment components1236, or position components1238, as well as a wide array of other components. For example, the biometric components1230may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components1234may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environment components1236may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components1238may include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. 
Communication may be implemented using a wide variety of technologies. The I/O components1218may include communication components1240operable to couple the machine1200to a network1232or devices1220via a coupling1224and a coupling1222respectively. For example, the communication components1240may include a network interface component or other suitable device to interface with the network1232. In further examples, the communication components1240may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices1220may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). Moreover, the communication components1240may detect identifiers or include components operable to detect identifiers. For example, the communication components1240may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components1240, such as location via Internet Protocol (IP) geo-location, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth. Where a phrase similar to “at least one of A, B, or C,” “at least one of A, B, and C,” “one or more of A, B, or C,” or “one or more of A, B, and C” is used, it is intended that the phrase be interpreted to mean that A alone may be present in an embodiment, B alone may be present in an embodiment, C alone may be present in an embodiment, or any combination of the elements A, B, and C may be present in a single embodiment; for example, A and B, A and C, B and C, or A and B and C may be present. Changes and modifications may be made to the disclosed embodiments without departing from the scope of the present disclosure. These and other changes or modifications are intended to be included within the scope of the present disclosure, as expressed in the following claims. A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever. The following notice applies to the software and data as described below and in the drawings that form a part of this document:
68,669
11863509
DETAILED DESCRIPTION A technology for publish-subscribe message transformations is provided. In one example, a publish-subscribe messaging method may include receiving a definition of a transformation rule for transforming a message received from a publisher. A transformation rule may be a transformation function or may be a filter to filter messages for transformation using a transformation function based on predefined criteria. Transformation functions may query information, correlate data, combine data, etc. substantially in real time to add value to the data from the publisher. For example, the message data may be combined or correlated with other streaming or static data and/or may be used to derive additional data using the transformation function. The message may be received from the publisher at a broker. The message may identify a topic and may include message data. A determination may be made as to whether the message is associated with a transformation rule for transforming the message. The message may be associated with a transformation rule when the message content, a message source, a message flag or any other portion of the message satisfies criteria for transformation defined by the transformation rule. The method may further include transforming the message as defined by the transformation rule and publishing the transformed message to a destination, such as a subscriber subscribing to the topic or to any other suitable destination. In examples where the transformation rule is not the transformation function, the transformation rule may be used to filter messages for execution of the transformation function or to identify inline functions in the messages, etc. Transforming the message as defined by the transformation rule in such examples may include determining that the message includes one or more predefined criteria, identifying a transformation function to execute when the one or more predefined criteria are included in the message, and executing the transformation function on the message. A broker service may receive the message, filter the message, transform the message, etc. before transmitting to the destination. In another example, a publish-subscribe messaging method may include receiving a definition of a transformation rule configured for transforming a message from a publisher. In addition, a message may be received from the publisher at a broker. The message may identify a topic and include message data. A determination may be made as to whether the message is associated with a transformation rule for transforming the message data of the message. For example, a message associated with predefined criteria defined in the transformation rule may be transformed using a transformation function at the broker. In one example, secondary data may be retrieved from a secondary data source as defined by the transformation rule. The function may be used to transform the message by combining the secondary data with the message data and to generate a transformed message. The transformed message may be published to a message destination. The transformed message may be sent (i.e., published) across a computer network, such as the internet or a local virtual network, to the destination (e.g., subscriber). The present technology enables identifying the type of message or message content and indicating to the broker, which sends messages to the subscriber, that the message data is to be transformed before transmission to subscribers. 
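As an illustrative sketch of the rule check described above, the following Python fragment models a transformation rule as a set of predefined criteria (topic, publisher, transformation flag) together with a reference to a transformation function, and shows a broker either transforming a matching message or passing it through unchanged. The class and field names are assumptions introduced here for illustration only and are not drawn from any particular implementation.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Message:
    topic: str
    data: dict
    publisher_id: str
    transform_flag: bool = False
    inline_function: Optional[str] = None  # e.g., the name of a function to apply

@dataclass
class TransformationRule:
    # Predefined criteria; any criterion left as None is ignored.
    topic: Optional[str] = None
    publisher_id: Optional[str] = None
    require_flag: bool = False
    transform: Callable[[Message], Message] = lambda m: m

    def matches(self, msg: Message) -> bool:
        """True when the message satisfies every configured criterion."""
        if self.topic is not None and msg.topic != self.topic:
            return False
        if self.publisher_id is not None and msg.publisher_id != self.publisher_id:
            return False
        if self.require_flag and not msg.transform_flag:
            return False
        return True

def broker_handle(msg: Message, rules: list) -> Message:
    """Transform the message if any rule matches; otherwise pass it through unchanged."""
    for rule in rules:
        if rule.matches(msg):
            return rule.transform(msg)
    return msg

# Example: annotate flagged messages on the "fleet/locations" topic.
rule = TransformationRule(topic="fleet/locations", require_flag=True,
                          transform=lambda m: Message(m.topic, {**m.data, "annotated": True},
                                                      m.publisher_id))
msg = Message("fleet/locations", {"lat": 40.7}, "truck-7", transform_flag=True)
print(broker_handle(msg, [rule]).data)  # {'lat': 40.7, 'annotated': True}
```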
Immediate (e.g., at least near real time or recent-time) message transformations may be applied to virtually any type of data or message. For example, a subscriber may desire to know the geo-location of an important asset (e.g., a vehicle or commercial truck) as the asset moves, and supplement that geo-location with information about the geo-location as a function of time (e.g., looking at past or predicted geo-locations) and weather, correlated with an identification of a person associated with the asset (e.g., a truck driver). The present technology may be provided using a service provider environment. For example, the service provider environment may provide one or more services to host data published by publishers and/or any other data desired to be hosted. The service provider environment may enable customers or subscribers to initiate services on demand to perform any of a variety of functions on data in messages published by a publisher to transform the messages into transformed messages having increased value to subscribers. When a service is initiated on demand, underlying resources associated with the service may be initiated, such as to accommodate processing, storage, networking or other demands for the service. In one example, the service may be represented by one or more transformation functions to be applied to the messages. The service provider environment may provide a marketplace to enable the purchase, sharing, and hosting of data for subscribers, publishers, etc. The service provider environment may provide a marketplace to enable subscribers and publishers to purchase compute time to enable the execution of functions on the data. The service provider environment may provide a marketplace to enable the purchase of transformation rules and/or functions. FIG.1Aillustrates an example system for transforming message data in a publication-subscription type of system. The system includes a publisher105, a broker115, and one or more subscribers110. The publisher105may publish messages to the broker115and the broker115may transmit the messages to one or more subscribers110, such as those who have subscribed to the messages from the publisher105or the topic120identified in the message published by the publisher105. In some examples, a topic120may identify a subject or group of subjects or may identify a publisher105or group of publishers. Further, a topic may be a specific type of data stream that is not necessarily human readable but is readable by machines which consume published information. The present technology may be utilized in a topic-based system, where messages are published to “topics” or named logical channels or message queues. Subscribers in a topic-based system may receive messages published to the topics to which the subscribers are subscribed, and each subscriber to a topic will receive the same messages. The publisher may be responsible for defining the classes of messages or topics to which subscribers may subscribe. The present technology may be utilized in a content-based system where messages are delivered to a subscriber if the attributes or content of those messages match or are associated with constraints defined by the subscriber. The subscriber may be responsible for classifying the messages. The present technology may be utilized in a hybrid system where publishers post messages to a topic while subscribers register content-based subscriptions to one or more topics. 
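The distinction between topic-based, content-based, and hybrid delivery can be sketched as follows. The subscription tables, predicate, and subscriber names below are hypothetical examples rather than elements of the described system.

```python
from typing import Callable, Dict, List

# Topic-based: subscribers register against named topics (logical channels).
topic_subscriptions: Dict[str, List[str]] = {
    "fleet/locations": ["fleet-manager"],
    "home/coffee": ["coffee-maker"],
}

# Content-based: subscribers register predicates over message attributes.
content_subscriptions: Dict[str, Callable[[dict], bool]] = {
    "dispatcher": lambda payload: payload.get("speed_kph", 0) > 100,
}

def match_subscribers(topic: str, payload: dict) -> List[str]:
    """Hybrid matching: union of topic-based and content-based subscribers."""
    matched = list(topic_subscriptions.get(topic, []))
    matched += [s for s, pred in content_subscriptions.items() if pred(payload)]
    return matched

# A vehicle publishes to "fleet/locations" with a high speed value.
print(match_subscribers("fleet/locations", {"speed_kph": 120}))
# -> ['fleet-manager', 'dispatcher']
```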
The broker115may use a queue for distributing the messages to the subscribers110and ensuring that each subscriber110receives a copy of the message. The order in which individual subscribers110receive the message relative to other subscribers110may be in any suitable order. In other words, the method of queuing transmission of the messages to the subscribers110is not particularly limited. In one example, the subscribers110may be queued in alphabetical order. In another example, the subscribers110may be queued according to an age of the subscription. In another example, the subscribers110may be queued according to geographical region or according to proximity to the broker115. Various other examples are also contemplated which will be apparent to one of skill in the art. In addition, while a queuing type of system is described here, other types of message receiving systems may be used. For example, the messages may be received into a message bus type of system for distribution. When the broker115receives a message for a topic120, the broker115may queue publication of the message to the subscribers110. The messages may include a tuple defining, for example, one or more of message data, sender data (e.g., an identifier, address, name or the like), a broker address, a topic (which may optionally be included in the message data), a transform flag (or flag state) indicating whether to transform the message data, and/or an inline function defining how to transform the data. When a message reaches the front of the queue, the broker115may transform the message using a transformation function (e.g., the inline function, or a transformation function referenced by a transformation rule corresponding to one or more contents of the message). For example, the broker115may replace at least a portion of the message data or combine the message data with secondary data from a secondary data source or manipulate the data as defined by the function, etc. to create a transformed message. The transformed message may then be transmitted to the subscribers110. When a flag is not set and/or the message is not associated with a transformation rule, the message may not be transformed and may be published to the subscribers110as received from the publisher105without transformation. In the example ofFIG.1A, the secondary data source130may be external to the broker115. The publisher105may have or include a message data source135, which may be a primary data source used to provide the message data in the message published to the broker115. In one example, the broker115may send a request over a network, such as a local network, a virtualized network on a hardware substrate network, or the internet, for the secondary data at secondary data source130and may receive the secondary data in response. The broker115system or components may be a server or a service (which may optionally comprise a plurality of servers or other components) hosted in a service provider environment100. The secondary data source130may optionally also be hosted in the service provider environment100separately from the broker115. Alternatively, the secondary data source130may be external to the service provider environment and may optionally be geographically remote from where the service provider environment is executed. In some examples, the message data may be generated by a human operator at the publisher. 
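One plausible shape for the message tuple and the broker-side queue described above is sketched below. The field names, queue behavior, and example values are assumptions made for illustration only.

```python
from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class PublishedMessage:
    """One possible shape for the tuple described above (field names are assumptions)."""
    message_data: dict
    sender: str                            # identifier, address, or name of the publisher
    broker_address: str
    topic: Optional[str] = None            # may instead be carried inside message_data
    transform_flag: bool = False           # whether the broker should transform the data
    inline_function: Optional[str] = None  # e.g., "reverse_geocode(lat, lon)"

class BrokerQueue:
    """FIFO queue of messages awaiting publication to subscribers."""
    def __init__(self) -> None:
        self._pending = deque()  # FIFO of PublishedMessage

    def enqueue(self, msg: PublishedMessage) -> None:
        self._pending.append(msg)

    def next_for_delivery(self) -> Optional[PublishedMessage]:
        # When a message reaches the front of the queue, the broker decides whether
        # to transform it (flag set or a matching rule) before sending it onward.
        return self._pending.popleft() if self._pending else None

queue = BrokerQueue()
queue.enqueue(PublishedMessage({"lat": 40.7}, "truck-7", "broker.example",
                               topic="fleet/locations", transform_flag=True))
print(queue.next_for_delivery())
```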
In other examples, the data source may be a device having a transducer, sensor, or other data generating device for capturing, generating or creating the message data. The message data source135may, in various examples, generate updated message data periodically and trigger the sending of messages in response to updated data. The triggering of the messages may occur upon request from the publisher, upon request from the broker, continuously in a stream, at random intervals, upon occurrence of defined events, etc. The message data may include one or more data types such as images at a publisher location, a geolocation of a moving publisher, environmental temperature at the publisher, internal temperature of the publisher, state of a publisher, or the like. The type of data to be updated in the message may be any of a wide variety of other types of data or combinations of data. In one example (FIG.1B), the message data may be obtained by the publisher105from another source, such as a third party. The message data may optionally be transformed by combining the message data with secondary data from another publisher. The other publisher may publish messages to the broker115and the broker115may execute a transformation127on the message from the publisher105using a transformation function from a transformation function data store160to combine the message data from publisher105with the secondary data from the other publisher, represented as secondary data source130. The transformation function data store160may store multiple available transformation functions which may be called when appropriate for the message, as may be determined using transformation rules. The system ofFIG.1Bmay provide a generic interface for plug-in datasets. In other words, any data source, such as secondary data source130, may be connected to the service provider environment to allow the data to be used in transformations of other data from a publisher105. The secondary data source130may be a static data source, a streaming data source or any other suitable type of data source. The broker115may be configured to initiate a service provider service on-demand, including for example initiating servers, processors, memory, networking or other resources for use in executing transformation functions125(FIG.1A) on datasets. As an example use of the systems ofFIGS.1A-1C, a customer may provide a dictionary dataset as secondary data source130for use in the service provider environment100. The dictionary dataset may be a dictionary for translating commands, scripts, data, etc. from one format, protocol, language or the like to another. Specifically in the context of IoT (Internet of Things) devices, the dictionary dataset may enable IoT devices to communicate with one another even when the IoT devices have disparate manufacturers, communicate using different protocols, etc. The service provider may execute underlying services to host the dictionary dataset and run APIs (Application Programming Interfaces) to access the dataset. Customers may call translation APIs from other services or can run translations when messages are sent from one device to another. For example, a customer with a smart home appliance and a smart phone may wish to control the appliance using the phone. The phone may be the publisher105and may publish a message with control instructions for controlling the appliance. The appliance may be a subscriber110and may subscribe to a topic120of messages from the publisher105for controlling the subscriber110. 
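A minimal sketch of the dictionary-backed translation just described might look as follows; the dictionary contents, format identifiers, and function name are hypothetical placeholders rather than details of an actual dictionary dataset or translation API.

```python
# Hypothetical dictionary dataset mapping phone commands to an appliance's protocol.
COMMAND_DICTIONARY = {
    ("phone/v1", "appliance/v2"): {
        "brew_coffee": {"op": "START_BREW", "strength": "medium"},
        "stop": {"op": "HALT"},
    }
}

def translate_command(command: str, source_fmt: str, target_fmt: str) -> dict:
    """Translate a command from the publisher's format into the subscriber's format."""
    table = COMMAND_DICTIONARY.get((source_fmt, target_fmt), {})
    if command not in table:
        raise KeyError(f"no translation for {command!r} from {source_fmt} to {target_fmt}")
    return table[command]

# The broker could apply this as a transformation function before republishing:
print(translate_command("brew_coffee", "phone/v1", "appliance/v2"))
# -> {'op': 'START_BREW', 'strength': 'medium'}
```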
The message may include the translation API call. When the message arrives at the broker115in the service provider environment100, the broker115may execute the transformation function125based on the translation API call to translate the command from the phone into a format that is usable/understandable by the appliance using the dictionary dataset. The present technology may provide a data marketplace (e.g., Broker Marketplace150) as shown inFIG.1Cand an interface for customers to plug-in new datasets (e.g., secondary data sources130), as well as define user-defined functions (UDFs) or transformation functions (in transformation function data store160) and/or transformation rules (in transformation rule data store155) to start building and executing service provider services running the functions on demand. The technology may provide extensible pay-per-use functionality of the rule sets that contain IoT rules. For example, a customer, a product manufacturer or the like may pay a fee each time a transformation function is executed, or each time a message is transformed (potentially using multiple transformation functions), or based on a size of message, or based on another data source accessed to perform the transformation, etc. The present technology may utilize a client/server or virtualized network architecture that involves client computers connecting through a server with other client computers. Such a configuration may facilitate message brokering, subscription to topics, transformation of published messages with secondary data and so forth. An example of the client/server architecture or virtualized network of the present technology provides a central data center having at least one server provided therein. In one example, the systems ofFIGS.1A-1Cmay be used to facilitate autonomous driving technologies. For example, cars, street lights, traffic cameras and the like may be IoT publishers and/or subscribers. In one example, a camera on a car may be a publisher and a navigation device for the car may be a subscriber, where the publisher publishes images from the car, such as may depict what is seen by the car. A transformation function may analyze the images using machine recognition technology to identify that there is currently a pedestrian in a crosswalk and may correlate this identification with safe maneuvering rules to send back to the navigation device so that the car can navigate safely in the vicinity of the crosswalk and avoid an impact with the pedestrian. As another example, geolocation or other data of vehicles in a fleet may be published to a fleet manager for managing locations or other data from the vehicles. Drivers of the vehicles may carry a mobile device associated with the driver and which is trackable by the fleet manager. The broker115may receive location data from the vehicle and may execute a transformation function to correlate the location with the closest tracked mobile device to determine which mobile device is likely at a particular vehicle. Another secondary data store may be a dataset associating mobile devices with drivers. A transformation function may identify a driver from the mobile device in the same position as the vehicle and transmit to the fleet manager the location of the vehicle together with the name of the driver of the vehicle. As another example, the geolocation of cars in a fleet may be published to a subscriber fleet manager to identify a present location of cars in the fleet. A secondary data store may define a geofence. 
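The fleet example above, in which a vehicle's reported location is correlated with the nearest tracked mobile device to identify the driver, could be sketched as a transformation function along these lines. The device locations, driver names, and distance measure are assumptions introduced for illustration.

```python
import math

# Assumed secondary datasets: tracked mobile devices and a device-to-driver mapping.
DEVICE_LOCATIONS = {"device-17": (40.7130, -74.0061), "device-42": (34.0522, -118.2437)}
DEVICE_TO_DRIVER = {"device-17": "A. Rivera", "device-42": "B. Chen"}

def _distance(a, b):
    # Rough planar distance; adequate for picking the nearest device in a sketch.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def annotate_with_driver(vehicle_msg: dict) -> dict:
    """Transformation: attach the likely driver to a vehicle location message."""
    loc = (vehicle_msg["lat"], vehicle_msg["lon"])
    nearest = min(DEVICE_LOCATIONS, key=lambda d: _distance(loc, DEVICE_LOCATIONS[d]))
    return {**vehicle_msg, "driver": DEVICE_TO_DRIVER[nearest]}

print(annotate_with_driver({"vehicle_id": "truck-7", "lat": 40.7128, "lon": -74.0060}))
# -> {'vehicle_id': 'truck-7', 'lat': 40.7128, 'lon': -74.006, 'driver': 'A. Rivera'}
```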
The transformation function may compare the car location with the geofence to determine whether the car is inside or outside of the geofence. If the car is outside of the geofence, a second transformation function may be executed to provide a notification to the fleet manager or to the driver, or to remotely disable the car or perform any other desired function. As another example, a coffee maker may be a subscriber of time and other instructional data and may subscribe to publications of time and operation instructions. A smart device hub may be a publisher that publishes the time and the operation instructions. The smart device hub may instruct the coffee maker to make coffee at a set time each day. The broker115may use transformation rules to intercept instructions to make the coffee while simply passing through the messages including the time. If a person who has requested the coffee is not at home, then the transformation function may transform the coffee making instruction to delay or cancel making coffee. For example, a geolocation of a mobile device of the person may provide secondary data which is used by the broker115to determine whether to make the coffee. In examples where the coffee making instructions are to begin making coffee at a future time, the coffee maker may rely on the time publications to substantially accurately make the coffee on time, particularly if the coffee maker lacks an internal clock or has an inaccurate clock. Depending on a frequency of publication of the time or a frequency of receipt of the publication, the coffee maker may use delay loops or the like based on an actual time received prior to the instructed coffee making time in order to turn on at approximately the correct time. Referring now toFIG.2, a process for transforming message data is illustrated as a simple decision tree or flow diagram. After a start200of the process, receipt of a message from a publisher may be detected210. If no message has been received (‘no’ at210), then the process may return to the start200and wait for a message to be received. If a message has been received (‘yes’ at210), then a determination may be made at225as to whether the message is associated with a transformation rule. For example, a transformation rule may identify message criteria such as a publisher identity, a specified topic, a check for whether a transformation flag is set, a check for an inline function in the message, or any other criteria useful in determining whether to perform the transformation and/or which transformation(s) to perform. If the message is associated with, corresponds to or at least partially matches the criteria of the transformation rule (‘yes’ at225), a broker or transformation service may execute one or more transformation functions on the message based on the transformation rule or the contents of the message to transform the message at230. After transforming the message, the message may be sent to a destination at235. If the message is not associated with a transformation rule (‘no’ at225), the message may be sent to the destination at235without transformation. After a message is sent to the destination, the process may start again and wait for a message. The destination to which the message is sent may be a subscriber, an interested party, or may be a different destination. For example, the transformed message may be stored in a data store for persistence and may be accessible by interested parties. In another example, the destination may be the publisher. 
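The geofence comparison and the chained second function described above, together with the transform-or-pass-through branch of FIG.2, might be sketched as follows. The rectangular geofence, vehicle data, and notification behavior are illustrative assumptions.

```python
# Assumed rectangular geofence (min_lat, min_lon, max_lat, max_lon).
GEOFENCE = (34.0, -118.5, 34.2, -118.1)

def inside_geofence(lat: float, lon: float) -> bool:
    min_lat, min_lon, max_lat, max_lon = GEOFENCE
    return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

def notify_fleet_manager(vehicle_id: str) -> None:
    # Placeholder for the second, chained transformation or notification function.
    print(f"ALERT: {vehicle_id} has left the geofence")

def geofence_transform(msg: dict) -> dict:
    """First transformation: annotate the message; chain a second function if outside."""
    outside = not inside_geofence(msg["lat"], msg["lon"])
    if outside:
        notify_fleet_manager(msg["vehicle_id"])
    return {**msg, "outside_geofence": outside}

print(geofence_transform({"vehicle_id": "car-3", "lat": 34.30, "lon": -118.3}))
```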
In another example, the destination may be the broker. Specifically, the message may be published from the broker back to the broker to determine whether additional transformations are to be performed, whether the transformed message is associated with a topic, etc., similarly as when an initial or original publication of a message is received. As described with respect toFIGS.1A-1C, the transformation of the message at230may include pulling data from different sources, such as data streams, static data or the like. The broker has the ability to funnel transformed data to different endpoints (i.e., different subscribers). Any number of subscribers may subscribe to any number of topics or publishers. Publishers may be represented by disparate devices providing data and the published data may be sent to other types of devices, services, or the like. Transformation functions may query information, correlate data, combine data, etc. substantially in real time to add value to the data from the publishers. The transformation of the data may occur as the data streams in from a publisher and before transmission to subscribers. For example, using a vehicle mapping example, with connected vehicles and devices, etc. managed by a fleet manager, data may be received from the vehicles. The vehicles may periodically or upon occurrence of events (e.g., movement of the vehicle) transmit latitude and longitude coordinates to the broker. A transformation rule may specify that vehicle data including the coordinates for the identified fleet is to have a transformation function performed thereon. The transformation function may perform a reverse geo-coding function by pulling the location data out of the message and referencing a different dataset for map-matching, finding a closest street to the vehicle, etc. in real time. Geospatial data may be a secondary data store which is accessed and used by the transformation function to transform the publication data to something useful for the fleet manager to track drivers, set up geofencing, route vehicles from a current location to desired location, avoid traffic congestion, etc. The message data may be further transformed through combination or correlation with data from other devices in the field or from third parties, such as to correlate with weather and compare with speed limit data and combine with data from other fleets, etc. The transformation rules may use a Structured Query Language (SQL) syntax to receive streaming data, execute a transformation rule to determine whether to further process and transform the data, and to execute a transformation function based on values of the data (e.g., JSON (JavaScript Object Notation) values). IoT devices may be publishers constantly streaming messages or publishing data. The data may include a suitable type or class of data such as may be generated by IoT devices or sensors, for example, and may optionally be structured for particular use cases. If a first customer defines a transformation rule to identify geospatial streaming data, such a rule may be used to filter geospatial data for performing a transformation function. A second customer may create a second rule or extend the first rule, such as to transform the streaming data by integrating weather data or any other type of data from a secondary data source. The service provider may provide one service for each customer (i.e., one-to-one) or one service for all customers (i.e., one-to-many). 
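A simplified illustration of an SQL-styled transformation rule evaluated against streaming JSON values is shown below. The rule grammar, topic pattern, and field names are assumptions and do not reflect the actual syntax of any particular service; the WHERE clause is evaluated here by a hand-written predicate rather than a real SQL engine.

```python
import json

# Illustrative rule, loosely SQL-shaped: select geospatial fields from a topic
# and gate the transformation on a predicate over the JSON payload.
RULE_SQL = "SELECT lat, lon FROM 'fleet/+/location' WHERE speed_kph > 80"

def rule_applies(payload_json: str) -> bool:
    """Small stand-in for evaluating the WHERE clause against JSON values."""
    payload = json.loads(payload_json)
    return payload.get("speed_kph", 0) > 80  # mirrors the WHERE clause above

def project(payload_json: str) -> dict:
    """Mirror the SELECT list: keep only the geospatial fields."""
    payload = json.loads(payload_json)
    return {k: payload[k] for k in ("lat", "lon") if k in payload}

streamed = '{"lat": 47.61, "lon": -122.33, "speed_kph": 95}'
if rule_applies(streamed):
    print(project(streamed))  # {'lat': 47.61, 'lon': -122.33}
```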
The service provider may integrate any query desired by a customer as the transformation rule. If data is received which meets the transformation rule criteria, a transformation function may be executed to query data out of a different data source. As an example, a vehicle sending a message may include a computer. The vehicle computer may have no relationship to the driver. Further, the vehicle may not have a relationship to the vehicle computer carried by the vehicle. A company that owns the vehicle can use data to identify the driver by querying a secondary data store to determine who the driver is. A third data store may be accessed to identify which vehicle corresponds to the vehicle computer. A transformation function may put data from these different sources together and output the result to another data store. A second transformation rule may collect temperature, speed, weather or other data and further combine this with the result of the previous function. Speed and temperature data may be included in a message payload from the vehicle and the weather data may be retrieved separately from an external or third-party data store. The messages may include inline functions, such as to retrieve weather data. Alternatively, receipt of the message may trigger execution of the function based on the transformation rules. For example, a customer may define a transformation rule which specifies message contents of a web service call (a message) published from the publisher and a transformation function. The transformation function may define a location of a resource (e.g., secondary data store), an expected return model or format for the data from the resource, and one or more processes to perform when the data is obtained, such as to combine the data from the resource with the message data. The secondary data from the resource may be received at the broker as a JSON response. More complex computational tasks may also be performed using the transformation function. For example, an inline function called by the message may result in data being returned from a secondary data source, which may trigger an additional conditional function (e.g., if secondary data matches specified criteria then perform another function). In other words, a message may trigger an expandable transformation rule or transformation function which expands to any number of complex transformation rules or functions. In another example, a publisher may publish messages to a topic. The SQL query may be performed by the broker inside the service provider environment. If the message is not associated with a transformation rule, then no transformation functions may be performed. If there is no subscriber for the data being published, then no transformation functions may be performed. In yet another example, the service provider may allow customers to create a virtual subscriber when creating transformation rules or functions that will ‘listen’ for data provided through the service. The transformation rule may be the definition of a topic. When messages are received for the topic, the transformation function may be performed. 
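A declarative sketch of a transformation function of the kind described above, specifying a resource location, an expected return format, a combining step, and a conditional follow-on function, is given below. The endpoint, data shapes, and trigger condition are hypothetical, and the secondary-data fetch is stubbed rather than performing a real network request.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class TransformationFunctionSpec:
    """Where the secondary data lives, its expected shape, and how to combine it."""
    resource_url: str                       # hypothetical secondary data store endpoint
    expected_format: str                    # e.g., "json"
    combine: Callable[[dict, dict], dict]   # message data + secondary data -> transformed data
    followup: Optional[Callable[[dict], dict]] = None  # conditional, chained function

def fetch_secondary(url: str) -> dict:
    # Stub; a real broker would issue a request and parse the JSON response.
    return {"weather": "rain", "temp_c": 9}

def run(spec: TransformationFunctionSpec, message_data: dict) -> dict:
    secondary = fetch_secondary(spec.resource_url)
    result = spec.combine(message_data, secondary)
    # Expandable behaviour: the secondary data may trigger a further function.
    if spec.followup is not None and secondary.get("weather") == "rain":
        result = spec.followup(result)
    return result

spec = TransformationFunctionSpec(
    resource_url="https://example.invalid/weather",   # placeholder, not a real endpoint
    expected_format="json",
    combine=lambda m, s: {**m, **s},
    followup=lambda r: {**r, "advisory": "reduce speed"},
)
print(run(spec, {"vehicle_id": "truck-7", "speed_kph": 88}))
```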
The publish-subscribe message transformation technology using the methods or aspects described may be executed or maintained in a data center or service provider environment for a computing service provider.FIG.3illustrates how components of a data center may function as a computing service300in a service provider environment to provide a platform for computing instances which the present technology may use to execute nodes as described. The computing service300(i.e., the cloud provider or service provider) may be capable of delivery of computing and storage capacity as a service to a community of end recipients. In an example implementation, the computing service may be established for an organization by or on behalf of the organization. That is, the computing service300may offer a “private cloud environment.” In another implementation, the computing service300may support a multi-tenant environment, wherein a plurality of customers operate independently (i.e., a public cloud environment). Generally speaking, the computing service300can provide the following models: Infrastructure as a Service (“IaaS”), Platform as a Service (“PaaS”), and/or Software as a Service (“SaaS”). Other models may also be provided. In some implementations, end users access the computing service300using networked client devices, such as desktop computers, laptops, tablets, smartphones, etc. running web browsers or other lightweight client applications. Those skilled in the art will recognize that the computing service300can be described as a “cloud” environment. The particularly illustrated computing service300may include a plurality of server computers302A-302D. While four server computers are shown, any number may be used, and large centers may include thousands of server computers. The server computers302A-302D may provide computing resources for executing software instances306A-306D. In one implementation, the instances306A-306D may be virtual machines. A virtual machine may be an instance of a software implementation of a machine (i.e., a computer) that executes applications like a physical machine. In the example of virtual machines, each of the servers302A-302D may be configured to execute an instance manager308capable of executing the instances. The instance manager308may be a hypervisor or another type of program configured to enable the execution of multiple instances306on a single server. Additionally, each of the instances306may be configured to execute one or more applications. It should be appreciated that although the implementations disclosed herein are described primarily in the context of virtual machines, other types of instances can be utilized with the concepts and technologies disclosed herein. For instance, the technologies disclosed herein can be utilized with storage resources, data communications resources, and with other types of computing resources. The implementations disclosed herein might also execute all or a portion of an application directly on a computer system without utilizing virtual machine instances. One or more server computers304may be reserved for executing software components for managing the operation of the server computers302and the instances306. For example, the server computer304may execute a management component310. 
A customer may access the management component310to configure various aspects of the operation of the instances306purchased by the customer (i.e., the administrator of a service to be executed using the instances and made available to traffic from client devices). For example, the customer may purchase, rent or lease instances and make changes to the configuration of the instances. The customer may also specify settings regarding how the purchased instances are to be scaled in response to demand. An auto scaling component312may scale the instances306vertically or horizontally based upon rules defined by the customer. In one implementation, the auto scaling component312allows a customer to specify scale-up policies for use in determining when new instances should be instantiated, including what type of instance to instantiate, and scale-down policies for use in determining when existing instances should be terminated. The auto scaling component312may consist of a number of subcomponents executing on different server computers302or other computing devices. The auto scaling component312may monitor available computing resources over an internal management network and modify resources available based on predictions of need as well as based on actual need. A deployment component314may be used to assist customers in the deployment of new instances306of computing resources. The deployment component314may have access to account information associated with the instances, such as who is the owner of the account, credit card information, country of the owner, etc. The deployment component314may receive a configuration from a customer that includes data describing how new instances306should be configured. For example, the configuration may specify one or more applications to be installed in new instances306, provide scripts and/or other types of code to be executed for configuring new instances306, provide cache logic specifying how an application cache should be prepared, and other types of information. The deployment component314may utilize the customer-provided configuration and cache logic to configure, prime, and launch new instances306. The configuration, cache logic, and other information may be specified by a customer using the management component310or by providing this information directly to the deployment component314. Customer account information316may include any desired information associated with a customer of the multi-tenant environment. For example, the customer account information can include a unique identifier for a customer, a customer address, billing information, licensing information, customization parameters for launching instances, scheduling information, auto-scaling parameters, previous IP addresses used to access the account, etc. Information such as the unique identifier, IP addresses used to access the account and so forth may be used in authenticating a user to the service provider environment. The computing service300may be used to host or provide any number of potential services to customers, such as storage, compute, or other services. In one example, a publish-subscribe service350may be provided for managing subscriptions, message receipt, message transformation, message transmission and the like between the server computers302A-302D, or between devices (e.g., multiple of local device360) external to the computing service300or between a server computer302A-302D and a device (e.g., local device360) external to the computing service300as has been described. 
In one example, the publish-subscribe service may be hosted on one or more of the server computers302A-302D rather than being separate from these server computers302A-302D as illustrated. A network330may be utilized to interconnect the server computers302A-302D and the server computer304. The network330may be a local area network (LAN) and may be connected to a Wide Area Network (WAN) so that end users may access the computing service300. It should be appreciated that the network topology illustrated inFIG.3has been simplified and that many more networks and networking devices may be utilized to interconnect the various computing systems disclosed herein. Referring now toFIG.4, a block diagram of a system in a service provider environment for managing publish-subscribe message transformations is illustrated in accordance with an example of the present technology. The system elements may be implemented using one or more computing devices in a service provider environment, such as a broker server400or broker service as an example computing device, as well as client devices460which may be external to the service provider environment, and may be implemented across a network455. The system may include one or more data stores435-450and a number of modules or services415-430,465as part of a publish-subscribe message transformation service for transforming messages published by publishers and received by the broker server400prior to republishing to subscribers. Client device460may represent a plurality of client devices comprising the publisher(s) and subscriber(s), where messages are transmitted over the network455. Computing services offered by a service provider environment, may include a computing device (e.g., a droplet) that executes one or more servers or computing instances on the computing device. One or more servers (e.g. a computing instance operating as a server) may be operated to execute an operating system and implement a communications application which is scalable and asynchronous. A user may create, launch, and terminate servers as desired. The user may have some control over the geographical location of servers or clusters of servers to optimize latency and provide high levels of redundancy. The broker server(s)400may be a virtual computing instance as previously explained, and the virtual computing instance may be implemented using a virtualization computing environment in a service provider environment, which may include a virtual distributed computing system with a virtualization layer executing on a hardware substrate layer. The hardware layer may include a plurality of physical computers, servers or processing nodes. The virtualization layer (e.g., hypervisors and virtualization control plane) may provide platforms on which virtual computing instances may be created. In other words, the virtual computing instances may execute on the hardware layer by using the platform provided by the virtualization layer. This computing service architecture that supports computing instances is illustrated in more detail inFIG.4. The broker server400may be configured to receive messages from a publisher. Upon receipt of a message, the broker server400may queue the message for delivery to a subscriber. The broker server400may maintain a data store of subscriber data435identifying subscribers and a data store of topic data440identifying topics to which a subscriber may subscribe or to which a publisher has published or may publish one or more messages. 
The broker server400may identify a topic in a message, either explicitly identified in the message, implicitly determinable based on the message data, or based on the publisher. If the topic in the message corresponds to a topic in the topic data store440, then subscribers having subscribed to the topic may be queued to receive the message. In one example, the subscriber data store435and the topic data store440may comprise a single data store managing the relationships of subscribers and topics. The broker server400may store or manage publisher data and/or authentication data for authenticating publishers or subscribers as well. In this example, the subscriber data store435may operate as a subscription data store to manage aspects of the subscription in addition to those pertaining specifically to subscribers. The message from a publisher may include a tuple defining message data, a transformation flag indicating whether the message is to be transformed, a publisher identifier, or any other suitable information or information fields. The message analyzer415may determine whether the message is associated with a transformation rule from the transformation rules data store445for transforming the message. The message analyzer415may also determine whether a message matches or is associated with a specific topic in the topic data store440and/or one or more subscriptions in the subscriber data store435. The message transformer420may transform the message into a transformed message based on a result from the message analyzer. For example, the message transformer may use one or more transformation functions in the transformation functions data store450to transform the message according to the function(s). The message transformer may optionally utilize a data retriever430to retrieve secondary data from a secondary data source (130,FIG.1A) when the transformation rule identifies instructions for inclusion of the secondary data in the transformed message. The dispatcher425may transmit or publish the transformed message from the broker server400to the subscriber(s). The system may include a digital marketplace465for receiving a subscription to the topic from the subscriber. The digital marketplace465may facilitate the identification of the transformation rule. For example, the digital marketplace465may provide an interface through which a subscriber may define transformation rules or functions, or select from and purchase pre-defined rules or functions defined by publishers, service providers, third-parties or the like. The interface provided by the digital marketplace465may also be used to create, cancel or otherwise manage subscriptions and payment information. For example, the digital marketplace may enable customers or subscribers to pay for a number of transformation rules executed, retrieval of the secondary data, a number of messages transmitted, or the like as has been described. Client devices460may be available to access and interact with the server400in a computing service provider environment or one or more computing instances or clusters, over a network455. Example client devices460may include, but are not limited to, a desktop computer, a laptop, a tablet, a mobile device, a television, a cell phone, a smart phone, a hand held messaging device, a personal data assistant, an electronic book reader, heads up display (HUD) glasses or any device with a display that may receive and present the message content. 
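The cooperation of the broker-side modules described above (the message analyzer415, message transformer420, dispatcher425, and data retriever430) can be sketched as a simple composition. The class names mirror the module names for readability, while the rule matching, stubbed secondary lookup, and print-based delivery are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Msg:
    topic: str
    data: dict

@dataclass
class Rule:
    topic: str
    secondary_source: str

    def matches(self, msg: Msg) -> bool:
        return msg.topic == self.topic

class MessageAnalyzer:
    """Counterpart of the message analyzer (415): find a rule associated with the message."""
    def __init__(self, rules):
        self.rules = rules

    def find_rule(self, msg: Msg):
        return next((r for r in self.rules if r.matches(msg)), None)

class DataRetriever:
    """Counterpart of the data retriever (430): stubbed secondary-source lookup."""
    def fetch(self, source: str) -> dict:
        return {"retrieved_from": source}

class MessageTransformer:
    """Counterpart of the message transformer (420): merge secondary data into the message."""
    def __init__(self, retriever: DataRetriever):
        self.retriever = retriever

    def apply(self, msg: Msg, rule: Rule) -> Msg:
        msg.data.update(self.retriever.fetch(rule.secondary_source))
        return msg

class Dispatcher:
    """Counterpart of the dispatcher (425): publish the (possibly transformed) message."""
    def publish(self, msg: Msg, subscribers) -> None:
        for s in subscribers:
            print(f"deliver to {s}: {msg.data}")

class BrokerServer:
    def __init__(self, rules, subscriptions):
        self.analyzer = MessageAnalyzer(rules)
        self.transformer = MessageTransformer(DataRetriever())
        self.dispatcher = Dispatcher()
        self.subscriptions = subscriptions

    def handle(self, msg: Msg) -> None:
        rule = self.analyzer.find_rule(msg)
        if rule is not None:
            msg = self.transformer.apply(msg, rule)
        self.dispatcher.publish(msg, self.subscriptions.get(msg.topic, []))

broker = BrokerServer([Rule("fleet/locations", "geo-db")],
                      {"fleet/locations": ["fleet-manager"]})
broker.handle(Msg("fleet/locations", {"lat": 40.7, "lon": -74.0}))
```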
The service provider environment may be implemented across one or more computing device(s) connected via a network455. For example, a computing device may include a data store and various engines and/or modules such as those described above and such modules may be executable by a processor405of the computing device. The system may be implemented as a plurality of computing nodes or computing instances, each of which comprises at least one processor405and a memory410, where the computing nodes are configured to collectively implement the modules, data stores and so forth. The modules that have been described may be stored on, accessed by, accessed through, or executed by a computing device. The computing device may comprise, for example, one or more processors405and one or more memory modules410. The computing device may comprise, for example, a server computer or any other system providing computing capability. Alternatively, a plurality of computing devices may be employed that are arranged, for example, in one or more server banks, blade servers or other arrangements. For example, a plurality of computing devices together may comprise a clustered computing resource, a grid computing resource, and/or any other distributed computing arrangement. Such computing devices may be located in a single installation or may be distributed among many different geographical locations. For purposes of convenience, the computing device is referred to herein in the singular form. Even though the computing device is referred to in the singular form, however, it is understood that a plurality of computing devices may be employed in the various arrangements described above. Various applications and/or other functionality may be executed in the computing device according to various implementations, which applications and/or functionality may be represented at least in part by the modules that have been described. Also, various data may be stored in a data store that is accessible to the computing device. The data store may be representative of a plurality of data stores as may be appreciated. The data stored in the data store, for example, may be associated with the operation of the various modules, applications and/or functional entities described. The components executed on the computing device may include the modules described, as well as various other applications, services, processes, systems, engines or functionality not discussed in detail herein. The client device shown inFIG.4may be representative of a plurality of client devices460that may be coupled to the network455. The client device(s)460may communicate with the computing device over any appropriate network, including an intranet, the Internet, a cellular network, a local area network (LAN), a wide area network (WAN), a wireless data network or a similar network or combination of networks. In one example, the network455may be the communications network of the present technology. Although a specific structure may be described herein that defines server-side roles (e.g., of content delivery service) and client-side roles (e.g., of the content access application), it is understood that various functions may be performed at the server side or the client side. Certain processing modules may be discussed in connection with this technology. In one example configuration, a module may be considered a service with one or more processes executing on a server or other computer hardware. 
Such services may be centrally hosted functionality or a service application that may receive requests and provide output to other services or customer devices. For example, modules providing services may be considered on-demand computing that is hosted in a server, cloud, grid or cluster computing system. An application program interface (API) may be provided for each module to enable a second module to send requests to and receive output from the first module. Such APIs may also allow third parties to interface with the module and make requests and receive output from the modules. FIGS.5-6illustrate flow diagrams of methods according to the present technology. For simplicity of explanation, the method is depicted and described as a series of acts. However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described herein. Furthermore, not all illustrated acts may be required to implement the methods in accordance with the disclosed subject matter. In addition, those skilled in the art will understand and appreciate that the methods could alternatively be represented as a series of interrelated states via a state diagram or events. Additionally, it should be appreciated that the methods disclosed in this specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methods to computing devices. The term article of manufacture, as used herein, is intended to encompass a computer program accessible from any computer-readable device or storage media. Any of a variety of other process implementations which would occur to one of ordinary skill in the art, including but not limited to variations or modifications to the process implementations described herein, are also considered to be within the scope of this disclosure. Referring now toFIG.5, a flow diagram is illustrated for a publish-subscribe messaging method. The method may include receiving510a definition of a transformation rule configured for transforming a message from a publisher. A message may be received520from the publisher at a broker. The message may identify a topic and include message data. A determination530may be made as to whether the message is associated with a transformation rule for transforming the message data of the message. For example, a message associated with predefined criteria defined in the transformation rule may be transformed using a transformation function at the broker. In one example, secondary data may be retrieved540from a secondary data source as defined by the transformation rule. The function may be used to transform550the message by combining the secondary data with the message data to generate a transformed message. The transformed message may be published560to the subscriber subscribing to the topic identified in the message. In one example, retrieving the secondary data from the secondary data source is performed when the topic of the message is associated with the transformation rule. In other words, the transformation rule may define that the transformation function be performed when a message of a particular topic is received. In a more specific example, retrieving the secondary data from the secondary data source is performed when both the topic and the message data of the message are associated with the transformation rule. For example, topics of received messages may be identified. 
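Reading operations520-560as one request-handling routine gives roughly the flow sketched below. This is a simplified, hypothetical rendering in Python; the helper callables and dictionary fields are assumptions made for illustration.

```python
def handle_published_message(message, rules, retrieve_secondary, publish):
    """Hypothetical end-to-end handling of one published message (operations 520-560)."""
    rule = next((r for r in rules if r["topic"] == message["topic"]), None)   # determination 530
    if rule is None:
        publish(message)                            # no rule: forward the message as received
        return
    secondary = retrieve_secondary(rule["secondary_source"])                  # retrieval 540
    transformed = rule["function"](message, secondary)                        # transformation 550
    publish(transformed)                                                      # publication 560
```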
When the topic is associated with the transformation rule, the message may be further analyzed to determine whether content of the message (i.e., in the message data) is also associated with the transformation rule. The transformation function may be performed after determining that both the topic and the message data are associated with the transformation rule. In yet another example, the secondary data may be retrieved when the message is associated with the transformation rule, regardless of whether the topic is associated with the transformation rule. The message may optionally include a flag indicating that a transformation rule is to be applied. The message data may optionally include the transformation function as an inline function within the message. In this example, the transformation rule may be used to identify the presence of the inline function. The transformation rule may define retrieval of the secondary data from the secondary data source identified in the inline function. In other words, the transformation rule may define the secondary data source as defined by the inline function. The publisher and/or the subscriber may be IoT devices. For example, the publisher may be an IoT device that includes a sensor or transducer for generating data which is used as the message data. In another example, the secondary data may include data generated by a sensor. As an implementation example, the publisher may publish location data of a physical asset. The secondary data may come from a weather or temperature monitor. The secondary data source may be a secondary publisher. The transformation rule may define that the transformation function is to be performed when a location of the physical asset changes. The transformation function may combine the location and temperature data, optionally as a function of time, to transform the message to the transformed message before the transformed message is transmitted to a subscriber. The subscriber may be a single subscriber or may include any number of subscribers. The subscriber may be an entity responsible for tracking the physical asset and may make a record of the received messages in order to provide reports, alerts or the like based on the location and temperature data. In one example, the method may include sending the message as received from the publisher when the message is not associated with a transformation rule. For example, a transform flag may not be set in the message, and the transformation rule may test for whether the flag is set. The flag may be set by the publisher. If the flag is not set, the message data may be sent to the subscriber without transformation. In one example, the message is received from the publisher at a service. The service may be a broker service for managing the publication of messages to the subscribers. The service may include a server clock. The service may transform or update the message data to include the current time using the server clock as the data source according to applicable transformation rules or functions. In another example, the secondary data source may be a secondary publisher. The secondary data source may be external to the service. The secondary data source may be external to a service provider environment hosting the service. In some examples, this or other methods described herein may be implemented wholly or partially as computer readable program code executed by a processor and the computer readable code may be embodied on a non-transitory computer usable medium. 
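For the asset-tracking example above, one plausible transformation function simply merges the published location with a temperature reading retrieved from the secondary publisher and stamps the result with a time. The field names are assumptions made for this sketch, not part of the disclosure.

```python
import time

def combine_location_and_temperature(message, secondary):
    """Merge asset location (message data) with temperature (secondary data)."""
    return {
        "topic": message["topic"],
        "data": {
            "asset_id": message["data"]["asset_id"],
            "location": message["data"]["location"],
            "temperature_c": secondary["temperature_c"],
            "observed_at": time.time(),   # combined as a function of time
        },
    }
```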
Referring now toFIG.6, a flow diagram of a method is illustrated for publish-subscribe message transformations. In this example, the publish-subscribe messaging method may include receiving610a definition of a transformation rule for transforming a message received from a publisher. The message may be received620from the publisher at a broker. The message may identify a topic and may include message data. A determination630may be made as to whether the message is associated with a transformation rule for transforming the message. The method may further include transforming640the message as defined by the transformation rule and publishing650the transformed message to a subscriber subscribing to the topic. The method may include receiving the definition of the transformation rule from the subscriber. Alternatively, the method may include receiving a selection of the transformation rule from the subscriber after receiving the definition, such as from another source. A service provider may provide a marketplace for transformation rules and/or functions, in addition to data to be used with the transformation. Thus, the definition may be provided by the service provider. In another example, third parties may provide the definitions from which subscribers may select a desired definition. The transformation rule may include or define the transformation function to be executed on the message. The method may be performed in the marketplace and incur an expense or cost to the subscriber when the function is executed. For example, the cost may depend on the complexity of the function or the quantity of secondary data retrieved, or the number of sources from which secondary data is retrieved, and so forth. In one example, the message data includes geolocation data. The transformation rule may be executed on the message when the geolocation data indicates a geolocation outside of a predefined geofence. In other words, the transformation rule may call a function to be executed when the geolocation data is outside of the predefined geofence as defined in the transformation rule. The message data may include geolocation data and the transformation rule may define inclusion of secondary data to combine with the geolocation data for generating the transformed message. The method may include retrieving secondary data from a secondary data source as defined by the transformation rule when the message is associated with the transformation rule. The secondary data source may include multiple secondary data sources and the method may include retrieving the secondary data from each of the secondary data sources. The transformation function may combine or manipulate the data from these sources to create the transformed message for transmission to the subscriber. Each of the secondary data sources may provide different data than the other secondary data sources. The method may include transforming the message when the transformation rule identifies an inline function in the message to use in transforming the message. The method may include determining that the message data satisfies a predefined condition and transforming the message when the transformation rule identifies a function to execute on the message based on the predefined condition. The message data, or the transformed message data, may include any of a variety of suitable data types. 
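The geofence condition described above can be expressed as a predicate that the transformation rule evaluates before invoking its function. The sketch below assumes a circular geofence and uses the haversine distance; the fence shape and parameter names are illustrative assumptions, not the claimed method.

```python
import math

def outside_geofence(lat, lon, fence_lat, fence_lon, radius_km):
    """True when the reported geolocation falls outside a circular geofence."""
    # Haversine great-circle distance between the point and the fence center.
    to_rad = math.radians
    dlat, dlon = to_rad(lat - fence_lat), to_rad(lon - fence_lon)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(to_rad(fence_lat)) * math.cos(to_rad(lat)) * math.sin(dlon / 2) ** 2)
    distance_km = 2 * 6371.0 * math.asin(math.sqrt(a))
    return distance_km > radius_km

# The rule would call its transformation function only when this predicate is True.
```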
For example, the message data may include any one or more of text (e.g., alphanumeric characters), an image, a video, audio, time, temperature, speed, location, or any of a number of other types not listed but which would be apparent to one of skill in the art and which are considered to be within the scope of this disclosure. In one example, the method may include determining whether a flag is set for the message to be transformed. When the flag is not set, the message may be published as received from the publisher without transforming the message data. The flag may be set or not set, such as by the inclusion of one or more characters or strings that identify a ‘set’ state and/or one or more characters or strings that identify a ‘not set’ state. In one example, presence and absence of a character or string in a flag field of a tuple in the message data may indicate the set or not set states. In one example, when the flag field is empty then the flag is not set. The secondary data source may be a third party data source. For example, the data source may belong to or be managed by the publisher or some other third party. The data source may optionally be external to a service provider environment in which the method is performed. For example, the data source may be a global positioning system (GPS) device, a camera, a thermometer or the like. In one example, the message may be published by transmitting the transformed message to the subscriber via a transmission control protocol (TCP). However, other types of transmission protocols which are not listed here are also contemplated. FIG.7illustrates a computing device710on which services or modules of this technology may execute. A computing device710is illustrated on which a high level example of the technology may be executed. The computing device710may include one or more processors712that are in communication with memory devices720. The computing device710may include a local communication interface718for the components in the computing device. For example, the local communication interface718may be a local data bus and/or any related address or control busses as may be desired. The memory device720may contain modules730that are executable by the processor(s) and data for the modules. A data store722may also be located in the memory device720for storing data related to the modules and other applications along with an operating system that is executable by the processor(s)712. The computing device710may further include or be in communication with a client device, which may include a display device. The client device may be available for an administrator to use in interfacing with the computing device710, such as to review operation of a virtual computing instance, make improvements to machine learning models and so forth. Various applications may be stored in the memory device720and may be executable by the processor(s)712. Components or modules discussed in this description may be implemented in the form of software using high-level programming languages that are compiled, interpreted or executed using a hybrid of these methods. The computing device710may also have access to I/O (input/output) devices714that are usable by the computing devices. An example of an I/O device714is a display screen that is available to display output from the computing devices. Other known I/O devices may be used with the computing device as desired. Networking devices716and similar communication devices may be included in the computing device710. 
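The set/not-set test on the message tuple's flag field can be as simple as checking for a non-empty value; the field name `transform_flag` is an assumption used only for this sketch.

```python
def transform_flag_is_set(message):
    """Treat a missing or empty flag field in the message tuple as 'not set'."""
    return bool(str(message.get("transform_flag", "")).strip())

# A message whose flag is not set would be published to the subscriber unchanged.
```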
The networking devices716may be wired or wireless networking devices716that connect to the internet, a LAN, WAN, or other computing network. The components or modules that are shown as being stored in the memory device720may be executed by the processor712. The term “executable” may mean a program file that is in a form that may be executed by a processor712. For example, a program in a higher level language may be compiled into machine code in a format that may be loaded into a random access portion of the memory device720and executed by the processor712, or source code may be loaded by another executable program and interpreted to generate instructions in a random access portion of the memory to be executed by a processor712. The executable program may be stored in any portion or component of the memory device720. For example, the memory device720may be random access memory (RAM), read only memory (ROM), flash memory, a solid state drive, memory card, a hard drive, optical disk, floppy disk, magnetic tape, or any other memory components. The processor712may represent multiple processors and the memory720may represent multiple memory units that operate in parallel to the processing circuits. This may provide parallel processing channels for the processes and data in the system. The local interface may be used as a network to facilitate communication between any of the multiple processors and multiple memories. The local interface may use additional systems designed for coordinating communication such as load balancing, bulk data transfer, and similar systems. While the flowcharts presented for this technology may imply a specific order of execution, the order of execution may differ from what is illustrated. For example, the order of two or more blocks may be rearranged relative to the order shown. Further, two or more blocks shown in succession may be executed in parallel or with partial parallelization. In some configurations, one or more blocks shown in the flow chart may be omitted or skipped. Any number of counters, state variables, warning semaphores, or messages might be added to the logical flow for purposes of enhanced utility, accounting, performance, measurement, troubleshooting or for similar reasons. Some of the functional units described in this specification have been labeled as modules, in order to more particularly emphasize their implementation independence. For example, a module may be implemented as a hardware circuit comprising custom VLSI circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices or the like. Modules may also be implemented in software for execution by various types of processors. An identified module of executable code may, for instance, comprise one or more blocks of computer instructions, which may be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which comprise the module and achieve the stated purpose for the module when joined logically together. Indeed, a module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. 
Similarly, operational data may be identified and illustrated herein within modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices. The modules may be passive or active, including agents operable to perform desired functions. The technology described here may also be stored on a computer readable storage medium that includes volatile and non-volatile, removable and non-removable media implemented with any technology for the storage of information such as computer readable instructions, data structures, program modules, or other data. Computer readable storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tapes, magnetic disk storage or other magnetic storage devices, or any other computer storage medium which may be used to store the desired information and described technology. The computer readable storage medium may, for example, be in the form of a non-transitory computer readable storage medium. As used herein, the terms “medium” and “media” may be interchangeable with no intended distinction of singular or plural application unless otherwise explicitly stated. Thus, the terms “medium” and “media” may each connote singular and plural application. The devices described herein may also contain communication connections or networking apparatus and networking connections that allow the devices to communicate with other devices. Communication connections are an example of communication media. Communication media typically embodies computer readable instructions, data structures, program modules and other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. A “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency, infrared, and other wireless media. The term computer readable media as used herein includes communication media. It is noted that any of the distributed system implementations described above, or any of their components, may be implemented as one or more web services. In some implementations, a web service may be implemented by a software and/or hardware system designed to support interoperable machine-to-machine interaction over a network. A web service may have an interface described in a machine-processable format, such as the Web Services Description Language (WSDL). Other systems may interact with the web service in a manner prescribed by the description of the web service's interface. For example, the web service may define various operations that other systems may invoke, and may define a particular application programming interface (API) to which other systems may be expected to conform when requesting the various operations. In various implementations, a web service may be requested or invoked through the use of a message that includes parameters and/or data associated with the web services request. 
Such a message may be formatted according to a particular markup language such as Extensible Markup Language (XML), and/or may be encapsulated using a protocol such as Simple Object Access Protocol (SOAP). To perform a web services request, a web services client may assemble a message including the request and convey the message to an addressable endpoint (e.g., a Uniform Resource Locator (URL)) corresponding to the web service, using an Internet-based application layer transfer protocol such as Hypertext Transfer Protocol (HTTP). In some implementations, web services may be implemented using Representational State Transfer (“RESTful”) techniques rather than message-based techniques. For example, a web service implemented according to a RESTful technique may be invoked through parameters included within an HTTP method such as PUT, GET, or DELETE, rather than encapsulated within a SOAP message. Reference was made to the examples illustrated in the drawings, and specific language was used herein to describe the same. It will nevertheless be understood that no limitation of the scope of the technology is thereby intended. Alterations and further modifications of the features illustrated herein, and additional applications of the examples as illustrated herein, which would occur to one skilled in the relevant art and having possession of this disclosure, are to be considered within the scope of the description. Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more examples. In the preceding description, numerous specific details were provided, such as examples of various configurations to provide a thorough understanding of examples of the described technology. One skilled in the relevant art will recognize, however, that the technology may be practiced without one or more of the specific details, or with other methods, components, devices, etc. In other instances, well-known structures or operations are not shown or described in detail to avoid obscuring aspects of the technology. Although the subject matter has been described in language specific to structural features and/or operations, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features and operations described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. Numerous modifications and alternative arrangements may be devised without departing from the spirit and scope of the described technology.
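As a minimal illustration of the RESTful style mentioned above, the request parameters ride in the HTTP method and URL rather than inside a SOAP envelope. The endpoint below is purely hypothetical and is shown only to contrast the two invocation styles.

```python
import urllib.request

# Hypothetical RESTful invocation: subscribe a client to a topic with an HTTP PUT,
# instead of conveying a SOAP-encapsulated message to the endpoint.
request = urllib.request.Request(
    "https://example.com/topics/asset-location/subscriptions/sub-1",
    method="PUT",
)
# urllib.request.urlopen(request) would convey the request to the addressable endpoint (URL).
```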
68,594
11863510
DETAILED DESCRIPTION FIGS.1through6, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged system or device. Embodiments of the present disclosure recognize that users of virtual reality headsets may be drawn into a virtual reality experience to the exclusion of the real world around them. Accordingly, if a person who is not experiencing virtual reality wishes to communicate with a person that is experiencing virtual reality, they may be forced to physically interrupt the virtual reality experience by tapping the person who is experiencing virtual reality, which may startle or upset the person. Additionally, the person experiencing virtual reality may not see or hear important messages in the environment, such as notifications delivered over a public address system, or signs, lights, or the like that are visible in the environment for notification purposes. Accordingly, the present disclosure includes systems and methods to allow a person using a virtual reality headset to receive notifications from other people around them and to receive notifications from environmental sources. In various embodiments, these notifications are delivered through short range communication methods such as Bluetooth®, Wi-Fi®, near field communications (NFC), or the like. The systems and methods of the present disclosure may also be used with electronic devices that do not provide virtual reality experiences. For example, a mobile device may provide an augmented reality (AR) experience by overlaying computer generated graphics on an image of a camera that is displayed on a mobile device display. Although less immersive than a virtual reality experience, an AR experience still demands a user's full focus on the mobile device display. Additionally, the present disclosure recognizes that users of mobile devices often become very focused on interacting with their mobile devices in other use cases, for example by watching media content, messaging contacts, reading web page content, or the like. In some cases, these users become so focused on their mobile devices that they do not see or hear what is happening around them. Accordingly, systems and methods of the present disclosure may be useful to provide notifications to users of mobile devices even though they are not experiencing virtual reality. FIG.1illustrates an example computing system100according to various embodiments of this disclosure. The embodiment of the computing system100shown inFIG.1is for illustration only. Other embodiments of the computing system100could be used without departing from the scope of this disclosure. As shown inFIG.1, the system100includes a network102, which facilitates communication between various components in the system100. For example, the network102may communicate Internet Protocol (IP) packets, frame relay frames, Asynchronous Transfer Mode (ATM) cells, or other information between network addresses. The network102may include one or more local area networks (LANs), metropolitan area networks (MANs), wide area networks (WANs), all or a portion of a global network such as the Internet, or any other communication system or systems at one or more locations. 
The network102may facilitate communications between at least one notification server104and personal electronic devices such as mobile device106or virtual reality devices108. Each notification server104includes any suitable computing hardware or processor that can provide computing services for one or more personal electronic devices. Each notification server104could, for example, include one or more processors, one or more memories storing instructions and data, and one or more network interfaces facilitating communication over the network102. The mobile device106may be any suitable computing or processing device that interacts with at least one server or other computing device(s) over the network102. The mobile device106could, for example, be a smart watch, fitness tracker, or other wearable device, a personal digital assistant (PDA), a laptop computer, or a tablet computer. The virtual reality device108may be any suitable computing or processing device that interacts with at least one server or other computing device(s) over the network102, and is able to provide a virtual reality experience. The virtual reality device108could, for example, be a mobile device such as a mobile phone used in a headset, an accessory device connected to another computing device, a virtual reality headset, or the like. Any other or additional electronic devices could be used in the computing system100. In this example, the mobile device106and virtual reality devices108communicate directly with each other. For example, the mobile device106and virtual reality devices108communicate via Bluetooth®, Wi-Fi Direct®, NFC, or the like. In other embodiments, the mobile device106and virtual reality devices108communicate indirectly with the network102. For example, the mobile device106and virtual reality devices108communicate via one or more base stations110, such as IEEE 802.11 wireless access points, or via cellular base stations or eNodeBs. Note that these examples are for illustration only and that the mobile device106and virtual reality devices108could communicate directly or indirectly with each other or indirectly with the network102via any suitable intermediate device(s) or network(s). As described in more detail below, the notification server104may provide notifications through a notification subscription service to the virtual reality devices108. In some embodiments, a user of the virtual reality device108subscribes to a notification service for the surrounding area112, and the notification server104pushes notifications to the virtual reality devices108. In other embodiments, the virtual reality device108automatically subscribes to a notification service when it arrives in the area112, and the notification server104pushes notifications to the virtual reality devices108. AlthoughFIG.1illustrates one example of a computing system100, various changes may be made toFIG.1. For example, the system100could include any number of each component in any suitable arrangement. In general, computing and communication systems come in a wide variety of configurations, andFIG.1does not limit the scope of this disclosure to any particular configuration. WhileFIG.1illustrates one operational environment in which various features disclosed in this patent document can be used, these features could be used in any other suitable system. FIGS.2and3illustrate example devices in a computing system according to this disclosure. In particular,FIG.2illustrates an example computer system200, andFIG.3illustrates an example electronic device300. 
For example, the computer system200could represent the notification server104inFIG.1, and the electronic device300could represent the virtual reality device108and/or the mobile device106inFIG.1. In some embodiments, the electronic device300could comprise a mobile phone combined with a virtual reality accessory, such as a headset. As shown inFIG.2, the computer system200includes a bus system205, which supports communication between at least one processor210, at least one storage device215, at least one communication interface220, at least one input/output (I/O) unit225, and a notification unit240. The processor210executes instructions that may be loaded into a memory230. The processor210may include any suitable number(s) and type(s) of processors or other devices in any suitable arrangement. Example types of processors210include microprocessors, microcontrollers, digital signal processors, field programmable gate arrays, application specific integrated circuits, and discrete circuitry. The memory230and a persistent storage235are examples of storage devices215, which represent any structure(s) capable of storing and facilitating retrieval of information (such as data, program code, and/or other suitable information on a temporary or permanent basis). The memory230may represent a random access memory or any other suitable volatile or non-volatile storage device(s). The persistent storage235may contain one or more components or devices supporting longer-term storage of data, such as a read only memory, hard drive, Flash memory, or optical disc. The communication interface220supports communications with other systems or devices. For example, the communication interface220could include a network interface card or a wireless transceiver facilitating communications over the network102, which may be, for example, a LAN that covers the area112ofFIG.1. The communication interface220may support communications through any suitable physical or wireless communication link(s). The I/O unit225allows for input and output of data. For example, the I/O unit225may provide a connection for user input through a keyboard, mouse, keypad, touchscreen, or other suitable input device. The I/O unit225may also send output to a display, printer, or other suitable output device. The notification unit240handles subscription notification services for electronic devices such as the virtual reality devices108and the mobile device106, as will be described in more detail below. The notification unit240receives requests to subscribe virtual reality devices108to notification services and handles the setup of such notification services with the requesting devices. The notification unit240additionally determines when a notification should be sent to a subscribed device, and handles pushing notifications to subscribed devices, for example via the communication interface220. The notification unit240may operate a push notification service that facilitates push notification delivery to subscribed devices. In some embodiments, the notification unit240performs the same functions for electronic device300. Note that whileFIG.2is described as representing the notification server104ofFIG.1, the same or similar structure could be used in the mobile device106or any other electronic device in system100. As shown inFIG.3, the electronic device300includes a communication unit310that may include, for example, a radio frequency (RF) transceiver, a Bluetooth® transceiver, or a Wi-Fi® transceiver. 
The electronic device300also includes a speaker330, a processor340, an input/output (I/O) interface (IF)345, an input interface350, a display355, a memory360, and sensors365. The memory360includes an operating system (OS) program361and one or more applications362. In some embodiments, the electronic device300also functions as a mobile phone. The communication unit310may receive an incoming RF signal such as a Bluetooth® or Wi-Fi® signal. The communication unit310may down-convert the incoming RF signal to generate an intermediate frequency (IF) or baseband signal, then generate a processed baseband signal by filtering, decoding, and/or digitizing the baseband or IF signal. The communication unit310transmits the processed baseband signal to the processor340for further processing (such as for web browsing data, online gameplay data, notification data, or other message data). The communication unit310also receives analog or digital voice data or other outgoing baseband data (such as web data, e-mail, or interactive video game data) from the processor340. The communication unit310encodes, multiplexes, and/or digitizes the outgoing baseband data to generate a processed baseband or IF signal. In the case that the communication unit310is an RF transceiver, the communication unit310up-converts the baseband or IF signal to an RF signal that is transmitted via an antenna. The processor340can include one or more processors or other processing devices and execute the OS361stored in the memory360in order to control the overall operation of the electronic device300. For example, the processor340could control the reception of forward channel signals and the transmission of reverse channel signals by the communication unit310in accordance with well-known principles. The processor340could also receive analog or digital voice data from the microphone320, and output analog or digital audio to the speaker330. In some embodiments, the processor340includes at least one microprocessor or microcontroller. The processor340is also capable of executing other processes and programs resident in the memory360. The processor340can move data into or out of the memory360as required by an executing process. In some embodiments, the processor340is configured to execute the applications362based on the OS361or in response to signals received from external devices or an operator. The processor340is also coupled to the I/O interface345, which provides the electronic device300with the ability to connect to other devices such as laptop computers and handheld computers. The I/O interface345is the communication path between these accessories and the processor340. The processor340is also coupled to the input interface350and the display355. The operator of the electronic device300can use the input interface350to enter data into the electronic device300. The display355may be a liquid crystal display or other display capable of rendering a virtual reality environment, including rendering text and/or graphics in the virtual reality environment, such as notifications and messages. The memory360is coupled to the processor340. Part of the memory360could include a random access memory (RAM), and another part of the memory360could include a Flash memory or other read-only memory (ROM). The sensors365detect information external to the electronic device300and relay it to the processor340for further processing. 
For example, the sensors365may detect patterns of light that correspond to emergency lighting, patterns of sound that correspond to sirens or other emergency notifications, or the like. AlthoughFIGS.2and3illustrate examples of devices in a computing system, various changes may be made toFIGS.2and3. For example, various components inFIGS.2and3could be combined, further subdivided, or omitted and additional components could be added according to particular needs. As a particular example, the processors210and340could be divided into multiple processors, such as one or more central processing units (CPUs) and one or more graphics processing units (GPUs). In addition, as with computing and communication networks, electronic devices and computer systems can come in a wide variety of configurations, andFIGS.2and3do not limit this disclosure to any particular client device or server. FIG.4illustrates an example signal sequence400of communications between a notification server104and a virtual reality device108according to illustrative embodiments of the present disclosure. In this embodiment, the notification server104provides notifications to the virtual reality device108based on a subscription notification service. In some embodiments, the virtual reality device108runs an application that facilitates subscribing to a push notification service, which may be called a subscription notification application. Upon entering the area112ofFIG.1the application may facilitate communication between a notification server104operating in the area112, either directly via base station110or indirectly via network102and base station110, and the virtual reality device108as illustrated in signal diagram400. In other embodiments, the functions of the application may be built into the operating system361of the virtual reality device108. In this example, the area112corresponds to a location such as a cafe, an airport, an airplane cabin, a train, a bus, a ferry, a waiting area, or another similar public location where a user of a virtual reality device108may be stationary and may engage in a virtual reality experience. In some embodiments, the notification server104may be owned and operated by an owner of the location that includes area112, for example an owner of a cafe or an airline that owns an airplane. In other embodiments, the notification server104may be a cloud server that is rented by an owner of the location that includes area112. In a preliminary operation the notification server104and virtual reality device108establish communication by exchanging any appropriate messages. This may occur, for example, when the virtual reality device108enters the area112. In one embodiment, the notification server104may periodically send out discovery information that allows the virtual reality device108to discover and begin communication with the notification server104. Alternatively, the virtual reality device108may send out a discovery request that prompts the notification server104to send discovery information to the virtual reality device108. In some embodiments, the information provided by the notification server104to the virtual reality device108may indicate that the notification server104provides subscription notification services. The messages exchanged at the preliminary operation may include a request from the virtual reality device108for available subscription notification services. 
The messages exchanged at the preliminary operation may also include an indication from the virtual reality device108that the virtual reality device108is running a subscription notification application. When communication between the notification server104and the virtual reality device108has been established, the notification server104detects the application running on the virtual reality device108at operation402, for example based on the messages received in the preliminary operation. In some embodiments, the virtual reality device108transmits a message requesting a list of subscription notification services that are available at the location of area112. In message404, the notification server104transmits to the virtual reality device a list of subscription notification services that are available at the location of area112. For example, if the area112is inside an airport, the available subscription notification services may include flight departure notifications, flight arrival notifications, airport emergency notifications, or the like. In some embodiments, the list of available subscription notification services may include additional information, such as a category identifier (e.g., emergency notifications, convenience notifications, or the like). Further levels of identifier granularity may be provided; for example, flight departure notifications may include sub-categories such as all departures for a certain airline, the departure time of specific flights, life-threatening emergencies, or the like. At operation406, a user of the virtual reality device108may select one or more of the available subscription notification services to subscribe to. In some embodiments, the list of available notification services may be presented on a display such as display355of the virtual reality device108so that the user may view and select from the list. In other embodiments, the virtual reality device108may automatically choose subscriptions without direct user input. For example, the user may pre-configure the virtual reality device108to automatically subscribe to certain categories of subscription notification services (e.g., emergency notifications), or to specific subscription notification services (e.g., flight departure notifications) when they are available. In such a case, the virtual reality device108may automatically select available subscription notification services without disturbing the user. In message408, the virtual reality device108transmits to the notification server104a request to subscribe to the subscription notification services selected in operation406. The notification server104may track subscriptions in various ways. For example, the notification server104may maintain and update a list of virtual reality devices108that are subscribed to each offered subscription notification service, may maintain a separate list of subscriptions for each virtual reality device108in area112, or the like. At operation410, the notification server104detects an event. For example, the notification server104may communicate with another server that tracks flight departure and arrival status, and may monitor for events such as changes in departure and arrival times as well as arrivals and departures that are about to occur. The notification server104then determines whether a detected event constitutes a notification event for any virtual reality devices108. 
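One simple way for the notification server104to track subscriptions, as described above, is a mapping from each offered service to the set of subscribed devices. The class below is an assumed sketch for illustration, not the claimed implementation.

```python
class SubscriptionRegistry:
    """Tracks which devices are subscribed to each offered notification service."""

    def __init__(self, offered_services):
        self.subscribers = {service: set() for service in offered_services}

    def list_services(self):
        return sorted(self.subscribers)

    def subscribe(self, device_id, services):
        for service in services:
            if service in self.subscribers:
                self.subscribers[service].add(device_id)

    def devices_for(self, service):
        return self.subscribers.get(service, set())

registry = SubscriptionRegistry(["flight_departures", "flight_arrivals", "airport_emergencies"])
registry.subscribe("vr-headset-42", ["flight_departures", "airport_emergencies"])
```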
For example, the notification server104compares the event with a list or database of events that are designated as notification events for a particular subscription notification service. In message412, the notification server104responds to the detected notification event by transmitting to the virtual reality device108a notification message. The notification message may contain a subscription service identifier to inform the virtual reality device108to which subscription the notification pertains. The notification message may further contain information describing the event. For example, the notification message may include a tag or code indicating that a flight departure time has changed, that a flight is about to arrive, or the like. In some embodiments, the virtual reality device108may maintain a list of tags or codes that correspond to various pre-configured notification events. In other embodiments, a notification message may include plain text that describes the notification event. At operation414, the virtual reality device108presents a notification corresponding to the notification event on a display355of the virtual reality device108. In some embodiments, the virtual reality device108may use a tag or code included in the message412to look up a notification event in a list or database. The list or database may include a plaintext notification that corresponds to the tag or code, and the virtual reality device108may present this plaintext message on the display355. In other embodiments, the message412contains a custom plaintext message, and the virtual reality device108presents this plaintext message on the display355. In some embodiments, the messages of signal diagram400may be facilitated by a push notification service. It is understood that any suitable method of enabling push notifications may be used in the operations of signal diagram400. In some embodiments, a notification server104is unnecessary to the operation of the subscription notification service. Instead, the subscription notification application running on the virtual reality device108may create its own notifications. For example, the application may search the Internet for information that indicates a notification should be generated (e.g., searching an airline website for flight delay information). The application may use a cellular data service or a Wi-Fi® connection via base station110to perform this search. Alternatively, the application may access sensors365of the virtual reality device108and determine whether to display a notification message based on sensed information (e.g., sensing an emergency siren via an external microphone on the virtual reality device108, sensing a “fasten seatbelt” tone on an airplane via an external microphone on the virtual reality device108, sensing a “fasten seatbelt” visual indicator on an airplane via an external camera on the virtual reality device108, etc.). AlthoughFIG.4illustrates an example of communications between a notification server104and a virtual reality device108, various changes could be made toFIG.4. For example, the virtual reality device108could be a mobile device. Additionally, some communications shown inFIG.4could be excluded, or additional communications could be included. FIG.5illustrates an example signal sequence500of communications between a mobile device106and a virtual reality device108according to illustrative embodiments of the present disclosure. 
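On the device side, a notification that carries a tag or code can be resolved against a small lookup table before being displayed, as described for operation414. The table contents and message fields below are illustrative assumptions.

```python
# Hypothetical tag-to-text table maintained by the virtual reality device.
NOTIFICATION_TEXT = {
    "DEPARTURE_TIME_CHANGED": "Your flight's departure time has changed.",
    "FLIGHT_ARRIVING": "A flight you are tracking is about to arrive.",
}

def render_notification(notification):
    """Return the plaintext to display for a received notification message."""
    if "text" in notification:                      # custom plaintext message
        return notification["text"]
    return NOTIFICATION_TEXT.get(notification.get("tag"), "Notification received.")

print(render_notification({"service": "flight_departures", "tag": "DEPARTURE_TIME_CHANGED"}))
```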
In this embodiment, the mobile device106and the virtual reality device108communicate via a peer-to-peer (P2P) connection. At operation502, the mobile device106discovers virtual reality devices108that are within discovery range. The discovery range may be, for example, the range of a wireless communications protocol used to discover the virtual reality devices108, such as Wi-Fi Direct®, Bluetooth®, or the like. In some embodiments, the area112ofFIG.1may correspond to the discovery range of the mobile device106. In this example, at least one virtual reality device108has previously entered a discoverable state that allows discovery by mobile devices106, for example by receiving a discovery request and returning a discovery response with identifying information for the virtual reality device108. If the virtual reality device108has not entered a discoverable state, the mobile device106will not be able to discover it. In response to discovering one or more virtual reality devices108, the mobile device106may display a list of discovered devices, for example on a display of the mobile device106. A user of the mobile device106may choose to pair with one or more virtual reality devices108on the list of discovered devices in order to enable P2P communication with the selected one or more virtual reality devices108. However, in cases where more than one virtual reality device108is discovered, the user may not be certain which virtual reality device108belongs to the person that they wish to initiate P2P communications with. In such a case, at operation504, a user of the mobile device106selects a virtual reality device108from the list of discovered devices and requests that it enter an indication mode to assist the user of the mobile device106in determining which virtual reality device108to pair with. In some embodiments, a user of the mobile device106selects a virtual reality device108for this operation by highlighting the virtual reality device108in a user interface of the mobile device106for a predetermined period of time (also known as hovering). In message506, the mobile device106transmits a message to the selected virtual reality device108requesting that the virtual reality device108enter an indication mode. At operation508, the virtual reality device108receives the message506and enters an indication mode. In this mode, for example, the virtual reality device108turns on or blinks a light emitting diode (LED) on the exterior of the device. The user of the mobile device106may look around the area112to visually determine which virtual reality device108they have selected based on the illuminated LED. In other embodiments, the indication mode may entail a different type of indication, such as a noise emitted by speakers on the virtual reality device108. The user of the mobile device106may, in some cases, repeat operation504and sequentially select each virtual reality device108on the list of discovered devices in this manner until the user of the mobile device106determines which virtual reality device108belongs to the person that they wish to initiate P2P communication with. In other embodiments, the user of the mobile device106may initiate a determination mode that automatically requests that each virtual reality device108on the list sequentially enter indication mode for a certain period of time. 
In some embodiments, the user of the mobile device106may repeat operation504sequentially by dragging a pointer across a list of virtual reality devices108in a user interface of the mobile device106to sequentially highlight each virtual reality device108for a period of time. In this manner, the user of the mobile device106may sequentially select each virtual reality device108while looking around the area112at LEDs on the virtual reality devices108rather than looking at the mobile device106. At operation510, the user of the mobile device106, after determining which virtual reality device108they wish to communicate with, initiates communication with the selected virtual reality device108. In some embodiments, for example when the mobile device106and the virtual reality device108communicate via Bluetooth®, the mobile device106and the virtual reality device108exchange pairing messages512to allow P2P communications. In other embodiments, pairing may not be necessary to establish a P2P connection, and pairing messages512are not exchanged. After the devices are paired, the user of the mobile device106sends a P2P message to the virtual reality device108. For example, the user of the mobile device106may select from a number of predetermined messages available through a messaging application (e.g., “Do you have time to talk?” or “Time to go”), or the user of the mobile device106may enter a custom message intended for the user of the virtual reality device108. At operation516, the virtual reality device108displays the received P2P messages, for example on a display355of the virtual reality device108. In this way, the user of the mobile device106may communicate with the user of the virtual reality device108without disturbing the user of the virtual reality device108during a virtual reality experience. In some embodiments, the virtual reality device108may be in a do-not-disturb (DND) mode. In this case, the virtual reality device108does not display the received message to the user. In some embodiments, the message is stored for later display after the DND mode is deactivated. The user of the virtual reality device108may return a message518to the mobile device106. For example, the user of the virtual reality device108may select from a number of predetermined messages (e.g., “No,” or “Okay”), or may enter a custom message via a text entry interface of the virtual reality device108. If the virtual reality device108is in a DND mode, the virtual reality device108may automatically return a message518indicating to the user of the mobile device106that the virtual reality device108is in a DND mode, and that the message514was received but not displayed. AlthoughFIG.5illustrates an example of communications between a mobile device106and a virtual reality device108, various changes could be made toFIG.5. For example, the virtual reality device108could be a second mobile device106. Additionally, some communications shown inFIG.5could be excluded, or additional communications could be included. FIG.6illustrates a flow diagram of an example method600for receiving notification messages according to illustrative embodiments of the present disclosure. The method600may, for example, be performed by a virtual reality device108, such as the virtual reality device108ofFIG.1. Beginning at block602, the virtual reality device108launches a notification application. 
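The do-not-disturb handling described above reduces to a small branch when a P2P message arrives: display it, or store it and auto-reply. The sketch below is a hypothetical illustration with assumed callable names, not the disclosed implementation.

```python
class P2PMessageHandler:
    """Illustrative handling of an incoming P2P message on the VR device."""

    def __init__(self, display, send_reply):
        self.display = display          # callable that renders text in the VR scene
        self.send_reply = send_reply    # callable that returns a message to the sender
        self.dnd_enabled = False
        self.deferred_messages = []

    def on_message(self, text):
        if self.dnd_enabled:
            self.deferred_messages.append(text)   # show later, once DND is deactivated
            self.send_reply("Do-not-disturb is on; your message was received but not displayed.")
        else:
            self.display(text)
```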
The notification application facilitates communication with, for example, a notification server104to subscribe to subscription notification services, or with a mobile device106to receive incoming P2P messages. In some embodiments, the functions of the notification application are built into the operating system361of the virtual reality device108, and a separate application is not launched. At block604, the virtual reality device108receives settings from a user for the notification application. The settings include, for example, whether or not to make the virtual reality device108discoverable for P2P communications. The settings further include, for example, whether to request information on subscription notification services upon entering an area112and communicating with a notification server104, and whether to automatically subscribe to available subscription notification services. Subscription notification settings may further include categories (or category identifiers) or sub-categories of notifications to which the user wishes to automatically subscribe. For example, categories may include emergency notifications, convenience notifications, or the like, and sub-categories may include all flight departures for a certain airline, departure time of specific flights, life-threatening emergencies, or the like. At decision block606, if the settings of the application indicate that subscription notifications for area112are desired, the method600proceeds to block608. If the settings of the application indicate that subscription notifications are not desired, the method600proceeds to decision block618, described further below. At block608, the virtual reality device108establishes a connection with a notification server104, for example as described above with respect toFIG.4. In some embodiments, the virtual reality device108may perform the functions of the notification server, for example by searching the Internet for information relevant to potential notification events (e.g., flight departure times) and determining that notification events occur, as described above with respect toFIG.4. At block610, the virtual reality device108requests a list of available subscription notification services from the notification server104. At block612, the virtual reality device108receives a list of available subscription notification services from the notification server104. The virtual reality device108may then present the received list of subscription notification services to a user of the virtual reality device108, for example via a user interface displayed on the display355of the virtual reality device108. At block614, the virtual reality device108receives, from the user, a selection of one or more subscription notification services from the received list of available subscription notification services. The selection indicates which subscription notification services the user wishes to subscribe to. In some embodiments, the virtual reality device108may make this selection automatically based on the settings received at block604. For example, the settings may indicate that the virtual reality device108should automatically subscribe to any subscription notification services that are categorized as emergency notification services. In such cases, the virtual reality device108may not present a choice to the user to subscribe to such subscription notification, but may instead subscribe automatically. 
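As one way to picture the automatic-subscription behavior just described, the following Python sketch filters an advertised list of subscription notification services against user settings. It is a simplified illustration under assumed data shapes (service_id, category), not an implementation taken from the disclosure.

from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class NotificationSettings:
    discoverable: bool = True                       # allow P2P discovery
    request_subscriptions: bool = True              # ask the notification server for services
    auto_subscribe_categories: Set[str] = field(default_factory=lambda: {"emergency"})

@dataclass
class SubscriptionService:
    service_id: str
    category: str       # e.g., "emergency" or "convenience"
    description: str

def split_services(settings: NotificationSettings, available: List[SubscriptionService]):
    # Services in an auto-subscribe category are selected without prompting the user;
    # the rest are presented for manual selection (block 614).
    automatic = [s for s in available if s.category in settings.auto_subscribe_categories]
    manual = [s for s in available if s.category not in settings.auto_subscribe_categories]
    return automatic, manual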
At block616, the virtual reality device108transmits to the notification server104the selection of subscription notification services made in block614. The method600then moves to decision block618, which may also be reached if, at decision block606, the virtual reality device108determines that no subscriptions to subscription notification services are to be made. At decision block618, if the settings of the application indicate that the virtual reality device108is to be discoverable for P2P communications, the method600proceeds to decision block620. If at both decision blocks606and618the answer is no, then the method ends. If the answer to decision block606is yes and the answer to decision block618is no, then the method600proceeds to decision block626. At decision block620, the virtual reality device108determines whether it has received a request from a potential P2P communication partner device to enter an indication mode. For simplicity, a mobile device106will be used as an example of the partner device that requests the indication mode. If so, the method600proceeds to block622. If not, the method600proceeds to block624. At block622, the virtual reality device108enters an indication mode and provides an indication for a predetermined amount of time. For example, the virtual reality device108illuminates or blinks an external LED for a period of 3 seconds. In another example, the virtual reality device108emits a predetermined sound pattern from an external speaker for 3 seconds. At block624, the virtual reality device108pairs with a mobile device106. In some embodiments, the virtual reality device108is configured to allow for P2P communication without pairing, in which case block624is not performed. At decision block626, the virtual reality device108monitors for notifications from one or both of the notification server104and the mobile device106. In some embodiments, the virtual reality device108also monitors for predetermined environmental conditions (e.g., emergency lights or sirens) with sensors365. If no notification is received, the virtual reality device108returns to decision block626and continues to monitor for notifications. When a notification is received or an environmental signal is sensed, the method600proceeds to decision block628. At decision block628, the virtual reality device108determines whether it is set in a do not disturb (DND) mode. If not, the method600proceeds to block630. If a DND mode is set, the method600proceeds instead to decision block636. At block630, the virtual reality device108displays a notification message based on the received notification or sensed environmental condition. The notification message may be displayed on the display355of the virtual reality device108, which may include displaying the notification message in a virtual reality environment. In some embodiments, the received notification may be a plaintext message and the virtual reality device108may display the received message unaltered. In other embodiments, the received notification (or sensed environmental condition) may be encoded to represent one of a set of predetermined notification messages that are stored by the virtual reality device108, and the corresponding notification message may be displayed by the virtual reality device108. At decision block632, the virtual reality device108determines whether the user wishes to respond to the notification.
For example, the virtual reality device108may prompt the user with the option to respond with one of a number of predetermined responses (e.g., “No,” or “Okay”), or the option to respond with a custom message. In an embodiment where the received message is a subscription notification or a sensed environmental condition, no response may be possible. If the user does not wish to respond, or a response is not possible, the method600ends. If the virtual reality device108receives a response from the user, the method600proceeds to block634. At block634, the virtual reality device108transmits the response to the mobile device106, and the method600ends. Returning to decision block628, if the virtual reality device108is set to a DND mode, the method600proceeds to decision block636. At decision block636, the virtual reality device108checks whether the received notification or sensed environmental condition is identified as an emergency message or condition. For example, a flag or other identifier in the message may indicate an emergency message, or an identifier associated with the sensed environmental condition may indicate an emergency condition. If the message or condition is identified as an emergency message or condition, the method600proceeds to block630and the message is displayed, as described above. If the message is not identified as an emergency message, the method600proceeds to block638. At block638, the virtual reality device108automatically transmits to the mobile device106a message indicating that the virtual reality device108is in a DND mode and the received notification will not be displayed. In some embodiments, the virtual reality device108does not transmit this message, and simply does not respond to the received message. At block640, the virtual reality device108stores the received notification, for example in a memory360, for later display. At decision block642, the virtual reality device108monitors for the end of the DND mode. If the DND mode has not ended, the method600repeats block642. It is understood that the virtual reality device108may also monitor for additional received messages during this operation or any other operation of the method600. If the DND mode has ended, the method600proceeds to block630and displays the notifications that were stored in the memory360during the DND mode. AlthoughFIG.6is described in the context of a virtual reality device108, it is understood that various modifications may be made toFIG.6. For example, the method600could be performed by a mobile device106. This may be useful, for example, when a user of a mobile device106is immersed in a media experience and is likely to miss information in the environment around the mobile device106, for example emergency lights, signs containing notifications, or the like. Embodiments of the present disclosure provide systems and methods for sending and receiving messages from a virtual reality environment. For example, embodiments of the present disclosure describe setting up subscriptions to push notification services, monitoring for push notifications from the subscribed services, and displaying notification messages corresponding to received push notifications in a virtual reality environment. Embodiments of the present disclosure also provide systems and methods for receiving peer to peer communications from partner devices and displaying notification messages corresponding to the received peer to peer communications in a virtual reality environment.
Embodiments of the present disclosure also provide systems and methods for sensing environmental conditions and displaying notification messages corresponding to the sensed conditions in a virtual reality environment. Although the present disclosure has been described with an exemplary embodiment, various changes and modifications may be suggested to one skilled in the art. It is intended that the present disclosure encompass such changes and modifications as fall within the scope of the appended claims. None of the description in this application should be read as implying that any particular element, step, or function is an essential element that must be included in the claim scope. The scope of patented subject matter is defined only by the claims. Moreover, none of the claims is intended to invoke 35 U.S.C. § 112(f) unless the exact words “means for” are followed by a participle.
42,252
11863511
DETAILED DESCRIPTION Traditional systems and methods for prioritizing messages focus on throttling outgoing messages from senders, based on what messages customers have recently received. These systems generally affect only truly optional messages, such as marketing messages, and may not affect messages that users have opted into receiving. This greatly decreases the effectiveness of the system, because only truly optional messages can be delayed, not every type of message. Accordingly, there is a need for improved systems and methods for prioritizing messages. Embodiments of the present disclosure are directed to this and other considerations. Examples of the present disclosure relate to systems and methods for prioritizing messages. More particularly, the disclosed technology relates to assessing the importance of an assortment of messages and ranking the messages in order of importance or urgency. The systems and methods described herein utilize, in some instances, machine learning models, which are necessarily rooted in computers and technology. Machine learning models are a unique computer technology that involves training models to complete tasks and make decisions. The present disclosure details determining which messages of a group of messages are urgent. This, in some examples, may involve using message and application sender input data and one or more machine learning models applied to determine the ranking of importance of one or more messages or to determine if a message is urgent. Using a machine learning model in this way may allow the system to prioritize which messages need to be sent immediately and which messages can be delayed. Additionally, other machine learning models may be able to combine two or more messages into a single message. These are clear advantages and improvements over prior technologies that send an assortment of unranked messages to users all at once because users may ignore messages when multiple messages come at one time. This is also an improvement over systems that throttle messages based on what a customer has recently received because those systems only consider certain types of messages. The present disclosure solves this problem by sending the most important messages first and storing and sending less important messages at a later time. Furthermore, the systems and methods described herein utilize, in some instances, graphical user interfaces, which are necessarily rooted in computers and technology. Graphical user interfaces are a computer technology that allows for user interaction with computers through touch, pointing devices, or other means. This, in some examples, may involve using user inputs from a user to dynamically change the graphical user interface by influencing how the user receives messages. Using a graphical user interface in this way may allow the system to change how messages are delivered to the user based on user preferences. Additionally, examples of the present disclosure may also improve network usage by preventing spikes in computer resource load caused by sending out large numbers of messages to users at one time when some messages could be sent out periodically. Overall, the systems and methods disclosed have significant practical applications in the notification field because of the noteworthy improvements of the message prioritization system, which is important to solving present problems with this technology.
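At a high level, the prioritization described in this disclosure can be pictured as a small pipeline: rank incoming messages, send the urgent ones immediately, and schedule the rest. The Python sketch below is only a schematic of that idea; the rank() and choose_send_time() helpers are hypothetical stand-ins for the machine learning models discussed later in this description.

import time

def prioritize(messages, rank, choose_send_time, send_now, schedule, urgent_cutoff=1):
    # Schematic pipeline: rank each message, dispatch urgent ones immediately,
    # and queue the remainder for a later set time.
    for msg in messages:
        position = rank(msg)                 # stand-in for the first machine learning model
        if position <= urgent_cutoff:        # e.g., only the top-ranked message is urgent
            send_now(msg)
        else:
            schedule(msg, choose_send_time(msg, now=time.time()))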
Some implementations of the disclosed technology will be described more fully with reference to the accompanying drawings. This disclosed technology may, however, be embodied in many different forms and should not be construed as limited to the implementations set forth herein. The components described hereinafter as making up various elements of the disclosed technology are intended to be illustrative and not restrictive. Many suitable components that would perform the same or similar functions as components described herein are intended to be embraced within the scope of the disclosed electronic devices and methods. Reference will now be made in detail to example embodiments of the disclosed technology that are illustrated in the accompanying drawings and disclosed herein. Wherever convenient, the same reference numbers will be used throughout the drawings to refer to the same or like parts. FIG.1is a flow diagram illustrating an exemplary method100for prioritizing messages, in accordance with certain embodiments of the disclosed technology. The steps of method100may be performed by one or more components of the system400(e.g., message prioritization system320or web server410of user messaging system408or user device402), as described in more detail with respect toFIGS.3and4. In block102, the message prioritization system320may receive one or more messages to be sent to a user. Messages may be email messages, short message service (SMS) messages, push notifications, instant messages, application notifications (e.g., a newsfeed or message inbox from within an application), or any other form of messaging-type communication commonly sent to, and read by, users. The messages may be received from one or more applications. Each message may contain message data (e.g., the text of the message) and application sender data (e.g., which application the message is from). The message data may include the details to be sent to the user to read (e.g., “Your card was just used in an online transaction in Bulgaria. Was this you?”). The application sender data may include information or details about the application sending the message, a subject (e.g., “Fraud system,” “Fraud alert,” or “Fraud”), and timing data (e.g., when the message was sent and/or when a relevant event occurred). The message prioritization system320may receive several messages at one time or in quick succession. In block104, the message prioritization system320may determine a ranking of importance of the one or more messages using the message data and the application sender data. The ranking of importance may act as a queue or buffer for messages that need to be sent out to users. The message prioritization system320may accumulate and order the messages over time in the queue. The ordering of the ranking of importance may be based on tiers. The tiers may include a first tier containing messages that require immediate action by the customer to mitigate financial loss or substantially increase customer experience (e.g., fraud or security messages), a second tier containing messages related to configured messages (e.g., user-selected alerts), and a third tier containing messages related to informational and/or marketing messages. Fraud messages may be messages corresponding to account concerns where the user's account may have been compromised.
User-configured messages may be messages that the user has configured to receive from the business (e.g., if a user has requested to receive messages stating when any charge for more than $50 appears on their credit card account). Marketing messages may be non-essential messages, advertisements, or general account information. The message prioritization system320may place urgent messages, such as fraud messages, in a higher tier (e.g., placing messages at the front of the queue). However, the message prioritization system320may place less urgent messages, such as marketing messages, in a lower tier (e.g., placing messages at the back of the queue). As messages come into the message prioritization system320, the system constantly reviews and ranks the messages by priority. The message prioritization system320may rank messages specific to a single user. The message prioritization system320may dynamically review messages, or may review messages at fixed intervals (e.g., every 30 seconds or every minute). The fixed interval may be set by known information about the user, such as how often the user checks their email. Message prioritization system320may use a first machine learning model to determine the ranking of each message in the ranking of importance. The message prioritization system320may use the message data, application data, and timing data in order to determine the ranking of each message. The first machine learning model of message prioritization system320may be trained based on data of prior messages for the same user or for other users. The first machine learning model of message prioritization system320may also be trained through feedback to recognize messages that are known to occur in conjunction with other messages (e.g., when travelling outside the country, the user would typically receive a fraud alert, a transaction alert, and a notification about exchange rates) and known user preferences for the user. The known user preferences for the user may come from a user account. The message prioritization system320may use a tiered rules-based approach for decisioning. The message prioritization system320may modify results at a user-by-user level to achieve the best results for individual users. The machine learning model of the message prioritization system320may consider several key factors when ranking messages. First, the message prioritization system320may consider what application (or part of the business) the message is coming from, as certain business divisions tend to present messages for more important reasons than others. Second, the message prioritization system320may consider timeliness. The message may have a limited time frame for the user to act. For example, if one message is supposed to remind the user of a technician that is supposed to arrive for a service call at their house in 5 minutes, that particular message is going to be more important than other, less time-critical messages. Third, the message prioritization system320may consider user preferences. If user preference data is available, and there is an indication of how the user may respond to a message, the message prioritization system320may factor this into determining when sending messages is appropriate. For example, if a message comes from an application in the middle of the night that is not necessary to send immediately, the message prioritization system320may rank the message lower than other messages. The message prioritization system320may rank the messages in a constant or static fashion over time.
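The tiering and ranking factors described above (application source, timeliness, and user preferences) can be illustrated with a small scoring sketch in Python. The keywords, weights, and thresholds below are illustrative assumptions standing in for what the first machine learning model would learn; they are not taken from the disclosure.

import time
from dataclasses import dataclass

@dataclass
class Message:
    text: str            # message data
    sender_app: str      # application sender data
    received_at: float   # timing data (epoch seconds)
    act_by: float        # latest useful time for the user to act

def tier(msg: Message) -> int:
    # Tier 1: immediate-action (fraud/security); tier 2: user-configured alerts; tier 3: marketing.
    app = msg.sender_app.lower()
    if "fraud" in app or "security" in app:
        return 1
    if "alert" in app:
        return 2
    return 3

def urgency_score(msg: Message, quiet_hours: bool, now: float = None) -> float:
    # Higher score = more urgent. Combines source tier, time pressure, and user preferences.
    now = time.time() if now is None else now
    source = {1: 1.0, 2: 0.6, 3: 0.2}[tier(msg)]
    time_left = max(msg.act_by - now, 60.0)
    timeliness = min(1.0, 3600.0 / time_left)        # saturates for deadlines under an hour
    preference_penalty = 0.5 if quiet_hours else 0.0
    return source + timeliness - preference_penalty

def rank(messages, quiet_hours: bool):
    # Queue ordering: highest score first; ties broken by arrival time.
    return sorted(messages, key=lambda m: (-urgency_score(m, quiet_hours), m.received_at))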
The message prioritization system320may change the ranking for a message depending on how long the message has been in the queue. In the above example, message prioritization system320may rank a message originally received by the message prioritization system320from an application in the middle of the night as low so that the user is not woken up; however, the message prioritization system320may re-rank the same message as high, because the message is important enough that the user needs to address the message immediately when they wake up. The message prioritization system320may store messages in the queue for a variety of time lengths depending on the importance. In block106, the message prioritization system320may determine whether a message of the one or more messages is urgent based on the ranking of importance. Once the message prioritization system320sorts the messages into their various rankings, the message prioritization system320may determine if each of the messages in the ranking is urgent, based on the ranking (e.g., a fraud message would have a higher ranking, and therefore, be more urgent than a marketing message). If the message prioritization system320ranks the message at the highest ranking (e.g., 1 out of 10), the message prioritization system320may determine that the message is urgent and needs to be sent immediately. If the message prioritization system320makes such a determination, it would follow the path to block108. Messages in this category may typically be first tier messages, for example, messages about fraud. For all other messages, such as those further down in the ranking (e.g., 7 and higher), the message prioritization system320may determine that those messages are not urgent, and, therefore, do not need to be sent immediately. If the message prioritization system320determines that a message is not urgent, the message prioritization system320would follow the path to block110. The message prioritization system320may use a second machine learning model to make the determination whether each of the messages is urgent based on the ranking. Alternatively, the message prioritization system320may use the same machine learning model described with respect to block104and/or the determination step of block106above. The features of the machine learning model of block106may be the same features of the machine learning model of block104and are not repeated herein for brevity. In block108, the message prioritization system320may send the first message to the user device. Once the message prioritization system320determines that the first message is urgent and needs to be sent, the message prioritization system320may send the message to the user device through appropriate means for the type of message used. This operation may be part of the message prioritization system320or may be a separate system, such as an outbound message dispatcher. The outbound message dispatcher may be operated on the same hardware as message prioritization system320or separate hardware, which may be operated by a third party. In some embodiments, the message prioritization system320may send the first message to the user on all available channels (e.g., email, SMS, push, text, application). In other embodiments, the message prioritization system320may determine one or more channels by which it should send the first message. This may involve the use of a third machine learning model.
The message prioritization system320may feed input data, such as user preferences, message data, application data, and timing data, to a third machine learning model. From there, the third machine learning model may predict which channel the user is most likely to be using at the time of the message. For example, if the message is going out to the user at 1:30 PM on a weekday, it is likely that the user is at work, therefore, the message prioritization system320, via the third machine learning model, may decide to send the message to the user's work email address. In other embodiments, the user may set preferences via a user account indicating on which channels the user would prefer to receive messages. Such user preferences may override the decisioning by the message prioritization system320via the third machine learning model, or other message prioritization system320settings. The third machine learning model may have similarities to the first machine learning model or other parts of message prioritization system320in terms of training and feedback mechanisms. In block110, the message prioritization system320may determine a set time for the first message to be sent. For messages that are determined by the message prioritization system320to not be urgent, the message prioritization system320may choose a set time at which to send the non-urgent messages. The set time may be specific to each message or may be specific to a group of messages. The set time may be based on a variety of factors. For example, the set time may depend on the type of message or what application the message was from (e.g., messages about shopping may be sent on Saturdays, when a user is more likely to be at a mall). The set time may also depend on known user preferences. For example, if the user is known to work at night, then the message prioritization system320may focus on sending the message at a time that the user would likely be awake and not in the middle of the day (e.g., 8 pm). The set time may also depend on regulations. For example, certain debt collection regulations require debt collection calls to be made between certain hours of the day in certain regions. The set time may also be determined by the user's engagement with a website or application (e.g., typically at a certain time of day), the time zone the user is currently located in (e.g., the set time may change if the user is travelling to match the time zone of the user), and the history of the user (e.g., based on how quickly or when the user responds or takes action in response to previous messages in the past). The message prioritization system320may consider these and other factors when considering the set time. The message prioritization system320may determine, via a fourth machine learning model, a set time. The fourth machine learning model may use input data of user preference, message data, application data, and timing data. From there, the message prioritization system320via the fourth machine learning model may determine the best time to send the message to the user. The best time to send the message may be the time when the user is most likely to be impacted by the message. For example, a message about savings accounts may be best presented to the user right after the user receives a bonus at work. The fourth machine learning model may have similarities to the first machine learning model in terms of training and feedback mechanisms. In block112, the message prioritization system320may send the first message to the user device at the set time.
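The set-time and channel choices of blocks 110, 112, and 108 can be sketched with simple rules in place of the third and fourth machine learning models. The quiet-hours window, regulatory window, and channel heuristics in the following Python sketch are assumptions made only for illustration.

from datetime import datetime, time, timedelta

def determine_set_time(category: str, now: datetime,
                       quiet_start: time = time(22, 0), quiet_end: time = time(8, 0),
                       reg_start: time = time(8, 0), reg_end: time = time(21, 0)) -> datetime:
    # Block 110: pick a later send time for a non-urgent message.
    delay = timedelta(hours=1) if category == "user_configured" else timedelta(hours=4)
    candidate = now + delay
    # Respect the user's quiet hours by pushing the send time to the next morning.
    if candidate.time() >= quiet_start:
        candidate = datetime.combine(candidate.date() + timedelta(days=1), quiet_end)
    elif candidate.time() < quiet_end:
        candidate = datetime.combine(candidate.date(), quiet_end)
    # Clamp into a regulatory window (e.g., rules on when certain messages may be sent).
    if candidate.time() < reg_start:
        candidate = datetime.combine(candidate.date(), reg_start)
    elif candidate.time() > reg_end:
        candidate = datetime.combine(candidate.date() + timedelta(days=1), reg_start)
    return candidate

def choose_channels(send_at: datetime, preferred=None):
    # Blocks 108/112: explicit user preferences win; otherwise guess from the time of day.
    if preferred:
        return list(preferred)
    if send_at.weekday() < 5 and 9 <= send_at.hour < 17:
        return ["work_email"]
    return ["push", "sms"]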
The message prioritization system320may send the first message to the user on all available channels, on channels selected by the user (e.g., via user preferences set on a user device), or may use an additional machine learning model similar to the third machine learning model to determine which channels to use, as disclosed in block108and not repeated herein for brevity. FIG.2Ais a flow diagram illustrating an exemplary method200for prioritizing messages, in accordance with certain embodiments of the disclosed technology. The steps of method200may be performed by one or more components of the system400(e.g., message prioritization system320or web server410of user messaging system408or user device402), as described in more detail with respect toFIGS.3and4. Method200ofFIG.2Ais similar to method100ofFIG.1. Method200describes a system where messages that are not urgent may then be queued to be sent at a later time. At the later time, the messages may then be analyzed and the system may then determine whether or not it is still relevant for the message to be sent, or if the message should be deleted. The descriptions of blocks202,204,206,208, and210in method200are similar to the respective descriptions of blocks102,104,106,108, and110of method100and are not repeated herein for brevity. However, block212is different from block112and is described below. Additional blocks214and216are also described below. In block212, the message prioritization system320may determine whether the first message is appropriate to send based on the message data and the application sender data. This may occur at the set time, or at a different time. This block serves as a verification to see if the set time chosen in block210was a good option. For example, the message prioritization system320may receive an advertising message based on a user device location (e.g., a store specific advertisement while a user is shopping in a store). If the set time for the message to be sent, as determined in block210, happens to be after the user device has left the store (as determined by message prioritization system320in block212), then there would be little reason to send the user a message regarding an in-store offer after the user has already left. Accordingly, the message prioritization system320may use the message data and the application sender data to determine if the message is appropriate to be sent at the set time. The message prioritization system320may also use additional data provided (e.g., user device location data in the example above). The message prioritization system320may make the determination in block212with the aid of a fifth machine learning model. The fifth machine learning model may use input data of user preference, message data, application data, timing data, and other data. From there, the message prioritization system320via the fifth machine learning model may determine if it is appropriate to send the message to the user. The message prioritization system320may determine appropriateness based on relevance (e.g., the user has returned an item that an advertisement was about), timeframes (e.g., reminders that needed to take place within a certain amount of time that no longer matter), regulations (e.g., regulations governing when certain messages may be sent), or numerical thresholds (e.g., if a threshold number of messages have been sent to the user within a threshold amount of time).
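The appropriateness check of block 212 can be pictured as a few explicit rules standing in for the fifth machine learning model. The rule set in the following Python sketch (expiry, location trigger, and a cap on recently sent messages) is a simplified assumption for illustration only.

from datetime import datetime
from typing import Optional

def still_appropriate(category: str, now: datetime,
                      expires_at: Optional[datetime] = None,
                      user_in_store: bool = False,
                      recently_sent_count: int = 0,
                      max_recent: int = 5) -> bool:
    # Block 212: decide at (or near) the set time whether the queued message should still go out.
    if expires_at is not None and now > expires_at:
        return False                                   # the window for acting has passed
    if category == "in_store_offer" and not user_in_store:
        return False                                   # the location trigger no longer applies
    if recently_sent_count >= max_recent:
        return False                                   # numerical threshold on recent messages
    return True

# Messages that fail this check would be deleted (block 216) rather than sent (block 214).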
The fifth machine learning model may have similarities to the first machine learning model and/or other aspects of message prioritization system320in terms of training and feedback mechanisms. Block212may also potentially occur concurrently and/or in conjunction with block210. In block214, if the message is appropriate to send, the message prioritization system320may send the first message to the user device. This may be similar to the description of block208and is not repeated herein for brevity. In block216, the message prioritization system320may delete the first message. If message prioritization system320determines in block212that the message is no longer relevant to the user (e.g., it would no longer make sense to send the message to the user), the system may delete the first message. This eliminates the possibility that the user would receive a message that was not pertinent. Over time, receiving inappropriate messages may cause users to discount or ignore future, relevant messages. This is a solution to that problem. FIG.2Bis a flow diagram illustrating an exemplary method250for prioritizing messages, in accordance with certain embodiments of the disclosed technology. The steps of method250may be performed by one or more components of the system400(e.g., message prioritization system320or web server410of user messaging system408or user device402), as described in more detail with respect toFIGS.3and4. Method250ofFIG.2Bis similar to method100ofFIG.1. Method250allows queued messages to be combined. The descriptions of blocks252,254,256,258, and260in method250are similar to the respective descriptions of blocks102,104,106,108, and110of method100and are not repeated herein for brevity. However, block262is different from block112and is described below. Additional block264is also described below. In block262, the message prioritization system320may combine the first message and other messages of the one or more messages to create a combined message. Such a combination may occur at a set time as determined in block260. The set time may be the set time for the first message. The set time may be a compromise set time based on multiple of the one or more messages (e.g., an average of set times). The set time may be based on user preferences. The message prioritization system320may generate a combined message with the aid of a sixth machine learning model. The sixth machine learning model may use input data of user preference, message data, application data, and timing data. The sixth machine learning model may have similarities to the first machine learning model and/or other aspects of message prioritization system320in terms of training and feedback mechanisms. The output combined message may be in the form of an email, text message, instant message, or any other type of conventional message as described above. The language contained in the combined message may be condensed or summarized. The condensed or summarized messages may be generated using hardcoded rules and/or templates or by using natural language processing models. The system may also determine, via the sixth machine learning model, the sequence of the individual messages (or the summaries of the individual messages) within the combined message. The order of the messages within the combined message may be based on the ranking of importance. In block264, the message prioritization system320may send the combined message to the user device. This may generally follow the description of block208and is not repeated herein for brevity.
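Combining queued messages into one digest, as in block 262, can be sketched with a simple template in place of the sixth machine learning model; the summary text and ordering rule in the Python sketch below are illustrative assumptions, not language from the disclosure.

from dataclasses import dataclass
from typing import List

@dataclass
class QueuedMessage:
    rank: int        # position in the ranking of importance (1 = most important)
    summary: str     # condensed text; templates or an NLP model could produce this

def combine_messages(messages: List[QueuedMessage]) -> str:
    # Order the individual summaries by the ranking of importance and join them into a
    # single combined message that can be rendered as one email, text, or in-app digest.
    ordered = sorted(messages, key=lambda m: m.rank)
    lines = ["%d. %s" % (i + 1, m.summary) for i, m in enumerate(ordered)]
    return "While you were away:\n" + "\n".join(lines)

# Example: combine_messages([QueuedMessage(2, "Charge over $50 posted"),
#                            QueuedMessage(3, "EUR/USD exchange information")])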
The combined message may be generated and transmitted to a user device as an interactive graphical user interface. The user may be able to interact with the combined message graphical user interface as part of a mobile application on a user device402. The user may be able to tap or interact with an individual piece (e.g., a snippet) of the combined message to view the entire original message. The user may also be able to select user preferences on the user device402which may be used throughout message prioritization system320(e.g., text messages preferred over email, no messages after 10 pm except fraud). FIG.3is a block diagram of an example message prioritization system320used to prioritize messages to be sent to a user according to an example implementation of the disclosed technology. According to some embodiments, the user device402and web server410, as depicted inFIG.4and described below, may have a similar structure and components that are similar to those described with respect to message prioritization system320shown inFIG.3. As shown, the message prioritization system320may include a processor310, an input/output (I/O) device370, a memory330containing an operating system (OS)340and a program350. In certain example implementations, the message prioritization system320may be a single server or may be configured as a distributed computer system including multiple servers or computers that interoperate to perform one or more of the processes and functionalities associated with the disclosed embodiments. In some embodiments message prioritization system320may be one or more servers from a serverless or scaling server system. In some embodiments, the message prioritization system320may further include a peripheral interface, a transceiver, a mobile network interface in communication with the processor310, a bus configured to facilitate communication between the various components of the message prioritization system320, and a power source configured to power one or more components of the message prioritization system320. A peripheral interface, for example, may include the hardware, firmware and/or software that enable(s) communication with various peripheral devices, such as media drives (e.g., magnetic disk, solid state, or optical disk drives), other processing devices, or any other input source used in connection with the disclosed technology. In some embodiments, a peripheral interface may include a serial port, a parallel port, a general-purpose input and output (GPIO) port, a game port, a universal serial bus (USB), a micro-USB port, a high-definition multimedia interface (HDMI) port, a video port, an audio port, a Bluetooth™ port, a near-field communication (NFC) port, another like communication interface, or any combination thereof. In some embodiments, a transceiver may be configured to communicate with compatible devices and ID tags when they are within a predetermined range. A transceiver may be compatible with one or more of: radio-frequency identification (RFID), near-field communication (NFC), Bluetooth™, low-energy Bluetooth™ (BLE), WiFi™, ZigBee™, ambient backscatter communications (ABC) protocols or similar technologies. A mobile network interface may provide access to a cellular network, the Internet, or another wide-area or local area network. 
In some embodiments, a mobile network interface may include hardware, firmware, and/or software that allow(s) the processor(s)310to communicate with other devices via wired or wireless networks, whether local or wide area, private or public, as known in the art. A power source may be configured to provide an appropriate alternating current (AC) or direct current (DC) to power components. The processor310may include one or more of a microprocessor, microcontroller, digital signal processor, co-processor or the like or combinations thereof capable of executing stored instructions and operating upon stored data. The memory330may include, in some implementations, one or more suitable types of memory (e.g. such as volatile or non-volatile memory, random access memory (RAM), read only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, floppy disks, hard disks, removable cartridges, flash memory, a redundant array of independent disks (RAID), and the like), for storing files including an operating system, application programs (including, for example, a web browser application, a widget or gadget engine, and or other applications, as necessary), executable instructions and data. In one embodiment, the processing techniques described herein may be implemented as a combination of executable instructions and data stored within the memory330. The processor310may be one or more known processing devices, such as, but not limited to, a microprocessor from the Core™ family manufactured by Intel™, the Ryzen™ family manufactured by AMD™, or a system-on-chip processor using an ARM™ or other similar architecture. The processor310may constitute a single core or multiple core processor that executes parallel processes simultaneously, a central processing unit (CPU), an accelerated processing unit (APU), a graphics processing unit (GPU), a microcontroller, a digital signal processor (DSP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC) or another type of processing component. For example, the processor310may be a single core processor that is configured with virtual processing technologies. In certain embodiments, the processor310may use logical processors to simultaneously execute and control multiple processes. The processor310may implement virtual machine (VM) technologies, or other similar known technologies to provide the ability to execute, control, run, manipulate, store, etc. multiple software processes, applications, programs, etc. One of ordinary skill in the art would understand that other types of processor arrangements could be implemented that provide for the capabilities disclosed herein. In accordance with certain example implementations of the disclosed technology, the message prioritization system320may include one or more storage devices configured to store information used by the processor310(or other components) to perform certain functions related to the disclosed embodiments. In one example, the message prioritization system320may include the memory330that includes instructions to enable the processor310to execute one or more applications, such as server applications, network communication processes, and any other type of application or software known to be available on computer systems. Alternatively, the instructions, application programs, etc. 
may be stored in an external storage or available from a memory over a network. The one or more storage devices may be a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other type of storage device or tangible computer-readable medium. The message prioritization system320may include a memory330that includes instructions that, when executed by the processor310, perform one or more processes consistent with the functionalities disclosed herein. Methods, systems, and articles of manufacture consistent with disclosed embodiments are not limited to separate programs or computers configured to perform dedicated tasks. For example, the message prioritization system320may include the memory330that may include one or more programs350to perform one or more functions of the disclosed embodiments. For example, in some embodiments, the message prioritization system320may additionally manage dialogue and/or other interactions with the customer via a program350. The processor310may execute one or more programs350located remotely from the message prioritization system320. For example, the message prioritization system320may access one or more remote programs that, when executed, perform functions related to disclosed embodiments. The memory330may include one or more memory devices that store data and instructions used to perform one or more features of the disclosed embodiments. The memory330may also include any combination of one or more databases controlled by memory controller devices (e.g., server(s), etc.) or software, such as document management systems, Microsoft™ SQL databases, SharePoint™ databases, Oracle™ databases, Sybase™ databases, or other relational or non-relational databases. The memory330may include software components that, when executed by the processor310, perform one or more processes consistent with the disclosed embodiments. In some embodiments, the memory330may include a message prioritization system database360for storing related data to enable the message prioritization system320to perform one or more of the processes and functionalities associated with the disclosed embodiments. The message prioritization system database360may include stored data relating to status data (e.g., average session duration data, location data, idle time between sessions, and/or average idle time between sessions) and historical status data. According to some embodiments, the functions provided by the message prioritization system database360may also be provided by a database that is external to the message prioritization system320, such as the database416as shown inFIG.4. The message prioritization system320may also be communicatively connected to one or more memory devices (e.g., databases) locally or through a network. The remote memory devices may be configured to store information and may be accessed and/or managed by the message prioritization system320. By way of example, the remote memory devices may be document management systems, Microsoft™ SQL database, SharePoint™ databases, Oracle™ databases, Sybase™ databases, or other relational or non-relational databases. Systems and methods consistent with disclosed embodiments, however, are not limited to separate databases or even to the use of a database. 
The message prioritization system320may also include one or more I/O devices370that may comprise one or more interfaces for receiving signals or input from devices and providing signals or output to one or more devices that allow data to be received and/or transmitted by the message prioritization system320. For example, the message prioritization system320may include interface components, which may provide interfaces to one or more input devices, such as one or more keyboards, mouse devices, touch screens, track pads, trackballs, scroll wheels, digital cameras, microphones, sensors, and the like, that enable the message prioritization system320to receive data from a user (such as, for example, via the user device402). In examples of the disclosed technology, the message prioritization system320may include any number of hardware and/or software applications that are executed to facilitate any of the operations. The one or more I/O interfaces may be utilized to receive or collect data and/or user instructions from a wide variety of input devices. Received data may be processed by one or more computer processors as desired in various implementations of the disclosed technology and/or stored in one or more memory devices. The message prioritization system320may contain programs that train, implement, store, receive, retrieve, and/or transmit one or more machine learning models. Machine learning models may include a neural network model, a generative adversarial model (GAN), a recurrent neural network (RNN) model, a deep learning model (e.g., a long short-term memory (LSTM) model), a random forest model, a convolutional neural network (CNN) model, a support vector machine (SVM) model, logistic regression, XGBoost, and/or another machine learning model. Models may include an ensemble model (e.g., a model comprised of a plurality of models). In some embodiments, training of a model may terminate when a training criterion is satisfied. Training criterion may include a number of epochs, a training time, a performance metric (e.g., an estimate of accuracy in reproducing test data), or the like. The message prioritization system320may be configured to adjust model parameters during training. Model parameters may include weights, coefficients, offsets, or the like. Training may be supervised or unsupervised. The message prioritization system320may be configured to train machine learning models by optimizing model parameters and/or hyperparameters (hyperparameter tuning) using an optimization technique, consistent with disclosed embodiments. Hyperparameters may include training hyperparameters, which may affect how training of the model occurs, or architectural hyperparameters, which may affect the structure of the model. An optimization technique may include a grid search, a random search, a gaussian process, a Bayesian process, a Covariance Matrix Adaptation Evolution Strategy (CMA-ES), a derivative-based search, a stochastic hill-climb, a neighborhood search, an adaptive random search, or the like. The message prioritization system320may be configured to optimize statistical models using known optimization techniques. The message prioritization system320may also contain one or more prediction models. Prediction models may include statistical algorithms that are used to determine the probability of an outcome, given a set amount of input data. For example, prediction models may include regression models that estimate the relationships among input and output variables. 
Prediction models may also sort elements of a dataset using one or more classifiers to determine the probability of a specific outcome. Prediction models may be parametric, non-parametric, and/or semi-parametric models. In some examples, prediction models may cluster points of data in functional groups such as “random forests.” Random Forests may comprise combinations of decision tree predictors. (Decision trees may comprise a data structure mapping observations about something, in the “branch” of the tree, to conclusions about that thing's target value, in the “leaves” of the tree.) Each tree may depend on the values of a random vector sampled independently and with the same distribution for all trees in the forest. Prediction models may also include artificial neural networks. Artificial neural networks may model input/output relationships of variables and parameters by generating a number of interconnected nodes which contain an activation function. The activation function of a node may define a resulting output of that node given an argument or a set of arguments. Artificial neural networks may generate patterns to the network via an ‘input layer’, which communicates to one or more “hidden layers” where the system determines regressions via weighted connections. Prediction models may additionally or alternatively include classification and regression trees, or other types of models known to those skilled in the art. To generate prediction models, the message prioritization system320may analyze information by applying machine-learning methods. While the message prioritization system320has been described as one form for implementing the techniques described herein, other, functionally equivalent, techniques may be employed. For example, some or all of the functionality implemented via executable instructions may also be implemented using firmware and/or hardware devices such as application specific integrated circuits (ASICs), programmable logic arrays, state machines, etc. Furthermore, other implementations of the message prioritization system320may include a greater or lesser number of components than those illustrated. FIG.4is a block diagram of an example system that may be used to view and interact with user messaging system408, according to an example implementation of the disclosed technology. The components and arrangements shown inFIG.4are not intended to limit the disclosed embodiments as the components used to implement the disclosed processes and features may vary. As shown, user messaging system408may interact with a user device402via a network406. In certain example implementations, the user messaging system408may include a local network412, a message prioritization system320, a web server410, and a database416. In some embodiments, a user may operate the user device402. The user device402can include one or more of a mobile device, smart phone, general purpose computer, tablet computer, laptop computer, telephone, public switched telephone network (PSTN) landline, smart wearable device, voice command device, other mobile computing device, or any other device capable of communicating with the network406and ultimately communicating with one or more components of the user messaging system408. In some embodiments, the user device402may include or incorporate electronic communication devices for hearing or vision impaired users.
Users may include individuals such as, for example, subscribers, clients, prospective clients, or customers of an entity associated with an organization, such as individuals who have obtained, will obtain, or may obtain a product, service, or consultation from or conduct a transaction in relation to an entity associated with the user messaging system408. According to some embodiments, the user device402may include an environmental sensor for obtaining audio or visual data, such as a microphone and/or digital camera, a geographic location sensor for determining the location of the device, an input/output device such as a transceiver for sending and receiving data, a display for displaying digital images, one or more processors, and a memory in communication with the one or more processors. The network406may be of any suitable type, including individual connections via the internet such as cellular or WiFi networks. In some embodiments, the network406may connect terminals, services, and mobile devices using direct connections such as radio-frequency identification (RFID), near-field communication (NFC), Bluetooth™, low-energy Bluetooth™ (BLE), WiFi™, ZigBee™, ambient backscatter communications (ABC) protocols, USB, WAN, or LAN. Because the information transmitted may be personal or confidential, security concerns may dictate that one or more of these types of connections be encrypted or otherwise secured. In some embodiments, however, the information being transmitted may be less personal, and therefore the network connections may be selected for convenience over security. The network406may include any type of computer networking arrangement used to exchange data. For example, the network406may be the Internet, a private data network, virtual private network (VPN) using a public network, and/or other suitable connection(s) that enable(s) components in the system400environment to send and receive information between the components of the system400. The network406may also include a PSTN and/or a wireless network. The user messaging system408may be associated with and optionally controlled by one or more entities such as a business, corporation, individual, partnership, or any other entity that provides one or more of goods, services, and consultations to individuals such as customers. In some embodiments, the user messaging system408may be controlled by a third party on behalf of another business, corporation, individual, or partnership. The user messaging system408may include one or more servers and computer systems for performing one or more functions associated with products and/or services that the organization provides. Web server410may include a computer system configured to generate and provide one or more websites accessible to customers, as well as any other individuals involved in the user messaging system408's normal operations. Web server410may include a computer system configured to receive communications from user device402via, for example, a mobile application, a chat program, an instant messaging program, a voice-to-text program, an SMS message, email, or any other type or format of written or electronic communication. Web server410may have one or more processors422and one or more web server databases424, which may be any suitable repository of website data. Information stored in web server410may be accessed (e.g., retrieved, updated, and added to) via local network412and/or network406by one or more devices or systems of system400.
In some embodiments, web server410may host websites or applications that may be accessed by the user device402. For example, web server410may host a financial service provider website that a user device may access by providing an attempted login that is authenticated by the message prioritization system320. According to some embodiments, web server410may include software tools, similar to those described with respect to user device402above, that may allow web server410to obtain network identification data from user device402. The web server410may also be hosted by an online provider of website hosting, networking, cloud, or backup services, such as Microsoft Azure™ or Amazon Web Services™. The local network412may include any type of computer networking arrangement used to exchange data in a localized area, such as WiFi, Bluetooth™, Ethernet, and other suitable network connections that enable components of the user messaging system408to interact with one another and to connect to the network406for interacting with components in the system400environment. In some embodiments, the local network412may include an interface for communicating with or linking to the network406. In other embodiments, certain components of the user messaging system408may communicate via the network406, without a separate local network412. The user messaging system408may be hosted in a cloud computing environment (not shown). The cloud computing environment may provide software, data access, data storage, and computation. Furthermore, the cloud computing environment may include resources such as applications (apps), VMs, virtualized storage (VS), or hypervisors (HYP). User device402may be able to access user messaging system408using the cloud computing environment. User device402may be able to access user messaging system408using specialized software. The cloud computing environment may eliminate the need to install specialized software on user device402. In accordance with certain example implementations of the disclosed technology, the user messaging system408may include one or more computer systems configured to compile data from a plurality of sources, such as the message prioritization system320, web server410, and/or the database416. The message prioritization system320may correlate compiled data, analyze the compiled data, arrange the compiled data, generate derived data based on the compiled data, and store the compiled and derived data in a database such as the database416. According to some embodiments, the database416may be a database associated with an organization and/or a related entity that stores a variety of information relating to customers, transactions, ATMs, and business operations. The database416may also serve as a back-up storage device and may contain data and information that is also stored on, for example, database360, as discussed with reference toFIG.3. Embodiments consistent with the present disclosure may include datasets. Datasets may comprise actual data reflecting real-world conditions, events, and/or measurements. However, in some embodiments, disclosed systems and methods may fully or partially involve synthetic data (e.g., anonymized actual data or fake data). Datasets may involve numeric data, text data, and/or image data. For example, datasets may include transaction data, financial data, demographic data, public data, government data, environmental data, traffic data, network data, transcripts of video data, genomic data, proteomic data, and/or other data.
Datasets of the embodiments may be in a variety of data formats including, but not limited to, PARQUET, AVRO, SQLITE, POSTGRESQL, MYSQL, ORACLE, HADOOP, CSV, JSON, PDF, JPG, BMP, and/or other data formats. Datasets of disclosed embodiments may have a respective data schema (e.g., structure), including a data type, key-value pair, label, metadata, field, relationship, view, index, package, procedure, function, trigger, sequence, synonym, link, directory, queue, or the like. Datasets of the embodiments may contain foreign keys, for example, data elements that appear in multiple datasets and may be used to cross-reference data and determine relationships between datasets. Foreign keys may be unique (e.g., a personal identifier) or shared (e.g., a postal code). Datasets of the embodiments may be “clustered,” for example, a group of datasets may share common features, such as overlapping data, shared statistical properties, or the like. Clustered datasets may share hierarchical relationships (e.g., data lineage). Although the preceding description describes various functions of a web server410, a message prioritization system320, and a database416, in some embodiments, some or all of these functions may be carried out by a single computing device. EXAMPLE USE CASE The following example use case describes an example of a typical user flow pattern. This section is intended solely for explanatory purposes and not in limitation. In one example based onFIG.1, Celeste is travelling to Greece. Once Celeste arrives in Greece, she uses her credit card to buy $105 of food at a local grocery store at 3:05 PM. The message prioritization system320receives three messages due to this transaction from three separate server-side banking applications (block102). Message A is a notification message that Celeste set up using her user preferences and that sends a message to her phone for any charge over $50. Message B is a marketing message about currency exchange information between the Euro and the U.S. Dollar. Message C is a fraud warning message notifying Celeste that her card was used in an unexpected transaction in Greece. The message prioritization system320determines a ranking of importance for the three messages as follows (block104): Because message C is sent from a fraud application and the message is a fraud warning, it is ranked first (e.g., the highest rank). Because message A is a user-selected message, it is ranked second. Because message B is a marketing message, it is ranked third. Next, since the first message (message C) is related to fraud, the message prioritization system320determines that the message is urgent (block106). Therefore, the message prioritization system320sends the first message to Celeste's phone (block108) immediately (3:06 PM). At block106, the second and third messages (messages A and B respectively) are both determined to be not urgent because message A is an opt-in message and message B is a marketing message. Accordingly, the message prioritization system320determines that message A should be sent in 1 hour (4:06 PM) as that time would still be relevant, but not intrusive (block110). At 4:06 PM, message prioritization system320sends message A to Celeste's phone (block112).
Message prioritization system320determines that message B should be sent at 8:00 PM as it is likely, based on the supplied location data, that Celeste is on a trip to Greece and the information in the marketing message would still be relevant at a later time, and Celeste's user preferences restrict messages after 8:30 PM (block110). At 8:00 PM, message prioritization system320sends message B to Celeste's phone (block112). In an alternative version of the above example based onFIG.2A, the message prioritization system320determines that message A is an opt-in message and is superfluous given that the fraud message already contains similar information about the charge at the set time (block212). Therefore, message prioritization system320deletes the first message (block216). The message prioritization system320determines that message B is still appropriate to send at 8:00 PM, given that Celeste is still in Greece, based on her phone's location data. Therefore, the message prioritization system320sends message B to Celeste's phone at 8:00 PM. In an alternative version of the above example based onFIG.2B, the message prioritization system320determines that message A should be sent in 1 hour (4:06 PM) as that time would still be relevant, but not intrusive (block260). The message prioritization system320combines message A and message B to create a combined message (block262) at 4:06 PM and then sends the combined message to Celeste's phone (block264). Celeste can then open and view the messages individually on an interactive graphical user interface from the combined message. In some examples, disclosed systems or methods may involve one or more of the following clauses: Clause 1: A message prioritization system comprising: one or more processors; memory in communication with the one or more processors and storing instructions that are configured to cause the message prioritization system to: receive, from one or more applications, one or more messages to be sent to a user, each message of the one or more messages comprising message data and application sender data; determine a ranking of importance of the one or more messages using the message data and the application sender data; determine, using a first machine learning model, whether a first message of the one or more messages is urgent based on the ranking of importance; responsive to determining the first message is urgent: send the first message to the user device; responsive to determining the first message is not urgent: determine a set time for the first message to be sent; and send the first message to the user device at the set time. Clause 2: The message prioritization system of clause 1, wherein determining the ranking of importance of the one or more messages utilizes a second machine learning model. Clause 3: The message prioritization system of clause 2, wherein the second machine learning model is trained based on data from other users, recognizing messages that are known to occur in conjunction with other messages, known user preferences for each user, or combinations thereof. Clause 4: The message prioritization system of clause 1, wherein: the application sender data comprises timing data, application information about the one or more applications sending the one or more messages, or both, and determining the ranking of importance is based on the timing data, the application information, or both.
Clause 5: The message prioritization system of clause 1, wherein the memory stores further instructions that are configured to cause the message prioritization system to: generate a first graphical user interface indicating for the user to select preferences regarding messages; transmit the first graphical user interface to the user device for display; receive user preferences from the user device; and wherein the first machine learning model determines the ranking of importance based on the user preferences. Clause 6: The message prioritization system of clause 1, wherein security messages are highest on the ranking of importance. Clause 7: The message prioritization system of clause 1, wherein the ranking of importance is determined dynamically. Clause 8: The message prioritization system of clause 1, wherein the system determines the ranking of importance using a tiered rules-based approach. Clause 9: The message prioritization system of clause 1, wherein the memory stores further instructions that are configured to cause the message prioritization system to: determine, using a third machine learning model, one or more channels for sending the first message; and send the first message via the one or more channels. Clause 10: The message prioritization system of clause 9, wherein the one or more channels comprise SMS, push notifications, email, or combinations thereof. Clause 11: The message prioritization system of clause 9, wherein the system uses the third machine learning model to categorize the first message based on feedback from prior messages opened by the user. Clause 12: A message prioritization system comprising: one or more processors; memory in communication with the one or more processors and storing instructions that are configured to cause the message prioritization system to: receive, from one or more applications, one or more messages to be sent to a user, each message of the one or more messages comprising message data and application sender data; determine a ranking of importance of the one or more messages using the message data and the application sender data; determine, using a first machine learning model, whether a first message of the one or more messages is urgent based on the ranking of importance; responsive to determining the first message is urgent: send the first message to a user device; responsive to determining the first message is not urgent: determine a set time for the first message to be sent; and at the set time, determine, using a second machine learning model whether the first message is appropriate to send based on the message data and the application sender data; responsive to determining that the first message is appropriate to send: send the first message to the user device; and responsive to determining that the first message is not appropriate to send: delete the first message. Clause 13: The message prioritization system of clause 12, wherein: the application sender data comprises timing data, application information about the one or more applications sending the one or more messages, or both, and determining whether the first message is appropriate to send is based on whether an appropriate timeframe for the first message to be sent has passed. Clause 14: The message prioritization system of clause 12, wherein determining whether the first message is appropriate to send is based on regulations that govern when the first message may be sent. 
Clause 15: The message prioritization system of clause 12, wherein: determining the ranking of importance of the one or more messages utilizes a third machine learning model, and determining the ranking of importance is based on how much time has passed since the one or more messages were sent from the one or more applications. Clause 16: The message prioritization system of clause 12, determining whether the first message is appropriate to send is based on a threshold number of messages that have been sent to the user within a threshold amount of time. Clause 17: A message prioritization system comprising: one or more processors; memory in communication with the one or more processors and storing instructions that are configured to cause the message prioritization system to: receive, from one or more applications, one or more messages to be sent to a user, each message of the one or more messages comprising message data and application sender data; determine a ranking of importance of the one or more messages using the message data and the application sender data; determine, using a first machine learning model, whether a first message of the one or more messages is urgent based on the ranking of importance; responsive to determining the first message is urgent: send the first message to the user device; responsive to determining the first message is not urgent: determine a set time for the first message to be sent; at the set time, combine, using a second machine learning model, the first message and other messages of the one or more messages to create a combined message; and send the combined message to the user device. Clause 18: The message prioritization system of clause 17, wherein the memory stores further instructions that are configured to cause the message prioritization system to: generate a first graphical user interface containing the combined message, wherein the first message and the other messages are presented on the first graphical user interface in an order of the ranking of importance; and transmit the first graphical user interface to the user device for display. Clause 19: The message prioritization system of clause 18, wherein the first graphical user interface is part of a mobile application on the user device. Clause 20: The message prioritization system of clause 17, wherein the combined message is an email. The features and other aspects and principles of the disclosed embodiments may be implemented in various environments. Such environments and related applications may be specifically constructed for performing the various processes and operations of the disclosed embodiments or they may include a general-purpose computer or computing platform selectively activated or reconfigured by program code to provide the necessary functionality. Further, the processes disclosed herein may be implemented by a suitable combination of hardware, software, and/or firmware. For example, the disclosed embodiments may implement general purpose machines configured to execute software programs that perform processes consistent with the disclosed embodiments. Alternatively, the disclosed embodiments may implement a specialized apparatus or system configured to execute software programs that perform processes consistent with the disclosed embodiments. 
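As a non-authoritative illustration of the tiered, rules-based ranking and scheduling flow reflected in the example use case and in Clauses 1, 8, and 12 above, the following Python sketch uses assumed tier values, category labels, and field names; the machine learning models recited in the clauses are replaced here by simple stand-in rules, so this is an explanatory sketch rather than the claimed implementation.

# Illustrative sketch only: tiered ranking, urgency check, and scheduling.
# Tier values, categories, and field names are assumptions for readability.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Lower tier number = higher importance (fraud/security first,
# user-selected notifications second, marketing last).
TIERS = {"fraud": 0, "security": 0, "user_preference": 1, "marketing": 2}

@dataclass
class Message:
    body: str          # message data
    sender_app: str    # application sender data
    category: str      # e.g. "fraud", "user_preference", "marketing"
    created_at: datetime

def rank_messages(messages):
    """Order messages by the tiered, rules-based ranking of importance."""
    return sorted(messages, key=lambda m: TIERS.get(m.category, len(TIERS)))

def is_urgent(message) -> bool:
    """Stand-in for the first machine learning model's urgency decision."""
    return TIERS.get(message.category, len(TIERS)) == 0

def schedule(message, now: datetime) -> Optional[datetime]:
    """Return None to send immediately, or a set time for deferred delivery."""
    if is_urgent(message):
        return None                                 # send right away
    if message.category == "user_preference":
        return now + timedelta(hours=1)             # relevant but not intrusive
    return now.replace(hour=20, minute=0, second=0, microsecond=0)  # e.g. 8:00 PM

Run against the three messages of the use case, such rules would send the fraud warning immediately, defer the opt-in notification by one hour, and hold the marketing message until the evening, mirroring the example above.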
Furthermore, although some disclosed embodiments may be implemented by general purpose machines as computer processing instructions, all or a portion of the functionality of the disclosed embodiments may be implemented instead in dedicated electronics hardware. The disclosed embodiments also relate to tangible and non-transitory computer readable media that include program instructions or program code that, when executed by one or more processors, perform one or more computer-implemented operations. The program instructions or program code may include specially designed and constructed instructions or code, and/or instructions and code well-known and available to those having ordinary skill in the computer software arts. For example, the disclosed embodiments may execute high level and/or low-level software instructions, such as machine code (e.g., such as that produced by a compiler) and/or high-level code that can be executed by a processor using an interpreter. The technology disclosed herein typically involves a high-level design effort to construct a computational system that can appropriately process unpredictable data. Mathematical algorithms may be used as building blocks for a framework, however certain implementations of the system may autonomously learn their own operation parameters, achieving better results, higher accuracy, fewer errors, fewer crashes, and greater speed. As used in this application, the terms “component,” “module,” “system,” “server,” “processor,” “memory,” and the like are intended to include one or more computer-related units, such as but not limited to hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets, such as data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems by way of the signal. Certain embodiments and implementations of the disclosed technology are described above with reference to block and flow diagrams of systems and methods and/or computer program products according to example embodiments or implementations of the disclosed technology. It will be understood that one or more blocks of the block diagrams and flow diagrams, and combinations of blocks in the block diagrams and flow diagrams, respectively, can be implemented by computer-executable program instructions. Likewise, some blocks of the block diagrams and flow diagrams may not necessarily need to be performed in the order presented, may be repeated, or may not necessarily need to be performed at all, according to some embodiments or implementations of the disclosed technology. 
These computer-executable program instructions may be loaded onto a general-purpose computer, a special-purpose computer, a processor, or other programmable data processing apparatus to produce a particular machine, such that the instructions that execute on the computer, processor, or other programmable data processing apparatus create means for implementing one or more functions specified in the flow diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement one or more functions specified in the flow diagram block or blocks. As an example, embodiments or implementations of the disclosed technology may provide for a computer program product, including a computer-usable medium having a computer-readable program code or program instructions embodied therein, said computer-readable program code adapted to be executed to implement one or more functions specified in the flow diagram block or blocks. Likewise, the computer program instructions may be loaded onto a computer or other programmable data processing apparatus to cause a series of operational elements or steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions that execute on the computer or other programmable apparatus provide elements or steps for implementing the functions specified in the flow diagram block or blocks. Accordingly, blocks of the block diagrams and flow diagrams support combinations of means for performing the specified functions, combinations of elements or steps for performing the specified functions, and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flow diagrams, and combinations of blocks in the block diagrams and flow diagrams, can be implemented by special-purpose, hardware-based computer systems that perform the specified functions, elements or steps, or combinations of special-purpose hardware and computer instructions. Certain implementations of the disclosed technology described above with reference to user devices may include mobile computing devices. Those skilled in the art recognize that there are several categories of mobile devices, generally known as portable computing devices that can run on batteries but are not usually classified as laptops. For example, mobile devices can include, but are not limited to portable computers, tablet PCs, internet tablets, PDAs, ultra-mobile PCs (UMPCs), wearable devices, and smart phones. Additionally, implementations of the disclosed technology can be utilized with internet of things (IoT) devices, smart televisions and media devices, appliances, automobiles, toys, and voice command devices, along with peripherals that interface with these devices. In this description, numerous specific details have been set forth. It is to be understood, however, that implementations of the disclosed technology may be practiced without these specific details. In other instances, well-known methods, structures, and techniques have not been shown in detail in order not to obscure an understanding of this description. 
References to “one embodiment,” “an embodiment,” “some embodiments,” “example embodiment,” “various embodiments,” “one implementation,” “an implementation,” “example implementation,” “various implementations,” “some implementations,” etc., indicate that the implementation(s) of the disclosed technology so described may include a particular feature, structure, or characteristic, but not every implementation necessarily includes the particular feature, structure, or characteristic. Further, repeated use of the phrase “in one implementation” does not necessarily refer to the same implementation, although it may. Throughout the specification and the claims, the following terms take at least the meanings explicitly associated herein, unless the context clearly dictates otherwise. The term “connected” means that one function, feature, structure, or characteristic is directly joined to or in communication with another function, feature, structure, or characteristic. The term “coupled” means that one function, feature, structure, or characteristic is directly or indirectly joined to or in communication with another function, feature, structure, or characteristic. The term “or” is intended to mean an inclusive “or.” Further, the terms “a,” “an,” and “the” are intended to mean one or more unless specified otherwise or clear from the context to be directed to a singular form. By “comprising” or “containing” or “including” is meant that at least the named element, or method step is present in article or method, but does not exclude the presence of other elements or method steps, even if the other such elements or method steps have the same function as what is named. It is to be understood that the mention of one or more method steps does not preclude the presence of additional method steps or intervening method steps between those steps expressly identified. Similarly, it is also to be understood that the mention of one or more components in a device or system does not preclude the presence of additional components or intervening components between those components expressly identified. Although embodiments are described herein with respect to systems or methods, it is contemplated that embodiments with identical or substantially similar features may alternatively be implemented as systems, methods and/or non-transitory computer-readable media. As used herein, unless otherwise specified, the use of the ordinal adjectives “first,” “second,” “third,” etc., to describe a common object, merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner. While certain embodiments of this disclosure have been described in connection with what is presently considered to be the most practical and various embodiments, it is to be understood that this disclosure is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation. 
This written description uses examples to disclose certain embodiments of the technology and also to enable any person skilled in the art to practice certain embodiments of this technology, including making and using any apparatuses or systems and performing any incorporated methods. The patentable scope of certain embodiments of the technology is defined in the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
72,724
11863512
DETAILED DESCRIPTION OF THE EMBODIMENTS A first aspect of the present specification provides a method for processing data for transmission from a first communication device to a second communication device. The method comprises detecting that the data comprises an attachment. The method further comprises determining an address of a copy of the attachment present on a storage device external to the first and second communication devices. The method further comprises substituting the attachment with the address of the copy in the data such that the copy is retrievable at the second communication device via the address. The method further comprises transmitting the data to the second communication device. The address can be embedded in the attachment, and determining the address of the copy can comprise processing the attachment to extract the address. The attachment can comprise an exchangeable image file format (EXIF), and the address can be embedded in EXIF data. The address can be stored in at least one of a database and a table in association with an identifier of the attachment, and determining the address of the copy can comprise processing at least one of the database and the table to retrieve the address via the identifier. The address can comprise a uniform resource locator (URL). The method can further comprise: determining if the storage device is accessible to second communication device; and, if not, transmitting the data to the second communication device with the attachment attached thereto in lieu of the substituting. Determining if the storage device is accessible to the second communication device can comprise determining if the second communication device and the storage device are each associated with a same communication network. Determining if the storage device is accessible to the second communication device can comprise determining if there is a firewall between the second communication device and the storage device, and if so, determining that the storage device is not accessible to the second communication device. The method can further comprise, prior to the detecting: uploading the copy of the attachment to the storage device; determining the address of the copy of the attachment; and, at least one of: storing the address in at least one of a database and a table in association with an identifier of the attachment, and embedding the address in the attachment. The data can comprise at least one of an e-mail, a text-message, a short message service message and an instant messaging message, and the attachment can comprise at least one of image data, audio data, video data and document data. A second aspect of the present specification provides a communication device for processing data for transmission from the communication device to a second communication device. The communication device comprises an interface enabled to transmit the data. The communication device further comprises a processing unit in communication with the interface. The processing unit is enabled to: detect that the data comprises an attachment; determine an address of a copy of the attachment present on a storage device external to the communication device and the second communication device; substitute the attachment with the address of the copy in the data such that the copy is retrievable at the second communication device via the address; and cause the data to be transmitted to the second communication device via the interface. 
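As a rough, non-authoritative sketch of the first aspect summarized above, assuming a simple dictionary-based message format and an injected address lookup, the substitution could proceed as follows; all function and field names are illustrative assumptions rather than the claimed implementation.

# Minimal sketch: detect an attachment, look up the address of its external
# copy, substitute the address for the attachment, then transmit the data.
def process_for_transmission(data: dict, address_lookup, transmit) -> None:
    attachment = data.get("attachment")
    if attachment is None:                 # nothing to substitute
        transmit(data)
        return
    address = address_lookup(attachment)   # e.g. a URL on the external storage device
    if address is not None:
        data["attachment"] = None
        data["attachment_url"] = address   # the copy is retrievable via this address
    transmit(data)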
The address can be embedded in the attachment, and the processing unit can be further enabled to determine the address of the copy by processing the attachment to extract the address. The attachment can comprise an exchangeable image file format (EXIF), and the address can be embedded in EXIF data. The communication device can further comprise a memory in communication with the processing unit, the memory enabled to store the address in at least one of a database and a table in association with an identifier of the attachment, and the processing unit can be further enabled to determine the address of the copy by processing at least one of the database and the table to retrieve the address via the identifier. The address comprises a uniform resource locator (URL). The processing unit can be further enabled to: determine if the storage device is accessible to the second communication device; and, if not, cause the data to be transmitted to the second communication device, via the interface, with the attachment attached thereto in lieu of the substituting. The processing unit can be further enabled to determine if the storage device is accessible to the second communication device by determining if the second communication device and the storage device are each associated with a same communication network. The processing unit can be further enabled to determine if the storage device is accessible to the second communication device by determining if there is a firewall between the second communication device and the storage device, and, if so, determine that the storage device is not accessible to the second communication device. The processing unit can be further enabled to, prior to detecting that the data comprises an attachment: upload the copy of the attachment to the storage device; determine the address of the copy of the attachment; and, at least one of: store the address in at least one of a database and a table in association with an identifier of the attachment, and embed the address in the attachment. The data can comprise at least one of an e-mail, a text-message, a short message service message and an instant messaging message, and the attachment can comprise at least one of image data, audio data, video data and document data. FIG.1depicts a system100for processing data105for transmission from a first communication device110to a second communication device120. First communication device110is generally enabled to transmit data105to second communication device120, via a communications network125. Furthermore, first communication device110is enabled to transmit data105to second communication device120. For example, data105can comprise at least one of an e-mail, a text-message, a short message service (SMS) message and an instant messaging (IM) message. Second communication device120is generally enabled to receive and process data105. System100comprises a storage device130external to first and second communication devices110,120. First and second communication device110,120are generally enabled to communicate with storage device130via communications network125. In general, first communication device110is enabled to upload data to storage device130for storage and/or backup and second communication device120is enabled to retrieve data from storage device130. First communication device110is further enabled to attach data A1 (hereafter referred to as attachment A1) to data105, attachment A1 stored in a memory132prior to being attached to data105. 
Attachment A1 can comprise at least one of image data, audio data, video data and document data, and the like; however, it is understood that the nature of attachment A1 is not to be considered particularly limiting. In some embodiments, first and second communication devices110,120can comprise a personal computer, a laptop computer, a mobile communication device, a PDA, a cell-phone and/or a combination. Communications network125can comprise any suitable combination of wired and wireless communication networks as desired, including but not limited to the Internet, an intranet, a WiFi network, a WiMax network, a cell-phone network, and a wireless data network. First communication device110comprises a processing unit134for attaching attachment A1 to data105. Processing unit134is further enabled to process data105, according to a method described below with reference toFIG.2, such that an address of a copy of attachment A1 is substituted for attachment A1. For example, processing unit134can implement such a method by processing a data transmission application (DTA)136, which can be stored in memory132, and retrieved by processing unit134. Processing unit134is further enabled to transmit a copy of attachment A1 to storage device130, for back-up and/or storage, and to further determine an address of the copy, as described below. Memory132comprises any suitable combination of random access memory (RAM) and read-only memory (ROM), as desired, and is enabled to store attachment A1, as well as applications such as DTA136. First communication device110further comprises a communication interface138, which is generally compatible with communications network125. In embodiments where first communication device110comprises a mobile communication device and communication network125comprises a wireless network, interface138comprises a radio140and an antenna141. Interface138is generally enabled to transmit data105to second communication device120via communication network125. Processing unit134is generally in communication with memory132and interface138, for example via a computer bus, such that attachment A1 can be retrieved from memory132and data105processed and transmitted via interface138. First communication device110further comprises a power source142, such as a battery. However, in other embodiments, power source142can comprise a connector for connecting first communication device110to a source of power, such as a power outlet, and/or a combination of a battery and a connector. Second communication device120comprises a memory152, a processing unit154and an interface158. Interface158is generally compatible with communication network125and is enabled to receive data105from first communication device110via communication network125. Processing unit154is enabled to process data105upon receipt and memory152is enabled to store data105and/or data attached thereto. Processing unit154is further enabled to retrieve data from storage device130, given an address of data stored at storage device130. First and second communication devices110and120can further comprise any suitable combination of input device(s) and display device(s), as desired (not depicted). Attention is now directed toFIG.2which depicts a method200for processing data105for transmission from first communication device110to second communication device120. In order to assist in the explanation of the method200, it will be assumed that the method200is performed using the system100.
Furthermore, the following discussion of the method200will lead to a further understanding of the system100and its various components. However, it is to be understood that the system100and/or the method200can be varied, and need not work exactly as discussed herein in conjunction with each other, and that such variations are within the scope of present embodiments. Prior to processing data105for transmission, optional steps210-220can be performed such that a copy of attachment A1 is stored in storage device130. However, it is understood that the means for storing a copy of attachment A1 in storage device130is not particularly limiting. At step210, a copy A1c of attachment A1, which is stored in memory132, is uploaded to storage device130. For example, as depicted inFIG.3(substantially similar toFIG.1with like elements having like numbers), processing unit134can create copy A1c and transmit/upload copy A1c to storage device130, via interface138. At storage device130, copy A1c is stored at an address Add1. At step215, address Add1 is determined at the processing unit134. For example, during the upload process, processing unit134can receive address Add1 from storage device130, as depicted inFIG.4, either by requesting address Add1 from storage device130during the upload process, and/or storage device130can transmit address Add1 to first communication device110once copy A1c is stored. In some embodiments, address Add1 comprises a URL (Uniform Resource Locator: an address that specifies the location of a file on the Internet) and/or a network address of the copy A1c. At step220, processing unit134stores address Add1, by at least one of: storing address Add1 in a table T1 (and/or a database) in association with an identifier of said attachment, and embedding address Add1 in attachment A1. In embodiments which include table T1, table T1 can be stored in memory132, as depicted inFIG.4. In some non-limiting embodiments, table T1 can comprise:

TABLE T1
Column 1: Data Identifier     Column 2: Address of copy of Data
Identifier of A1              Add1

While the table T1 is presented in the format of rows and columns, it is understood that any suitable format can be used. In these embodiments, table T1 comprises a first data identifier column ("Data Identifier"), comprising an identifier of attachment A1. The identifier of attachment A1 can comprise any suitable identifier, including but not limited to a name of the attachment, an address of the attachment in the memory132, a version number, a file identifier number, and/or a combination. For clarity, however, in table T1, the identifier of attachment A1 comprises "Identifier of A1". In these embodiments, table T1 further comprises a second column comprising the address of a copy of the data identified in the first column ("Address of copy of Data"), for example address Add1 of copy A1c (identified as "Add1" in table T1, for clarity). Furthermore, table T1 can comprise any suitable number of rows, each storing identifiers of respective data stored in memory132, and addresses of copies associated with the respective data. For example, while present embodiments describe uploading only copy A1c to storage device130, it is understood that copies of any number of respective data stored in memory132can be uploaded to storage device130, and identifiers of respective data, along with addresses of copies associated with the respective data, can be stored in table T1.
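A minimal sketch of optional steps210-220follows, assuming a hypothetical storage client whose upload call returns the address of the stored copy; the content-hash identifier is just one of the identifier options listed above, and none of the names below are the claimed implementation.

# Sketch of steps 210-220: upload a copy of the attachment to external
# storage, receive its address, and record the address in a table keyed
# by an attachment identifier (a stand-in for table T1).
# `storage.upload` is a hypothetical client API assumed for the example.
import hashlib

table_t1: dict = {}   # identifier of attachment -> address of copy

def backup_attachment(attachment_bytes: bytes, name: str, storage) -> str:
    """Upload a copy and remember where it lives."""
    address = storage.upload(name, attachment_bytes)            # e.g. returns a URL
    identifier = hashlib.sha256(attachment_bytes).hexdigest()   # or name, version, etc.
    table_t1[identifier] = address
    return address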
Alternatively, at step220, address Add1 can be embedded in attachment A1, as depicted inFIG.5, and stored in memory132. For example, attachment A1 can comprise metadata, which generally describes attachment A1, and address Add1 can be embedded in the metadata. In exemplary embodiments, attachment A1 comprises an exchangeable image file format (EXIF), and address Add1 can be embedded in EXIF data. A non-limiting example of an exchangeable file format includes, but is not limited to, a JPEG image, as known to persons of skill in the art. In further embodiments, where attachment A1 comprises EXIF data, the EXIF data can include GPS data of where the data was acquired (e.g. where a photo was taken), which can also be embedded in address Add1. For example, address Add1 can comprise a URL including the GPS data, which can later (e.g. at step270described below) be tied into a mapping application (such as Google Maps™), thereby providing further detail of where the data was acquired when copy A1c is later retrieved using address Add1. In any event, it is understood that the means for storing a copy A1c in storage device130is not particularly limiting. In some alternative embodiments, a copy A1c can be stored in storage device130prior to attachment A1 being stored at first communication device110. For example, attachment A1 can be transmitted to first communication device110by a third communication device (not depicted), which has already uploaded copy A1c to storage device130, and embedded address Add1 in attachment A1, and/or transmitted address Add1 to first communication device110. In any event, at step230, processing unit134detects that data105comprises attachment A1. It is generally understood that data105is to be transmitted to second communication device120, and further that attachment A1 has been attached to data105. It is further understood that while present exemplary embodiments are directed to one attachment, the number of attachments to data105is not to be considered particularly limiting. In non-limiting embodiments, for example, an image file can be attached to an e-mail message. It is understood that data105can be generated via any suitable application, including but not limited to an e-mail application, a text message application, an SMS application and/or an IM application. It is furthermore understood that attachment A1 can be attached to data105in any suitable manner including, but not limited to, an automated e-mail application, drag and drop, file selection, etc. At step240, processing unit134determines address Add1 of copy A1c of attachment A1 present on storage device130external to first and second communication devices110,120. While in some embodiments address Add1 was previously determined at optional step215, such a determination is performed during an upload/back-up process that is independent of step240. In any event, at step240, in embodiments wherein address Add1 is embedded in attachment A1, determining address Add1 of copy A1c comprises processing attachment A1 to extract address Add1. In embodiments where address Add1 is stored in table T1 (and/or a database) in association with an identifier of attachment A1, determining address Add1 of copy A1c comprises processing table T1 (and/or the database) to retrieve address Add1 via the identifier. For example, attachment A1 can be processed to determine the identifier, and the identifier can be used to look up address Add1 in table T1.
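Step240could be sketched as follows, assuming a hypothetical metadata reader standing in for an EXIF parser and the table T1 mapping from the previous sketch; both branches mirror the two alternatives described above, and the key names are assumptions for illustration only.

# Sketch of step 240: determine the address of the copy either by
# extracting an address embedded in the attachment's metadata (the EXIF
# case) or by looking it up in table T1 via an identifier.
# `read_metadata` is a hypothetical helper standing in for an EXIF parser.
import hashlib
from typing import Optional

def determine_address(attachment_bytes: bytes,
                      read_metadata,
                      table_t1: dict) -> Optional[str]:
    # Embedded case: the address travels inside the attachment's metadata.
    metadata = read_metadata(attachment_bytes) or {}
    if "copy_address" in metadata:
        return metadata["copy_address"]
    # Table case: look the address up via the attachment identifier.
    identifier = hashlib.sha256(attachment_bytes).hexdigest()
    return table_t1.get(identifier)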
At step250, and as depicted inFIG.7, processing unit134substitutes attachment A1 with address Add1 of copy A1c in data105, thereby reducing a size of data105, such that copy A1c is retrievable at second communication device120via address Add1. In embodiments where data105comprises more than one attachment, respective addresses of copies of each respective attachment can be substituted for each respective attachment. At step260, data105is transmitted to second communication device120, for example via interface138and communication network125. At step270, copy A1c is retrieved at second communication device120via address Add1. For example, when data105arrives at second communication device120, data105is processed by processing unit154to extract address Add1 (FIG.8). In response, second communication device120then retrieves copy A1c (FIG.9). In some embodiments, address Add1 is then substituted with copy A1c in data105. In other embodiments, copy A1c can be retrieved and stored in memory152without performing a substitution. In yet further embodiments, copy A1c can be retrieved only upon receipt of data from an input device (not depicted), associated with second communication device120(e.g. an input device can be used to "click" on the address, when data105is displayed at a display device (not depicted)). In embodiments where address Add1 includes GPS data, further detail of copy A1c can be retrieved using the GPS data, for example via Google Maps™, or any other suitable mapping application. Attention is now directed toFIG.10, which depicts an alternative embodiment of a system1000for processing data1005for transmission from a first communication device1010to a second communication device1020. System1000is substantially similar to system100, with like elements having like numbers, however preceded by a "10" rather than a "1". For example, first communication device1010is similar to first communication device110. However, system1000comprises a server1026, such as an e-mail server or the like, which is enabled to manage data transmitted (and/or received) by first communication device1010. In these embodiments, server1026and first communication device1010are connected via a communication network1027, which can be a wired or wireless communication network as desired, and can comprise an intranet, such as a company intranet. For example, server1026can be enabled to manage data transmitted and/or received by any given number of communication devices connected to server1026via communication network1027. Furthermore, while not depicted, it is understood that each communication device1010and1020, and server1026, comprises at least a processing unit and a communications interface, similar to communication devices110and120, and at least first communication device1010comprises a memory for storing an attachment A10, of which a copy A10c is stored at storage device1030, at an address Add10. Furthermore, in some embodiments, server1026comprises a table T10 (and/or a database, e.g., stored in a memory), similar to table T1, however storing addresses of copies of any given number of attachments stored at any given number of communication devices connected to server1026via communication network1027. In any event, first communication device1010is enabled to transmit data1005comprising attachment A10 to second communication device1020via server1026.
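The receiving side of method200(steps260and270described above) might look roughly like the following sketch, assuming the substituted address is an ordinary URL carried in a message field and reachable from second communication device120; the field names and the automatic retrieval (rather than retrieval upon a user "click") are assumptions chosen for brevity.

# Sketch of steps 260-270 at the receiving device: extract the address
# from the received data and retrieve the copy, optionally substituting
# it back into the data.
from urllib.request import urlopen

def receive(data: dict) -> dict:
    address = data.get("attachment_url")
    if address:
        with urlopen(address) as response:        # retrieve copy A1c via Add1
            data["attachment"] = response.read()  # optional substitution back
    return data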
Server1026is enabled to implement at least steps230-260of method200, such that data1005is detected and address Add10 is substituted for attachment A10 in data1005, as described above. Copy A10c can then be retrieved by second communication device1020, as described above. Attention is now directed toFIG.11, which depicts an alternative embodiment of a system1100for processing data1105for transmission from a first communication device1110to a second communication device1120. System1100is substantially similar to system1000, with like elements having like numbers, however preceded by an "11" rather than a "10". For example, first communication device1110is similar to first communication device1010. However, system1100further comprises a firewall1128, as known to a person of skill in the art. In these embodiments, storage device1130is in communication with first communication device1110and server1126via communication network1127: in other words, in these embodiments, storage device1130is an element of an intranet. Furthermore, first communication device1110, server1126and storage device1130are located "behind" firewall1128, relative to second communication device1120. Hence, in some of these embodiments, storage device1130is accessible to second communication device1120, while in other embodiments storage device1130is not accessible to second communication device1120. Attention is now directed toFIG.12which depicts a method1200for processing data1105for transmission from first communication device1110to second communication device1120. In order to assist in the explanation of the method1200, it will be assumed that the method1200is performed using the system1100. Furthermore, the following discussion of the method1200will lead to a further understanding of the system1100and its various components. However, it is to be understood that the system1100and/or the method1200can be varied, and need not work exactly as discussed herein in conjunction with each other, and that such variations are within the scope of present embodiments. Method1200is substantially similar to method200, with like steps having like numbers, however preceded by "12" rather than "2". For example, step1230is similar to step230. Furthermore, steps1230-1260can be performed by first communication device1110and/or server1126. In the following description, however, it will be assumed that steps1230-1260are implemented by server1126. In any event, after step1230(detect that data1105comprises attachment A11), at step1232a determination is made as to whether storage device1130is accessible to second communication device1120. In some of these embodiments, server1126can determine whether storage device1130is accessible to second communication device1120by determining if second communication device1120and storage device1130are each associated with a same communication network: for example, in some embodiments, second communication device1120can also be located behind firewall1128(not as depicted) and be an element of communication network1127. In these embodiments, second communication device1120is an element of the same intranet as first communication device1110, etc. For example, server1126can maintain a list of all elements of communication network1127. Alternatively, determining if storage device1130is accessible to second communication device1120comprises determining if firewall1128is between second communication device1120and storage device1130.
For example, a query can be transmitted to the second communication device1120, and if the reply passes through firewall1128, it is determined that storage device1130is not accessible to second communication device1120. In another non-limiting alternative, determining if storage device1130is accessible to second communication device1120comprises processing the address of the second communication device1120. For example, server1126can comprise (and/or have access to) a list/table/database etc. of email domains and/or e-mail addresses that have access to storage device1130. Alternatively, determining if storage device1130is accessible to second communication device1120comprises determining if second communication device1120has permission to access storage device1130via firewall1128. In these embodiments, a list of communication devices that have permission to access storage device1130, but which are external to communication network1127, can be maintained at server1126and/or firewall1128(and/or communication device1110). In any event, if storage device1130is accessible to second communication device1120, then steps1240-1260are implemented (similar to steps240-260, as described above), as depicted inFIG.12. However, if storage device1130is not accessible to second communication device1120, then data1105is transmitted to second communication device1120with attachment A11 attached thereto in lieu of substituting attachment A11 with address Add11. In any event, when an address of a copy of an attachment is substituted for an attachment, in data for transmission from an originating communication device to a receiving communication device, strain on resources at the originating communication device is reduced, as is the amount of bandwidth used in transmitting the data. This can further reduce the power used at the communication device and lengthen the life of a battery, if present. Furthermore, a receiving communication device can have a limit on the size of attachments which can be accepted via e-mail etc., and data which comprises an attachment that is of a size larger than the limit can be rejected; substitution of an address of a copy of the attachment can ensure that the data is not rejected. The systems, methods and apparatus described herein can also be used to control access to data. For example, in some embodiments address Add1 (and/or Add10, Add11) can be transmitted instead of attachment A1 (and/or attachment A10, A11) to ensure that only authorized individuals gain access to copy A1c (and/or A10c, A11c). When second communication device120(and/or second communication device1020,1120) attempts to retrieve copy A1c (and/or A10c, A11c), control of access to copy A1c (and/or A10c, A11c) can be enforced via pre-existing permissions to access storage device130and/or storage devices1030,1130. For example, in system1100, server1126can rely on firewall1128(or another storage server with authenticated access) to enforce permissions.
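The accessibility check of method1200(step1232) and its fallback can be sketched as follows; the e-mail-domain whitelist shown is only one of the alternatives described above, and the domain name, function names, and message format are illustrative assumptions rather than the claimed implementation.

# Sketch of step 1232 and its fallback: substitute the address only when
# the recipient can reach the storage device behind the firewall;
# otherwise send the data with the attachment still attached.
INTERNAL_DOMAINS = {"example-intranet.local"}   # assumed list kept at the server

def recipient_can_reach_storage(recipient_address: str) -> bool:
    domain = recipient_address.rsplit("@", 1)[-1].lower()
    return domain in INTERNAL_DOMAINS

def process_with_firewall_check(data: dict, recipient: str,
                                address_lookup, transmit) -> None:
    attachment = data.get("attachment")
    if attachment is not None and recipient_can_reach_storage(recipient):
        address = address_lookup(attachment)
        if address is not None:
            data["attachment"] = None
            data["attachment_url"] = address
    transmit(data)   # otherwise transmitted with the attachment attached thereto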
Those skilled in the art will appreciate that in some embodiments, the functionality of communication devices110,120,1010,1020,1110and1120, storage devices130,1030,1130, and servers1026and1126can be implemented using pre-programmed hardware or firmware elements (e.g., application specific integrated circuits (ASICs), electrically erasable programmable read-only memories (EEPROMs), etc.), or other related components. In other embodiments, the functionality of communication devices110,120,1010,1020,1110and1120, storage devices130,1030,1130, and servers1026and1126can be achieved using a computing apparatus that has access to a code memory (not shown) which stores computer-readable program code for operation of the computing apparatus. The computer-readable program code could be stored on a computer readable storage medium which is fixed, tangible and readable directly by these components, (e.g., removable diskette, CD-ROM, ROM, fixed disk, USB drive). Alternatively, the computer-readable program code could be stored remotely but transmittable to these components via a modem or other interface device connected to a network (including, without limitation, the Internet) over a transmission medium. The transmission medium can be either a non-wireless medium (e.g., optical and/or digital and/or analog communications lines) or a wireless medium (e.g., microwave, infrared, free-space optical or other transmission schemes) or a combination thereof. A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by any one of the patent document or patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyrights whatsoever. Persons skilled in the art will appreciate that there are yet more alternative implementations and modifications possible for implementing the embodiments, and that the above implementations and examples are only illustrations of one or more embodiments. The scope, therefore, is only to be limited by the claims appended hereto.
29,593
11863513
DETAILED DESCRIPTION In messaging systems, users are connected to a variety of other users with whom they have different levels and types of relationships. For example, a user can be socially connected to a group of users who are close friends, co-workers, acquaintances, as well as people the user does not know outside of the messaging system. The social connection a user can establish with another user in the messaging system may include a unilateral friendship relationship and a bilateral friendship relationship. The social networking systems are presented with the challenge of providing a user media content items without a showing of comments made by individuals unknown by the user, as such unknown individuals may post irrelevant or sometimes emotionally distressing comments to the media content items shared with the user, negatively affecting user experience. In addition, the social networking systems are also presented with the challenge of withholding a complete comment thread to a user, in view of the fact the user may experience some degree of emotional distress if she fails to receive a further comment from her friends in response to her comment in the thread. Embodiments of the present disclosure improve the functionality of electronic messaging software and systems by recognizing that a user may want to receive media content items associated with comments only coming from other users with whom the user has established a social relationship (e.g., friendship) on the messaging system. Specifically, the embodiments of the present disclosure relate to generating a playback of media content items available on the messaging system with comments created only by friends of a viewing user. The viewing user may post comments to the media content item. Each comment is associated with a timestamp representing the temporal position during the time of playback when the comment was created. Each comment is displayed during the playback at respective temporal position (e.g., timeline marker), so that the viewing user may experience the creations of the comments from friends. In some embodiments, upon selection of a comment, the messaging system may direct the user to a private messaging user interface to engage in a private conversation with the comment creator. Therefore, no comment thread is generated for media content items. It helps to advance the goal of avoiding the generation of a complete comment thread viewable by all users, inadvertently causing emotional distress to certain affected users. The present disclosure also relates to generating notifications of a comment created by a friend of the user associated with a media content item. A pre-determined time period (e.g., a cool-down period) is determined and assessed between the generation of notifications on a client device. In some embodiments, the user may post comments at any time during the display of a media content item. The user may mention other friends in the comments, which may be shared by the messaging system with friends mentioned in the comments. In some embodiments, the media content items are created by commercial content creators or designated users whose user profile is not connected with the viewing user in an entity graph stored in the messaging system. For example, a designated user is not a friend with the user who requests to view the media content item that the designated user has created. 
Networked Computing Environment FIG.1is a block diagram showing an example messaging system100for exchanging data (e.g., messages and associated content) over a network. The messaging system100includes multiple instances of a client device102, each of which hosts a number of applications, including a messaging client104. Each messaging client104is communicatively coupled to other instances of the messaging client104and a messaging server system108via a network106(e.g., the Internet). A messaging client104is able to communicate and exchange data with another messaging client104and with the messaging server system108via the network106. The data exchanged between messaging client104, and between a messaging client104and the messaging server system108, includes functions (e.g., commands to invoke functions) as well as payload data (e.g., text, audio, video or other multimedia data). The messaging server system108provides server-side functionality via the network106to a particular messaging client104. While certain functions of the messaging system100are described herein as being performed by either a messaging client104or by the messaging server system108, the location of certain functionality either within the messaging client104or the messaging server system108may be a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the messaging server system108but to later migrate this technology and functionality to the messaging client104where a client device102has sufficient processing capacity. The messaging server system108supports various services and operations that are provided to the messaging client104. Such operations include transmitting data to, receiving data from, and processing data generated by the messaging client104. This data may include message content, client device information, geolocation information, media augmentation and overlays, message content persistence conditions, social network information, and live event information, as examples. Data exchanges within the messaging system100are invoked and controlled through functions available via user interfaces (UIs) of the messaging client104. Turning now specifically to the messaging server system108, an Application Program Interface (API) server110is coupled to, and provides a programmatic interface to, application servers112. The application servers112are communicatively coupled to a database server118, which facilitates access to a database120that stores data associated with messages processed by the application servers112. Similarly, a web server124is coupled to the application servers112and provides web-based interfaces to the application servers112. To this end, the web server124processes incoming network requests over the Hypertext Transfer Protocol (HTTP) and several other related protocols. The Application Program Interface (API) server110receives and transmits message data (e.g., commands and message payloads) between the client device102and the application servers112. Specifically, the Application Program Interface (API) server110provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the messaging client104in order to invoke functionality of the application servers112. 
The Application Program Interface (API) server110exposes various functions supported by the application servers112, including account registration, login functionality, the sending of messages, via the application servers112, from a particular messaging client104to another messaging client104, the sending of media files (e.g., images or video) from a messaging client104to a messaging server114, and for possible access by another messaging client104, the settings of a collection of media data (e.g., story), the retrieval of a list of friends of a user of a client device102, the retrieval of such collections, the retrieval of messages and content, the addition and deletion of entities (e.g., friends) to an entity graph (e.g., a social graph), the location of friends within a social graph, and opening an application event (e.g., relating to the messaging client104). The application servers112host a number of server applications and subsystems, including for example a messaging server114, an image processing server116, and a social network server122. The messaging server114implements a number of message processing technologies and functions, particularly related to the aggregation and other processing of content (e.g., textual and multimedia content) included in messages received from multiple instances of the messaging client104. As will be described in further detail, the text and media content from multiple sources may be aggregated into collections of content (e.g., called stories or galleries). These collections are then made available to the messaging client104. Other processor and memory intensive processing of data may also be performed server-side by the messaging server114, in view of the hardware requirements for such processing. The application servers112also include an image processing server116that is dedicated to performing various image processing operations, typically with respect to images or video within the payload of a message sent from or received at the messaging server114. The social network server122supports various social networking functions and services and makes these functions and services available to the messaging server114. To this end, the social network server122maintains and accesses an entity graph306(as shown inFIG.3) within the database120. Examples of functions and services supported by the social network server122include the identification of other users of the messaging system100with which a particular user has relationships or is “following,” and also the identification of other entities and interests of a particular user. System Architecture FIG.2is a block diagram illustrating further details regarding the messaging system100, according to some examples. Specifically, the messaging system100is shown to comprise the messaging client104and the application servers112. The messaging system100embodies a number of subsystems, which are supported on the client-side by the messaging client104and on the server-side by the application servers112. These subsystems include, for example, an ephemeral timer system202, a collection management system204, an augmentation system206, a map system208, and a game system210. The ephemeral timer system202is responsible for enforcing the temporary or time-limited access to content by the messaging client104and the messaging server114.
The ephemeral timer system202incorporates a number of timers that, based on duration and display parameters associated with a message, or collection of messages (e.g., a story), selectively enable access (e.g., for presentation and display) to messages and associated content via the messaging client104. Further details regarding the operation of the ephemeral timer system202are provided below. In one embodiment, the ephemeral timer system202is also responsible for determining a pre-determined duration of time for playback of media content items, such as the media content item702as shown inFIG.7. In one embodiment, the ephemeral timer system202is further responsible for determining a pre-determined time duration of the display of a comment during the playback of a media content item. The collection management system204is responsible for managing sets or collections of media (e.g., collections of text, image, video, and audio data). A collection of content (e.g., messages, including images, video, text, and audio) may be organized into an “event gallery” or an “event story.” Such a collection may be made available for a specified time period, such as the duration of an event to which the content relates. For example, content relating to a music concert may be made available for the duration of that music concert. The collection management system204may also be responsible for publishing an icon that provides notification of the existence of a particular collection to the user interface of the messaging client104. The collection management system204furthermore includes a curation interface212that allows a collection manager to manage and curate a particular collection of content. For example, the curation interface212enables an event organizer to curate a collection of content relating to a specific event (e.g., delete inappropriate content or redundant messages). Additionally, the collection management system204employs machine vision (or image recognition technology) and content rules to automatically curate a content collection. In certain examples, compensation may be paid to a user for the inclusion of user-generated content into a collection. In such cases, the collection management system204operates to automatically make payments to such users for the use of their content. In one embodiment, the collection management system204is responsible for managing a collection of media content items that can be viewed and commented on by users in the messaging server system108. The collection of media content items may include media content items created by commercial content creators or designated users. The commercial content creator may be a third-party publisher, such as New York Times, Vice, etc. The designated users may include users who have a large number of followers. User profiles may be stored as profile data308in the entity table304in the database120. The number of followers is determined by a number of user profiles being unilaterally connected to the designated user profile in the entity table304. Specifically, the type of connections between two user profiles may include a bilateral connection and a unilateral connection, respectively represented by a bilateral connection identifier and a unilateral connection identifier associated with each user profile in the entity table304. The bilateral connection indicates the connected users have each responded to a friendship request sent from the other user via a client device102.
The unilateral connection indicates only one of the two connected users has requested friendship connection, but the requested user has not responded to or has denied such request. The number of followers of the designated user is determined by the number of unilaterally connected user profiles associated with users who have requested friendship connection with the designated user, but the designated user has not responded to or has denied the request. A number of friends of a user may be determined by a number of bilaterally connected user profiles in the entity table304. In one embodiment, the designated user is determined by an administrator of the messaging server system108. The user profile associated with the designated user is absent from the set of connected profiles. Specifically, the designated user has not established a bilateral connection with the user who requests to view the media content item the designated user has created. In one embodiment, the collection management system204is responsible for causing a display of only the comments created by friends of a user who requests to view a media content item. In one embodiment, when the collection management system204causes a client device102to playback a media content item (e.g., a video or an image), an image is displayed within a pre-determined duration of time, such as five seconds. A video displays for the duration the video lasts. Each comment created by a user for a media content item is associated with a user profile and a timestamp representing a time within the duration of time displaying a media content item in which each comment was created. The augmentation system206provides various functions that enable a user to augment (e.g., annotate or otherwise modify or edit) media content associated with a message. For example, the augmentation system206provides functions related to the generation and publishing of media overlays for messages processed by the messaging system100. The augmentation system206operatively supplies a media overlay or augmentation (e.g., an image filter) to the messaging client104based on a geolocation of the client device102. In another example, the augmentation system206operatively supplies a media overlay to the messaging client104based on other information, such as social network information of the user of the client device102. A media overlay may include audio and visual content and visual effects. Examples of audio and visual content include pictures, texts, logos, animations, and sound effects. An example of a visual effect includes color overlaying. The audio and visual content or the visual effects can be applied to a media content item (e.g., a photo) at the client device102. For example, the media overlay may include text or image that can be overlaid on top of a photograph taken by the client device102. In another example, the media overlay includes an identification of a location overlay (e.g., Venice beach), a name of a live event, or a name of a merchant overlay (e.g., Beach Coffee House). In another example, the augmentation system206uses the geolocation of the client device102to identify a media overlay that includes the name of a merchant at the geolocation of the client device102. The media overlay may include other indicia associated with the merchant. The media overlays may be stored in the database120and accessed through the database server118. 
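Returning to the connection model used by the collection management system204, the following Python sketch illustrates, purely by way of example, how bilateral and unilateral connection identifiers could be used to count a designated user's followers, derive a viewing user's friends, and retain only the comments created by those friends. All class, field, and function names here are hypothetical and are not part of the described implementation.

```python
# Minimal sketch (hypothetical names, simplified data model): deriving friends,
# followers, and friends-only comments from connection records.
from dataclasses import dataclass

@dataclass(frozen=True)
class Connection:
    from_profile: str          # profile that sent the friendship request
    to_profile: str            # profile that received the request
    bilateral: bool            # True if both users responded; False if unilateral

@dataclass(frozen=True)
class Comment:
    author_profile: str
    text: str
    timestamp_s: float         # seconds into the playback when the comment was created

def friends_of(profile: str, connections: list[Connection]) -> set[str]:
    """Bilaterally connected profiles, i.e., friends of the given profile."""
    friends: set[str] = set()
    for c in connections:
        if c.bilateral and profile in (c.from_profile, c.to_profile):
            friends.add(c.to_profile if c.from_profile == profile else c.from_profile)
    return friends

def follower_count(designated: str, connections: list[Connection]) -> int:
    """Profiles that requested friendship with the designated user without reciprocation."""
    return sum(1 for c in connections
               if not c.bilateral and c.to_profile == designated)

def friend_comments(viewer: str, comments: list[Comment],
                    connections: list[Connection]) -> list[Comment]:
    """Only the comments created by the viewer's friends, ordered by playback time."""
    friends = friends_of(viewer, connections)
    return sorted((c for c in comments if c.author_profile in friends),
                  key=lambda c: c.timestamp_s)
```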
In some examples, the augmentation system206provides a user-based publication platform that enables users to select a geolocation on a map and upload content associated with the selected geolocation. The user may also specify circumstances under which a particular media overlay should be offered to other users. The augmentation system206generates a media overlay that includes the uploaded content and associates the uploaded content with the selected geolocation. In other examples, the augmentation system206provides a merchant-based publication platform that enables merchants to select a particular media overlay associated with a geolocation via a bidding process. For example, the augmentation system206associates the media overlay of the highest bidding merchant with a corresponding geolocation for a predefined amount of time. The map system208provides various geographic location functions and supports the presentation of map-based media content and messages by the messaging client104. For example, the map system208enables the display of user icons or avatars (e.g., stored in profile data308) on a map to indicate a current or past location of “friends” of a user, as well as media content (e.g., collections of messages including photographs and videos) generated by such friends, within the context of a map. For example, a message posted by a user to the messaging system100from a specific geographic location may be displayed within the context of a map at that particular location to “friends” of a specific user on a map interface of the messaging client104. A user can furthermore share his or her location and status information (e.g., using an appropriate status avatar) with other users of the messaging system100via the messaging client104, with this location and status information being similarly displayed within the context of a map interface of the messaging client104to selected users. The game system210provides various gaming functions within the context of the messaging client104. The messaging client104provides a game interface providing a list of available games that can be launched by a user within the context of the messaging client104, and played with other users of the messaging system100. The messaging system100further enables a particular user to invite other users to participate in the play of a specific game, by issuing invitations to such other users from the messaging client104. The messaging client104also supports both the voice and text messaging (e.g., chats) within the context of gameplay, provides a leaderboard for the games, and also supports the provision of in-game rewards (e.g., coins and items). Data Architecture FIG.3is a schematic diagram illustrating data structures300, which may be stored in the database120of the messaging server system108, according to certain examples. While the content of the database120is shown to comprise a number of tables, it will be appreciated that the data could be stored in other types of data structures (e.g., as an object-oriented database). The database120includes message data stored within a message table302. This message data includes, for any particular one message, at least message sender data, message recipient (or receiver) data, and a payload. Further details regarding information that may be included in a message, and included within the message data stored in the message table302is described below with reference toFIG.4. 
An entity table304stores entity data, and is linked (e.g., referentially) to an entity graph306and profile data308. Entities for which records are maintained within the entity table304may include individuals, corporate entities, organizations, objects, places, events, and so forth. Regardless of entity type, any entity regarding which the messaging server system108stores data may be a recognized entity. Each entity is provided with a unique identifier, as well as an entity type identifier (not shown). The entity graph306stores information regarding relationships and associations between entities. Such relationships may be social, professional (e.g., work at a common corporation or organization), interest-based, or activity-based, merely for example. The profile data308stores multiple types of profile data about a particular entity. The profile data308may be selectively used and presented to other users of the messaging system100, based on privacy settings specified by a particular entity. Where the entity is an individual, the profile data308includes, for example, a user name, telephone number, address, settings (e.g., notification and privacy settings), as well as a user-selected avatar representation (or collection of such avatar representations). A particular user may then selectively include one or more of these avatar representations within the content of messages communicated via the messaging system100, and on map interfaces displayed by messaging clients104to other users. The collection of avatar representations may include “status avatars,” which present a graphical representation of a status or activity that the user may select to communicate at a particular time. Where the entity is a group, the profile data308for the group may similarly include one or more avatar representations associated with the group, in addition to the group name, members, and various settings (e.g., notifications) for the relevant group. The database120also stores augmentation data, such as overlays or filters, in an augmentation table310. The augmentation data is associated with and applied to videos (for which data is stored in a video table314) and images (for which data is stored in an image table316). Filters, in one example, are overlays that are displayed as overlaid on an image or video during presentation to a recipient user. Filters may be of various types, including user-selected filters from a set of filters presented to a sending user by the messaging client104when the sending user is composing a message. Other types of filters include geolocation filters (also known as geo-filters), which may be presented to a sending user based on geographic location. For example, geolocation filters specific to a neighborhood or special location may be presented within a user interface by the messaging client104, based on geolocation information determined by a Global Positioning System (GPS) unit of the client device102. Another type of filter is a data filter, which may be selectively presented to a sending user by the messaging client104, based on other inputs or information gathered by the client device102during the message creation process. Examples of data filters include current temperature at a specific location, a current speed at which a sending user is traveling, battery life for a client device102, or the current time.
Other augmentation data that may be stored within the image table316includes augmented reality content items (e.g., corresponding to applying Lenses or augmented reality experiences). An augmented reality content item may be a real-time special effect and sound that may be added to an image or a video. As described above, augmentation data includes augmented reality content items, overlays, image transformations, AR images, and similar terms that refer to modifications that may be applied to image data (e.g., videos or images). This includes real-time modifications, which modify an image as it is captured using device sensors (e.g., one or multiple cameras) of a client device102and then displayed on a screen of the client device102with the modifications. This also includes modifications to stored content, such as video clips in a gallery that may be modified. For example, in a client device102with access to multiple augmented reality content items, a user can use a single video clip with multiple augmented reality content items to see how the different augmented reality content items will modify the stored clip. For example, multiple augmented reality content items that apply different pseudorandom movement models can be applied to the same content by selecting different augmented reality content items for the content. Similarly, real-time video capture may be used with an illustrated modification to show how video images currently being captured by sensors of a client device102would modify the captured data. Such data may simply be displayed on the screen and not stored in memory, or the content captured by the device sensors may be recorded and stored in memory with or without the modifications (or both). In some systems, a preview feature can show how different augmented reality content items will look within different windows in a display at the same time. This can, for example, enable multiple windows with different pseudorandom animations to be viewed on a display at the same time. Data and various systems using augmented reality content items or other such transformation systems to modify content using this data can thus involve detection of objects (e.g., faces, hands, bodies, cats, dogs, surfaces, objects, etc.), tracking of such objects as they leave, enter, and move around the field of view in video frames, and the modification or transformation of such objects as they are tracked. In various embodiments, different methods for achieving such transformations may be used. Some examples may involve generating a three-dimensional mesh model of the object or objects, and using transformations and animated textures of the model within the video to achieve the transformation. In other examples, tracking of points on an object may be used to place an image or texture (which may be two dimensional or three dimensional) at the tracked position. In still further examples, neural network analysis of video frames may be used to place images, models, or textures in content (e.g., images or frames of video). Augmented reality content items thus refer both to the images, models, and textures used to create transformations in content, as well as to additional modeling and analysis information needed to achieve such transformations with object detection, tracking, and placement. Real-time video processing can be performed with any kind of video data (e.g., video streams, video files, etc.) saved in a memory of a computerized system of any kind.
For example, a user can load video files and save them in a memory of a device, or can generate a video stream using sensors of the device. Additionally, any objects can be processed using a computer animation model, such as a human's face and parts of a human body, animals, or non-living things such as chairs, cars, or other objects. In some examples, when a particular modification is selected along with content to be transformed, elements to be transformed are identified by the computing device, and then detected and tracked if they are present in the frames of the video. The elements of the object are modified according to the request for modification, thus transforming the frames of the video stream. Transformation of frames of a video stream can be performed by different methods for different kinds of transformation. For example, for transformations of frames that mostly refer to changing the forms of an object's elements, characteristic points are calculated for each element of the object (e.g., using an Active Shape Model (ASM) or other known methods). Then, a mesh based on the characteristic points is generated for each of the at least one element of the object. This mesh is used in the following stage of tracking the elements of the object in the video stream. In the process of tracking, the mentioned mesh for each element is aligned with a position of each element. Then, additional points are generated on the mesh. A first set of first points is generated for each element based on a request for modification, and a set of second points is generated for each element based on the set of first points and the request for modification. Then, the frames of the video stream can be transformed by modifying the elements of the object on the basis of the sets of first and second points and the mesh. In such a method, a background of the modified object can be changed or distorted as well by tracking and modifying the background. In some examples, transformations changing some areas of an object using its elements can be performed by calculating characteristic points for each element of an object and generating a mesh based on the calculated characteristic points. Points are generated on the mesh, and then various areas based on the points are generated. The elements of the object are then tracked by aligning the area for each element with a position for each of the at least one element, and properties of the areas can be modified based on the request for modification, thus transforming the frames of the video stream. Depending on the specific request for modification, properties of the mentioned areas can be transformed in different ways. Such modifications may involve changing color of areas; removing at least some part of areas from the frames of the video stream; including one or more new objects into areas which are based on a request for modification; and modifying or distorting the elements of an area or object. In various embodiments, any combination of such modifications or other similar modifications may be used. For certain models to be animated, some characteristic points can be selected as control points to be used in determining the entire state-space of options for the model animation. In some examples of a computer animation model to transform image data using face detection, the face is detected on an image with use of a specific face detection algorithm (e.g., Viola-Jones).
Then, an Active Shape Model (ASM) algorithm is applied to the face region of an image to detect facial feature reference points. In other examples, other methods and algorithms suitable for face detection can be used. For example, in some embodiments, features are located using a landmark, which represents a distinguishable point present in most of the images under consideration. For facial landmarks, for example, the location of the left eye pupil may be used. If an initial landmark is not identifiable (e.g., if a person has an eyepatch), secondary landmarks may be used. Such landmark identification procedures may be used for any such objects. In some examples, a set of landmarks forms a shape. Shapes can be represented as vectors using the coordinates of the points in the shape. One shape is aligned to another with a similarity transform (allowing translation, scaling, and rotation) that minimizes the average Euclidean distance between shape points. The mean shape is the mean of the aligned training shapes. In some examples, a search for landmarks from the mean shape aligned to the position and size of the face determined by a global face detector is started. Such a search then repeats the steps of suggesting a tentative shape by adjusting the locations of shape points by template matching of the image texture around each point and then conforming the tentative shape to a global shape model until convergence occurs. In some systems, individual template matches are unreliable, and the shape model pools the results of the weak template matches to form a stronger overall classifier. The entire search is repeated at each level in an image pyramid, from coarse to fine resolution. A transformation system can capture an image or video stream on a client device (e.g., the client device102) and perform complex image manipulations locally on the client device102while maintaining a suitable user experience, computation time, and power consumption. The complex image manipulations may include size and shape changes, emotion transfers (e.g., changing a face from a frown to a smile), state transfers (e.g., aging a subject, reducing apparent age, changing gender), style transfers, graphical element application, and any other suitable image or video manipulation implemented by a convolutional neural network that has been configured to execute efficiently on the client device102. In some examples, a computer animation model to transform image data can be used by a system where a user may capture an image or video stream of the user (e.g., a selfie) using a client device102having a neural network operating as part of a messaging client application104operating on the client device102. The transformation system operating within the messaging client104determines the presence of a face within the image or video stream and provides modification icons associated with a computer animation model to transform image data, or the computer animation model can be present as associated with an interface described herein. The modification icons include changes that may be the basis for modifying the user's face within the image or video stream as part of the modification operation. Once a modification icon is selected, the transformation system initiates a process to convert the image of the user to reflect the selected modification icon (e.g., generate a smiling face on the user). 
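As an illustration of the landmark shape alignment described above, in which one shape is aligned to another by a similarity transform (translation, scaling, and rotation) that minimizes the average Euclidean distance between shape points, the following Python sketch performs an ordinary Procrustes alignment with NumPy. It is a simplified example under assumed conventions, not the transformation system's actual code.

```python
# Minimal sketch: best-fit similarity transform (Procrustes alignment) mapping
# one set of landmark points onto another.
import numpy as np

def align_shape(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """src, dst: (N, 2) arrays of corresponding landmark coordinates.
    Returns src mapped onto dst by the least-squares similarity transform."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    a, b = src - src_c, dst - dst_c               # remove translation
    # Optimal rotation from the SVD of the cross-covariance matrix.
    u, s, vt = np.linalg.svd(a.T @ b)
    r = u @ vt
    if np.linalg.det(r) < 0:                      # exclude reflections
        vt[-1, :] *= -1
        s[-1] *= -1
        r = u @ vt
    scale = s.sum() / (a ** 2).sum()              # optimal isotropic scale
    return scale * a @ r + dst_c                  # aligned copy of src
```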
A modified image or video stream may be presented in a graphical user interface displayed on the client device102as soon as the image or video stream is captured, and a specified modification is selected. The transformation system may implement a complex convolutional neural network on a portion of the image or video stream to generate and apply the selected modification. That is, the user may capture the image or video stream and be presented with a modified result in real-time or near real-time once a modification icon has been selected. Further, the modification may be persistent while the video stream is being captured, and the selected modification icon remains toggled. Machine taught neural networks may be used to enable such modifications. The graphical user interface, presenting the modification performed by the transformation system, may supply the user with additional interaction options. Such options may be based on the interface used to initiate the content capture and selection of a particular computer animation model (e.g., initiation from a content creator user interface). In various embodiments, a modification may be persistent after an initial selection of a modification icon. The user may toggle the modification on or off by tapping or otherwise selecting the face being modified by the transformation system and store it for later viewing or browse to other areas of the imaging application. Where multiple faces are modified by the transformation system, the user may toggle the modification on or off globally by tapping or selecting a single face modified and displayed within a graphical user interface. In some embodiments, individual faces, among a group of multiple faces, may be individually modified, or such modifications may be individually toggled by tapping or selecting the individual face or a series of individual faces displayed within the graphical user interface. A story table312stores data regarding collections of messages and associated image, video, or audio data, which are compiled into a collection (e.g., a story or a gallery). The creation of a particular collection may be initiated by a particular user (e.g., each user for which a record is maintained in the entity table304). A user may create a “personal story” in the form of a collection of content that has been created and sent/broadcast by that user. To this end, the user interface of the messaging client104may include an icon that is user-selectable to enable a sending user to add specific content to his or her personal story. A collection may also constitute a “live story,” which is a collection of content from multiple users that is created manually, automatically, or using a combination of manual and automatic techniques. For example, a “live story” may constitute a curated stream of user-submitted content from various locations and events. Users whose client devices have location services enabled and are at a common location event at a particular time may, for example, be presented with an option, via a user interface of the messaging client104, to contribute content to a particular live story. The live story may be identified to the user by the messaging client104, based on his or her location. The end result is a “live story” told from a community perspective. 
A further type of content collection is known as a “location story,” which enables a user whose client device102is located within a specific geographic location (e.g., on a college or university campus) to contribute to a particular collection. In some examples, a contribution to a location story may require a second degree of authentication to verify that the end user belongs to a specific organization or other entity (e.g., is a student on the university campus). As mentioned above, the video table314stores video data that, in one example, is associated with messages for which records are maintained within the message table302. Similarly, the image table316stores image data associated with messages for which message data is stored in the entity table304. The entity table304may associate various augmentations from the augmentation table310with various images and videos stored in the image table316and the video table314. In one embodiment, the image table316stores image data associated with an image-type media content item and the comments associated with the image. The video table314stores video data associated with a video-type media content item and the comments associated with the video. Data Communications Architecture FIG.4is a schematic diagram illustrating a structure of a message400, according to some examples, generated by a messaging client104for communication to a further messaging client104or the messaging server114. The content of a particular message400is used to populate the message table302stored within the database120, accessible by the messaging server114. Similarly, the content of a message400is stored in memory as “in-transit” or “in-flight” data of the client device102or the application servers112. A message400is shown to include the following example components:message identifier402: a unique identifier that identifies the message400.message text payload404: text, to be generated by a user via a user interface of the client device102, and that is included in the message400.message image payload406: image data, captured by a camera component of a client device102or retrieved from a memory component of a client device102, and that is included in the message400. Image data for a sent or received message400may be stored in the image table316.message video payload408: video data, captured by a camera component or retrieved from a memory component of the client device102, and that is included in the message400. Video data for a sent or received message400may be stored in the video table314.message audio payload410: audio data, captured by a microphone or retrieved from a memory component of the client device102, and that is included in the message400.message augmentation data412: augmentation data (e.g., filters, stickers, or other annotations or enhancements) that represents augmentations to be applied to message image payload406, message video payload408, or message audio payload410of the message400. Augmentation data for a sent or received message400may be stored in the augmentation table310.message duration parameter414: parameter value indicating, in seconds, the amount of time for which content of the message (e.g., the message image payload406, message video payload408, message audio payload410) is to be presented or made accessible to a user via the messaging client104.message geolocation parameter416: geolocation data (e.g., latitudinal and longitudinal coordinates) associated with the content payload of the message. 
Multiple message geolocation parameter416values may be included in the payload, each of these parameter values being associated with respect to content items included in the content (e.g., a specific image into within the message image payload406, or a specific video in the message video payload408).message story identifier418: identifier values identifying one or more content collections (e.g., “stories” identified in the story table312) with which a particular content item in the message image payload406of the message400is associated. For example, multiple images within the message image payload406may each be associated with multiple content collections using identifier values.message tag420: each message400may be tagged with multiple tags, each of which is indicative of the subject matter of content included in the message payload. For example, where a particular image included in the message image payload406depicts an animal (e.g., a lion), a tag value may be included within the message tag420that is indicative of the relevant animal. Tag values may be generated manually, based on user input, or may be automatically generated using, for example, image recognition.message sender identifier422: an identifier (e.g., a messaging system identifier, email address, or device identifier) indicative of a user of the client device102on which the message400was generated and from which the message400was sent.message receiver identifier424: an identifier (e.g., a messaging system identifier, email address, or device identifier) indicative of a user of the client device102to which the message400is addressed. The contents (e.g., values) of the various components of message400may be pointers to locations in tables within which content data values are stored. For example, an image value in the message image payload406may be a pointer to (or address of) a location within an image table316. Similarly, values within the message video payload408may point to data stored within a video table314, values stored within the message augmentations data412may point to data stored in an augmentation table310, values stored within the message story identifier418may point to data stored in a story table312, and values stored within the message sender identifier422and the message receiver identifier424may point to user records stored within an entity table304. Time-Based Access Limitation Architecture FIG.5is a schematic diagram illustrating an access-limiting process500, in terms of which access to content (e.g., an ephemeral message502, and associated multimedia payload of data) or a content collection (e.g., an ephemeral message group504) may be time-limited (e.g., made ephemeral). An ephemeral message502is shown to be associated with a message duration parameter506, the value of which determines an amount of time that the ephemeral message502will be displayed to a receiving user of the ephemeral message502by the messaging client104. In one example, an ephemeral message502is viewable by a receiving user for up to a maximum of 10 seconds, depending on the amount of time that the sending user specifies using the message duration parameter506. In one embodiment, the ephemeral message502may include a media content item, such as the media content item702as shown inFIG.7. The ephemeral message502may include a comment displayed during the playback of a media content item. 
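Referring back to the components of the message400enumerated above and their pointer-style references into the tables ofFIG.3, one possible in-memory representation is sketched below in Python. The field names are hypothetical and simplified, mirroring the reference numerals rather than any actual schema.

```python
# Minimal sketch (hypothetical schema): a message record whose media-bearing
# fields hold row identifiers into the image, video, augmentation, and story
# tables rather than the content itself.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Message:
    message_id: str                                   # message identifier402
    sender_id: str                                    # message sender identifier422
    receiver_id: str                                  # message receiver identifier424
    text: str = ""                                    # message text payload404
    image_ref: Optional[str] = None                   # row key in image table316
    video_ref: Optional[str] = None                   # row key in video table314
    audio_ref: Optional[str] = None                   # message audio payload410
    augmentation_ref: Optional[str] = None            # row key in augmentation table310
    story_ids: list[str] = field(default_factory=list)    # message story identifier418
    tags: list[str] = field(default_factory=list)          # message tag420
    duration_s: Optional[float] = None                # message duration parameter414
    geolocations: list[tuple[float, float]] = field(default_factory=list)  # parameter416

def resolve_image(message: Message, image_table: dict[str, bytes]) -> Optional[bytes]:
    """Dereference the image pointer, mirroring the pointer semantics described above."""
    return image_table.get(message.image_ref) if message.image_ref else None
```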
The message duration parameter506and the message receiver identifier424are shown to be inputs to a message timer512, which is responsible for determining the amount of time that the ephemeral message502is shown to a particular receiving user identified by the message receiver identifier424. In particular, the ephemeral message502will only be shown to the relevant receiving user for a time period determined by the value of the message duration parameter506. The message timer512is shown to provide output to a more generalized ephemeral timer system202, which is responsible for the overall timing of display of content (e.g., an ephemeral message502) to a receiving user. In one embodiment, the message duration parameter506includes a pre-determined duration of time of media content playback, a pre-defined time duration for a display of a comment during a media content playback. The ephemeral message502is shown inFIG.5to be included within an ephemeral message group504(e.g., a collection of messages in a personal story, or an event story). The ephemeral message group504has an associated group duration parameter508, a value of which determines a time duration for which the ephemeral message group504is presented and accessible to users of the messaging system100. The group duration parameter508, for example, may be the duration of a music concert, where the ephemeral message group504is a collection of content pertaining to that concert. Alternatively, a user (either the owning user or a curator user) may specify the value for the group duration parameter508when performing the setup and creation of the ephemeral message group504. Additionally, each ephemeral message502within the ephemeral message group504has an associated group participation parameter510, a value of which determines the duration of time for which the ephemeral message502will be accessible within the context of the ephemeral message group504. Accordingly, a particular ephemeral message group504may “expire” and become inaccessible within the context of the ephemeral message group504, prior to the ephemeral message group504itself expiring in terms of the group duration parameter508. The group duration parameter508, group participation parameter510, and message receiver identifier424each provide input to a group timer514, which operationally determines, firstly, whether a particular ephemeral message502of the ephemeral message group504will be displayed to a particular receiving user and, if so, for how long. Note that the ephemeral message group504is also aware of the identity of the particular receiving user as a result of the message receiver identifier424. Accordingly, the group timer514operationally controls the overall lifespan of an associated ephemeral message group504, as well as an individual ephemeral message502included in the ephemeral message group504. In one example, each and every ephemeral message502within the ephemeral message group504remains viewable and accessible for a time period specified by the group duration parameter508. In a further example, a certain ephemeral message502may expire, within the context of ephemeral message group504, based on a group participation parameter510. Note that a message duration parameter506may still determine the duration of time for which a particular ephemeral message502is displayed to a receiving user, even within the context of the ephemeral message group504. 
Accordingly, the message duration parameter506determines the duration of time that a particular ephemeral message502is displayed to a receiving user, regardless of whether the receiving user is viewing that ephemeral message502inside or outside the context of an ephemeral message group504. The ephemeral timer system202may furthermore operationally remove a particular ephemeral message502from the ephemeral message group504based on a determination that it has exceeded an associated group participation parameter510. For example, when a sending user has established a group participation parameter510of 24 hours from posting, the ephemeral timer system202will remove the relevant ephemeral message502from the ephemeral message group504after the specified 24 hours. The ephemeral timer system202also operates to remove an ephemeral message group504when either the group participation parameter510for each and every ephemeral message502within the ephemeral message group504has expired, or when the ephemeral message group504itself has expired in terms of the group duration parameter508. In certain use cases, a creator of a particular ephemeral message group504may specify an indefinite group duration parameter508. In this case, the expiration of the group participation parameter510for the last remaining ephemeral message502within the ephemeral message group504will determine when the ephemeral message group504itself expires. In this case, a new ephemeral message502, added to the ephemeral message group504, with a new group participation parameter510, effectively extends the life of an ephemeral message group504to equal the value of the group participation parameter510. Responsive to the ephemeral timer system202determining that an ephemeral message group504has expired (e.g., is no longer accessible), the ephemeral timer system202communicates with the messaging system100(and, for example, specifically the messaging client104) to cause an indicium (e.g., an icon) associated with the relevant ephemeral message group504to no longer be displayed within a user interface of the messaging client104. Similarly, when the ephemeral timer system202determines that the message duration parameter506for a particular ephemeral message502has expired, the ephemeral timer system202causes the messaging client104to no longer display an indicium (e.g., an icon or textual identification) associated with the ephemeral message502. Media Content Playback and Comments Management In one embodiment, a user associated with the first client device102may send a request to the messaging server system108to view a media content item. The messaging server system108determines, based on connections of user profiles in the entity graph306, at least one comment from a connected user profile (e.g., a friend) and the particular time during the playback of the media content item at which that comment was created. FIG.6illustrates a process600of generating a playback of a media content item in accordance with one embodiment. The operations of process600may be performed by any number of different systems, such as the messaging server114or the messaging client104described herein, or any portion thereof, such as a processor included in any of the systems. At operation602, the processor receives a request from a client device102to view a playback of a media content item. The media content item can be images, pictures, videos, text, or any combination thereof.FIG.7illustrates a user interface700that can be displayed on the first client device102.
The media content item702as shown inFIG.7is an image or a video of a person. The user interface700also includes a summary comments selectable item704, a playback progress bar706, and a plurality of timeline markers708. The creator name item718indicates the entity (e.g., XYZ) that created the media content item702. In one embodiment, the entity XYZ is either a commercial content creator or a designated user. The client device102is associated with a user. The user is associated with a viewer profile in the entity graph306. The media content item is associated with a set of comments represented by the summary comments selectable item704. The media content item702has a duration of time for playback. The duration of time is represented by the playback progress bar706. The duration of playback for image-type media content may be five seconds, for example. The duration for video-type media content is the length of the video. Each comment associated with a media content item is associated with a user profile and a timestamp representing the particular time within the duration of playback at which the comment was created. The plurality of timeline markers708represent the points in time at which the comments included in the summary comments selectable item704were created. A user profile may be associated with an entity identifier representing a user. The viewer profile of the user is associated with a set of connected profiles representing connected users in the entity graph306inFIG.3. In one embodiment, the connections between the viewer profile and the connected profiles are bilateral connections (e.g., friendship connections), such that the summary comments selectable item704is generated only based on comments that were created by a friend or friends of the user who requests to view the media content item702. At operation604, the processor determines at least one comment associated with a respective user profile from the set of connected profiles, the at least one comment being associated with a particular time within the duration of time at which the at least one comment was created. For example, the first user created the first comment at the temporal position represented by the timeline marker708. When a requesting user views the playback of the media content item702, the first comment is displayed only at the timeline marker708during the playback of the media content item702. In one embodiment, the temporal position at the timeline marker708is identified by a timestamp associated with the comment stored in the video table314or the image table316in database120. In one embodiment, the display of a particular comment lasts for a pre-defined time duration, or up to the display of the immediately following comment, provided that the following comment was created before the pre-defined time duration elapses. In one embodiment, in response to detecting a user selection of the comments creation button712as shown in user interface700, the processor may cause a display of the text input item714for the user to enter a comment for the media content item702. After the user inputs a comment in the text input item714, the user may post or upload the comment to the messaging server system108by selecting the post button716. The comment can be added to the comments display802as shown inFIG.8. In one embodiment, the user may input the text of a user's name associated with the user profile as a portion of a comment or as an entire comment.
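The display timing described above, in which each friend comment appears at its timeline marker and remains visible for a pre-defined duration or until the next comment appears, whichever occurs first, can be summarized in the following Python sketch. The identifiers are hypothetical and the behavior is a simplified assumption rather than a definitive implementation.

```python
# Minimal sketch (assumed behavior): compute the display window of each friend
# comment along the playback progress bar.
from dataclasses import dataclass

@dataclass(frozen=True)
class TimedComment:
    author: str
    text: str
    timestamp_s: float                    # temporal position (timeline marker)

def display_windows(comments: list[TimedComment],
                    playback_len_s: float,
                    display_s: float = 3.0) -> list[tuple[float, float, TimedComment]]:
    """Return (start, end, comment) intervals for the playback progress bar."""
    ordered = sorted(comments, key=lambda c: c.timestamp_s)
    windows = []
    for i, c in enumerate(ordered):
        start = c.timestamp_s
        next_start = ordered[i + 1].timestamp_s if i + 1 < len(ordered) else playback_len_s
        # A comment is hidden after display_s seconds or when the next comment
        # begins, whichever comes first, and never past the end of playback.
        end = min(start + display_s, next_start, playback_len_s)
        windows.append((start, end, c))
    return windows
```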
The processor may generate a notification on the client device102associated with the user being mentioned. At operation606, the processor generates a summary comments selectable item704in the user interface700based at least in part on the user profile of the first user. The summary comments selectable item704includes at least one profile icon or avatar corresponding to the connected user profiles. As shown inFIG.7, the summary comments selectable item704includes two profile icons710; each profile icon includes an avatar of a user who created a comment for the media content item702. A profile icon can be an image of a silhouette of the user. The summary comments selectable item704, once selected by a user, is expandable into a comments display802as shown inFIG.8. In one embodiment, if the summary comments selectable item704is generated based on more than a threshold number of comments, the comments display802only displays the threshold number of comments at a time. The user may interact with the user interface800using hand gestures (e.g., scrolling up and down) to locate additional comments not shown in the comments display802. The threshold number of comments may be determined based on a plurality of factors, including the allowable length of each comment, the number of comments, etc. At operation608, in response to the request from the user to view the media content item702, the processor causes a display of playback of the media content item702and a display of the summary comments selectable item704in the user interface700. In one embodiment, prior to the display of the playback of the media content item702and the display of the summary comments selectable item704, the user interface700may briefly display (e.g., for two seconds) at least one profile icon above each corresponding timeline marker708to indicate which user has created comments at which temporal position. At operation610, the processor causes a display of the at least one comment at the particular time or temporal position represented by the timeline marker708during the playback of the media content item702. As shown inFIG.7, if there is more than one comment during the playback, each comment is associated with a timeline marker708distributed based on the temporal positions on the progress bar706. The comments are displayed in chronological order as they were created during the playback of the media content item702. At operation612, the processor receives a selection of the first comment from the at least one comment. In one embodiment, the selection of a comment can be made by choosing a comment in the comments display802, or by choosing a comment chronologically displayed during the playback of the media content item702. For example, as shown inFIG.8, a user may select the first comment804to activate a private messaging interface to engage in a private conversation with the creator of the first comment804, e.g., Bella. At operation614, the processor causes a display of a private messaging user interface, such as the user interface900as shown inFIG.9. The private messaging user interface900may include a quotation comments selectable item902corresponding to the selected first comment804and the media content item702. The private messaging user interface900is a one-on-one conversation user interface with the creator of the selected comment804.
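To make operations606through610concrete, the following Python sketch builds the data behind a summary comments selectable item, with one profile icon per commenting friend, and pages the expanded comments display by a threshold number of comments at a time. The names and the default threshold are hypothetical illustrations, not the described system's values.

```python
# Minimal sketch (hypothetical names): one icon per friend who commented, plus
# paging of the expanded comments display by a threshold.
from dataclasses import dataclass

@dataclass(frozen=True)
class FriendComment:
    author: str
    text: str
    timestamp_s: float

@dataclass
class SummaryItem:
    profile_icons: list[str]           # avatar references of commenting friends
    comment_count: int

def build_summary_item(comments: list[FriendComment],
                       avatar_of: dict[str, str]) -> SummaryItem:
    seen: set[str] = set()
    icons: list[str] = []
    for c in sorted(comments, key=lambda c: c.timestamp_s):
        if c.author not in seen:                   # one icon per commenting friend
            seen.add(c.author)
            icons.append(avatar_of.get(c.author, "silhouette.png"))
    return SummaryItem(profile_icons=icons, comment_count=len(comments))

def comments_page(comments: list[FriendComment],
                  page: int, threshold: int = 20) -> list[FriendComment]:
    """The expanded comments display shows only `threshold` comments at a time;
    scrolling requests the next page."""
    ordered = sorted(comments, key=lambda c: c.timestamp_s)
    return ordered[page * threshold:(page + 1) * threshold]
```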
As shown inFIG.8, the creator of the comment is Bella, also indicated by a text display906“replying to Bella's comment.” The viewing user with whom Bella engages in a private conversation is “Evan,” as indicated by the name item910. In this way, because all replies to comments are conducted in a private conversation with the comment creator, the messaging server system108may avoid generating a complete comment thread for display that may inadvertently cause emotional distress, such as embarrassment, to certain users. In one embodiment, the processor may generate a user interface1000as shown inFIG.10. The user interface1000includes a summary comments display1002, a text display1004, and a plurality of media content items1006,1008, and1010. Each media content item is associated with a collection of profile icons that are associated with comments created by users who are connected to the viewing user. In one embodiment, the profile icons are displayed in an order based on a ranking of a score of relationship affinity. For example, among the bilaterally connected users (e.g., friends), the viewing user may identify certain users as “close friends.” The identified users may be associated with an affinity identifier in the respective user profile that indicates a score of relationship affinity. The affinity identifier can be stored in the entity graph306. The scores of relationship affinity may also be determined by other factors, such as the number of messages and the associated content exchanged between users, the amount of time of audio or video communication, the number of media content items exchanged or referenced between users, or other interactive activities conducted between users on the respective client device102. The processor ranks the scores of the relationship affinity associated with each connected user who created comments. A profile icon of the connected user with the highest score may be displayed in the first position in the summary comments display1002or in the summary comments selectable item704. In one embodiment, as shown inFIG.11, an “in-app” notification1102may be displayed when a viewing user receives comments from connected users who made comments on a media content item. The notification1102includes a profile icon and the name of the connected user, and a title or caption of the media content item. In one embodiment, if the processor determines there are multiple comments created by multiple connected users, the notification1102may only display the profile icon of the connected user with the highest score of relationship affinity. In one embodiment, the notification1102displays a number of connected users who created comments associated with a media content item. For example, as shown in user interface1100, the number of connected users or friends associated with the media content item titled “What is it like to adopt puppies” is three, including “Amy” and two other connected users indicated by the text display “+2.” In one embodiment, for a media content item, the processor determines whether a pre-determined time period has elapsed since a previous notification associated with the media content item was generated on the client device102, the previous notification corresponding to a second comment. Upon detecting that the pre-determined time period has elapsed, the processor generates a subsequent notification on the client device in response to receiving a third comment for the media content item. The pre-determined time period represents a cool-down period, such as three hours.
The second comment is associated with a second user profile from the set of connected profiles, and the third comment is associated with a third user profile from the set of connected profiles. In one embodiment, if the second comment and the third comment are both generated by the same user from the set of the connected profiles, and this same user is determined to be associated with an affinity identifier corresponding to a high score, the processor may generate the subsequent notification before the pre-determined time period has elapsed. FIG.7illustrates a user interface700displayed on a client device in accordance with one embodiment. The user interface700is caused to be displayed on a client device102when a user requests to view a playback of a media content item702. A user may select the summary comments selectable item704to activate the comments display802inFIG.8. A user may create comments by selecting the comments creation button712, input comments in the text input item714, and post or upload the comments by selecting the post button716. The uploaded comments will be subsequently displayed in the comments display802. The user interface700includes a progress bar706and a plurality of timeline markers708. The progress bar706represents the duration of time for playback of the media content item702. The plurality of timeline markers708represent the points in time when the comments included by the summary comments selectable item704were created. FIG.8illustrates a user interface800displayed on a client device in accordance with one embodiment. The user interface800includes the comments display802. A user may view all comments referred to by the summary comments selectable item704. The user may also select any of the comments, such as the first comment804, to activate a private messaging user interface900, and engage in a private conversation with the creator of the selected comment. FIG.9illustrates a user interface900displayed on a client device in accordance with one embodiment. The user interface900is a private messaging user interface. In response to detecting a user selection of the first comment804, the user may respond individually to the creator (e.g., Bella) of the first comment. The private messaging user interface900may include a quotation comments selectable item902corresponding to the selected first comment804and the media content item702. The quotation comments selectable item902includes a quotation of the selected first comment804, and a pictorial overview of the media content item702. The quotation comments selectable item902is associated with an HTTP link that may direct the user back to the user interface700once the link is activated by a user selection of item902. FIG.10illustrates a user interface1000displayed on a client device in accordance with one embodiment. The user interface1000includes a summary comments display1002, a text display1004, and a plurality of media content items1006,1008, and1010. Each media content item is associated with a collection of profile icons that are associated with comments created by users who are connected to the viewing user. In an embodiment, each media content item is embedded with an HTTP link that, once activated upon user selection, may direct the user to a media content playback user interface, such as the user interface700as shown inFIG.7. FIG.11illustrates a user interface1100displayed on a client device in accordance with one embodiment. The user interface1100includes an “in-app” notification1102.
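As a brief aside before continuing with the notification1102, the distribution of the timeline markers708along the progress bar706described above for FIG.7can be sketched as a simple proportional mapping. The pixel width and the rounding are assumptions made for the example.

```python
# Hypothetical proportional placement of timeline markers along the progress bar;
# the bar width in pixels and the rounding are assumptions.

def timeline_marker_positions(comment_timestamps_s, media_duration_s, bar_width_px):
    """Map each comment's temporal position to an x offset on the progress bar."""
    positions = []
    for t in sorted(comment_timestamps_s):
        t_clamped = max(0.0, min(t, media_duration_s))
        fraction = t_clamped / media_duration_s if media_duration_s else 0.0
        positions.append(round(fraction * bar_width_px))
    return positions

# Example: three comments on a 60-second item rendered on a 300-pixel bar
print(timeline_marker_positions([5.0, 30.0, 58.0], 60.0, 300))  # [25, 150, 290]
```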
The notification1102includes a profile icon and the name of the connected user, the name of the media content creator, and a title or caption of the media content item. In an embodiment, the notification1102is embedded with an HTTP link that, once activated upon user selection, may direct the user to a media content playback user interface associated with the media content item referred to in the notification, such as the media content item1104created by content creator “DEF” as shown inFIG.11. Machine Architecture FIG.12is a diagrammatic representation of the machine1200within which instructions1208(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine1200to perform any one or more of the methodologies discussed herein may be executed. For example, the instructions1208may cause the machine1200to execute any one or more of the methods described herein. The instructions1208transform the general, non-programmed machine1200into a particular machine1200programmed to carry out the described and illustrated functions in the manner described. The machine1200may operate as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine1200may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine1200may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smartphone, a mobile device, a wearable device (e.g., a smartwatch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions1208, sequentially or otherwise, that specify actions to be taken by the machine1200. Further, while only a single machine1200is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions1208to perform any one or more of the methodologies discussed herein. The machine1200, for example, may comprise the client device102or any one of a number of server devices forming part of the messaging server system108. In some examples, the machine1200may also comprise both client and server systems, with certain operations of a particular method or algorithm being performed on the server-side and with certain operations of the particular method or algorithm being performed on the client-side. The machine1200may include processors1202, memory1204, and input/output I/O components1238, which may be configured to communicate with each other via a bus1240. In an example, the processors1202(e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) Processor, a Complex Instruction Set Computing (CISC) Processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor1206and a processor1210that execute the instructions1208.
The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. AlthoughFIG.12shows multiple processors1202, the machine1200may include a single processor with a single-core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. The memory1204includes a main memory1212, a static memory1214, and a storage unit1216, each accessible to the processors1202via the bus1240. The main memory1212, the static memory1214, and storage unit1216store the instructions1208embodying any one or more of the methodologies or functions described herein. The instructions1208may also reside, completely or partially, within the main memory1212, within the static memory1214, within machine-readable medium1218within the storage unit1216, within at least one of the processors1202(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine1200. The I/O components1238may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components1238that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones may include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components1238may include many other components that are not shown inFIG.12. In various examples, the I/O components1238may include user output components1224and user input components1226. The user output components1224may include visual components (e.g., a display such as a plasma display panel (PDP), a light-emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The user input components1226may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In further examples, the I/O components1238may include biometric components1228, motion components1230, environmental components1232, or position components1234, among a wide array of other components. For example, the biometric components1228include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye-tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like.
The motion components1230include acceleration sensor components (e.g., accelerometer), gravitation sensor components, and rotation sensor components (e.g., gyroscope). The environmental components1232include, for example, one or more cameras (with still image/photograph and video capabilities), illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. With respect to cameras, the client device102may have a camera system comprising, for example, front cameras on a front surface of the client device102and rear cameras on a rear surface of the client device102. The front cameras may, for example, be used to capture still images and video of a user of the client device102(e.g., “selfies”), which may then be augmented with augmentation data (e.g., filters) described above. The rear cameras may, for example, be used to capture still images and videos in a more traditional camera mode, with these images similarly being augmented with augmentation data. In addition to front and rear cameras, the client device102may also include a 360° camera for capturing 360° photographs and videos. Further, the camera system of a client device102may include dual rear cameras (e.g., a primary camera as well as a depth-sensing camera), or even triple, quad or penta rear camera configurations on the front and rear sides of the client device102. These multiple camera systems may include a wide camera, an ultra-wide camera, a telephoto camera, a macro camera and a depth sensor, for example. The position components1234include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components1238further include communication components1236operable to couple the machine1200to a network1220or devices1222via respective couplings or connections. For example, the communication components1236may include a network interface component or another suitable device to interface with the network1220. In further examples, the communication components1236may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices1222may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). Moreover, the communication components1236may detect identifiers or include components operable to detect identifiers.
For example, the communication components1236may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components1236, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth. The various memories (e.g., main memory1212, static memory1214, and memory of the processors1202) and storage unit1216may store one or more sets of instructions and data structures (e.g., software) embodying or used by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions1208), when executed by processors1202, cause various operations to implement the disclosed examples. The instructions1208may be transmitted or received over the network1220, using a transmission medium, via a network interface device (e.g., a network interface component included in the communication components1236) and using any one of several well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions1208may be transmitted or received using a transmission medium via a coupling (e.g., a peer-to-peer coupling) to the devices1222. Software Architecture FIG.13is a block diagram1300illustrating a software architecture1304, which can be installed on any one or more of the devices described herein. The software architecture1304is supported by hardware such as a machine1302that includes processors1320, memory1326, and I/O components1338. In this example, the software architecture1304can be conceptualized as a stack of layers, where each layer provides a particular functionality. The software architecture1304includes layers such as an operating system1312, libraries1310, frameworks1308, and applications1306. Operationally, the applications1306invoke API calls1350through the software stack and receive messages1352in response to the API calls1350. The operating system1312manages hardware resources and provides common services. The operating system1312includes, for example, a kernel1314, services1316, and drivers1322. The kernel1314acts as an abstraction layer between the hardware and the other software layers. For example, the kernel1314provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services1316can provide other common services for the other software layers. The drivers1322are responsible for controlling or interfacing with the underlying hardware. For instance, the drivers1322can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., USB drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth. The libraries1310provide a common low-level infrastructure used by the applications1306. 
The libraries1310can include system libraries1318(e.g., C standard library) that provide functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like. In addition, the libraries1310can include API libraries1324such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render in two dimensions (2D) and three dimensions (3D) in a graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries1310can also include a wide variety of other libraries1328to provide many other APIs to the applications1306. The frameworks1308provide a common high-level infrastructure that is used by the applications1306. For example, the frameworks1308provide various graphical user interface (GUI) functions, high-level resource management, and high-level location services. The frameworks1308can provide a broad spectrum of other APIs that can be used by the applications1306, some of which may be specific to a particular operating system or platform. In an example, the applications1306may include a home application1336, a contacts application1330, a browser application1332, a book reader application1334, a location application1342, a media application1344, a messaging application1346, a game application1348, and a broad assortment of other applications such as a third-party application1340. The applications1306are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications1306, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, the third-party application1340(e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application1340can invoke the API calls1350provided by the operating system1312to facilitate functionality described herein. Processing Components Turning now toFIG.14, there is shown a diagrammatic representation of a processing environment1400, which includes a processor1402, a processor1406, and a processor1408(e.g., a GPU, CPU or combination thereof). The processor1402is shown to be coupled to a power source1404, and to include (either permanently configured or temporarily instantiated) modules, namely a collection management component1410and an ephemeral timer component1412. The collection management component1410operationally generates media content items and comments, manages the playback of the media content items and the display of the associated comments, and generates private messaging user interfaces in response to detecting a user selection of comments.
The ephemeral timer component1412operationally manages the pre-determined duration of time of media content playback, and a pre-defined time duration for a display of a comment during a media content playback. As illustrated, the processor1402is communicatively coupled to both the processor1406and the processor1408. Glossary “Carrier signal” refers to any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such instructions. Instructions may be transmitted or received over a network using a transmission medium via a network interface device. “Client device” refers to any machine that interfaces to a communications network to obtain resources from one or more server systems or other client devices. A client device may be, but is not limited to, a mobile phone, desktop computer, laptop, portable digital assistants (PDAs), smartphones, tablets, ultrabooks, netbooks, laptops, multi-processor systems, microprocessor-based or programmable consumer electronics, game consoles, set-top boxes, or any other communication device that a user may use to access a network. “Communication network” refers to one or more portions of a network that may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, a network or a portion of a network may include a wireless or cellular network and the coupling may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other types of cellular or wireless coupling. In this example, the coupling may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology. “Component” refers to a device, physical entity, or logic having boundaries defined by function or subroutine calls, branch points, APIs, or other technologies that provide for the partitioning or modularization of particular processing or control functions. Components may be combined via their interfaces with other components to carry out a machine process. A component may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components. 
A “hardware component” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various example embodiments, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware component that operates to perform certain operations as described herein. A hardware component may also be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware component may be a special-purpose processor, such as a field-programmable gate array (FPGA) or an application specific integrated circuit (ASIC). A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware component may include software executed by a general-purpose processor or other programmable processor. Once configured by such software, hardware components become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. It will be appreciated that the decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software), may be driven by cost and time considerations. Accordingly, the phrase “hardware component” (or “hardware-implemented component”) should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one instance in time. For example, where a hardware component comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware components) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware component at one instance of time and to constitute a different hardware component at a different instance of time. Hardware components can provide information to, and receive information from, other hardware components. Accordingly, the described hardware components may be regarded as being communicatively coupled. Where multiple hardware components exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware components. In embodiments in which multiple hardware components are configured or instantiated at different times, communications between such hardware components may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware components have access. 
For example, one hardware component may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented components that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented component” refers to a hardware component implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors1406or processor-implemented components. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an API). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the processors or processor-implemented components may be distributed across a number of geographic locations. “Computer-readable storage medium” refers to both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals. The terms “machine-readable medium,” “computer-readable medium” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. “Ephemeral message” refers to a message that is accessible for a time-limited duration. An ephemeral message may be a text, an image, a video and the like. The access time for the ephemeral message may be set by the message sender. Alternatively, the access time may be a default setting or a setting specified by the recipient. Regardless of the setting technique, the message is transitory. “Machine storage medium” refers to a single or multiple storage devices and media (e.g., a centralized or distributed database, and associated caches and servers) that store executable instructions, routines and data. The term shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. 
Specific examples of machine-storage media, computer-storage media and device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage medium,” “device-storage medium,” and “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium.” “Non-transitory computer-readable storage medium” refers to a tangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine. “Signal medium” refers to any intangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine and includes digital or analog communications signals or other intangible media to facilitate communication of software or data. The term “signal medium” shall be taken to include any form of a modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure.
11863514
DETAILED DESCRIPTION In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed. Some embodiments of the invention provide a method of load balancing data message flows across multiple secure connections (e.g., multiple IPsec security associations (SAs)), each of which handles a first set of connections formatted according to a first protocol (e.g., IPv4) and a second set of connections formatted according to a second protocol (e.g., IPv6). When a data message formatted according to either of the protocols is received and identified for secure transmission, the method selects one of the multiple secure connections (e.g., using a load balancing technique), securely encapsulates the data message, and forwards the encapsulated data message onto a network towards its destination. The encapsulation, in some embodiments, includes an identifier for the selected secure connection (e.g., a security parameter index (SPI)). In some embodiments, the method is performed by a first gateway device (also referred to herein as the initiator) that is local to a source machine from which the data message originated. Before receiving the data message, in some embodiments, the first gateway device and a second gateway device (also referred to herein as the responder) that is local to a destination machine of the data message engage in an Internet key exchange (IKE) session. During the IKE session, a group object (e.g., an SA group) that points to the multiple secure connections is created at the first gateway device. Additionally, the multiple secure connections are grouped, and a mixed mode is enabled for each of these secure connections such that each secure connection securely encapsulates data messages of both first and second traffic types associated with the first and second protocols. During this negotiation, some embodiments determine whether network address translation traversal (NAT-T) should be enabled (e.g., based on whether a network address and port translation device is identified within the path between the first and second gateway device). In some embodiments, the first gateway device enables the mixed mode for the secure connections when NAT-T is enabled. An SA is the establishment of shared security attributes between two network entities (e.g., between a pair of gateways of different datacenters, or between two network endpoints) to support secure communication (e.g., a virtual private network (VPN) connection/tunnel). An SA may correspond to a one-way or simplex connection. An SA may include attributes such as cryptographic algorithm and mode, traffic encryption key, and parameters for the network data to be passed over the connection. An SA is a form of contract between the two network entities detailing how to exchange and protect information among each other, including indicating how to encrypt/decrypt data. Each SA may include a mutually agreed-upon key, one or more secure protocols, and an SPI value identifying the SA, among other data. FIG.1conceptually illustrates an IKE session of some embodiments between an initiator at a first datacenter and a responder at a second datacenter for establishing secure communications between the datacenters. 
In some embodiments, the IKE session100is used for establishing a virtual private network (VPN) session between the first and second datacenters120and125. The initiator110and responder115are gateway devices, according to some embodiments. During the IKE session, the initiator110and responder115establish an IPsec tunnel150using the IKE protocol. The IPsec tunnel150is then used by the initiator110and responder115to negotiate encryption, authentication, and other protocols and other parameters (e.g., SAs), as illustrated by the control messages140and145. The network105, in some embodiments, is implemented by an underlying physical infrastructure of wired and/or wireless communications mediums, routers, switches, etc., and, in some embodiments, may include the Internet, as well as any direct connections between the initiator110and responder115. In some embodiments, the direct connections may refer to interconnections between network endpoints within a same datacenter and/or a same physical device, or other proprietary network connection interconnecting the initiator110and responder115. During the IKE session between the initiator110and responder115, an SA group object (not shown) that points to multiple SAs is created within the initiator110. As mentioned above, the negotiations between the initiator110and responder115include negotiations regarding parameters, such as the SAs. As a result of these SA negotiations during the IKE session, the multiple SAs are grouped, and mixed-mode is enabled such that each SA securely encapsulates data messages associated with the IPv4 and IPv6 protocols. In some embodiments, the grouping type for the multiple SAs is defined as an equal-cost multipath (ECMP) type grouping. As mentioned, during this negotiation, some embodiments determine whether NAT-T should be enabled based on whether a network address and port translation (NAPT) device is identified within the path between the first and second gateway device. If a NAPT device is identified, then NAT-T should be enabled, which means that the source and destination ports of the encapsulating UDP header of the securely encapsulated data messages will always have the same value (e.g., 4500). This prevents the use of the source port as an entropy field, thereby preventing any load balancing of the securely encapsulated data messages from using this source port field. Thus, in some embodiments, the first gateway device enables mixed mode SAs and uses the SA group object when NAT-T is enabled so that identifiers for the different SAs can be used for this load balancing (described further below). FIG.2conceptually illustrates a VPN session200, in some embodiments, between the initiator110and responder115to send data across the network105using multiple paths in multiple uplinks or tunnels. In this example, the VPN session200uses two SAs to send data across the network105. The SA1is used to encrypt and authenticate IPsec data for a VPN tunnel (or uplink)230, which is associated with a source IP 10.10.10.1 and a destination IP 20.20.20.2, and the SA2is used to encrypt and authenticate IPsec data for a VPN tunnel (or uplink)235, which is associated with a source IP 10.10.11.1 and the destination IP 20.20.20.2. Specifically, any flows communicated from endpoints in the datacenter120to endpoints in the datacenter125may be encrypted at the first datacenter using SA1and sent over the VPN tunnel230, such as the flows240, or using SA2and sent over the VPN tunnel235, such as the flows245. 
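A hypothetical data-structure sketch of the grouped, mixed-mode SAs and the SA group object negotiated during the IKE session described above follows. The Python representation, field names, and the handling of key material are illustrative assumptions; the embodiments do not prescribe any particular implementation.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical representation of an SA and the SA group object; the field
# names, the ECMP grouping label, and the mixed-mode handling are assumptions.

@dataclass
class SecurityAssociation:
    spi: int                  # security parameter index identifying this SA
    local_ip: str             # e.g., "10.10.10.1"
    peer_ip: str              # e.g., "20.20.20.2"
    encryption_key: bytes     # negotiated key material (placeholder)
    mixed_mode: bool = False  # carries both IPv4 and IPv6 inner traffic

@dataclass
class SAGroup:
    grouping_type: str                       # e.g., "ECMP"
    nat_t_enabled: bool                      # NAPT device detected on the path
    members: List[SecurityAssociation] = field(default_factory=list)

def negotiate_sa_group(nat_t_detected: bool,
                       sas: List[SecurityAssociation]) -> SAGroup:
    """Group the negotiated SAs; enable mixed mode when NAT-T is enabled."""
    if nat_t_detected:
        for sa in sas:
            sa.mixed_mode = True
    return SAGroup(grouping_type="ECMP", nat_t_enabled=nat_t_detected, members=sas)
```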
Each of these VPN tunnels is a specific path through the network to which the SAs are pinned, represented by dashed lines between the initiator110and responder115. As shown, additional paths255are available, to which neither of the SAs is pinned. In some embodiments, the gateway110is configured with a virtual tunnel interface (VTI) to handle data traffic to and from a VPN tunnel. A VTI is a logical routing layer interface configured at an end of a VPN tunnel to support route-based VPN with IPsec profiles attached to the end of the tunnel. Egressing traffic from the VTI is encrypted and sent to the VPN peer, and the SA associated with the tunnel decrypts the ingress traffic to the VTI. In the embodiments described herein, the VTI is a dual-stack VTI that supports both IPv4 and IPv6 traffic, and each of the SAs also supports both IPv4 and IPv6 traffic. In some embodiments, one single VTI is configured at the source gateway for a bundle of multiple different SAs. The destination gateway is similarly configured with a single corresponding VTI for the bundle of different SAs. Each SA has a different SPI value associated therewith, and the tuples of header values of packets communicated across the different VPN tunnels may hash to different CPUs at the destination gateway for processing, as will be described further below. FIG.3illustrates a block diagram of a system300that is implemented in a gateway or edge appliance of a datacenter, in some embodiments, such as the initiator gateway110. The system300may be implemented by a bare metal computing device or a host machine running virtualization software that operates the gateway in one or more virtual machines. In some embodiments, the system300represents the VPN control plane. Also, in some embodiments, the system300is utilized by the initiator (i.e., source) of a communications session, while in other embodiments, both the initiator and responder (i.e., destination) utilize the system300. As illustrated, the system300implements an IKE-control stack310and IPsec tunnels datapath350. In some embodiments, the IKE-control stack310is a submodule of the VPN control plane, while the IPsec tunnels datapath350represents the VPN dataplane. In some embodiments, the modules310and350are modules of software instructions being executed by one or more processing units (e.g., a processor) of a computing device. In some embodiments, the modules310and350are modules of hardware circuits implemented by one or more integrated circuits (ICs) of an electronic apparatus. Though the modules310and350are illustrated as being separate modules, some of the modules can be combined into a single module. The IKE control stack310controls the operations of IPsec, including establishing and maintaining VPN sessions and SAs. The IKE control stack provides the necessary key data to the IPsec tunnels datapath350for authenticating and encrypting payloads (e.g., SA information, SA group object information, and port information for encapsulation). The IPsec tunnels datapath350performs the operations of the individual VPN tunnels, in some embodiments, and is responsible for path selection. In some embodiments, the IPsec tunnels datapath350may include various VPN data plane modules. The IPsec tunnels datapath350also performs encryption and authentication of the payload based on the SA information provided by the IKE control stack310and on the SA selections performed by the SA group object360of the IPsec tunnels datapath350.
The IPsec tunnels datapath also encapsulates the encrypted payload in a UDP header, according to some embodiments. When an application uses the gateway to send certain application data in a VPN session, the IPsec tunnels datapath350receives the application data at the dual-stack routing interface VTI355. The application data is then packaged as an inner packet365. The dual-stack VTI355calculates a hash value using a five-tuple identifier based on the inner packet365. An SA group object360created during an initial IKE session (e.g., IKE session100) then performs a load balancing operation based on the calculated hash value to select an SA for the data message. An encryption module370encrypts the inner packet into an IPsec encrypted packet375according to the encryption parameters of the SA information provided by the IKE control stack310and associated with the SA selected by the SA group object360. The encryption module370also appends other IPsec related fields based on the SA information (e.g., ESP header (encapsulating security payload header), ESP trailer, ESP authentication, new IP, etc.). An encapsulation module380encapsulates the IPsec encrypted packet375as UDP encapsulated packet385with a UDP encapsulation header, which may include an SPI associated with the selected SA. A data plane routing module390then sends the UDP encapsulated packet385. FIG.4illustrates a process for processing data messages at a gateway for which mixed-mode, grouped SAs have been enabled, in some embodiments. The process400is performed, in some embodiments, by an initiator of a secure communications session following IKE negotiations between the initiator and a responder (e.g., following the IKE session100). The process400will be described below with reference toFIG.5, which conceptually illustrates a more detailed work flow within an initiator (e.g., gateway device) of a secure communications session (e.g., a VPN session), according to some embodiments. The process400starts by receiving (at410) a data message. The data message, in some embodiments, has source and destination addresses formatted according to a first or second protocol. In some embodiments, the first protocol is IPv4 and the second protocol is IPv6. The protocol, in some embodiments, depends on the intervening network. The process identifies (at420) the data message for secure encapsulation based on the appropriate forwarding table for the data message's protocol. For example, the work-flow diagram500illustrates a dual-stack VTI515that receives IPv4 data messages505according to IPv4 routing entries520of an IPv4 forwarding table, and receives IPv6 data messages510according to IPv6 routing entries525of an IPv6 forwarding table. The dual-stack VTI, in some embodiments, is associated with the multiple secure connections and points to the SA group object. The process calculates (at430) a hash value based on the data message's header fields. In some embodiments, the dual-stack VTI is responsible for calculating the hash value. The dual-stack VTI, in some embodiments, calculates the hash value using a five-tuple identifier (i.e., source and destination IP addresses, source and destination port addresses, and protocol) identified from the data message's header fields. Based on the calculated hash value, the process selects (at440) one of the SAs from the multiple mixed-mode, grouped SAs for the data message. For instance, the SA group object530uses the five-tuple hash value535to select one of the mixed-mode SAs540and545. 
In some embodiments, using the five-tuple hash value535allows the SA group object530to load balance across the multiple mixed-mode SAs540and545to select one for the data message. As a result, data messages are evenly distributed between the mixed-mode SAs, in some embodiments. It should be noted that in some embodiments, typically only the first data message in a data message flow requires full processing of the data message and a lookup of the mixed-mode SAs (referred to as slow path processing). This result can be cached and used for (fast path) processing of subsequent data messages in the data message flow in some embodiments. The process securely encapsulates (at450) the data message with an SPI for the selected SA. During fast path processing, when the data message belongs to a flow that includes data messages that have already been processed, the SPI is retrieved from the cached results associated with the data message flow. Each of the mixed-mode SAs encapsulates data messages using network addresses formatted according to the first protocol, according to some embodiments. In the work-flow diagram500, the securely encapsulated data message550is illustrated as having an outer IP header and an ESP header that includes an identifier SPI-1 indicating its association with the SA540, while the inner packet can be either a v4 or v6 inner packet. Similarly, the data message555is illustrated as having an outer IP header and an ESP header that includes an identifier SPI-2 indicating its association with the SA545, while the inner packet can also be either a v4 or v6 inner packet. The process then forwards (at460) the encapsulated data message onto a network (e.g., to an identified next hop) for delivery to its destination, then ends. For instance, the initiator gateway110in the VPN session200described above can forward encapsulated data messages onto the network105for delivery to the responder gateway115via either the tunnel230or235, depending on which SA has been selected for the data messages. Additionally, the work-flow diagram500illustrates a route entry for the outer destination IP570from a forwarding table used to forward data messages onto a network. Based on the route entry570, the data message is forwarded on the network via one of the virtual network interfaces (VNICs)560and565. In some embodiments, before forwarding the data message to the next hop, the process also performs next hop selection (i.e., selection of an output interface). Some embodiments determine whether NAT-T is turned on in IPsec (e.g., whether the data message has UDP source and destination ports both set to 4500). If NAT-T is in use, then the UDP header is skipped and load balancing is performed using the SPI. In some embodiments, the gateway processes multiple data messages using either IPv4 or IPv6 network addresses because machines (either executing on the gateway device or behind the gateway device) use a combination of IPv4 and IPv6 addresses. As a result of the load balancing operation performed by the SA group object, which does not depend on whether the inner packet is IPv4 or IPv6, the totality of data messages associated with the first and second protocols (i.e., using either IPv4 or IPv6 network addresses) processed by the gateway are evenly distributed between the multiple mixed-mode SAs, according to some embodiments, which in some embodiments also leads to even distribution across paths between the initiator and responder.
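A minimal sketch of the slow-path/fast-path SA selection just described follows, written against the hypothetical SA group structure sketched earlier. The particular hash function, the modulo selection, and the in-memory flow cache are assumptions; the description above only requires that a five-tuple hash be used to load balance across the grouped mixed-mode SAs and that the result be cached per flow.

```python
import hashlib

# Hypothetical five-tuple hashing and SA selection at the dual-stack VTI and
# SA group object; the hash, the modulo pick, and the flow cache are assumptions.

_flow_cache = {}  # five-tuple -> SPI of the selected SA (fast path)

def five_tuple(pkt: dict) -> tuple:
    """Works for IPv4 or IPv6 inner packets alike; the SA choice below does
    not depend on which protocol the inner addresses use."""
    return (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"], pkt["dst_port"], pkt["proto"])

def select_sa_spi(pkt: dict, sa_group) -> int:
    """Load balance the flow across the grouped mixed-mode SAs."""
    key = five_tuple(pkt)
    if key in _flow_cache:                        # fast path: reuse cached SPI
        return _flow_cache[key]
    digest = hashlib.sha256(repr(key).encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(sa_group.members)
    spi = sa_group.members[index].spi             # slow path: hash and pick an SA
    _flow_cache[key] = spi
    return spi
```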
In some embodiments, when encapsulating and forwarding the encapsulated data message, the sender behaves in different manners depending on whether NAT-T is enabled. As described above, in some embodiments the mixed-mode SAs are enabled during IKE negotiations following a determination that NAT-T should be enabled (e.g., based on detection of a NAPT device in the path between a source and destination). When NAT-T is enabled, mixed-mode SAs and use of an SA group object (e.g., as described above) can help achieve better load distribution, especially if multi-homing is in use (i.e., when the sending gateway device has multiple network address interfaces, because it is connected to multiple different service providers or for another reason). FIG.6conceptually illustrates a process of some embodiments for encapsulating a data message and selecting an output interface for the data message. It should be understood that this is a conceptual process representing various different options, and that a sending device would not necessarily go through the process of making an actual determination as to whether NAT-T is enabled each time a data message is sent out, but rather would be configured differently based on whether or not NAT-T is enabled. As shown, the process600begins by determining (at610) whether NAT-T is enabled. During IKE negotiations to set up the SA(s) of some embodiments, the IKE control stack determines whether an intermediate NAPT device is situated in the path to be taken by the encrypted data messages. If such a device is detected, the IKE control stack enables NAT-T for the SA. In some embodiments, whether NAT-T is enabled dictates whether or not the UDP source port of the outer header will be used as an entropy field (i.e., whether this field will be changed between data message flows as a mechanism to differentiate these flows). When NAT-T is not enabled, the process uses (at620) UDP encapsulation with the source port as an entropy field. That is, when NAT-T is not enabled, the use of a fixed source port is not required and the source port can be varied. Because the source port can be varied, the process uses (at630) the UDP and IP headers (e.g., the outer header 5-tuple) for load balancing between the output interfaces. In such cases, there is presumably not a NAPT device in the path, so there is no need to use the fixed NAT-T source port. In the diagram500described above, for instance, the process600would use the outer IP headers of the encapsulated data messages550and555, in some embodiments, to load balance between the VNICs560and565if NAT-T is not enabled. Following630, the process600ends. When NAT-T is enabled, the process600uses (at640) UDP encapsulation with fixed source and destination ports (e.g., the fixed port number4500designated for NAT-T). For load balancing, the process600skips the UDP header and instead uses (at650) the SPI to perform load balancing for the data message. In this situation, the use of multiple mixed-mode SAs (and therefore different SPIs) allows for better load distribution between the different output interfaces. For example, in the diagram500, the process600would use the SPIs specified in the ESP headers of the encapsulated data messages550and555to load balance between the VNICs560and565if NAT-T is enabled, according to some embodiments. Following650, the process600ends.
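A minimal sketch of the sender-side choice of entropy field in process600follows. Only the distinction between using the UDP source port (NAT-T disabled) and the SPI (NAT-T enabled) as the load-balancing key comes from the description above; the fixed destination port in the non-NAT-T case, the ephemeral source-port range, and the hash-based interface choice are assumptions made for illustration.

```python
NAT_T_PORT = 4500  # fixed UDP port when NAT-T is enabled

def udp_encapsulation(nat_t_enabled: bool, flow_hash: int, spi: int):
    """Return the outer UDP ports and the key used to load balance the
    encapsulated data message across output interfaces."""
    if nat_t_enabled:
        # Fixed source and destination ports: no entropy in the UDP header,
        # so the SPI in the ESP header carries the load-balancing entropy.
        src_port = NAT_T_PORT
        lb_key = spi
    else:
        # Source port used as an entropy field (ephemeral range is assumed);
        # the outer UDP/IP headers can then be used for load balancing.
        src_port = 49152 + (flow_hash % 16384)
        lb_key = (src_port, NAT_T_PORT)
    return {"src_port": src_port, "dst_port": NAT_T_PORT}, lb_key

def pick_output_interface(lb_key, vnics: list):
    """Choose among the output interfaces (e.g., the VNICs560and565)."""
    return vnics[hash(lb_key) % len(vnics)]
```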
In some embodiments, the data message that is securely encapsulated by the selected SA has an outer destination address (i.e., outer IP header) of the destination gateway device that receives the securely encapsulated data message.FIG.7illustrates a process performed by the destination gateway, in some embodiments, upon receiving a securely encapsulated packet. The process700starts by receiving (at710) a securely encapsulated data message forwarded by a gateway local to the source of the data message. For instance, in the VPN session200described above, the responder gateway115receives data from the initiator gateway110via any of the paths associated with the VPN tunnels230and235. The process determines (at720) that the data message is associated with a particular SA based on an identifier used to encapsulate the data message. As described above, data messages are securely encapsulated by the initiator gateway (i.e., by a process executing within the initiator gateway) with an SPI for the selected SA. For example, the securely encapsulated data message550of the work-flow diagram500includes an ESP header specifying an identifier SPI-1 to indicate the data message is associated with the SA540, and the securely encapsulated data message555has an ESP header specifying an identifier SPI-2 to indicate the data message is associated with the SA545. Based on the identifier, the process assigns (at730) the data message to a particular processing core in a set of processing cores of the gateway for further processing. As described above, the responder gateway is configured similarly to the initiator gateway with a single corresponding VTI for a bundle of different SAs each having a different associated SPI value, and the tuples of header values of data messages communicated across different VPN tunnels may hash to different CPUs at the responder gateway for processing, according to some embodiments. Additionally, if the UDP source and destination ports are the same (i.e.,4500), then the UDP header is skipped and a core is assigned using the SPI. Following730, the process700ends. In some embodiments, the destination gateway only uses the SPI to assign data messages to a particular processing core if the UDP encapsulation header of the data message has the same source and destination port (e.g.,4500, because NAT-T is in use). That is, when the sending gateway uses the source port of the UDP encapsulation header as an entropy field, this source port can be used to assign data messages to different processing cores at the receiving gateway. However, if the UDP header is the same for all of the data messages (because NAT-T is enabled), then the SPI is used instead. In some embodiments, as a result of the initiator gateway selecting among the multiple mixed-mode SAs by load balancing across the SAs, the data messages received at the responder gateway device are load balanced among the processing cores of the second gateway device. As a result, the responder gateway in some embodiments experiences better central processing unit (CPU) utilization and improved performance. In some embodiments, the operations shown inFIG.5are implemented by a virtual machine, container, or other data compute node that operates as the sending gateway in a virtualized environment.FIG.8illustrates a block diagram of a host computer800of some embodiments in a virtualized networking environment.
As illustrated, the host computer800includes virtualization software810, a PNIC814(physical network interface card), and multiple virtual machines (VMs)820,822, and824. In some embodiments, the host computer800is a physical general-purpose computer (e.g., a server, workstation, etc.) and includes one or more physical central processing units (CPUs), a system memory, and non-volatile data storage. The host computer800also includes one or more physical network interfaces, such as PNIC814, for communicating with other hardware computing platforms, entities, or host computers on a physical network accessible through PNIC814. In some embodiments, the host computer800may provide part of the computing infrastructure in a virtualized computing environment distributed among multiple host computers. Though certain embodiments are described herein with respect to VMs, the same principles and techniques may also apply to other appropriate virtualized data compute nodes (e.g., virtual machine, container, pod, data compute node, isolated user space instance) as well as physical computing devices. The virtualization software810(e.g., a hypervisor) serves as an interface between VMs820-824and the PNIC814, as well as other physical resources (e.g., CPUs, memory, etc.) available on host computer800, in some embodiments. Each of the VMs820-824is shown including a VNIC860-864, respectively, which is responsible for exchanging packets between each respective VM and the virtualization software810. The architecture of the virtualization software810may vary across different embodiments of the invention. In some embodiments, the virtualization software810can be installed as system-level software directly on the host computer800(i.e., a “bare metal” installation) and be conceptually interposed between the physical hardware and the guest operating systems executing in the VMs. In other embodiments, the virtualization software810may conceptually run “on top of” a conventional host operating system in the server. In some embodiments, the virtualization software810includes both system-level software and a privileged VM (not shown) configured to have access to the physical hardware resources (e.g., CPUs, physical interfaces, etc.) of the host computer800. While the VNICs860-864are shown as included in the VMs820-824, it should be understood that VNICs860-864may be implemented by code (e.g., VM monitor code) associated with virtualization software810in some embodiments, while in other embodiments, the VNICs860-864may be software implementations of PNICs. Each of the VMs820-824is connected to a virtual port (also referred to herein as a vport or virtual interface) provided by a virtual switch812through the VNICs860-864associated with the VMs. In some embodiments, the virtual switch812serves as a physical network switch (i.e., serves as an edge device on the physical network, but is implemented in software). The virtual switch812is connected to the PNIC814in order to allow network traffic to be exchanged between the VMs820-824executing on host computer800and destinations on an external physical network. In some embodiments, a VM executing on the host computer800is configured to perform the functions of a gateway. For instance, the VM820in this example is configured as a gateway, such as the initiator gateway110, and includes a gateway layer or component830that logically represents a set of instructions for implementing gateway functions.
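Before the gateway VM's IKE and IPsec configuration is described, the host-level topology above can be modeled roughly as follows; this is a hypothetical sketch whose class and instance names are illustrative only:

```python
# Rough data model (assumed names) of the FIG. 8 topology: VMs expose VNICs,
# each VNIC attaches to a vport on the virtual switch, and the switch uplinks
# to the PNIC that reaches the external physical network.
from dataclasses import dataclass, field
from typing import List

@dataclass
class VNIC:
    name: str

@dataclass
class VM:
    name: str
    vnic: VNIC

@dataclass
class VirtualSwitch:
    vports: List[VNIC] = field(default_factory=list)
    uplink_pnic: str = "pnic-814"

    def connect(self, vm: VM) -> None:
        # Attaching a VM's VNIC to a vport lets its traffic reach the PNIC.
        self.vports.append(vm.vnic)

host_switch = VirtualSwitch()
for vm in (VM("gateway-vm-820", VNIC("vnic-860")),
           VM("endpoint-vm-822", VNIC("vnic-862")),
           VM("vm-824", VNIC("vnic-864"))):
    host_switch.connect(vm)
```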
The gateway VM820is also configured with an IKE control stack840(also referred to as an IKE daemon) similar to the IKE control stack310described above. In some embodiments, the IKE control stack840logically represents a set of instructions for performing a two-phase IKE negotiation with an IKE control stack of a peer gateway (e.g., responder gateway115) in order to establish an IKE tunnel and one or more IPSec tunnels. The IKE control stack840of some embodiments is also configured with one or more dead peer detection (DPD) techniques for determining whether the IKE control stack of the peer gateway is “dead” or “alive.” For example, IKE control stack840may be configured to transmit one or more trigger messages to the IKE control stack of the peer gateway to determine its liveness. Two IKE control stacks that have established an IKE tunnel among themselves are referred to as IKE peers. The gateway VM820of some embodiments is also configured to implement IPsec protocols and functionality using an IPsec tunnels datapath850. Like the IPsec tunnels datapath350described above, the IPsec tunnels datapath850of some embodiments encrypts outgoing packets destined for a particular destination gateway, such as the responder gateway115, by encapsulating the outgoing packets with, e.g., ESP headers based on a corresponding outbound SA. In each packet's ESP header, IPsec tunnels datapath850also includes an SPI value associated with the outbound SA. IPsec tunnels datapath850is also configured to decrypt incoming encapsulated ESP encrypted packets received from a source gateway, such as responder gateway115. In some embodiments, another VM executing on host computer800, or on another host computer, may be configured as an endpoint associated with the gateway VM820. For instance, the VM822in this example is an endpoint VM associated with gateway VM820. In some embodiments, a source endpoint at a first site may generate a packet to send to a destination endpoint at a second site. For instance, in the VPN session200described above, a source endpoint operating in the datacenter120may want to send a packet to a destination endpoint operating in the datacenter125. To do so, the source endpoint in the datacenter120may forward the packet to initiator gateway110, which performs a process such as the process400described above to prepare and forward the packet onto a network for delivery to its destination. When a packet is received at the host computer800, in some embodiments, the packet is provided to the virtual switch812of host computer800via the PNIC814. In some embodiments, the virtual switch812sends the encapsulated encrypted packet to VNIC860of gateway VM820. Subsequently, the gateway VM820performs a process such as the process700described above on the received packet. It should be noted that whileFIG.8illustrates only one example of a gateway, other embodiments may include other virtual computing instances for performing the functions of a gateway. In still other embodiments, the gateway may be a physical computing device. Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer-readable storage medium (also referred to as computer-readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions.
Examples of computer-readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer-readable media does not include carrier waves and electronic signals passing wirelessly or over wired connections. In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs. FIG.9conceptually illustrates a computer system900with which some embodiments of the invention are implemented. The computer system900can be used to implement any of the above-described hosts, controllers, gateway, and edge forwarding elements. As such, it can be used to execute any of the above described processes. This computer system900includes various types of non-transitory machine-readable media and interfaces for various other types of machine-readable media. Computer system900includes a bus905, processing unit(s)910, a system memory925, a read-only memory930, a permanent storage device935, input devices940, and output devices945. The bus905collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system900. For instance, the bus905communicatively connects the processing unit(s)910with the read-only memory930, the system memory925, and the permanent storage device935. From these various memory units, the processing unit(s)910retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s)910may be a single processor or a multi-core processor in different embodiments. The read-only-memory (ROM)930stores static data and instructions that are needed by the processing unit(s)910and other modules of the computer system900. The permanent storage device935, on the other hand, is a read-and-write memory device. This device935is a non-volatile memory unit that stores instructions and data even when the computer system900is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device935. Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device935, the system memory925is a read-and-write memory device. However, unlike storage device935, the system memory925is a volatile read-and-write memory, such as random access memory. The system memory925stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory925, the permanent storage device935, and/or the read-only memory930. 
From these various memory units, the processing unit(s)910retrieve instructions to execute and data to process in order to execute the processes of some embodiments. The bus905also connects to the input and output devices940and945. The input devices940enable the user to communicate information and select commands to the computer system900. The input devices940include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices945display images generated by the computer system900. The output devices945include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as touchscreens that function as both input and output devices940and945. Finally, as shown inFIG.9, bus905also couples computer system900to a network965through a network adapter (not shown). In this manner, the computer900can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an Intranet), or a network of networks (such as the Internet). Any or all components of computer system900may be used in conjunction with the invention. Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter. While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself. As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” mean displaying on an electronic device. As used in this specification, the terms “computer-readable medium,” “computer-readable media,” and “machine-readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral or transitory signals. 
While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.
11863515
DESCRIPTION OF EXAMPLE EMBODIMENTS Overview According to an embodiment, a router includes one or more processors and one or more computer-readable non-transitory storage media coupled to the one or more processors and including instructions that, when executed by the one or more processors, cause the router to perform operations. The operations include determining a common prefix pool from a transport interface associated with a transport virtual private network (VPN). The operations also include identifying a prefix associated with a service VPN and generating an IPv6-to-IPv6 Network Address Translation (NAT66) prefix translation using the common prefix pool and the prefix. The NAT66 prefix translation includes a predetermined prefix length. The operations further include automatically installing the NAT66 prefix translation into a translation table. In certain embodiments, the router is an SD-WAN edge router. In some embodiments, the transport interface is a NAT66 DIA interface. In certain embodiments, the operations include receiving a packet from a branch router. The packet may include the prefix associated with the service VPN. The operations may also include translating the prefix using the NAT66 prefix translation. The operations may further include supporting IPv6 Path Maximum Transmission Unit (IPv6 PMTU) Discovery natively and/or performing an inside-to-outside and an outside-to-inside translation for a payload of the packet. In some embodiments, the operations include refreshing a predetermined session time period associated with the NAT66 prefix translation each time a packet is translated using the NAT66 prefix translation within the predetermined session time period, expiring the NAT66 prefix translation in response to inactivity of the NAT66 prefix translation for the predetermined session time period, and/or reusing an entry for the NAT66 prefix translation after a predetermined expiration time period. In certain embodiments, the operations include dynamically embedding an identifier of the service VPN into a header of the NAT66 prefix translation. In some embodiments, the operations include directing incoming traffic to the transport interface in accordance with a centralized data policy. In certain embodiments, the operations include assigning an IPv6 address prefix to the transport interface using an IPv6 generic prefix from an IPv6 neighbor discovery (ND) advertisement and/or performing IPv6 duplicate address detection (DAD). According to another embodiment, a method includes determining, by a router, a common prefix pool from a transport interface associated with a transport VPN. The method also includes identifying, by the router, a prefix associated with a service VPN and generating, by the router, a NAT66 prefix translation using the common prefix pool and the prefix. The NAT66 prefix translation includes a predetermined prefix length. The method further includes automatically installing, by the router, the NAT66 prefix translation into a translation table. According to yet another embodiment, one or more computer-readable non-transitory storage media embody instructions that, when executed by a processor, cause the processor to perform operations. The operations include determining a common prefix pool from a transport interface associated with a transport VPN. The operations also include identifying a prefix associated with a service VPN and generating a NAT66 prefix translation using the common prefix pool and the prefix.
The NAT66 prefix translation includes a predetermined prefix length. The operations further include automatically installing the NAT66 prefix translation into a translation table. Technical advantages of certain embodiments of this disclosure may include one or more of the following. NAT66 may be extended to achieve IPv6 NAT66 DIA with a flexible, scalable, and secure NAT66 prefix translation mechanism. In certain embodiments, this extension provides the capability and benefits via dynamic NAT66 prefix translation mapping overload based on a transport-side interface address pool assigned by IPv6 DHCPv6 prefix delegation or IPv6 Router Advertisement (RA) prefix advertisement. This concept may significantly improve operational simplicity and scalability and provide more secure NAT66 DIA access. Certain embodiments of this disclosure translate service VPN IPv6 packets to IPv6 DIA algorithmically without creating a state. In some embodiments, public network prefixes are shared among multiple service VPNs. In certain embodiments, devices dynamically participate in IPv6 ND to reserve public IPs for translated hosts in the IPv6 DAD process. Some embodiments of this disclosure allow Box2Box redundancy for the IPv6 DIA solution without creating translation state. In certain embodiments, IPv6 routes for hosts in the public Internet are distributed into a service VPN. Some embodiments described herein allow traffic from the service VPN to the Internet and from the Internet to the service VPN. Certain embodiments of this disclosure allow traffic flow from either direction. Some embodiments have the ability to scale to a larger number of translations. Certain embodiments allow prefix delegation for a prefix range in the subnet of the WAN interface. Some embodiments allow an RA prefix to be used with NAT. Certain embodiments allow NAT participation in IPv6 ND, thereby detecting any duplicate address assignments. In some embodiments, routing is simplified by not requiring route additions to the upstream router for private prefixes or NAT prefixes. In certain embodiments, multiple DIA routers are allowed in the WAN segment. Certain embodiments of this disclosure utilize DIA, which may reduce bandwidth consumption, latency, and/or costs on WAN links by offloading Internet traffic from the private WAN circuit. Certain embodiments improve branch office user experience by providing DIA for employees at remote site locations. Other technical advantages will be readily apparent to one skilled in the art from the following figures, descriptions, and claims. Moreover, while specific advantages have been enumerated above, various embodiments may include all, some, or none of the enumerated advantages. EXAMPLE EMBODIMENTS This disclosure describes systems and methods for translating IPv6 packets for DIA in an SD-WAN environment. Packets sent from a branch router to a service provider network are required to have the source address of the WAN interface. The prefix assigned to the WAN interface cannot be shared with other interfaces on the router. This disclosure allows the other interfaces to use a private address. The packets are translated when they pass through the WAN interface. A general method of translating IPv6 packets based on a prefix is described in RFC 6296 (IPv6-to-IPv6 Network Prefix Translation (NPTv6)). While RFC 6296 describes a general way to translate packets, it does not address specific problems involved in DIA in an SD-WAN environment.
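The prefix-based translation idea referenced above can be illustrated with a simplified sketch: the high-order bits of an address are swapped from an inside prefix to an outside prefix of equal length while the host bits are preserved. This deliberately omits RFC 6296's checksum-neutral adjustment and is not the disclosure's own mechanism; the function name is an assumption.

```python
# Simplified prefix rewrite in the spirit of NPTv6 (checksum neutrality omitted).
import ipaddress

def translate_prefix(addr: str, inside: str, outside: str) -> str:
    ip = int(ipaddress.IPv6Address(addr))
    inside_net = ipaddress.IPv6Network(inside)
    outside_net = ipaddress.IPv6Network(outside)
    assert inside_net.prefixlen == outside_net.prefixlen, "prefix lengths must match"
    host_bits = 128 - inside_net.prefixlen
    host_part = ip & ((1 << host_bits) - 1)           # keep the host bits
    new_prefix = int(outside_net.network_address)      # swap in the outside prefix
    return str(ipaddress.IPv6Address(new_prefix | host_part))

# e.g. translate_prefix("2001:380:1::42", "2001:380:1::/80", "2001:a1:f::/80")
#      -> "2001:a1:f::42"
```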
Certain embodiments of this disclosure describe an efficient way to translate IPv6 packets for DIA. FIG.1illustrates an example system100for translating IPv6 packets for DIA in an SD-WAN environment. System100or portions thereof may be associated with an entity, which may include any entity, such as a business, company, or enterprise, that translates IPv6 packets in an SD-WAN environment. In certain embodiments, the entity may be a service provider that provides translation services for IPv6 packets. The components of system100may include any suitable combination of hardware, firmware, and software. For example, the components of system100may use one or more elements of the computer system ofFIG.4. In the illustrated embodiment ofFIG.1, system100includes a network110, a branch120, a user device130, a Domain Name System (DNS) server140, an SD-WAN edge router150, a service interface152, a service VPN X (where X represents any suitable integer), a transport interface154, a transport VPN 0, centralized data policies156, a prefix160, a common prefix pool162, a NAT66 prefix translation164, a session time period166, an expiration time period168, an Internet170, a public cloud172, a data center180, and a management node182. Network110of system100is any type of network that facilitates communication between components of system100. Network110may connect one or more components of system100. One or more portions of network110may include an ad-hoc network, the Internet, an intranet, an extranet, a VPN, an Ethernet VPN (EVPN), a local area network (LAN), a wireless LAN (WLAN), a virtual LAN (VLAN), a WAN, a wireless WAN (WWAN), an SD-WAN, a metropolitan area network (MAN), a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, a Digital Subscriber Line (DSL), a Multiprotocol Label Switching (MPLS) network, a 3G/4G/5G network, a Long Term Evolution (LTE) network, a cloud network, a combination of two or more of these, or other suitable types of networks. Network110may include one or more different types of networks. Network110may be any communications network, such as a private network, a public network, a connection through the Internet, a mobile network, a WI-FI network, etc. Network110may include a core network, an access network of a service provider, an Internet service provider (ISP) network, and the like. One or more components of system100may communicate over network110. In the illustrated embodiment ofFIG.1, network110is an SD-WAN. Branch120of system100is a part of an enterprise network infrastructure that provides users at a geographically dispersed remote site access to the same network services as users in the enterprise campus. Branch120may include one or more buildings, offices, stores, homes, and the like. In the illustrated embodiment ofFIG.1, branch120includes user device130, DNS server140, and SD-WAN edge router150. User device130of system100includes any user equipment that can receive, create, process, store, and/or communicate information. User device130may include one or more workstations, desktop computers, laptop computers, mobile phones (e.g., smartphones), tablets, personal digital assistants (PDAs), wearable devices, and the like. In certain embodiments, user device130includes a liquid crystal display (LCD), an organic light-emitting diode (OLED) flat screen interface, digital buttons, a digital keyboard, physical buttons, a physical keyboard, one or more touch screen components, a graphical user interface (GUI), and/or the like.
User device130may be located in any suitable location to receive and communicate information to user132of system100. User132of system100is a person or group of persons who utilizes user device130of system100. User132may be associated with one or more accounts. User132may be a local user, a remote user, an administrator, a customer, a company, a combination thereof, and the like. User132may be associated with a username, a password, a user profile, etc. In certain embodiments, user132initiates the communication of traffic from user device130to DNS server140and/or SD-WAN edge router150. DNS server140of system100is computer hardware or software (e.g., a computer program) that includes a database that maps hostnames to IP addresses through the DNS protocol. Each unique IP address may have an associated hostname. The software may maintain a cache of hostname-to-address mappings for use by the connect, telnet, and ping EXEC commands, and related Telnet support operations. In certain embodiments, DNS server140includes one or more name servers. Name servers are programs that have complete information about their namespace portion of the domain tree. Name servers may include pointers to other name servers that can be used to lead to information from any other part of the domain tree. In some embodiments, DNS server140includes one or more name resolvers. Name resolvers are programs that extract information from name servers in response to client requests. For example, DNS server140may extract information from a name server in response to a request received from user device130. SD-WAN edge router150of system100is a specialized router that resides at an edge or a boundary of the network (e.g., a LAN network) of branch120. In certain embodiments, WAN edge router150uses static and/or dynamic routing to send and/or receive data to other nodes of network110. WAN edge router150may include one or more hardware devices, one or more servers that include routing software, and the like. In the illustrated embodiment ofFIG.1, WAN edge router150resides within branch120. In certain embodiments, the infrastructure of network110provides connectivity to WAN edge router150to access the network of branch120, Internet170, public cloud172, and data center180. In certain embodiments, SD-WAN edge router150is configured to implement DIA. DIA provides branch120the capability to send traffic directly to Internet170transport instead of carrying the traffic all the way back to data center180to be inspected. When DIA is implemented in SD-WAN edge router150, traffic from branch120that is bound for Internet170and/or public cloud172is routed directly to Internet170. In some embodiments, SD-WAN edge router150is configured to implement NAT. NAT allows private IP networks that use unregistered IP addresses to connect to Internet170. In certain embodiments, NAT connects two networks together by translating the private addresses in the internal network of branch120into legal addresses before forwarding the traffic to Internet170. For DIA, NAT translation for packets exiting from SD-WAN edge router150into Internet170may be enabled on SD-WAN edge router150via NAT overload. NAT overload is the mapping of multiple unregistered IP addresses to a single registered IP address by using different prefixes. To achieve this functionality on SD-WAN edge router150, NAT is configured on transport interface154(e.g., NAT DIA interface), which faces Internet170. In the illustrated embodiment ofFIG.1, transport interface154is associated with VPN 0. 
VPN 0 includes transport (or underlay) network-facing interfaces, such as Internet170and MPLS. Service VPN X (e.g., service VPN 1, service VPN 2, etc.) is associated with service interface152, which is a user-facing interface of SD-WAN edge router150. In certain embodiments, the NAT operation on outgoing traffic is performed in VPN 0. The connection of SD-WAN edge router150to Internet170is in VPN 0. For DIA, NAT overload may be configured on transport interface154connecting to the Internet Service Provider's network. The source IP address of internal traffic destined for Internet170is translated to the IP address of transport interface154and exits directly to Internet170. The rest of the traffic remains within the overlay network and travels between two nodes on secure IPsec tunnels. In some embodiments, data policies influence the flow of data traffic through network110based on fields in the IP packet headers and VPN membership. Centralized data policies156may be used in configuring application firewalls, service chaining, traffic engineering, Quality of Service (QOS), and Cflowd. Some centralized data policies156(e.g., app-router policies or a QoS classification policy) may affect handling on SD-WAN edge router150. Data traffic may be routed to a specific DIA interface of SD-WAN edge router150by setting a path preference using traffic data policies within centralized data policies156. While configuring a NAT DIA route, direct local Internet traffic may be configured to exit directly to Internet170from service VPN X (e.g., a VPN other than VPN 0 and VPN 512) through the next hop transport VPN 0. In certain embodiments, traffic from user device130is routed to VPN 0 (e.g., a NAT-enabled WAN transport VPN) from service VPN X based on the destination prefix in the NAT DIA route. The source IP address of the packet may be translated to the IP address of transport interface154using NAT and forwarded to the destination prefix. In this scenario, traffic flowing from branch120(e.g., the LAN side) is not filtered, but sent directly to the interface IP address that has been translated using NAT. Internet170of system100is a global system of interconnected computer networks that uses the Internet protocol suite (Transmission Control Protocol/Internet Protocol (TCP/IP)) to communicate between networks and devices. In certain embodiments, users132of branch120are allowed direct access to Internet170for cloud-based applications, user web access, and the like. Public cloud172of system100is a combination of hardware, software, and supporting infrastructure that is owned and managed by a service provider. Cloud services offered via public cloud172are delivered exclusively over Internet170. Public cloud172may include Google Cloud Platform (GCP), Amazon Elastic Compute Cloud (EC2), Microsoft Azure, IBM's Blue Cloud, Sun Cloud, and/or the like. Data center180of system100is a physical facility that organizations use to house their critical applications and data. Data center180may include routers, switches, firewalls, storage systems, servers, application-delivery controllers, and the like. These components of data center180may store and/or manage business-critical data, applications, and the like. Data center180may be an enterprise data center, a managed services data center, a colocation data center, a cloud data center, a combination thereof, or any other suitable type of data center. In the illustrated embodiment ofFIG.1, data center180includes management node182. 
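Before the management plane is described, the NAT DIA forwarding decision discussed above can be sketched as follows; this is a hedged illustration with assumed names, and a real router would consult its routing and NAT tables rather than this simplified lookup:

```python
# Hypothetical sketch of the DIA decision: traffic matching a NAT DIA route
# exits directly via transport VPN 0 with its source rewritten to the
# transport-interface address; everything else stays in the overlay.
import ipaddress
from typing import Iterable

def forward_from_service_vpn(dst: str, src: str, nat_dia_routes: Iterable[str],
                             transport_if_addr: str):
    dst_ip = ipaddress.IPv6Address(dst)
    for route in nat_dia_routes:
        if dst_ip in ipaddress.IPv6Network(route):
            # DIA: exit via VPN 0, source translated to the transport interface.
            return ("vpn0-dia", transport_if_addr, dst)
    # Non-DIA traffic remains in the overlay and rides the IPsec tunnels.
    return ("overlay-ipsec", src, dst)
```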
Management node182of system100is a centralized network management system that allows a user to configure and/or manage the entire overlay network from a graphical dashboard. In certain embodiments, management node182includes a dashboard (e.g., a graphical dashboard). The dashboard of management node182may provide a visual window into network110that allows a user to configure and/or manage the edge nodes (e.g., SD-WAN edge router150). In certain embodiments, management node182is software that runs on one or more servers of network110. This server may be situated in a centralized location. For example, as illustrated inFIG.1, this server may be situated in data center180. In certain embodiments, the software of management node182may run on the same physical server as the software of one or more controllers. In certain embodiments, to access management node182of system100, user device130(having an IPv6 address) utilizes VPN 0 of SD-WAN edge router150by using the subdomain of the tenant Uniform Resource Locator (URL) of management node182. For example, user132may request a DNS query142from DNS server140to resolve the IPv6 address of management node182. User132communicates the IPv6 packet to SD-WAN edge router150, and SD-WAN edge router150redirects the IPv6 packet received from user device130via service VPN X to transport VPN 0. VPN 0 on SD-WAN edge router150performs NAT66 translation. In some embodiments, the Source IP (SRC-IP) needs to be in the IPv6 prefix range delegated by the IPv6 WAN. When the traffic returns from management node182, SD-WAN edge router150performs a look-up of the NAT entry and forwards the traffic to the IPv6 address of user device130. In certain embodiments, NPTv6 is enabled and NAT66 DIA is disabled on transport interface154of SD-WAN edge router150. However, there are a few challenges with this solution. NPTv6 is static and involves a manual assignment of inside and outside prefix mapping with the same prefix length, which is a significant effort as the IPv6 address is diverse and has a length of 128 bits. Also, the IPv6 address of transport interface154may change with current popular address assignment approaches like IPv6 Dynamic Host Configuration Protocol (DHCP) prefix delegation and/or IPv6 Router Advertisement (RA) Stateless Address Autoconfiguration (SLAAC). Additionally, this solution may present a security concern as static mapping may allow a pinhole for direct access from the outside. Also, scalability may present an issue when multiple VPNs are involved. Additionally, IPv6 routing may become complex when the outside prefix pool is selected. Certain embodiments of this disclosure propose a different approach to achieve NAT66 DIA with a flexible, scalable, and secure NPT66 prefix translation mechanism. In some embodiments, the workflow of the prefix translation overload mechanism for the NAT66 DIA use case is combined with static NAT44 and session stateful overload NAT translations. The embodiments of this disclosure may overcome the current challenges while keeping IPv6 prefix level translation simple and flexible. In certain embodiments, security and scalability are improved for the overall feature functionality. In certain embodiments, SD-WAN edge router150of system100generates NAT66 prefix mapping dynamically based on inside-to-outside traffic hitting NAT DIA routes or centralized data policies156with NAT66 DIA.
The IPv6 address prefix assigned for transport interface154may be either from IPv6 DHCP prefix delegation or IPv6 RA Prefix update from an upstream router. In certain embodiments, the prefix translation granularity is pre-defined based on customer requirements. In some embodiments, the prefix translation granularity is determined dynamically based on prefix pool length availability. In certain embodiments, in an IPv6 address configuration, SD-WAN edge router150defines a NAT66 overload configuration. SD-WAN edge router150may derive common prefix pool162(e.g., 2001:A1:F::/64) from the IPv6 address (e.g., 2001:A1:F::F/64) of transport interface154. Instead of defining a 1:1 prefix level translation with an inside and outside prefix, NAT66 prefix translation164may be generated dynamically based on IPv6 DIA traffic from service VPN X to transport VPN 0. For example, consider a host (e.g., user device130) within prefix160(e.g., 2001:380:1::/80 or 2001:A14:18::/80), which may access the Internet from service VPN X (e.g., VPN 10 or VPN 20, respectively). NAT66 prefix translation164is generated and installed automatically with a predetermined prefix length (e.g., 80 bits) and session time period166(e.g., default session time of 30 minutes). NAT66 prefix translation164is generated using prefix160and common prefix pool162(e.g., “nat66 prefix inside 2001:380:1::/80 outside 2001:A1:F:0:1::/80 vrf 10” or “nat66 prefix inside 2001:A14:18::/80 outside 2001:A1:F::/80 vrf 20,” respectively). In certain embodiments, SD-WAN edge router150stores NAT66 prefix translation164in a lookup table. In certain embodiments, the entry installed by SD-WAN edge router150for NAT66 prefix translation164will hold, and the session may be refreshed. For example, if an incoming packet from user device130hits the existing NAT66 prefix translation164, SD-WAN edge router150refreshes and extends session time period166(e.g., 30 minutes) unless the session is idle/inactive for session time period166(e.g., 30 minutes continuously). SD-WAN edge router150may reuse any outside prefix pool that has expired after another idle/inactive cycle of predetermined expiration time period168(e.g., 60 minutes). In certain embodiments, with a pre-defined prefix length (e.g., 80 bits) and common prefix pool162of a predetermined mask length (e.g., 64 bits), 65,536 NAT66 prefix translations (indexed by a 16-bit value between 0 and 65,535) are available that can be reused for transport interface154. These available NAT66 prefix translations (e.g., NAT66 prefix translation164) may be shared across VPNs on a demand basis. At the same time, external access from outside to inside may only be allowed for NAT66 prefix translations that have been established by specific NAT66 rule access from inside to outside. This may improve the overall security of system100since static mapping allows uncontrolled access from the outside, which may pose a security risk. In certain embodiments, common prefix pool162of NAT66 prefix translations is derived from a general prefix (e.g., prefix160) from a DHCPv6 prefix delegation rather than the IPv6 generic prefix from an IPv6 ND advertisement. In some embodiments, NAT66 prefix translations may be allocated across the service VPN instances. In this embodiment, a more generic common prefix pool162is associated with transport interface154, and prefix granularity may be extended to pool range mappings defined by service interface152in VPN X with a predetermined prefix length (e.g., 64 bits).
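Before turning to the DHCPv6-delegated pool case, the dynamic mapping just described can be sketched in Python. The sketch assumes a /64 common pool and /80 translations as in the example above; class and method names are illustrative, not from the disclosure.

```python
# Hedged sketch of on-demand outside-prefix allocation: the /64 common pool is
# derived from the transport-interface address, and each inside /80 prefix is
# given an outside /80 carved from that pool using a 16-bit index (so up to
# 65,536 concurrent translations per pool).
import ipaddress
from typing import Dict, Tuple

class DynamicNat66Mapper:
    def __init__(self, transport_if_addr: str, xlat_prefix_len: int = 80) -> None:
        iface = ipaddress.IPv6Interface(transport_if_addr)   # e.g. "2001:a1:f::f/64"
        self.common_pool = iface.network                      # e.g. 2001:a1:f::/64
        self.xlat_prefix_len = xlat_prefix_len
        self._next_index = 0
        self._map: Dict[Tuple[int, str], ipaddress.IPv6Network] = {}

    def outside_prefix_for(self, vrf: int, inside_prefix: str) -> ipaddress.IPv6Network:
        key = (vrf, inside_prefix)
        if key not in self._map:
            shift = 128 - self.xlat_prefix_len                # 48 host bits in a /80
            base = int(self.common_pool.network_address)
            outside = base | (self._next_index << shift)      # carve the next /80
            self._next_index += 1
            self._map[key] = ipaddress.IPv6Network((outside, self.xlat_prefix_len))
        return self._map[key]
```

With index 0 the outside prefix is 2001:a1:f::/80 and with index 1 it is 2001:a1:f:0:1::/80, mirroring the vrf 20 and vrf 10 examples above.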
With the following DHCPv6 prefix delegation pool, a predetermined number of NAT66 prefix translations (e.g., 65,536, indexed by a 16-bit value between 0 and 65,535) may be used for IPv6 NAT DIA prefix translations from the server (e.g., “IPv6 local pool client-prefix-pool 2001:A1:F::/48 64”). In certain embodiments, when an update to common prefix pool162occurs from the outside, NAT66 implemented by SD-WAN edge router150of system100destroys and regenerates NAT66 translation rules dynamically based on the new common prefix pool162. Since NAT66 prefix translations from common prefix pool162of transport interface154are reused, address duplication may occur after inside-to-outside prefix translation, and NAT66 may utilize IPv6 DAD to avoid potential conflict. IPv6 PMTU may be supported natively, and inside-to-outside and outside-to-inside translation may be performed for the IPv6 Internet Control Message Protocol (ICMPv6) header and payload. In certain embodiments, SD-WAN edge router150embeds an identifier of a service VPN into a header of a translation, which allows algorithmic translation for IPv6 DIA between private networks and the public Internet170. For example, SD-WAN edge router150may dynamically embed an identifier for service VPN X into a header of NAT66 prefix translation164. In some embodiments, SD-WAN edge router150participates in IPv6 neighbor discovery if prefix160is used as a public network address for DIA translations. This allows traffic from service VPN X to Internet170and from Internet170to service VPN X. In operation, SD-WAN edge router150of system100determines common prefix pool162from transport interface154of SD-WAN edge router150associated with transport VPN 0 and identifies prefix160associated with service VPN X. SD-WAN edge router150generates NAT66 prefix translation164using common prefix pool162and prefix160. SD-WAN edge router150installs NAT prefix translation164into a translation table. SD-WAN edge router150determines whether NAT prefix translation164has been used to perform any translations within predetermined session time period166(e.g., 30 minutes). If SD-WAN edge router150determines that NAT prefix translation164has been used within the predetermined session time period166, SD-WAN edge router150refreshes predetermined session time period166for NAT prefix translation164. Upon determining that NAT prefix translation164has not been used within the predetermined session time period166, SD-WAN edge router150expires NAT prefix translation164and sends the entry for NAT prefix translation164back to common prefix pool162. Upon determining that NAT prefix translation164has been expired for predetermined expiration time period168, SD-WAN edge router150may reuse the entry for NAT prefix translation164.
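The refresh/expire/reuse behavior described in the operational flow above can be captured as a small state machine; this is a hedged sketch with assumed names, using the 30-minute session period and 60-minute expiration period from the example.

```python
# Minimal lifecycle sketch: each hit refreshes the session timer; an idle entry
# expires and its outside prefix returns to the pool; after a further hold
# period the entry may be reused.
import time

SESSION_SECS = 30 * 60      # predetermined session time period
EXPIRATION_SECS = 60 * 60   # predetermined expiration time period

class TranslationEntry:
    def __init__(self) -> None:
        self.state = "active"
        self.last_hit = time.monotonic()
        self.expired_at = None

    def on_packet(self) -> None:
        if self.state == "active":
            self.last_hit = time.monotonic()    # refresh the session timer

    def tick(self) -> None:
        now = time.monotonic()
        if self.state == "active" and now - self.last_hit >= SESSION_SECS:
            self.state = "expired"               # idle for a full session period
            self.expired_at = now
        elif self.state == "expired" and now - self.expired_at >= EXPIRATION_SECS:
            self.state = "reusable"              # outside prefix may be reallocated
```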
AlthoughFIG.1illustrates a particular number of networks110, branches120, user devices130, DNS servers140, SD-WAN edge routers150, service interfaces152, service VPNs X (where X represents any suitable integer), transport interfaces154, transport VPNs 0, centralized data policies156, prefixes160, common prefix pools162, NAT66 prefix translations164, Internets170, public clouds172, data centers180, and management nodes182, this disclosure contemplates any suitable number of networks110, branches120, user devices130, DNS servers140, SD-WAN edge routers150, service interfaces152, service VPNs X (where X represents any suitable integer), transport interfaces154, transport VPNs 0, centralized data policies156, prefixes160, common prefix pools162, NAT66 prefix translations164, Internets170, public clouds172, data centers180, and management nodes182. AlthoughFIG.1illustrates a particular arrangement of network110, branch120, user device130, DNS server140, SD-WAN edge router150, service interface152, service VPN X (where X represents any suitable integer), transport interface154, transport VPN 0, centralized data policies156, prefix160, common prefix pool162, NAT66 prefix translation164, Internet170, public cloud172, data center180, and management node182, this disclosure contemplates any suitable arrangement of network110, branch120, user device130, DNS server140, SD-WAN edge router150, service interface152, service VPN X (where X represents any suitable integer), transport interface154, transport VPN 0, centralized data policies156, prefix160, common prefix pool162, NAT66 prefix translation164, Internet170, public cloud172, data center180, and management node182. Furthermore, althoughFIG.1describes and illustrates particular components, devices, or systems carrying out particular actions, this disclosure contemplates any suitable combination of any suitable components, devices, or systems carrying out any suitable actions. FIG.2illustrates another example system200for translating IPv6 packets for DIA in an SD-WAN environment. System200or portions thereof may be associated with an entity, which may include any entity, such as a business, company, or enterprise, that translates packets in an SD-WAN environment. In certain embodiments, the entity may be a service provider that provides packet translating services. The components of system200may include any suitable combination of hardware, firmware, and software. For example, the components of system200may use one or more elements of the computer system ofFIG.4. In the illustrated embodiment ofFIG.2, system200includes a network210, branches220(e.g., branch220a, branch220b, and branch220c), branch routers230(e.g., branch router230a, branch router230b, and branch router230c), servers240(e.g., server240a, server240b, server240c, and server240d), aggregation routers250(e.g., aggregation router250a, aggregation router250b, and aggregation router250c), Plug and Plays (PnPs)260(e.g., PnP260aand PnP260b), cloud services routers270(e.g., cloud services router270aand cloud services router270b), domains280(e.g., domain280a, domain280b, and domain280c), a management node282, a controller284, and orchestrator nodes286(e.g., orchestrator node286aand orchestrator node286b). Network210of system200is similar to network110ofFIG.1. In the illustrated embodiment ofFIG.2, network210is an SD-WAN. Branches220(e.g., branch220a, branch220b, and branch220c) of system200are similar to branch120ofFIG.1.
In the illustrated embodiment ofFIG.2, branch220ais associated with an IPv6 closed network (e.g., NTT West), branch220bis associated with a different IPv6 closed network (e.g., NTT East), and branch220cis associated with an IPv4 network (e.g., a mobile network). Branch routers230(e.g., branch router230a, branch router230b, and branch router230c) of system200are network nodes that use static and/or dynamic routing to send data to and/or receive data from one or more nodes of system200. Branch routers230may include one or more hardware devices, one or more servers that include routing software, and the like. Branch router230ais located in branch220a, branch router230bis located in branch220b, and branch router230cis located in branch220c. Servers240(e.g., server240a, server240b, server240c, and server240d) of system200are computer hardware or software (e.g., a computer program) that provide functionality for other programs or devices within network210. Servers240may be DNS servers (e.g., DNS server140ofFIG.1), a DHCPv6 server, and the like. In the illustrated embodiment ofFIG.2, server240a(e.g., a DHCPv6 or DNS server) is located in branch220a, server240b(e.g., a DHCPv6 or DNS server) is located in branch220b, and server240c(e.g., a DNS server) is located in branch220c, and server240d(e.g., a DNS server) is located in data center180b. Aggregation routers250(e.g., aggregation router250a, aggregation router250b, and aggregation router250c) are similar to SD-WAN edge router150ofFIG.1. In the illustrated embodiment ofFIG.2, aggregation router250aconnects to branch220avia eBGP connection290a(e.g., an external Border Gateway Protocol (eBGP) connection), aggregation router250bconnects to branch220bvia eBGP connection290b(e.g., an eBGP connection), and aggregation router250cconnects to branch220cvia eBGP connection290c(e.g., an eBGP connection). Aggregation router250afacilitates the connection between branch220aand domain280a, aggregation router250bfacilitates the connection between branch220band domain280b, and aggregation router250cfacilitates the connection between branch220cand domain280c. PnPs260(e.g., PnP260aand PnP260b) are agents that are embedded in network devices. PnPs260may communicate to a plug and play application using an open plug and play protocol over Hypertext Transfer Protocol Secure (HTTPS) during device deployments. In certain embodiments, PnPs260use DHCP, DNS, or other suitable methods in an attempt to acquire the IP address of the PnP server with which it wants to communicate. After a server is found and a connection has been established, the agent may communicate with the PnP server to perform deployment-related activities. PnP260ais associated with domain280a, and PnP260bis associated with domain280b. Cloud services routers270(e.g., cloud services router270aand cloud services router270b) are software routers that an enterprise or a cloud provider deploys as virtual machines. In the illustrated embodiment ofFIG.2, cloud services router270ais a virtual customer premises equipment (vCPE), and cloud services router270bperforms NAT66. Domains280(e.g., domain280a, domain280b, and domain280c) of system200are logical groupings of network nodes within the same infrastructure. In certain embodiments, domains280are identified using a domain name. Domains280that are accessible from the public Internet may be assigned a globally unique name within the DNS. Domain280ais associated with branch220a. 
Domain280aincludes PnP260a, orchestrator node286a, cloud services router270a, and cloud services router270b. In certain embodiments, domain280ais associated with a data center. Domain280bis associated with branch220b. Domain280bincludes server240d(e.g., a DNS server), PnP260b, cloud services router270a, cloud services router270b, management node282, controller284, and orchestrator node286b. In certain embodiments, domain280bis associated with a data center. Cloud services router270aand cloud services router270bare associated with both domains280(e.g., domain280aand domain280b). Domain280cis associated with branch220c. Domain280cconnects to PnP260a, PnP260b, cloud services router270a, cloud services router270b, management node282, controller284, orchestrator node286a, and orchestrator node286b. Management node282of system200is a centralized network management system that allows a user to configure and/or manage the entire overlay network from a graphical dashboard. In certain embodiments, management node282includes a dashboard (e.g., a graphical dashboard). The dashboard of management node282may provide a visual window into network210that allows a user to configure and/or manage the edge nodes. In certain embodiments, management node282is software that runs on one or more servers of network210. This server may be situated in a centralized location. For example, as illustrated inFIG.2, this server may be situated in domain280b. In certain embodiments, the software of management node282may run on the same physical server as the software of one or more controllers. Controller284of system200monitors, operates, manages, troubleshoots, and/or maintains services related to network210. Controller284may manage provisioning, maintenance, and/or security for network210. In some embodiments, controller284is primarily involved in control plane communication and does not handle data traffic. However, controller284may control the flow of data traffic throughout network210. In certain embodiments, controller284works with orchestrator node286of system200to authenticate the edge nodes as they join network210and to orchestrate connectivity among the edge nodes. In the illustrated embodiment ofFIG.2, controller284is located in domain280b. Orchestrator nodes286(e.g., orchestrator node286aand orchestrator node286b) of system200automatically orchestrate connectivity between the edge nodes and a controller of system200. In certain embodiments, orchestrator nodes286are software that runs as processes (e.g., daemon) on one or more edge nodes. In certain embodiments, orchestrator nodes286have a persistent control plane connection (e.g., a Datagram Transport Layer Security (DTLS) tunnel connection) with a controller. If the controller and/or the edge node of system200is behind a NAT, orchestrator nodes286may perform the initial NAT-traversal. In the illustrated embodiment ofFIG.2, orchestrator node286ais associated with branch220a, and orchestrator node286bis associated with branch220b. In certain embodiments, one or more network components of data centers180(e.g., data center180aand data center180b) are located in transport VPN 0. Branch router230ais associated with a first service VPN (e.g., VPN 1), branch router230bis associated with a second service VPN (e.g., VPN 2), and branch router230cis associated with a third service VPN (e.g., VPN 3).
Branch router230a, branch router230b, and branch router230cin service VPN 1, service VPN 2, and service VPN 3, respectively, may need to reach nodes (e.g., management node282and/or controller284) in transport VPN 0. Routes in VPN 0 (e.g., the Internet) are not available in the service VPNs (e.g., service VPN 1, service VPN 2, and service VPN 3), and therefore the packets cannot be routed from the service VPNs to the Internet. To address this issue, IPv6 routes that are in VPN 0 (e.g., the Internet) are advertised in the service VPNs. The IPv6 routes may be specified using a NAT66 route command, which allows these specific routes to be re-distributed in other routing tables. For example, management node282may have a management address192a, and controller284may have a controller address194a. Management node282may use fully qualified domain names (FQDNs) for the addresses of orchestrator node286aand orchestrator node286b. Cloud services router270b, which is connected to both data center180aand data center180b, may use NAT66 to translate management address192aand/or controller address194ain data center180a(e.g., 2001:DC:A::/64) to a management address192band controller address194b, respectively, in data center180b(e.g., 2001:DC:B::/64) using 1:1 NAT. Branch router230auses orchestrator node286aas its orchestrator, and branch router230buses orchestrator node286bas its orchestrator. The data plane of system200uses cloud services router270a(e.g., a vCPE) in data center180ato route the traffic. As such, aggregation routers250(e.g., aggregation router250a, aggregation router250b, and aggregation router250c) can communicate traffic from a service VPN (e.g., VPN 1, VPN 2, or VPN 3) to the transport VPN (e.g., VPN 0 or the Internet) and from the transport VPN to a service VPN. AlthoughFIG.2illustrates a particular number of networks210, branches220(e.g., branch220a, branch220b, and branch220c), branch routers230(e.g., branch router230a, branch router230b, and branch router230c), servers240(e.g., server240a, server240b, server240c, and server240d), aggregation routers250(e.g., aggregation router250a, aggregation router250b, and aggregation router250c), PnPs260(e.g., PnP260aand PnP260b), cloud services routers270(e.g., cloud services router270aand cloud services router270b), domains280(e.g., domain280a, domain280b, and domain280c), management nodes282, controllers284, and orchestrator nodes286(e.g., orchestrator node286aand orchestrator node286b), this disclosure contemplates any suitable number of networks210, branches220, branch routers230, servers240, aggregation routers250, PnPs260, cloud services routers270, domains280, management nodes282, controllers284, and orchestrator nodes286.
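The 1:1 prefix NAT performed by cloud services router270bcan be illustrated with the same prefix-rewrite idea sketched earlier, applied here to the data center prefixes given above; the sample host value is invented for illustration.

```python
# Illustration of 1:1 prefix NAT between the data center prefixes: an address
# under 2001:DC:A::/64 is rewritten under 2001:DC:B::/64, host bits unchanged.
import ipaddress

def one_to_one_prefix_nat(addr: str, src_prefix: str, dst_prefix: str) -> str:
    src_net = ipaddress.IPv6Network(src_prefix)
    dst_net = ipaddress.IPv6Network(dst_prefix)
    host_bits = 128 - src_net.prefixlen
    host = int(ipaddress.IPv6Address(addr)) & ((1 << host_bits) - 1)
    return str(ipaddress.IPv6Address(int(dst_net.network_address) | host))

print(one_to_one_prefix_nat("2001:dc:a::10", "2001:DC:A::/64", "2001:DC:B::/64"))
# -> 2001:dc:b::10
```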
AlthoughFIG.2illustrates a particular arrangement of network210, branches220(e.g., branch220a, branch220b, and branch220c), branch routers230(e.g., branch router230a, branch router230b, and branch router230c), servers240(e.g., server240a, server240b, server240c, and server240d), aggregation routers250(e.g., aggregation router250a, aggregation router250b, and aggregation router250c), PnPs260(e.g., PnP260aand PnP260b), cloud services routers270(e.g., cloud services router270aand cloud services router270b), domains280(e.g., domain280a, domain280b, and domain280c), management node282, a controller284, and orchestrator nodes286(e.g., orchestrator node286aand orchestrator node286b), this disclosure contemplates any suitable arrangement of network210, branches220, branch routers230, servers240, aggregation routers250, PnPs260, cloud services routers270, domains280, management node282, a controller284, and orchestrator nodes286. Furthermore, althoughFIG.2describes and illustrates particular components, devices, or systems carrying out particular actions, this disclosure contemplates any suitable combination of any suitable components, devices, or systems carrying out any suitable actions. FIG.3illustrates an example method for translating IPv6 packets for DIA in an SD-WAN environment. Method300begins at step305. At step310of method300, a router determines a common prefix pool from a transport interface associated with a transport VPN. For example, referring toFIG.1, SD-WAN edge router150of system100may determine common prefix pool162(e.g., 2001:A1:F::/64) using the IPv6 address (e.g., 2001:A1:F::F/64) assigned to transport interface154in transport VPN 0 of SD-WAN edge router150. Method300then moves from step310to step315, where the router identifies a prefix associated with a service VPN. For example, referring toFIG.1, SD-WAN edge router150may determine prefix160(e.g., 2001:380:1::/80) associated with service VPN X. Method300then moves from step315to step320. At step320of method300, the router generates a NAT66 prefix translation using the common prefix pool and the prefix. For example, referring toFIG.1, SD-WAN edge router150of system100may generate NAT66 prefix translation164(e.g., nat66 prefix inside 2001:380:1::/80 outside 2001:A1:F:0:1::/80 vrf 10) using prefix160and common prefix pool162. Method300then moves from step320to step325, where the router installs the NAT prefix translation into a translation table. For example, SD-WAN edge router150ofFIG.1may install NAT prefix translation164into a translation table stored in SD-WAN edge router150. Method300then moves from step325to step330. At step330of method300, the router determines whether the NAT prefix translation has been used within a predetermined session time period. For example, referring toFIG.1, SD-WAN edge router150of system100may determine whether NAT prefix translation164has been used to perform any translations within predetermined session time period166(e.g., 30 minutes). If the router determines that the NAT prefix translation has been used within the predetermined session time period, method300moves from step330to step335, where the router refreshes the predetermined session time period. For example, referring toFIG.1, SD-WAN edge router150of system100may refresh predetermined session time period166for NAT prefix translation164. Method300then loops back to step330until the router determines that the NAT prefix translation has not been used within the predetermined session time period.
Upon determining that the NAT prefix translation has not been used within the predetermined session time period, method300advances from step330to step340, where the router expires the NAT prefix translation. For example, referring toFIG.1, SD-WAN edge router150of system100may expire NAT prefix translation164and send the entry for NAT prefix translation164back to common prefix pool162. Method300then moves from step340to step345. At step345of method300, the router determines whether the NAT prefix translation has been expired for a predetermined expiration time period. For example, referring toFIG.1, SD-WAN edge router150of system100may determine whether NAT prefix translation164has been expired for predetermined expiration time period168(e.g., 60 minutes). If the router determines that the NAT prefix translation has not been expired for the predetermined expiration time period, method300moves from step345to step350, where the router holds the entry for the NAT prefix translation. For example, referring toFIG.1, SD-WAN edge router150may hold the entry for NAT prefix translation164in common prefix pool162. Method300then loops back from step350to step345until the router determines that the NAT prefix translation has been expired for the predetermined expiration time period. Once the router determines that the NAT prefix translation has been expired for the predetermined expiration time period, method300advances from step345to step355, where the router reuses the entry for the NAT prefix translation. For example, referring toFIG.1, SD-WAN edge router150may reuse the entry for NAT prefix translation164in common prefix pool162. Method300then moves from step355to step360, where method300ends. Although this disclosure describes and illustrates particular steps of method300ofFIG.3as occurring in a particular order, this disclosure contemplates any suitable steps of method300ofFIG.3occurring in any suitable order. Although this disclosure describes and illustrates an example method for translating IPv6 packets for DIA in an SD-WAN environment including the particular steps of the method ofFIG.3, this disclosure contemplates any suitable method for translating IPv6 packets, which may include all, some, or none of the steps of the method ofFIG.3, where appropriate. AlthoughFIG.3describes and illustrates particular components, devices, or systems carrying out particular actions, this disclosure contemplates any suitable combination of any suitable components, devices, or systems carrying out any suitable actions. FIG.4illustrates an example computer system400. In particular embodiments, one or more computer system400perform one or more steps of one or more methods described or illustrated herein. In particular embodiments, one or more computer system400provide functionality described or illustrated herein. In particular embodiments, software running on one or more computer system400performs one or more steps of one or more methods described or illustrated herein or provides functionality described or illustrated herein. Particular embodiments include one or more portions of one or more computer system400. Herein, reference to a computer system may encompass a computing device, and vice versa, where appropriate. Moreover, reference to a computer system may encompass one or more computer systems, where appropriate. This disclosure contemplates any suitable number of computer system400. This disclosure contemplates computer system400taking any suitable physical form.
As example and not by way of limitation, computer system400may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, an augmented/virtual reality device, or a combination of two or more of these. Where appropriate, computer system400may include one or more computer system400; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer system400may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. As an example and not by way of limitation, one or more computer system400may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer system400may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate. In particular embodiments, computer system400includes a processor402, memory404, storage406, an input/output (I/O) interface408, a communication interface410, and a bus412. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement. In particular embodiments, processor402includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, processor402may retrieve (or fetch) the instructions from an internal register, an internal cache, memory404, or storage406; decode and execute them; and then write one or more results to an internal register, an internal cache, memory404, or storage406. In particular embodiments, processor402may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates processor402including any suitable number of any suitable internal caches, where appropriate. As an example and not by way of limitation, processor402may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory404or storage406, and the instruction caches may speed up retrieval of those instructions by processor402. Data in the data caches may be copies of data in memory404or storage406for instructions executing at processor402to operate on; the results of previous instructions executed at processor402for access by subsequent instructions executing at processor402or for writing to memory404or storage406; or other suitable data. The data caches may speed up read or write operations by processor402. The TLBs may speed up virtual-address translation for processor402. In particular embodiments, processor402may include one or more internal registers for data, instructions, or addresses. 
This disclosure contemplates processor402including any suitable number of any suitable internal registers, where appropriate. Where appropriate, processor402may include one or more arithmetic logic units (ALUs); be a multi-core processor; or include one or more processors402. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor. In particular embodiments, memory404includes main memory for storing instructions for processor402to execute or data for processor402to operate on. As an example and not by way of limitation, computer system400may load instructions from storage406or another source (such as, for example, another computer system400) to memory404. Processor402may then load the instructions from memory404to an internal register or internal cache. To execute the instructions, processor402may retrieve the instructions from the internal register or internal cache and decode them. During or after execution of the instructions, processor402may write one or more results (which may be intermediate or final results) to the internal register or internal cache. Processor402may then write one or more of those results to memory404. In particular embodiments, processor402executes only instructions in one or more internal registers or internal caches or in memory404(as opposed to storage406or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory404(as opposed to storage406or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple processor402to memory404. Bus412may include one or more memory buses, as described below. In particular embodiments, one or more memory management units (MMUs) reside between processor402and memory404and facilitate accesses to memory404requested by processor402. In particular embodiments, memory404includes random access memory (RAM). This RAM may be volatile memory, where appropriate. Where appropriate, this RAM may be dynamic RAM (DRAM) or static RAM (SRAM). Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. This disclosure contemplates any suitable RAM. Memory404may include one or more memories404, where appropriate. Although this disclosure describes and illustrates particular memory, this disclosure contemplates any suitable memory. In particular embodiments, storage406includes mass storage for data or instructions. As an example and not by way of limitation, storage406may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or universal serial bus (USB) drive or a combination of two or more of these. Storage406may include removable or non-removable (or fixed) media, where appropriate. Storage406may be internal or external to computer system400, where appropriate. In particular embodiments, storage406is non-volatile, solid-state memory. In particular embodiments, storage406includes read-only memory (ROM). Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these. This disclosure contemplates mass storage406taking any suitable physical form. Storage406may include one or more storage control units facilitating communication between processor402and storage406, where appropriate. 
Where appropriate, storage406may include one or more storages406. Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage. In particular embodiments, I/O interface408includes hardware, software, or both, providing one or more interfaces for communication between computer system400and one or more I/O devices. Computer system400may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and computer system400. As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device or a combination of two or more of these. An I/O device may include one or more sensors. This disclosure contemplates any suitable I/O devices and any suitable I/O interfaces408for them. Where appropriate, I/O interface408may include one or more device or software drivers enabling processor402to drive one or more of these I/O devices. I/O interface408may include one or more I/O interfaces408, where appropriate. Although this disclosure describes and illustrates a particular I/O interface, this disclosure contemplates any suitable I/O interface. In particular embodiments, communication interface410includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system400and one or more other computer system400or one or more networks. As an example and not by way of limitation, communication interface410may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. This disclosure contemplates any suitable network and any suitable communication interface410for it. As an example and not by way of limitation, computer system400may communicate with an ad hoc network, a personal area network (PAN), a LAN, a WAN, a MAN, or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, computer system400may communicate with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network, a 3G network, a 4G network, a 5G network, an LTE network, or other suitable wireless network or a combination of two or more of these. Computer system400may include any suitable communication interface410for any of these networks, where appropriate. Communication interface410may include one or more communication interfaces410, where appropriate. Although this disclosure describes and illustrates a particular communication interface, this disclosure contemplates any suitable communication interface. In particular embodiments, bus412includes hardware, software, or both coupling components of computer system400to each other. 
As an example and not by way of limitation, bus412may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), or another suitable bus or a combination of two or more of these. Bus412may include one or more buses412, where appropriate. Although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect. Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such, as for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate. Herein, “or” is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A or B” means “A, B, or both,” unless expressly indicated otherwise or indicated otherwise by context. Moreover, “and” is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A and B” means “A and B, jointly or severally,” unless expressly indicated otherwise or indicated otherwise by context. The scope of this disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments described or illustrated herein that a person having ordinary skill in the art would comprehend. The scope of this disclosure is not limited to the example embodiments described or illustrated herein. Moreover, although this disclosure describes and illustrates respective embodiments herein as including particular components, elements, feature, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, elements, features, functions, operations, or steps described or illustrated anywhere herein that a person having ordinary skill in the art would comprehend. Furthermore, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative. 
Additionally, although this disclosure describes or illustrates particular embodiments as providing particular advantages, particular embodiments may provide none, some, or all of these advantages.
DETAILED DESCRIPTION OF EMBODIMENTS
To improve understanding of the technical solutions of the present disclosure for those skilled in the art, the method, apparatus and system for implementing carrier grade network address translation, the electronic device, and the computer-readable storage medium of the present disclosure will be described below in detail in conjunction with the accompanying drawings. Example embodiments will be described more fully below with reference to the accompanying drawings, but they may be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the present disclosure to those skilled in the art. Embodiments of the present disclosure and features of the embodiments may be combined with each other without conflict. As used herein, the term “and/or” includes any and all combinations of at least one of the associated listed items. The terminology used herein is for the purpose of describing specific embodiments only and is not intended to limit the present disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that as used herein, the terms “comprise” and/or “consist of . . . ” specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of at least one further feature, integer, step, operation, element, component, and/or group thereof. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the existing art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. FIG.1is a flowchart of a method for implementing carrier grade network address translation according to the present disclosure. In a first aspect, referring toFIG.1, the present disclosure provides a method for implementing carrier grade network address translation, which is applied to a forwarding plane of a forwarding and control separated broadband access system in which the forwarding plane and the control plane are disposed in different electronic devices. The method includes the following operations100to102. At operation100, transmitting a first request to a control plane of a forwarding and control separated broadband access system, where the first request is used for applying to the control plane for a public network address range; and receiving a first response returned by the control plane, where the first response includes allocated public network address range information. In present disclosure, the public network address range information may be a public network address range or an address mask. In present disclosure, the first request may be a public network address range allocate request, and the first response may be a public network address range allocate response.
Apparently, the first request and the first response are not limited to the names given above, as long as the names with the above functions are within the scope of the present disclosure, which are not described in detail here. At operation101, receiving a public network address allocated to a user by the control plane according to the public network address range information; and receiving a private network address allocated to the user by the control plane. In present disclosure, the public network address may be a public network IPv4 address, and the private network address may be an IPv4 or IPv6 address. At operation102, performing, according to the public network address and the private network address, public and private network address translation on received service traffic of the user. In present disclosure, the public and private network address translation may be a translation between an IPv4 address and a public network IPv4 address, namely NAT44; or may be a translation between an IPv6 address and a public network IPv4 address, namely NAT64. In present disclosure, after receiving uplink service traffic, a private network address in a source address of the uplink service traffic is translated into a public network address; and after receiving downlink service traffic, a public network address in a destination address of the downlink service traffic is translated into a private network address. In present disclosure, the method further includes: receiving a static port range allocated by the control plane to the user or a dynamic port allocated by the control plane for a specific service of the user; and forwarding, according to the port range or the port, the service traffic after the public and private network address translation. In present disclosure, the method further includes: uploading user identity tracing information to the control plane; or uploading the user identity tracing information to a third-party legal monitoring system, where the user identity tracing information includes: the public network address, the private network address, and the port range; or the user identity tracing information includes: the public network address, the private network address, and the port. In present disclosure, the method further includes: receiving a second request transmitted from the control plane, where the second request is used for querying a state of the public network address range; and returning a second response to the control plane, where the second response includes the state of the public network address range. In present disclosure, the second request may be a public network address range state query request, and the second response may be a public network address range state query response. Apparently, the second request and the second response are not limited to the names given above, as long as the names with the above functions are within the scope of the present disclosure, which are not described in detail here. In present disclosure, the public network address range is in an idle state, and the method further includes: transmitting a third request to the control plane, where the third request is used for requesting to release the public network address range. In present disclosure, the third request may be a public network address range release request. Apparently, the third request is not limited to the name given above, as long as the name with the above function is within the scope of the present disclosure, which is not described in detail here. 
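The forwarding-plane behavior of operations 100 to 102, together with the port range and the uplink/downlink translation described above, can be sketched as follows. The class, method, and message names are illustrative stand-ins rather than a defined interface, and a stub control plane exposing an allocate_range() call is assumed; a matching control-plane sketch follows the second aspect below. The NAT44 case is shown; a NAT64 variant would simply carry an IPv6 private address in the binding.

from dataclasses import dataclass

@dataclass
class UserBinding:
    private_addr: str          # private IPv4/IPv6 address allocated to the user
    public_addr: str           # public IPv4 address allocated to the user
    port_range: range          # static port range (or a single dynamic port wrapped in a range)

class ForwardingPlane:
    def __init__(self, control_plane):
        self.control_plane = control_plane
        self.address_range = None
        self.bindings = []

    def apply_for_range(self):
        # Operation 100: first request / first response.
        self.address_range = self.control_plane.allocate_range()

    def install_binding(self, binding):
        # Operation 101: record the addresses (and ports) allocated to a user.
        self.bindings.append(binding)

    def translate_uplink(self, src_addr, src_port):
        # Operation 102 (uplink): translate the private source address to the public address.
        for b in self.bindings:
            if src_addr == b.private_addr and src_port in b.port_range:
                return b.public_addr, src_port
        raise LookupError("no binding for uplink packet")

    def translate_downlink(self, dst_addr, dst_port):
        # Operation 102 (downlink): translate the public destination address back to the private address.
        for b in self.bindings:
            if dst_addr == b.public_addr and dst_port in b.port_range:
                return b.private_addr, dst_port
        raise LookupError("no binding for downlink packet")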
In present disclosure, the method further includes: receiving a fourth request transmitted from the control plane, where the fourth request is used for querying a state of at least one public network address in the public network address range; and returning a fourth response to the control plane, where the fourth response includes the state of the at least one public network address in the public network address range. In present disclosure, the fourth request may be a public network address state query request. Apparently, the fourth request is not limited to the name given above, as long as the name with the above function is within the scope of the present disclosure, which is not described in detail here. In present disclosure, all public network addresses in the public network address range are in a used state, and the method further includes: re-transmitting the first request to the control plane. In present disclosure, a usage right of the public network address range expires, and the method further includes: transmitting a fifth request to the control plane, where the fifth request is used for requesting to update the usage right of the public network address range. In present disclosure, the fifth request may be a public network address range usage right update request. Apparently, the fifth request is not limited to the name given above, as long as the name with the above function is within the scope of the present disclosure, which is not described in detail here. FIG.2is another flowchart of a method for implementing carrier grade network address translation according to the present disclosure. In a second aspect, referring toFIG.2, the present disclosure provides another method for implementing carrier grade network address translation, which is applied to a control plane of a forwarding and control separated broadband access system in which the control plane and the forwarding plane are disposed in different electronic devices. The method includes the following operations200to201. At operation200, receiving a first request transmitted from a forwarding plane of a forwarding and control separated broadband access system, where the first request is used for applying for a public network address range; allocating public network address range information to the forwarding plane, and returning a first response to the forwarding plane, where the first response includes allocated public network address range information. In present disclosure, the public network address range information may be a public network address range or an address mask. In present disclosure, the first request may be a public network address range allocate request, and the first response may be a public network address range allocate response. Apparently, the first request and the first response are not limited to the names given above, as long as the names with the above functions are within the scope of the present disclosure, which are not described in detail here. At operation201, allocating a public network address to a user according to the public network address range information, and transmitting the public network address to the forwarding plane; and allocating a private network address to the user, and transmitting the private network address to the forwarding plane. In present disclosure, the public network address may be a public network IPv4 address, and the private network address may be an IPv4 or IPv6 address.
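A minimal control-plane sketch for operations 200 and 201 follows, using illustrative pool values (a documentation IPv4 range as the public pool and a shared-address-space range as the private pool) and method names that are not part of the disclosure.

import ipaddress

class ControlPlane:
    def __init__(self, public_pool="203.0.113.0/24", private_pool="100.64.0.0/16"):
        self.public_pool = ipaddress.ip_network(public_pool)
        self.private_pool = ipaddress.ip_network(private_pool)
        self.public_hosts = self.public_pool.hosts()     # iterator over allocatable public addresses
        self.private_hosts = self.private_pool.hosts()   # iterator over allocatable private addresses

    def allocate_range(self):
        # Operation 200: return the public network address range information
        # (here the whole configured pool) in the first response.
        return str(self.public_pool)

    def allocate_user(self):
        # Operation 201: allocate one public and one private address per user.
        public_addr = str(next(self.public_hosts))
        private_addr = str(next(self.private_hosts))
        return public_addr, private_addr

The pair returned by allocate_user would be delivered to the forwarding plane over the control interface channel and installed there as a user binding, as in the forwarding-plane sketch above.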
In present disclosure, the method further includes: allocating a static port range to the user, or allocating a dynamic port for a specific service of the user, and transmitting the port range or port to the forwarding plane. In present disclosure, the method further includes: receiving user identity tracing information transmitted from the forwarding plane; and forwarding the user identity tracing information to an authentication authorization accounting system. The user identity tracing information includes: the public network address, the private network address, and the port range; or the user identity tracing information includes: the public network address, the private network address, and the port. In present disclosure, the method further includes: transmitting a second request to the forwarding plane, where the second request is used for querying a state of the public network address range; and receiving a second response returned by the forwarding plane, where the second response includes the state of the public network address range. In present disclosure, the second request may be a public network address range state query request, and the second response may be a public network address range state query response. Apparently, the second request and the second response are not limited to the names given above, as long as the names with the above functions are within the scope of the present disclosure, which are not described in detail here. In present disclosure, the public network address range is in an idle state, and the method further includes: receiving a third request transmitted from the forwarding plane, where the third request is used for requesting to release the public network address range; and releasing the public network address range. In present disclosure, the third request may be a public network address range release request. Apparently, the third request is not limited to the name given above, as long as the name with the above function is within the scope of the present disclosure, which is not described in detail here. In present disclosure, the method further includes: transmitting a fourth request to the forwarding plane, where the fourth request is used for querying a state of at least one public network address in the public network address range; and receiving a fourth response returned by the forwarding plane, where the fourth response includes the state of the at least one public network address in the public network address range. In present disclosure, the fourth request may be a public network address state query request. Apparently, the fourth request is not limited to the name given above, as long as the name with the above function is within the scope of the present disclosure, which is not described in detail here. In present disclosure, a usage right of the public network address range expires, and the method further includes: receiving a fifth request transmitted from the forwarding plane, where the fifth request is used for requesting to update the usage right of the public network address range. In present disclosure, the fifth request may be a public network address range usage right update request. Apparently, the fifth request is not limited to the name given above, as long as the name with the above function is within the scope of the present disclosure, which is not described in detail here. 
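The port allocation and identity tracing described above can be sketched as follows; the record layout and helper names are illustrative assumptions, the point being that the tracing information forwarded to the authentication authorization accounting system carries the public network address, the private network address, and either the static port range or the dynamic port.

from dataclasses import dataclass
from typing import Optional

@dataclass
class TracingRecord:
    public_addr: str
    private_addr: str
    port_range: Optional[range] = None   # present when a static port range was allocated
    port: Optional[int] = None           # present when a dynamic port was allocated

def allocate_static_range(user_index, block=512, base=1024):
    """Give each user a disjoint static block of ports (illustrative scheme)."""
    start = base + user_index * block
    return range(start, start + block)

def forward_to_aaa(record, aaa_sink):
    """Forward the tracing information to the authentication authorization accounting system."""
    aaa_sink.append(record)

aaa_log = []
record = TracingRecord("203.0.113.5", "100.64.0.8", port_range=allocate_static_range(0))
forward_to_aaa(record, aaa_log)
print(aaa_log[0].port_range)   # range(1024, 1536)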
In present disclosure, the first request, the first response, the second request, the second response, the third request, the fourth request, and the fifth request may be transmitted and received via a control interface channel between the forwarding plane and the control plane. In a third aspect, the present disclosure provides an electronic device, including: at least one processor; and a memory having at least one program stored thereon which, when executed by the at least one processor, causes the at least one processor to implement any method for implementing carrier grade network address translation as described above. The processor is a device with a data processing capability, including but not limited to a central processing unit (CPU) or the like; and the memory is a device with a data storage capability, including but not limited to a random access memory (RAM, more specifically SDRAM, DDR, etc.), a read only memory (ROM), an electrically erasable programmable read only memory (EEPROM) or a flash memory (FLASH). In present disclosure, the processor and the memory are interconnected via bus, and thus connected to other components of the electronic device. In a fourth aspect, the present disclosure provides a computer-readable storage medium having a computer program stored thereon which, when executed by a processor, causes any method for implementing carrier grade network address translation as described above to be implemented. The following describes specific implementations of the embodiments of the present disclosure in conjunction with specific examples, but the examples listed are merely for convenience of description and do not intend to limit the scope of the present disclosure. Example 1 Referring toFIG.3, the method includes the following process. (1) (non-user flow) A forwarding plane of a broadband access system (vBRAS or DBNG) transmits a public network address range allocate request to a control plane via a control interface channel between the forwarding plane and the control plane, where the public network address range allocate request is used for applying to the control plane for a public network address range (or address mask). (2) (non-user flow) The control plane of the broadband access system (vBRAS or DBNG) allocates a public network address range (or address mask) to the forwarding plane, and delivers, via the control interface channel between the forwarding plane and the control plane, a public network address range allocate response containing the public network address range (or address mask) to the forwarding plane. (3) (user flow) The control plane of the broadband access system (vBRAS or DBNG) allocates a public network address and port range to a user, and delivers the public network address and port range to the forwarding plane via the control interface channel between the forwarding plane and the control plane, while the user uses the public network address and port range statically within a specified service cycle, and performs service access; and the control plane allocates a private network address to the user, and delivers the private network address to the forwarding plane via the control interface channel between the forwarding plane and the control plane. (4) (user flow) The forwarding plane of the broadband access system (vBRAS or DBNG) performs public and private network address translation on and forwards uplink and downlink service traffic of the user. 
(5) (user flow) The forwarding plane of the broadband access system (vBRAS or DBNG) uploads user identity tracing information to the control plane (the user identity tracing information includes, but is not limited to, the public network address, the private network address, and the port range). (6) (user flow) The control plane of the broadband access system (vBRAS or DBNG) forwards the user identity tracing information received in (5) to an authentication authorization accounting (AAA) system (the user identity tracing information includes, but is not limited to, the public network address, the private network address, and the port range). (7) (non-user flow) The control plane of the broadband access system (vBRAS or DBNG) transmits a public network address range state query request to the forwarding plane via the control interface channel between the forwarding plane and the control plane (the public network address range may be in an idle or used state). (8) (non-user flow) The forwarding plane of the broadband access system (vBRAS or DBNG) detects the state of the public network address range queried by the control plane in (7), and transmits a public network address range state query response to the control plane via the control interface channel between the forwarding plane and the control plane, where the public network address range state query response includes the state of the public network address range, and if the public network address range is in the idle state, a public network address range release request is initiated at the same time. (9) (non-user flow) If all public network addresses in the public network address range applied for by the forwarding plane of the broadband access system (vBRAS or DBNG) are in the used state, the public network address range allocate request is re-initiated to the control plane via the control interface channel between the forwarding plane and the control plane, to apply to the control plane for a new public network address range. (10) (non-user flow) If a usage right of a public network address in the public network address range applied for by the forwarding plane of the broadband access system (vBRAS or DBNG) expires, a public network address range usage right update request is initiated to the control plane via the control interface channel between the forwarding plane and the control plane. Example 2 Referring toFIG.4, the method includes the following process. (1) (non-user flow) A forwarding plane of a broadband access system (vBRAS or DBNG) transmits a public network address range allocate request to a control plane via a control interface channel between the forwarding plane and the control plane, where the public network address range allocate request is used for applying to the control plane for a public network address range (or address mask). (2) (non-user flow) The control plane of the broadband access system (vBRAS or DBNG) allocates a public network address range (or address mask) to the forwarding plane, and delivers, via the control interface channel between the forwarding plane and the control plane, a public network address range allocate response containing the public network address range (or address mask) to the forwarding plane. 
(3) (user flow) The control plane of the broadband access system (vBRAS or DBNG) allocates a public network address and port range to a user, and delivers the public network address and port range to the forwarding plane via the control interface channel between the forwarding plane and the control plane, while the user uses the public network address and port range statically within a specified service cycle, and performs service access; and the control plane allocates a private network address to the user, and delivers the private network address to the forwarding plane via the control interface channel between the forwarding plane and the control plane. (4) (user flow) The forwarding plane of the broadband access system (vBRAS or DBNG) performs public and private network address translation on and forwards uplink and downlink service traffic of the user. (5) (user flow) The forwarding plane of the broadband access system (vBRAS or DBNG) uploads user identity tracing information to a third-party legal monitoring system (the user identity tracing information includes, but is not limited to, the public network address, the private network address, and the port range). (6) (non-user flow) The control plane of the broadband access system (vBRAS or DBNG) transmits a public network address range state query request to the forwarding plane via the control interface channel between the forwarding plane and the control plane (the public network address range may be in an idle or used state). (7) (non-user flow) The forwarding plane of the broadband access system (vBRAS or DBNG) detects the state of the public network address range queried by the control plane in (6), and transmits a public network address range state query response to the control plane via the control interface channel between the forwarding plane and the control plane, where the public network address range state query response includes the state of the public network address range, and if the public network address range is in the idle state, a public network address range release request is initiated at the same time. (8) (non-user flow) If all public network addresses in the public network address range applied for by the forwarding plane of the broadband access system (vBRAS or DBNG) are in the used state, the public network address range allocate request is re-initiated to the control plane via the control interface channel between the forwarding plane and the control plane, to apply to the control plane for a new public network address range. (9) (non-user flow) If a usage right of a public network address in the public network address range applied for by the forwarding plane of the broadband access system (vBRAS or DBNG) expires, a public network address range usage right update request is initiated to the control plane via the control interface channel between the forwarding plane and the control plane. Example 3 Referring toFIG.5, the method includes the following process. (1) (non-user flow) A forwarding plane of a broadband access system (vBRAS or DBNG) transmits a public network address range allocate request to a control plane via a control interface channel between the forwarding plane and the control plane, where the public network address range allocate request is used for applying to the control plane for a public network address range (or address mask). 
(2) (non-user flow) The control plane of the broadband access system (vBRAS or DBNG) allocates a public network address range (or address mask) to the forwarding plane, and delivers, via the control interface channel between the forwarding plane and the control plane, a public network address range allocate response containing the public network address range (or address mask) to the forwarding plane. (3) (user flow) The control plane of the broadband access system (vBRAS or DBNG) allocates a public network address and port to a user, and delivers the public network address and port to the forwarding plane via the control interface channel between the forwarding plane and the control plane, while the user uses the public network address and port dynamically for service access; and the control plane allocates a private network address to the user, and delivers the private network address to the forwarding plane via the control interface channel between the forwarding plane and the control plane. (4) (user flow) The forwarding plane of the broadband access system (vBRAS or DBNG) performs public and private network address translation on and forwards uplink and downlink service traffic of the user. (5) (user flow) The forwarding plane of the broadband access system (vBRAS or DBNG) uploads user identity tracing information to the control plane (the user identity tracing information includes, but is not limited to, the public network address, the private network address, and the port). (6) (user flow) The control plane of the broadband access system (vBRAS or DBNG) forwards the user identity tracing information received in (5) to an AAA system (the user identity tracing information includes, but is not limited to, the public network address, the private network address, and the port). (7) (non-user flow) The control plane of the broadband access system (vBRAS or DBNG) transmits a public network address range state query request to the forwarding plane via the control interface channel between the forwarding plane and the control plane (the public network address range may be in an idle or used state). (8) (non-user flow) The forwarding plane of the broadband access system (vBRAS or DBNG) detects the state of the public network address range queried by the control plane in (7), and transmits a public network address range state query response to the control plane via the control interface channel between the forwarding plane and the control plane, where the public network address range state query response includes the state of the public network address range, and if the public network address range is in the idle state, a public network address range release request is initiated at the same time. (9) (non-user flow) If all public network addresses in the public network address range applied for by the forwarding plane of the broadband access system (vBRAS or DBNG) are in the used state, the public network address range allocate request is re-initiated to the control plane via the control interface channel between the forwarding plane and the control plane, to apply to the control plane for a new public network address range. (10) (non-user flow) If a usage right of a public network address in the public network address range applied for by the forwarding plane of the broadband access system (vBRAS or DBNG) expires, a public network address range usage right update request is initiated to the control plane via the control interface channel between the forwarding plane and the control plane. 
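The non-user-flow housekeeping that Examples 1 to 3 share (and that Example 4 repeats below) can be sketched from the forwarding plane's point of view; the message names are illustrative labels for the request/response types rather than defined protocol data units.

from enum import Enum, auto

class Msg(Enum):
    RANGE_STATE_RESPONSE = auto()     # second response: state of the public network address range
    RANGE_RELEASE = auto()            # third request: release an idle range
    RANGE_ALLOCATE = auto()           # first request: re-apply when the range is exhausted
    USAGE_RIGHT_UPDATE = auto()       # fifth request: renew an expired usage right

def housekeeping(addresses_in_use, range_size, usage_right_expired):
    """Return the messages the forwarding plane would send for the current pool state."""
    out = [Msg.RANGE_STATE_RESPONSE]              # always answer the state query
    if addresses_in_use == 0:
        out.append(Msg.RANGE_RELEASE)             # idle range is handed back
    elif addresses_in_use == range_size:
        out.append(Msg.RANGE_ALLOCATE)            # all addresses used: apply for a new range
    if usage_right_expired:
        out.append(Msg.USAGE_RIGHT_UPDATE)        # request a usage right update
    return out

print(housekeeping(addresses_in_use=0, range_size=256, usage_right_expired=False))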
Example 4 Referring toFIG.6, the method includes the following process. (1) (non-user flow) A forwarding plane of a broadband access system (vBRAS or DBNG) transmits a public network address range allocate request to a control plane via a control interface channel between the forwarding plane and the control plane, where the public network address range allocate request is used for applying to the control plane for a public network address range (or address mask). (2) (non-user flow) The control plane of the broadband access system (vBRAS or DBNG) allocates a public network address range (or address mask) to the forwarding plane, and delivers, via the control interface channel between the forwarding plane and the control plane, a public network address range allocate response containing the public network address range (or address mask) to the forwarding plane. (3) (user flow) The control plane of the broadband access system (vBRAS or DBNG) allocates a public network address and port to a user, and delivers the public network address and port to the forwarding plane via the control interface channel between the forwarding plane and the control plane, while the user uses the public network address and port dynamically for service access; and the control plane allocates a private network address to the user, and delivers the private network address to the forwarding plane via the control interface channel between the forwarding plane and the control plane. (4) (user flow) The forwarding plane of the broadband access system (vBRAS or DBNG) performs public and private network address translation on and forwards uplink and downlink service traffic of the user. (5) (user flow) The forwarding plane of the broadband access system (vBRAS or DBNG) uploads user identity tracing information to a third-party legal monitoring system (the user identity tracing information includes, but is not limited to, the public network address, the private network address, and the port). (6) (non-user flow) The control plane of the broadband access system (vBRAS or DBNG) transmits a public network address range state query request to the forwarding plane via the control interface channel between the forwarding plane and the control plane (the public network address range may be in an idle or used state). (7) (non-user flow) The forwarding plane of the broadband access system (vBRAS or DBNG) detects the state of the public network address range queried by the control plane in (6), and transmits a public network address range state query response to the control plane via the control interface channel between the forwarding plane and the control plane, where the public network address range state query response includes the state of the public network address range, and if the public network address range is in the idle state, a public network address range release request is initiated at the same time. (8) (non-user flow) If all public network addresses in the public network address range applied for by the forwarding plane of the broadband access system (vBRAS or DBNG) are in the used state, the public network address range allocate request is re-initiated to the control plane via the control interface channel between the forwarding plane and the control plane, to apply to the control plane for a new public network address range. 
(9) (non-user flow) If a usage right of a public network address in the public network address range applied for by the forwarding plane of the broadband access system (vBRAS or DBNG) expires, a public network address range usage right update request is initiated to the control plane via the control interface channel between the forwarding plane and the control plane. FIG.7is a block diagram of an apparatus for implementing carrier grade network address translation according to the present disclosure. In a fifth aspect, referring toFIG.7, the present disclosure provides an apparatus for implementing carrier grade network address translation, including: a public network address range application module701configured to transmit a first request to a control plane of a forwarding and control separated broadband access system, where the first request is used for applying to the control plane for a public network address range, and receive a first response returned by the control plane, where the first response includes allocated public network address range information; a user address acquisition module702configured to receive a public network address allocated to a user by the control plane according to the public network address range information, and receive a private network address allocated to the user by the control plane; and a service traffic processing module703configured to perform, according to the public network address and the private network address, public and private network address translation on received service traffic of the user. In present disclosure, the public network address range information includes: a public network address range or an address mask. In present disclosure, the user address acquisition module702is further configured to: receive a static port range allocated by the control plane to the user or a dynamic port allocated by the control plane for a specific service of the user; and the service traffic processing module703is further configured to: forward, according to the port range or the port, the service traffic after the public and private network address translation. In present disclosure, the user address acquisition module702is further configured to: upload user identity tracing information to the control plane; or upload the user identity tracing information to a third-party legal monitoring system. The user identity tracing information includes: the public network address, the private network address, and the port range; or the user identity tracing information includes: the public network address, the private network address, and the port. In present disclosure, the public network address range application module701is further configured to: receive a second request transmitted from the control plane, where the second request is used for querying a state of the public network address range; and return a second response to the control plane, where the second response includes the state of the public network address range. In present disclosure, the public network address range is in an idle state, and the public network address range application module701is further configured to: transmit a third request to the control plane, where the third request is used for requesting to release the public network address range. 
In present disclosure, the public network address range application module701is further configured to: receive a fourth request transmitted from the control plane, where the fourth request is used for querying a state of at least one public network address in the public network address range; and return a fourth response to the control plane, where the fourth response includes the state of the at least one public network address in the public network address range. In present disclosure, all public network addresses in the public network address range are in a used state, and the public network address range application module701is further configured to: re-transmit the first request to the control plane. In present disclosure, a usage right of the public network address range expires, and the public network address range application module701is further configured to: transmit a fifth request to the control plane, where the fifth request is used for requesting to update the usage right of the public network address range. The specific implementation process of the apparatus for implementing carrier grade network address translation is the same as the specific implementation process of the method for implementing carrier grade network address translation described in the foregoing embodiments, and thus is not repeated here. FIG.8is another block diagram of an apparatus for implementing carrier grade network address translation according to the present disclosure. In a sixth aspect, referring toFIG.8, the present disclosure provides another apparatus for implementing carrier grade network address translation, including: a public and private network address management module801configured to receive a first request transmitted from a forwarding plane of a forwarding and control separated broadband access system, where the first request is used for applying for a public network address range; allocate public network address range information to the forwarding plane, and return a first response to the forwarding plane, where the first response includes allocated public network address range information; allocate a public network address to the user according to the public network address range information, and transmit the public network address to the forwarding plane; and allocate a private network address to the user, and transmit the private network address to the forwarding plane. In present disclosure, the public network address range information includes: a public network address range or an address mask. In present disclosure, the public and private network address management module801is further configured to: allocate a static port range to the user, or allocate a dynamic port for a specific service of the user, and transmit the port range or port to the forwarding plane. In present disclosure, the public and private network address management module801is further configured to: receive user identity tracing information transmitted from the forwarding plane; and forward the user identity tracing information to an authentication authorization accounting system. The user identity tracing information includes: the public network address, the private network address, and the port range; or the user identity tracing information includes: the public network address, the private network address, and the port. 
In present disclosure, the public and private network address management module801is further configured to: transmit a second request to the forwarding plane, where the second request is used for querying a state of the public network address range; and receive a second response returned by the forwarding plane, where the second response includes the state of the public network address range. In present disclosure, the public network address range is in an idle state, and the public and private network address management module801is further configured to: receive a third request transmitted from the forwarding plane, where the third request is used for requesting to release the public network address range; and release the public network address range. In present disclosure, the public and private network address management module801is further configured to: transmit a fourth request to the forwarding plane, where the fourth request is used for querying a state of at least one public network address in the public network address range; and receive a fourth response returned by the forwarding plane, where the fourth response includes the state of the at least one public network address in the public network address range. In present disclosure, a usage right of the public network address range expires, and the public and private network address management module801is further configured to: receive a fifth request transmitted from the forwarding plane, where the fifth request is used for requesting to update the usage right of the public network address range. The specific implementation process of the apparatus for implementing carrier grade network address translation is the same as the specific implementation process of the method for implementing carrier grade network address translation described in the foregoing embodiments, and thus is not repeated here. FIG.9is a block diagram of a system for implementing carrier grade network address translation according to the present disclosure. In a seventh aspect, referring toFIG.9, the present disclosure provides a system for implementing carrier grade network address translation, including: a forwarding plane901and a control plane902. The forwarding plane901and the control plane902are disposed in different electronic devices. The forwarding plane901is configured to: transmit a first request to the control plane, where the first request is used for applying to the control plane for a public network address range; receive a first response returned by the control plane; where the first response includes allocated public network address range information; receive a public network address allocated to a user by the control plane according to the public network address range information; receive a private network address allocated to the user by the control plane; and perform, according to the public network address and the private network address, public and private network address translation on received service traffic of the user. 
The control plane902is configured to: receive the first request transmitted from the forwarding plane, allocate the public network address range information to the forwarding plane, and return the first response to the forwarding plane; allocate the public network address to the user according to the public network address range information, and transmit the public network address to the forwarding plane; and allocate the private network address to the user, and transmit the private network address to the forwarding plane. In present disclosure, the public network address range information includes: a public network address range or an address mask. In present disclosure, the forwarding plane901is further configured to: receive a static port range allocated by the control plane to the user or a dynamic port allocated by the control plane for a specific service of the user; and forward, according to the port range or the port, the service traffic after the public and private network address translation. The control plane902is further configured to: allocate the static port range to the user, or allocate the dynamic port for the specific service of the user, and transmit the port range or port to the forwarding plane. In present disclosure, the forwarding plane901is further configured to: upload user identity tracing information to the control plane; or upload the user identity tracing information to a third-party legal monitoring system. The control plane902is further configured to: receive the user identity tracing information transmitted from the forwarding plane; and forward the user identity tracing information to an authentication authorization accounting system. The user identity tracing information includes: the public network address, the private network address, and the port range; or the user identity tracing information includes: the public network address, the private network address, and the port. In present disclosure, the forwarding plane901is further configured to: receive a second request transmitted from the control plane, where the second request is used for querying a state of the public network address range; and return a second response to the control plane, where the second response includes the state of the public network address range. The control plane902is further configured to: transmit the second request to the forwarding plane, and receive the second response returned by the forwarding plane. In present disclosure, the public network address range is in an idle state, and the forwarding plane901is further configured to: transmit a third request to the control plane, where the third request is used for requesting to release the public network address range; and the control plane902is further configured to: receive the third request transmitted from the forwarding plane. In present disclosure, the forwarding plane901is further configured to: receive a fourth request transmitted from the control plane, where the fourth request is used for querying a state of at least one public network address in the public network address range; and return a fourth response to the control plane, where the fourth response includes the state of the at least one public network address in the public network address range. The control plane902is further configured to: transmit the fourth request to the forwarding plane, and receive the fourth response returned by the forwarding plane. 
In the present disclosure, all public network addresses in the public network address range are in a used state, and the forwarding plane901is further configured to: re-transmit the first request to the control plane. In the present disclosure, a usage right of the public network address range expires, and the forwarding plane901is further configured to: transmit a fifth request to the control plane, where the fifth request is used for requesting to update the usage right of the public network address range; and the control plane902is further configured to: receive the fifth request transmitted from the forwarding plane. In the embodiments of the present disclosure, the control plane may be deployed in a centralized manner, and is responsible for centralized management of users and addresses; while the forwarding plane may be deployed in a distributed manner and close to the user, and perform uplink and downlink forwarding of user service traffic nearby. The specific implementation process of the system for implementing carrier grade network address translation is the same as the specific implementation process of the method for implementing carrier grade network address translation described in the foregoing embodiments, and thus is not repeated here. Those of ordinary skill in the art will appreciate that all or some operations of the above described method, functional modules/units in the system and apparatus may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or operation may be performed cooperatively by several physical components. Some or all physical components may be implemented as software executed by a processor, such as a CPU, a digital signal processor or microprocessor, or implemented as hardware, or implemented as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on a computer-readable medium which may include a computer storage medium (or non-transitory medium) and communication medium (or transitory medium). As is well known to those of ordinary skill in the art, the term computer storage medium includes volatile and nonvolatile, removable and non-removable medium implemented in any method or technology for storing information, such as computer-readable instructions, data structures, program modules or other data. A computer storage medium includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disc (DVD) or other optical disc storage, magnetic cartridge, magnetic tape, magnetic disk storage or other magnetic storage devices, or may be any other medium used for storing the desired information and accessible by a computer. Moreover, it is well known to one of ordinary skill in the art that a communication medium typically includes a computer-readable instruction, a data structure, a program module, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery medium. The present disclosure has disclosed example embodiments, and although specific terms are employed, they are used and should be interpreted in a generic and descriptive sense only and not for purposes of limitation. 
In some instances, features, characteristics and/or elements described in connection with a particular embodiment may be used alone or in combination with features, characteristics and/or elements described in connection with other embodiments, unless expressly stated otherwise, as would be apparent to one skilled in the art. It will, therefore, be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the present disclosure as set forth in the appended claims. INDUSTRIAL APPLICABILITY With the method for implementing carrier grade network address translation in the embodiments of the present disclosure, the CGN in a forwarding and control separated broadband access system (e.g., a forwarding and control separated virtual broadband remote access server (vBRAS) or disaggregated broadband network gateway (DBNG)) is implemented.
DETAILED DESCRIPTION FIG.1is a block diagram illustrating an exemplary 5G system network architecture. The architecture inFIG.1includes NRF100and SCP101, which may be located in the same home public land mobile network (HPLMN). As described above, NRF100may maintain profiles of available producer NF service instances and their supported services and allow consumer NFs or SCPs to subscribe to and be notified of the registration of new/updated producer NF service instances. SCP101may also support service discovery and selection of producer NF instances. SCP101may perform load balancing of connections between consumer and producer NFs. NRF100is a repository for NF or service profiles of producer NF instances. In order to communicate with a producer NF instance, a consumer NF or an SCP must obtain the NF or service profile of the producer NF instance from NRF100. The NF or service profile is a JavaScript object notation (JSON) data structure defined in 3GPP TS 29.510. The NF or service profile includes attributes that indicate the type of service provided, capacity of the NF instance, and information for contacting the NF instance. InFIG.1, any of the network functions can be consumer NFs, producer NFs, or both, depending on whether they are requesting, providing, or requesting and providing services. In the illustrated example, the NFs include a policy control function (PCF)102that performs policy related operations in a network, a unified data management (UDM)104that manages user data, and an application function (AF)106that provides application services. The NFs illustrated inFIG.1further include a session management function (SMF)108that manages sessions between access and mobility management function (AMF)110and PCF102. AMF110performs mobility management operations similar to those performed by a mobility management entity (MME) in 4G networks. An authentication server function (AUSF)112performs authentication services for user equipment (UEs), such as user equipment (UE)114, seeking access to the network. A network slice selection function (NSSF)116provides network slicing services for devices seeking to access specific network capabilities and characteristics associated with a network slice. A network exposure function (NEF)118provides application programming interfaces (APIs) for application functions seeking to obtain information about Internet of things (IoT) devices and other UEs attached to the network. NEF118performs similar functions to the service capability exposure function (SCEF) in 4G networks. A radio access network (RAN)120connects user equipment (UE)114to the network via a wireless link. Radio access network120may be accessed using a g-Node B (gNB) (not shown inFIG.1) or other wireless access point. A user plane function (UPF)122can support various proxy functionality for user plane services. One example of such proxy functionality is multipath transmission control protocol (MPTCP) proxy functionality. UPF122may also support performance measurement functionality, which may be used by UE114to obtain network performance measurements. Also illustrated inFIG.1is a data network (DN)124through which UEs access data network services, such as Internet services. SEPP126filters incoming traffic from another PLMN and performs topology hiding for traffic exiting the home PLMN. SEPP126may communicate with a SEPP in a foreign PLMN which manages security for the foreign PLMN. 
Thus, traffic between NFs in different PLMNs may traverse two SEPP functions, one for the home PLMN and the other for the foreign PLMN. As stated above, one problem that can occur in 5G and other networks is that mappings between self-constructed FQDNs and IP addresses and other types of DNS mappings for 5GC NFs are maintained using manual DNS configuration. The 3GPP has defined self-constructed FQDNs for 5G NFs which are utilized when the consumer cannot perform the discovery of such producer NFs from the NRF. Example use cases for self-constructed FQDNs include NFs that communicate with an NRF without local configuration for NF discovery, communications from a V-NRF to an H-NRF, communications from a V-NSSF to an H-NSSF, AMF to NSSF communications, etc. One challenge with self-constructed FQDNs is the need for manual configuration of DNS. Further, the self-constructed FQDN and IP address mappings at DNS need to be kept in sync with an ever changing cloud native 5G topology. The cloud native 5G topology information is already present at the NRF. However, there is no defined mechanism to sync the topology maintained by the NRF with the DNS system. According to the subject matter described herein, the NRF can be utilized to configure and update DNS with changes in mappings between self-constructed FQDNs and IP addresses and other types of DNS mappings, even in the cloud native 5G topology where the mappings change frequently. In 5G communications networks, 5GC NFs register their NF profiles with the NRF. The NF profile can include the self-constructed FQDN of the NF, the IP address of the NF, or both.FIG.2is a message flow diagram illustrating exemplary messages exchanged for an NF register service operation. Referring toFIG.2, in line1of the message flow, consumer NF200initiates the NF register service operation by sending a hypertext transfer protocol (HTTP) PUT message to NRF100. The HTTP PUT message includes the NF profile of the consumer NF200. The NF profile can include the FQDN, the IP address, or both for the NF whose NF profile is being registered with NRF100. If the NF register operation is successful, NRF100responds as indicated in line2awith a 201 Created message. If the NF register service operation is not successful or if the message is redirected to another NRF, NRF100responds as indicated in line2bwith a 4XX or 5XX message with problem details or a 3XX message indicating redirection. Once the NRF has the NF profile of the consumer NF, the NRF can use the information in the NF profile to configure DNS with a mapping between the self-constructed FQDN and the IP address. However, current 3GPP standards do not define such a procedure for the NRF to maintain DNS records for 5GC NFs. As indicated above, the NF profile for a 5GC NF can include the self-constructed FQDN for a 5GC NF, the IP address, or both.FIG.3is a block diagram illustrating exemplary attributes that may be included in an NF profile and an NF service profile. 5GC NFs register NF profiles with the NRF if the scheme in the URI portion of the FQDN does not require transport layer security (TLS). 5GC NFs register service profiles with the NRF if the scheme in the URI of the FQDN is HTTPS, which requires TLS. In the illustrated example, NF profile300includes FQDN, IPv4 address, and IPv6 address attributes. Service profile302includes an FQDN attribute and an IP endpoint attribute that includes an IPv4 address or an IPv6 address. 
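As a non-limiting illustration of the NF register service operation described above, the following sketch shows a consumer NF sending an HTTP PUT that carries a minimal NF profile including the self-constructed FQDN and IP address attributes. The URI layout and attribute names loosely follow the general pattern of 3GPP TS 29.510, but the NRF host, the profile values, and the use of the Python requests library are illustrative assumptions rather than a normative encoding.

# Illustrative sketch of the NF register service operation: a consumer NF PUTs
# its NF profile (including FQDN and IP address attributes) to the NRF.
import uuid
import requests

nf_instance_id = str(uuid.uuid4())
nf_profile = {
    "nfInstanceId": nf_instance_id,
    "nfType": "AMF",
    "nfStatus": "REGISTERED",
    "fqdn": "amf1.5gc.mnc345.mcc012.3gppnetwork.org",  # self-constructed FQDN
    "ipv4Addresses": ["192.0.2.10"],                   # IP address for the FQDN
}

resp = requests.put(
    f"https://nrf.example.org/nnrf-nfm/v1/nf-instances/{nf_instance_id}",
    json=nf_profile,
    timeout=5,
)
# Per the flow of FIG. 2: 201 Created on success, 3xx/4xx/5xx otherwise.
print(resp.status_code)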
Any of these parameters or attributes from the service profile or the NF profile can be used to automatically configure DNS for a 5GC NF. Self-constructed FQDNs can be created by 5GC NFs according to the format specified in 3GPP TS 23.003. Section 28 of 3GPP TS 23.003 defines self-constructed FQDNs for the following:
- N3 inter-working function (N3IWF)
- PLMN level NRF and H-NRF
- NSSF
- AMF
- TAI (Tracking Area Identifier FQDN)
- AMF set
- AMF instance
- SMF set
- short message service function (SMSF)
An example of a self-constructed FQDN for an NRF is: https://nrf.5gc.mnc345.mcc012.3gppnetwork.org/
An example of a self-constructed FQDN for an NSSF is: https://nssf.5gc.mnc345.mcc012.3gppnetwork.org/
One point to highlight is that DNS needs to be configured and kept up-to-date with IP address mappings for self-constructed FQDNs so that consumer NFs that use self-constructed FQDNs can obtain a current IP address for a self-constructed FQDN. In the current 3GPP-defined architecture for 5G, there is no mechanism for automatic DNS configuration for self-constructed FQDNs of 5G NFs.FIG.4is a network diagram illustrating an exemplary network architecture where manual DNS configuration is required for self-constructed FQDNs of 5GC NFs. Referring toFIG.4, the network includes a visited PLMN and a home PLMN. The visited PLMN includes visited SEPP126A, visited NRF100A, visited NSSF116A, SMSF400, gNodeB402, SMF108, N3IWF404, and AMF110. The visited PLMN further includes visited PLMN DNS406A. The home PLMN includes home SEPP126B, home NRF100B, home NSSF116B, and home PLMN DNS406B. The NFs in the home and visited PLMNs self-construct FQDNs to identify and communicate with each other. The IP addresses associated with the self-constructed FQDNs can change frequently. Because there is no automatic DNS configuration procedure defined in the 3GPP standards, manual configuration of DNS406A and406B is required to maintain up-to-date mappings between self-constructed FQDNs of the 5GC NFs and IP addresses. Table 1 shown below illustrates some examples where self-constructed FQDNs can be used in the architecture ofFIG.4.
TABLE 1 - Self-Constructed FQDN Usage Examples
Producer | Consumer | Comments
H-NRF | V-NRF/H-SEPP | The V-NRF sends an SBI request to the H-NRF using a self-constructed FQDN of the H-NRF via SEPPs. The H-SEPP has to query DNS to resolve the H-NRF FQDN.
AMF, AMF Set, AMF instance | gNodeB | The gNodeB is required to query DNS to resolve the self-constructed FQDN for an AMF set, an AMF, and/or AMF instance.
NSSF | V-NSSF/H-SEPP, AMF | The V-NSSF sends an SBI request to the H-NSSF using a self-constructed FQDN of the H-NSSF via SEPPs. The H-SEPP is required to query DNS to resolve the H-NSSF FQDN. The AMF self-constructs the NSSF FQDN and resolves it using DNS in the absence of local configuration.
PLMN level NRF | All 5GC NFs | The NF self-constructs the PLMN level NRF FQDN and resolves the FQDN using DNS in the absence of local configuration.
In each of the scenarios in Table 1, the NF that receives a message with a self-constructed FQDN of the target NF is required to query DNS to obtain the IP address of the target. Accordingly, it is desirable to have an efficient mechanism to keep DNS records for self-constructed FQDNs up to date that avoids or at least reduces the need for manual DNS configuration. FIG.5is a message flow diagram illustrating exemplary messages exchanged in a network where manual DNS configuration is performed for self-constructed FQDNs of 5GC NFs. 
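As an illustration of the FQDN format in the two examples above, the following minimal sketch assembles a self-constructed FQDN from an NF label, an MCC, and an MNC. The zero-padding rule and the per-NF labels are assumptions made for illustration only; 3GPP TS 23.003 section 28 remains the normative definition.

# A minimal sketch of building self-constructed FQDNs of the form shown above
# (e.g. nrf.5gc.mnc345.mcc012.3gppnetwork.org).
def self_constructed_fqdn(nf_label: str, mcc: str, mnc: str) -> str:
    mnc = mnc.zfill(3)  # shorter MNC/MCC values are zero-padded to three digits
    mcc = mcc.zfill(3)
    return f"{nf_label}.5gc.mnc{mnc}.mcc{mcc}.3gppnetwork.org"

print(self_constructed_fqdn("nrf", "012", "345"))   # nrf.5gc.mnc345.mcc012.3gppnetwork.org
print(self_constructed_fqdn("nssf", "012", "345"))  # nssf.5gc.mnc345.mcc012.3gppnetwork.org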
Referring toFIG.5, when an NF registers its NF or service profile with NRF100A, it is necessary to manually configure DNS406A with the mapping between the self-constructed FQDN for the NF and the IP address. A similar operation occurs when an NF updates its profile with the NRF. The NRF DNS configuration must also be maintained with DNS406A. Referring to the message flow inFIG.5, in lines1and2, AMF110registers its NF profile with NRF100A, and NRF100A responds indicating successful registration of the NF profile of AMF110. In line3, NSSF116A sends an NF register message to NRF100A. In line4, NRF100A responds with a success message indicating successful registration of NSSF116A. After line4, or any time a registration is performed with NRF100A, DNS406A must be configured with the self-constructed FQDN of the NF whose NF or service profile is being registered and the corresponding IP address. In line5, DNS406A is manually configured with the IP address and self-constructed FQDN of NRF100A. In line6, DNS406A is manually configured with the self-constructed FQDN and IP address of AMF110. In line7, DNS406A is manually configured with the self-constructed FQDN and IP address of NSSF116A. When a consumer NF seeks to communicate with a target NF, the consumer NF self-constructs the FQDN of the target NF according to the format defined in 3GPP TS 23.003. Because the consumer NF200does not know the IP address corresponding to the FQDN, either the consumer NF or an SCP or SEPP must send the DNS query to DNS406A to resolve the FQDN into an IP address. In line9, consumer NF200receives a response to the DNS query containing the mapping between the FQDN and the IP address. After line9, consumer NF200can send a message to the target producer NF using the self-constructed FQDN and the IP address obtained from DNS406A. In line10of the message flow diagram, AMF110sends an NF update message to NRF100A to update the NF profile of AMF110with NRF100A. In line11, NRF100A responds with a success message indicating that the NF update service operation was successful. In line12, NSSF116A sends a message to NRF100A to update the NF profile of NSSF116A with NRF100A. In line13, NRF100A responds with a success message indicating that the NF update operation was successful. After line13, DNS406A must be manually configured with any changes in the IP address mappings for NRF100A, AMF110, and NSSF116A. In line14, DNS406A is manually configured with the updated IP address mapping information for NRF100A. In line15, DNS406A is manually configured with the updated IP address mapping information for AMF110. In line16, DNS406A is manually configured with the updated IP address mapping information of NSSF116A. In order to avoid or reduce the need for manual DNS configuration after each NF registration and/or NF update, the subject matter described herein adds functionality to the NRF to automatically configure DNS when a message concerning a 5G NF is received.FIG.6is a message flow diagram illustrating exemplary messages exchanged when an NRF performs automatic DNS configuration for self-constructed FQDNs of 5GC NFs. In line1, NRF100A automatically configures its FQDN to IP address mapping with DNS406A. NRF100A may automatically update its FQDN to IP address mapping with DNS406A at boot up of NRF100A or any time the IP address of NRF100A changes. 
When an NF registers or updates its NF or service profile with NRF100A, it is no longer necessary to manually configure DNS406A with the mapping between the self-constructed FQDN for the NF and the IP address. In line2, AMF110registers its NF profile with NRF100A, and, in line3, NRF100A responds indicating successful registration of the NF profile of AMF110. In line4, in response to registering the NF profile of AMF110, NRF100A automatically configures DNS406A with the mapping between the self-constructed FQDN of AMF110and the IP address corresponding to the self-constructed FQDN. If the IP address and the self-constructed FQDN are both in the NF profile, NRF100A may read the self-constructed FQDN and the IP address from the NF profile and use the self-constructed FQDN and the IP address in a message that NRF100A transmits to a DNS server that is part of DNS406A. The format of the message that NRF100A transmits to the DNS server depends on the application programming interface (API) used by the DNS server in the region where the mapping is being updated. If the IP address is not in the NF profile, NRF100A may obtain the IP address by querying another source, such as a load balancer, a cloud network service registry, a local DNS cache, or other source. In line5, NSSF116A sends an NF register message to NRF100A. In line6, NRF100A responds with a success message indicating successful registration of NSSF116A. In line7, NRF100A automatically configures DNS406A with the mapping between the self-constructed FQDN of NSSF116A and the IP address corresponding to the self-constructed FQDN. As in the case of AMF110, NRF100A may obtain the IP address from the NF or service profile of NSSF116A or from another source, such as a load balancer, a local DNS cache, or a cloud network service registry. When a consumer NF seeks to communicate with a target NF, the consumer NF self-constructs the FQDN of the target NF according to the format defined in 3GPP TS 23.003. Because the consumer NF200does not know the IP address corresponding to the FQDN, either the consumer NF or an SCP or SEPP must send the DNS query to DNS406A to resolve the FQDN into an IP address. In line9, consumer NF200receives a response to the DNS query containing the mapping between the FQDN and the IP address. After line9, consumer NF200can send a message to the target producer NF using the self-constructed FQDN and the IP address obtained from DNS406A. Because DNS records for producer NFs are maintained by NRF100A, manual DNS configuration is not required, and consumer NF200will receive an IP address for the self-constructed FQDN that is synchronized with the IP address mapping data available to NRF100A. In line10of the message flow diagram, NRF100A configures its IP address mapping information with DNS406A. As described above, NRF100A may automatically update the IP address corresponding to the self-constructed FQDN of NRF100A any time the IP address changes, e.g., due to a change in cloud network resource allocations. In line11of the message flow diagram, AMF110sends an NF update message to NRF100A to update the NF profile of AMF110with NRF100A. In line12, NRF100A responds with a success message indicating that the NF update service operation was successful. In line13, NRF100A automatically configures DNS406A with the mapping between the self-constructed FQDN of AMF110and the IP address corresponding to the self-constructed FQDN. 
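The following sketch illustrates the kind of mapping update the NRF pushes in lines 4, 7, 10, and 13 above, under the assumption that the DNS deployment accepts RFC 2136 dynamic updates and that the dnspython library is available. As noted above, the actual message format depends on the API exposed by the DNS server in the region being updated, so the zone, server address, and helper name here are illustrative.

# Sketch of pushing an FQDN-to-IP mapping to DNS, assuming a DNS server that
# accepts RFC 2136 dynamic updates (dnspython; Python 3.9+ for removesuffix).
import dns.update
import dns.query

def configure_dns_mapping(dns_server_ip: str, zone: str, fqdn: str, ip: str) -> None:
    update = dns.update.Update(zone)
    # Replace any existing A record for the NF's self-constructed FQDN.
    relative_name = fqdn.rstrip(".").removesuffix("." + zone)
    update.replace(relative_name, 300, "A", ip)
    dns.query.tcp(update, dns_server_ip, timeout=5)

# e.g. after the AMF profile is registered in line 2:
configure_dns_mapping(
    "198.51.100.53",
    "5gc.mnc345.mcc012.3gppnetwork.org",
    "amf1.5gc.mnc345.mcc012.3gppnetwork.org",
    "192.0.2.10",
)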
In line14, NSSF116A sends an NF update message to NRF100A to update the NF profile of NSSF116A with NRF100A. In line15, NRF100A responds with a success message indicating that the NF update operation was successful. In line16, NRF100A automatically configures DNS406A with the mapping between the self-constructed FQDN of NSSF116A and the IP address corresponding to the self-constructed FQDN. FIG.7is a block diagram illustrating an exemplary architecture for an NRF for performing automatic DNS configuration for self-constructed FQDNs of 5GC NFs. Referring toFIG.7, NRF100A includes at least one processor700and memory702. NRF100A further includes an NF/service profiles database704that may reside in memory702. NF/service profiles database704stores the NF and service profiles of NFs that are registered with NRF100A. NRF100A further includes an NF register/update handler706that receives and processes NF register and update messages to store and update NF profiles and NF service profiles in NF/service profiles database704. NRF100A further includes a DNS auto updater708that automatically configures DNS in response to detecting changes in mappings between self-constructed FQDNs of NFs and IP addresses and other types of DNS mappings. DNS auto updater708may update DNS records in response to receiving an NF register message or an NF update message from a consumer NF. NF register/update handler706and DNS auto updater708may be implemented using computer executable instructions stored in memory702and executable by processor700. DNS auto updater708may interface with DNS using an API provided by DNS in the particular network in which DNS auto updater708resides. NRF100A may be configured with the following attributes of the API to allow DNS auto updater708to interface with DNS:
TABLE 2 - DNS Configuration Attributes at NRF
Attribute Name | Description
DNS API endpoint | DNS configuration API endpoint, i.e., FQDN
DNS API prefix | DNS configuration API prefix
DNS security credentials | Security credentials to access DNS
In Table 2, the value of the DNS API endpoint attribute is the FQDN of the DNS server that the NRF contacts to update DNS records. The value of the DNS API prefix attribute is a prefix to the FQDN of the DNS server that the NRF contacts to update DNS records. The value(s) of the DNS security credentials attribute includes any security credentials that are required for the DNS server to allow the NRF to update DNS records for 5GC NFs. FIG.8is a flow chart illustrating an exemplary process for performing automatic DNS configuration for self-constructed FQDNs of 5GC NFs. Referring toFIG.8, in step800, the process includes receiving a message concerning a 5GC NF. For example, DNS auto updater708of NRF100A may receive an NF register request or an NF update request for registering or updating an NF or service profile of NRF100A. In step802, the process includes determining a first DNS resource record parameter for the 5GC NF. For example, DNS auto updater708of NRF100A may read the self-constructed FQDN from the NF or service profile if the FQDN is present in the NF or service profile. Alternatively, NRF100A may self-construct the FQDN of the 5GC NF using parameters available in the NF or service profile. In another example, NRF100A may read or construct a uniform resource name (URN) from the NF or service profile of the 5GC NF. In step804, the process includes determining a second DNS resource record parameter for the 5GC NF. 
For example, DNS auto updater708of NRF100A may read the IP address from the NF or service profile received in the NF register or NF update message if the IP address is present in the NF or service profile. Alternatively, DNS auto updater708of NRF100A may determine the IP address corresponding to the FQDN from an external source, such as a load balancer, a cloud network service registry, or a local DNS server or cache. In step806, the process includes automatically configuring DNS with a mapping between the first and second DNS resource record parameters. For example, DNS auto updater708of NRF100A may transmit a message to a DNS server to update a DNS record for the 5GC NF to include a mapping between a self-constructed FQDN of the 5GC NF and an IP address of the NF. In another example, DNS auto updater708may generate a naming authority pointer record (NAPTR) record for the 5GC NF and transmit the NAPTR record to a DNS server. The following is an example of an NAPTR record that may be generated by DNS auto updater708using parameters from an NF profile of a 5GC NF:
; AMF Set 1 of AMF Region 48
set001.region48.amfset
; IN NAPTR order pref. flag service            regexp replacement
  IN NAPTR 100   999   "a"  "x-3gpp-amf:x-n2"  ""     topoff.amf11.amf
  IN NAPTR 100   999   "a"  "x-3gpp-amf:x-n2"  ""     topoff.amf12.amf
In the example, the NAPTR record includes the AMF set FQDN, set001.region48.amfset, and the NF instance FQDNs, topoff.amf11.amf and topoff.amf12.amf, of the AMFs that are members of the AMF set. The lines that begin with a semicolon are comments. DNS auto updater708may generate the NAPTR record content using FQDNs and IP addresses extracted from the NF profile for the NF set. In one example, DNS auto updater708may keep or maintain a local DNS cache of mappings between FQDNs of 5GC NFs and IP addresses, and, prior to sending a message to DNS, check the cache to determine whether the DNS record requires updating. If the IP address received or determined from an NF register or NF update message is a new or updated IP address for the self-constructed FQDN of the 5GC NF, DNS auto updater708may transmit the message to the DNS server to update the mapping between the IP address and the self-constructed FQDN maintained by the DNS server. If the IP address received in or determined from an NF register or NF update message is not a new IP address for the self-constructed FQDN, DNS auto updater708may refrain from updating the DNS record with the DNS server. Exemplary advantages of the subject matter described herein include automation of DNS configuration for on-demand topology changes (e.g., network slice additions/deletions/updates that result in a change in IP address for a self-constructed FQDN or other mappings maintained by DNS). In general, the NRF as described herein obtains NF topology information from NF and service profiles of 5GC NFs and uses the NF topology information to automatically update DNS resource records for the 5GC NFs. The dynamic nature of the cloud native topology, which changes very frequently, will benefit from automatic updating of DNS records, as manual changes cannot keep up with the pace of topology changes. DNS details for self-constructed and other FQDNs do not need to be configured manually. Mappings between IP addresses and FQDNs of 5GC NFs can be synced by the NRF, which operates in both the 5GC and DNS systems. For example, local DNS configuration maintained by the NRF can be synced to an external DNS. Implementing automatic DNS configuration using the NRF reduces implementation complexities. 
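The local DNS cache check described above can be sketched as follows: only when the IP address for a self-constructed FQDN has actually changed does the updater contact the DNS server. The configure_dns_mapping helper is the illustrative function from the earlier sketch, and in practice the call would also be parameterized with the DNS API endpoint, prefix, and security credentials from Table 2.

# Sketch of the cache check: keep a local FQDN -> IP cache and only push an
# update to the DNS server when the mapping has actually changed.
local_dns_cache: dict[str, str] = {}

def maybe_update_dns(fqdn: str, ip: str, dns_server_ip: str, zone: str) -> bool:
    if local_dns_cache.get(fqdn) == ip:
        return False                    # not a new IP: refrain from updating DNS
    configure_dns_mapping(dns_server_ip, zone, fqdn, ip)
    local_dns_cache[fqdn] = ip          # keep the local cache in sync with DNS
    return True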
Only the NRF is required to implement DNS configuration. As new NF register and NF update messages are received, the DNS configuration maintained by the NRF is continuously audited for changes. When a change in IP address is detected, the NRF automatically populates the change to the DNS. The disclosure of each of the following references is incorporated herein by reference in its entirety.
REFERENCES
1. 3rd Generation Partnership Project; Technical Specification Group Core Network and Terminals; Numbering, addressing and identification; (Release 17) 3GPP TS 23.003 V17.3.0 (2021-09)
2. 3rd Generation Partnership Project; Technical Specification Group Core Network and Terminals; Technical Realization of Service Based Architecture (5GS); Stage 3 (Release 17) 3GPP TS 29.500 V17.4.0 (2021-09)
3. 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; System Architecture for the 5G System (5GS); Stage 2 (Release 17) 3GPP TS 23.501 V17.2.0 (2021-09)
4. 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Procedures for the 5G System (5GS); Stage 2 (Release 17) 3GPP TS 23.502 V17.2.1 (2021-09)
5. 3rd Generation Partnership Project; Technical Specification Group Core Network and Terminals; Principles and Guidelines for Services Definitions; Stage 3 (Release 17) 3GPP TS 29.501 V17.3.1 (2021-09)
6. 3rd Generation Partnership Project; Technical Specification Group Core Network and Terminals; 5G System; Network Function Repository Services; Stage 3 (Release 17) 3GPP TS 29.510 V17.3.0 (2021-09)
7. 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Security architecture and procedures for 5G System (5GS) (Release 17) 3GPP TS 33.501 V17.3.0 (2021-09)
8. 3rd Generation Partnership Project; Technical Specification Group Core Network and Terminals; Domain Name System Procedures (Release 17) 3GPP TS 29.303 V17.0 (2021-03)
It will be understood that various details of the subject matter described herein may be changed without departing from the scope of the subject matter described herein. Furthermore, the foregoing description is for the purpose of illustration only, and not for the purpose of limitation, as the subject matter described herein is defined by the claims as set forth hereinafter.
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS The following further describes in detail this application with reference to the accompanying drawings. Embodiments of this application provide a communication method and apparatus, to trigger insertion of a local session anchor. In this way, an application server near an access point of a terminal device is selected, so that a path between the terminal device and the application server is short. The method and the apparatus of this application are based on a same technical concept. The method and the apparatus have similar principles for resolving problems. Therefore, for implementation of the apparatus and the method, refer to each other. Repeated parts are not described in detail again. In descriptions of this application, terms such as “first” and “second” are used only for distinction and description, but cannot be understood as indicating or implying relative importance, or as indicating or implying a sequence. It should be understood that, in embodiments of this application, “at least one” means one or more, and “a plurality of” means two or more. The term “and/or” describes an association relationship between associated objects and represents that three relationships may exist. For example, A and/or B may represent the following cases: Only A exists, both A and B exist, and only B exists, where A and B may be singular or plural. The character “/” usually represents an “or” relationship between the associated objects. “At least one of the following items (pieces)” or a similar expression thereof indicates any combination of these items, including a single item (piece) or any combination of a plurality of items (pieces). For example, at least one item (piece) of a, b, or c may represent: a, b, c, a and b, a and c, b and c, or a, b, and c, where a, b, and c may be singular or plural. To describe technical solutions in embodiments of this application more clearly, the following describes the communication method and apparatus according to embodiments of this application in detail with reference to the accompanying drawings. FIG.1shows a 5G network architecture. The network architecture includes network slice selection function (NSSF), an authentication server function (AUSF), a unified data management network element (UDM), a network element that implements an access and mobility management function (AMF), a network element that implements a session management function (SMF), a network element that implements a policy control function (PCF), a network element that implements an application function (AF), a terminal device, a radio access network (RAN) node (or device), a user plane network element (UPF), and a data network (DN). The network elements or devices may be connected through interfaces. An interface name shown inFIG.1is merely an example for description. This is not specifically limited in this embodiment of this application. The following describes in detail a function of a part of the network elements or devices in the network architecture. The terminal device may also be referred to as user equipment (UE), a mobile station (MS), a mobile terminal (MT), or the like, and is a device that provides voice and/or data connectivity to a user. For example, the terminal device may include a handheld device or a vehicle-mounted device that has a wireless connection function. 
Currently, the terminal device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a mobile internet device (MID), a wearable device, a virtual reality (VR) device, an augmented reality (AR) device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, or the like. InFIG.1, the terminal device is shown by using UE as an example, and is not limited thereto. A radio access network may be an access network (AN) shown inFIG.1and provide a wireless access service for the terminal device. The RAN node (or device) is a device that connects the terminal device to a wireless network in the network architecture. Currently, some examples of the RAN node are a gNB, a transmission reception point (TRP), an evolved NodeB (eNB), a radio network controller (RNC), a NodeB (NB), a base station controller (BSC), a base transceiver station (BTS), a home base station (for example, a home evolved NodeB or a home NodeB, HNB), a baseband unit (BBU), and a wireless fidelity (Wi-Fi) access point (AP). The DN may be the Internet, an IP multi-media service (IMS) network, an area network (namely, a local network, for example, a mobile edge computing (MEC) network), or the like. The DN is an access destination of a PDU session of the terminal device. The data network includes an application server, and the application server provides a service for the terminal device by transmitting data to the terminal device. A core network is configured to connect the terminal device to a DN that can implement the service of the terminal device. The following describes functions of network elements in the core network. The AMF may access non-access stratum (NAS) signaling (including session management (SM) signaling) of the UE through an N1 interface and access signaling of a RAN through an N2 interface, to complete a registration procedure, SM signaling forwarding, and mobility management of the terminal device. The SMF may complete a procedure related to session establishment, release, update, or the like. The PCF may be responsible for policy management of the terminal device, including both a mobility-related policy and a PDU session-related policy, for example, a quality of service (QoS) policy and a charging policy. The UPF may be responsible for forwarding user data. The UDM stores subscription data of the terminal device, registration information related to the terminal device, and the like. The AUSF may be responsible for performing authentication and authorization on access of the UE. A main function of the AF is to interact with a 3rd generation partnership project (3GPP) core network to provide a service, to affect service flow routing, access network capability exposure, policy control, and the like. Each of the foregoing network elements in the core network may also be referred to as a functional entity, and may be a network element implemented on dedicated hardware, or may be a software instance running on dedicated hardware, or an instance of a virtual function on a proper platform. For example, the virtualization platform may be a cloud platform. It should be noted that the network architecture shown inFIG.1is not limited to including only the network elements shown in the figure, and may further include another device not shown in the figure. 
Details are not described herein in this application one by one. It should be noted that a distribution form of the network elements in the core network is not limited in this embodiment of this application. The distribution form shown inFIG.1is merely an example, and is not a limitation on this application. For ease of description, the network elements shown inFIG.1are used as examples for description subsequently in this application, and an XX network element is directly referred to as XX for short. For example, a UPF network element is referred to as a UPF for short. It should be understood that names of all network elements in this application are merely used as examples, and may also be referred to as other names in future communication, for example, 6G, or the network element in this application may be replaced by another entity or device that has a same function in future communication, for example, 6G. This is not limited in this application. A unified description is provided herein. Details are not described later. It should be noted that the 5G network architecture shown inFIG.1does not constitute a limitation on a 5G network. Optionally, the method in embodiments of this application is further applicable to various future communication systems, for example, 6G or other communication networks. There is only one UPF in the 5G network architecture shown inFIG.1. Based on the foregoing basic architecture, 5G further supports insertion of a plurality of session anchor UPFs on a user plane path of a PDU session, to support a connection to a local DN, so that UE can access a nearest application server in the local DN, for example, as shown in an architecture of a communication system inFIG.2. The plurality of UPFs introduced to the architecture of the communication system inFIG.2include a ULCL/BP, a UPF PSA1, and a UPF PSA2. The ULCL/BP distributes, to the PSA1or the PSA2according to a distribution rule, an uplink data packet received from the UE, and sends, to the UE, a data packet received from the PSA1or the PSA2. There is an N6 interface between the PSA1and a DN. For example, the DN may be a DN located in a central DC. There is an N6 interface between the PSA2and a local DN. For example, the local DN may be located in a local DC (namely, an MEC). When a UPF connected to the local DN exists at a location of the UE, an SMF may use the UPF as a local (local, L) PSA, and insert the UPF into a session path, so that the UE can access a nearest application in the local DN. For example, the PSA2inFIG.2is an L-PSA. It should be noted that a quantity of UPFs inFIG.2is merely an example, and more or fewer UPFs may alternatively be included. This is not limited in this application. It should be noted that only one L-PSA is shown inFIG.2. It should be understood that a plurality of L-PSAs may alternatively be included. This is not limited in this application. For example, in a network architecture shown inFIG.3, there are a plurality of L-PSAs, for example, an L-PSA1and an L-PSA2inFIG.3. In the network architecture, a ULCL/BP may be connected to the plurality of L-PSAs. In this example, the ULCL/BP and the L-PSA1are integrated and connected to an MEC1, the L-PSA2is connected to an MEC2, and a PSA is connected to a central DC. It should be understood that the ULCL/BP and the L-PSA1may be two independent devices. A same application server may be deployed in a central DC and a local DC. 
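As a rough illustration of the distribution rule applied by the ULCL/BP described above, the following sketch steers uplink packets whose destination address falls inside the local DN (MEC) address range to the local PSA (e.g., the PSA2 inFIG.2) and all other traffic to the central PSA (the PSA1). The address range, anchor labels, and destination-based classification are illustrative assumptions; an actual distribution rule may match on other packet fields.

# Sketch of a ULCL/BP distribution rule: steer by destination address.
from ipaddress import ip_address, ip_network

LOCAL_DN_RANGES = [ip_network("10.60.0.0/16")]   # served via the L-PSA / local DN (MEC)

def select_anchor(dst_ip: str) -> str:
    dst = ip_address(dst_ip)
    if any(dst in net for net in LOCAL_DN_RANGES):
        return "PSA2 (local anchor, N6 to local DN)"
    return "PSA1 (central anchor, N6 to central DC DN)"

print(select_anchor("10.60.1.7"))    # -> local anchor
print(select_anchor("203.0.113.9"))  # -> central anchor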
There is no good implementation method for how to select an application server for a terminal device, so that a path between the terminal device and the application server is the shortest. Based on this, this application provides a communication method, to trigger insertion of a local session anchor. In this way, an application server near an access point of the terminal device is selected, so that a path between the terminal device and the application server is short. An embodiment of this application provides a communication method. The communication method is applicable to the communication system shown inFIG.2. Refer toFIG.4. A specific procedure of the method may include the following steps. Step401: An SMF sends first information to a first user plane network element. The first information indicates the first network element to send a report message to the SMF when receiving a first domain name server (DNS) response message that meets a first condition. The report message includes information about an application server indicated by the first DNS response message or information about a data network corresponding to the application server. In addition, optionally, the report message may further include the first DNS response message. Alternatively, in another implementation, the report message is the first DNS response message. This may be understood as that the first information indicates the first user plane network element to forward the first DNS response message to the SMF when receiving the first DNS response message that meets the first condition. The first user plane network element may be considered as a remote anchor UPF, or certainly may be another UPF network element. This is not limited in this application. In a specific implementation, the information about the application server in the report message may be an IP address of the application server; and the information about the data network corresponding to the application server may be information about the data network (for example, an MEC) in which the application server is located, for example, a DNAI or network segment information of the data network. Step402: The first user plane network element sends the report message to the SMF when determining that the received first DNS response message meets the first condition. The first DNS response message is from a DNS server. Step403: The SMF inserts a local session anchor based on the report message. In an optional implementation, when the first information includes an address range of a data network where the report message needs to be sent, the first condition is that an (IP) address of the application server indicated by the first DNS response message belongs to the address range. Alternatively, when the first information further indicates information about an anycast address where the report message needs to be sent, the first condition is that an address of the application server indicated by the first DNS response message is included in the information about the anycast address. Herein, the address range of the data network may be one or more of the following: an IP address list, an IP address segment, an IP address prefix plus a prefix length, or a subnet IP address plus a subnet mask. In an optional implementation, the first information may alternatively include a DNAI corresponding to a data network where the report message needs to be sent. 
In this case, the first condition may be that the application server indicated by the first DNS response message is located in the data network corresponding to the DNAI. Specifically, the first user plane network element may obtain an address range of the data network corresponding to the DNAI, and then determine whether an IP address of the application server belongs to the address range. In an example, the first condition is that the IP address of the application server is within the address range of the data network. In this case, in step402, the first user plane network element sends the report message to the SMF when determining that the IP address of the application server indicated by the first DNS response message belongs to the data network. In another example, the first condition is that the address of the application server indicated by the first DNS response message is the anycast address. In this case, in step402, the first user plane network element sends the report message to the SMF when determining that the address of the application server indicated by the first DNS response message is the anycast address. It should be noted that the first information may indicate to send the report message when the address of the application server is any anycast address. In this case, the first information may not carry information about the anycast address that needs to be reported, or carry information indicating any anycast address. When the first information specifies a specific anycast address where the report message needs to be sent, the first information may include a range of anycast addresses where the report message needs to be sent. Similar to the address range of the data network, the range of anycast addresses may be one or more of the following: an anycast address list, an anycast address segment, an anycast address prefix plus a prefix length, or an anycast address prefix plus a subnet mask. In an optional implementation, before sending the first information to the first user plane network element, the SMF sends second information to the first user plane network element. The second information indicates access information of a terminal device. The access information of the terminal device indicates a location of an access point of the terminal device. For example, the access information of the terminal device is a data network access identifier (DNAI) corresponding to a data network that can be accessed by the terminal device, or an address of a data network that can be accessed by the terminal device (where the address may be one or more addresses in an address range supported by the data network). Alternatively, the access information may be address information of a UPF corresponding to a data network that can be accessed by the terminal device (for example, an interface address of the UPF, or an address that is configured in the UPF and that is used to provide communication for the terminal device, for example, an address used to perform NAT translation). The access information of the terminal device is used by the first user plane network element to select the application server for the terminal device. The information about the application server is included in the first DNS response message. For example, the address range may be one or more of the following: subnet information (for example, a subnet address), an address list, an address segment (the first and last addresses), an address prefix/prefix length, or an address plus a subnet mask. 
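A minimal sketch of the first-condition check performed by the first user plane network element in step 402 is shown below, covering the address range formats listed above (an address list, an address segment given by its first and last addresses, a prefix plus prefix length or a subnet plus mask) as well as the anycast variant of the first condition. All concrete addresses, the field names of the first information, and the use of Python's ipaddress module are illustrative assumptions.

# Sketch of the first-condition check: report to the SMF when the application
# server address in a DNS response falls inside the data network's address range
# or matches a configured anycast address.
from ipaddress import ip_address, ip_network, summarize_address_range

def ranges_from_first_information(info: dict):
    nets = []
    for prefix in info.get("prefixes", []):        # prefix/prefix length or subnet/mask
        nets.append(ip_network(prefix, strict=False))
    for first, last in info.get("segments", []):   # address segment (first and last address)
        nets.extend(summarize_address_range(ip_address(first), ip_address(last)))
    for addr in info.get("address_list", []):      # plain address list
        nets.append(ip_network(addr + "/32"))
    return nets

first_information = {
    "prefixes": ["10.60.0.0/16"],
    "segments": [("10.61.0.1", "10.61.0.100")],
    "address_list": ["203.0.113.10"],
    "anycast": ["198.51.100.1"],
}

def meets_first_condition(app_server_ip: str) -> bool:
    addr = ip_address(app_server_ip)
    in_range = any(addr in net for net in ranges_from_first_information(first_information))
    is_anycast = app_server_ip in first_information["anycast"]
    return in_range or is_anycast   # either variant of the first condition

print(meets_first_condition("10.61.0.42"))  # True -> send the report message to the SMF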
The data network may be an MEC. The SMF may send the access information of the terminal device to the first user plane network element by using the second information when a PDU session is established. For example, the second information is included in an N4 (namely, packet forwarding control protocol (PFCP)) session establishment message. Alternatively, the SMF may send new access information of the terminal device to the first user plane network element by using the second information when the access information of the terminal device changes and needs to be updated. For example, the second information is included in an N4 session modification message. Specifically, before sending the second information to the first user plane network element, the SMF obtains location information of the terminal device from an AMF, and determines the access information of the terminal device based on the location information of the terminal device. For example, the location information of the terminal device may be a tracking area identity (tracking area identity, TAI). The SMF may determine, based on the TAI, the DNAI corresponding to the data network (for example, an MEC) that can be accessed by the terminal device. For example, the SMF may subscribe to, from the AMF, an event that the terminal device moves out of or moves into a service area of the DNAI. When the terminal device moves out of or moves into the service area of the DNAI, the AMF sends a notification message to the SMF, and the SMF may determine, based on the notification message, the DNAI corresponding to the data network that can be accessed by the terminal device. It should be noted that a local data network in this application may also be referred to as an MEC. In an example, the SMF may configure a correspondence between location information of the terminal device and a DNAI. The correspondence may be a DNAI corresponding to a location area (for example, a TA list), or a location area (for example, a TA list) included in a service area of a DNAI. To obtain the address range of the data network, the SMF may configure a correspondence between a DNAI and an address range; or an AF provides the address range of the data network for a core network (and finally sends the address range to the SMF). The AF may be an MEC platform. The AF may send the correspondence between a DNAI and an address range. Alternatively, the AF sends only the address range, and the core network determines, based on the AF, a DNAI corresponding to the address range. In another optional embodiment, the SMF may determine, according to a policy and charging control (policy and charging control, PCC) rule, the access information of the UE that is to be sent to the first UPF. The SMF sends only a DNAI included in the PCC rule or subnet information corresponding to the DNAI. In an optional implementation, the second information further indicates priorities of the access information of the terminal device, so that the first user plane network element selects the application server for the terminal device based on the priorities of the access information of the terminal device. Optionally, the SMF may indicate preferred access information of the terminal device. In other words, the first user plane network element preferably selects the application server based on the access information. 
For example, when the terminal device can access a plurality of data networks at a current location, the SMF may indicate priorities of the plurality of data networks, and the first user plane network element selects, based on the priorities of the data networks, an application server from a data network in which the application server exists and has a highest priority. Herein, each data network corresponds to one piece of access information of the terminal device. In a specific implementation, the second information further indicates the first user plane network element to process, based on the access information of the terminal device, a DNS request message sent by the terminal device. Optionally, the processing may include: sending, based on the access information of the terminal device, the DNS request message to a DNS server that matches the access information of the terminal device; or adding the access information of the terminal device to the DNS request message. Correspondingly, after receiving the DNS request message from the terminal device, the first user plane network element adds the access information of the terminal device to the DNS request message, to obtain a new DNS request message, and forwards the new DNS request message; or the first user plane network element determines the DNS server corresponding to the access information of the terminal device, and sends the DNS request message to the DNS server corresponding to the access information of the terminal device. The DNS server corresponding to the access information of the terminal device may be configured in the first user plane network element, or may be obtained by the first user plane network element in another manner, for example, sent by the SMF to the first user plane network element. In an optional implementation, when the access information of the terminal device is a plurality of pieces of access information, that the first user plane network element adds the access information of the terminal device to the DNS request message, to obtain a new DNS request message may specifically include the following two methods. Method a1: The first user plane network element adds each piece of access information to the DNS request message. In this way, a new DNS request message is obtained based on each piece of access information. When there are a plurality of pieces of access information, the first user plane network element obtains a plurality of new DNS request messages. In this case, after the first user plane network element forwards the plurality of new DNS request messages, the DNS server that receives the plurality of new DNS request messages may return one DNS response message for each DNS request message. In other words, in this case, there are a plurality of first DNS response messages. Correspondingly, the first user plane network element may obtain information about a plurality of application servers. Method a2: The first user plane network element adds the plurality of pieces of access information to the DNS request message, to generate a new DNS request message including the plurality of pieces of access information. 
In this case, after the first user plane network element forwards the DNS request message including the plurality of pieces of access information, the DNS server that receives the DNS request message including the plurality of pieces of access information may determine one application server for each piece of access information, and return one first DNS response message including information about a plurality of application servers. In this case, the first user plane network element may obtain the information about the plurality of application servers. In an optional implementation, the access information of the terminal device is a plurality of pieces of access information. In this case, that the first user plane network element determines the DNS server corresponding to the access information of the terminal device, and sends the DNS request message to the DNS server corresponding to the access information of the terminal device may be specifically: The first user plane network element determines a DNS server corresponding to each of the plurality of pieces of access information, and sends the DNS request message to a DNS server corresponding to each piece of access information. In this case, a DNS server corresponding to each of the plurality of pieces of access information returns one DNS response message. That is, in this case, there are a plurality of first DNS response messages. Correspondingly, the first user plane network element may obtain information about a plurality of application servers. In one case, if the access information of the terminal device corresponds to no DNS server, the first user plane network element may forward the DNS request message to a DNS server requested by the DNS request message (namely, a DNS server indicated by a destination address of the DNS request message). In another case, if the DNS server corresponding to the access information of the terminal device returns a response message indicating that no corresponding application server is found, the DNS request message may be forwarded to a DNS server requested by the DNS request message. Optionally, in the foregoing method, when the access information of the terminal device is the plurality of pieces of access information, the first user plane network element may obtain the plurality of first DNS response messages, so that when the first user plane network element obtains the information about the plurality of application servers, a possible case may be: The plurality of first DNS response messages corresponding to the plurality of pieces of access information received by the first user plane network element include the information about the plurality of application servers. In this case, each first DNS response message may include information about one application server; or a part of the plurality of first DNS response messages may include the information about the application servers, and the remaining part of the first DNS response messages do not include the information about the application servers. Each of the part of the first DNS response messages may include information about at least one application server. Optionally, when obtaining the information about the plurality of application servers, the first user plane network element determines a target application server based on priorities of access information of access networks corresponding to the plurality of application servers. 
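The fan-out in method a1, the combined request in method a2, and the dispatch of an unmodified request to a DNS server that matches each piece of access information can be sketched as follows. This is a simplified model under assumed names: DnsRequest and the helper functions are hypothetical, and a real user plane network element would operate on DNS wire format (for example, a DNS extension option carrying the access information) rather than on Python objects.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class DnsRequest:
    """Very simplified stand-in for a DNS query (no wire format)."""
    qname: str
    dns_server: str                                        # where the request is sent
    access_info: List[str] = field(default_factory=list)  # e.g. DNAIs or subnets


def fan_out_per_access_info(req: DnsRequest, access_infos: List[str]) -> List[DnsRequest]:
    """Method a1: one new DNS request per piece of access information."""
    return [DnsRequest(req.qname, req.dns_server, [info]) for info in access_infos]


def single_request_with_all(req: DnsRequest, access_infos: List[str]) -> DnsRequest:
    """Method a2: a single new DNS request carrying all access information."""
    return DnsRequest(req.qname, req.dns_server, list(access_infos))


def dispatch_to_matching_servers(req: DnsRequest,
                                 access_infos: List[str],
                                 dns_server_per_info: Dict[str, str]) -> List[DnsRequest]:
    """Alternative processing: send the unmodified request to the DNS server that
    matches each piece of access information; fall back to the server originally
    requested by the terminal device when no match is configured."""
    out = []
    for info in access_infos:
        server = dns_server_per_info.get(info)
        if server is not None:
            out.append(DnsRequest(req.qname, server))
    if not out:
        out.append(req)  # no corresponding DNS server: forward as requested
    return out


if __name__ == "__main__":
    original = DnsRequest("app.example.com", "198.51.100.53")
    infos = ["dnai-edge-west", "dnai-edge-east"]
    print(fan_out_per_access_info(original, infos))
    print(single_request_with_all(original, infos))
    print(dispatch_to_matching_servers(original, infos,
                                       {"dnai-edge-west": "10.10.0.53"}))
```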
In this case, the report message further includes information about the target application server or access network information corresponding to the target application server; or the report message is a first DNS response message including information about the target application server. Alternatively, optionally, when the first user plane network element obtains the information about the plurality of application servers, the report message includes the information about the plurality of application servers or access network information corresponding to each of the plurality of application servers; or the report message is (one or more) first DNS response messages including the information about the plurality of application servers. In this case, the SMF network element determines the target application server. It should be noted that, if the first user plane network element forwards the plurality of new DNS request messages, or the first user plane network element sends the DNS request message to each of the DNS servers corresponding to the plurality of pieces of access information, that is, sends a plurality of request messages, the first user plane network element needs to send the report message to the SMF after DNS response messages of all sent DNS request messages are received or response timeout occurs. In an example, the first information further indicates the first user plane network element to buffer the first DNS response message. Correspondingly, the first user plane network element buffers the first DNS response message when sending the report message to the SMF. The first DNS response message buffered by the first user plane network element is (one or more) first DNS response messages corresponding to the application servers included in the report message. It should be noted that if the first user plane network element receives the plurality of first DNS response messages, and determines the target application server based on the plurality of first DNS response messages, the first user plane network element may buffer only the first DNS response message including the information about the target application server. Certainly, the first user plane network element may alternatively buffer all first DNS response messages. This is not limited in this application. In an optional implementation, the SMF sends third information to the first user plane network element after inserting the local session anchor, where the third information indicates the first user plane network element to send a second DNS response message to the terminal device. Then, the first user plane network element sends the second DNS response message to the terminal device. The second DNS response message indicates the target application server selected for the terminal device, and the second DNS response message is the first DNS response message, or the second DNS response message is determined based on the first DNS response message. When the second DNS response message is the first DNS response message, the first DNS response message is a DNS response message buffered by the first user plane network element. 
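The following sketch illustrates the behavior just described: the first user plane network element waits until all responses have arrived (or a response timer expires), optionally selects a target application server according to the priorities of the access information, and builds a report message for the SMF. The DnsAnswer type, the priority encoding (a smaller number meaning a higher priority), and the report fields are hypothetical simplifications.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class DnsAnswer:
    """Simplified first DNS response: the access information it was obtained
    for and the application server address it carries (None = not found)."""
    access_info: str
    server_address: Optional[str]


def select_target(answers: List[DnsAnswer],
                  priority: Dict[str, int]) -> Optional[DnsAnswer]:
    """Pick the answer whose access information has the highest priority
    (lower number = higher priority) among those that returned a server."""
    usable = [a for a in answers if a.server_address is not None]
    if not usable:
        return None
    return min(usable, key=lambda a: priority.get(a.access_info, 1 << 30))


def build_report(answers: List[DnsAnswer],
                 priority: Optional[Dict[str, int]] = None) -> dict:
    """Build the report message for the SMF once all answers have arrived (or
    the response timer has expired).  If priorities are available, report only
    the chosen target; otherwise report every discovered server and let the
    SMF choose."""
    if priority is not None:
        target = select_target(answers, priority)
        return {"target_server": target.server_address if target else None,
                "access_info": target.access_info if target else None}
    return {"servers": [(a.access_info, a.server_address)
                        for a in answers if a.server_address is not None]}


if __name__ == "__main__":
    answers = [DnsAnswer("dnai-edge-west", "10.10.3.7"),
               DnsAnswer("dnai-edge-east", None)]
    print(build_report(answers, priority={"dnai-edge-west": 1, "dnai-edge-east": 2}))
    print(build_report(answers))
```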
When the second DNS response message is the first DNS response message buffered by the first user plane network element, the first user plane network element receives only one first DNS response message, and the first DNS response message includes only an address of the target application server; or the first user plane network element buffers only one first DNS response message, and the first DNS response message includes only an address of the target application server. In the foregoing method a2, the first DNS response message may include the information about the plurality of application servers (for example, addresses of the application servers). When receiving the third information, the first user plane network element deletes an address of an application server other than the target application server from the first DNS response message, to generate the second DNS response message. In an optional implementation, when the report message may include the information about the plurality of application servers, the second information may not include the priorities, and the first user plane network element sends, to the SMF, the report message including the received information about the plurality of application servers. When the report message includes the information about the plurality of application servers, the SMF determines a target application server, where the third information includes an address of the target application server. In this case, the first user plane network element determines a second DNS response message based on the address of the target application server in the third information. Specifically, when the second DNS response message is determined based on the first DNS response message, to be specific, the first user plane network element determines the second DNS response message based on the address of the target application server in the third information, the following several cases may be specifically included. Case b1: When the first DNS response message includes addresses of the plurality of application servers, the first user plane network element deletes, based on the address of the target application server included in the third information, information about an application server other than the address of the target application server in the first DNS response message, to generate the second DNS response message. That is, only the address of the target application server in the first DNS response message is reserved, to obtain the second DNS response message. The second DNS response message includes the address of the target application server. Case b2: When there are a plurality of first DNS response messages, the first user plane network element selects, from the plurality of first DNS response messages based on the address of the target application server included in the third information, one first DNS response message including the address of the target application server, and uses the first DNS response message as the second DNS response message. The second DNS response message includes the address of the target application server. In another example, the report message is the first DNS response message, or the report message includes the first DNS response message. In this way, the first user plane network element does not need to buffer the first DNS response message, and may forward the first DNS response message to the SMF as the report message. 
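Cases b1 and b2 above amount to filtering or selecting among buffered responses, which the following sketch makes concrete. The BufferedDnsResponse type and both helper functions are hypothetical; an actual implementation would rewrite DNS answer records rather than Python lists.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class BufferedDnsResponse:
    """Simplified buffered first DNS response: the application server
    addresses it carries for the queried name."""
    qname: str
    server_addresses: List[str]


def second_response_case_b1(resp: BufferedDnsResponse,
                            target_address: str) -> BufferedDnsResponse:
    """Case b1: keep only the target application server's address and drop
    the other addresses from the buffered response."""
    kept = [addr for addr in resp.server_addresses if addr == target_address]
    return BufferedDnsResponse(resp.qname, kept)


def second_response_case_b2(responses: List[BufferedDnsResponse],
                            target_address: str) -> Optional[BufferedDnsResponse]:
    """Case b2: from several buffered responses, return the one that already
    contains the target application server's address."""
    for resp in responses:
        if target_address in resp.server_addresses:
            return resp
    return None


if __name__ == "__main__":
    one = BufferedDnsResponse("app.example.com", ["10.10.3.7", "10.20.9.9"])
    print(second_response_case_b1(one, "10.10.3.7"))
    many = [BufferedDnsResponse("app.example.com", ["10.20.9.9"]),
            BufferedDnsResponse("app.example.com", ["10.10.3.7"])]
    print(second_response_case_b2(many, "10.10.3.7"))
```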
In an optional implementation, the SMF sends a third DNS response message to the terminal device after inserting the local session anchor, where the third DNS response message indicates a target application server selected for the terminal device, and the third DNS response message is the first DNS response message, or the third DNS response message is determined based on the first DNS response message. Optionally, that the SMF sends a third DNS response message to the terminal device may include: The SMF sends the third DNS response message to the first user plane network element, and the first user plane network element sends the third DNS response message to the terminal device. When the third DNS response message is the first DNS response message, the first user plane network element receives only one first DNS response message, where the first DNS response message includes only an address of the target application server, and the report message includes only the first DNS response message. When the third DNS response message is determined based on the first DNS response message, the following cases may be included. Case c1: When the first DNS response message includes addresses of a plurality of application servers, that is, the report message includes the addresses of the plurality of application servers, the SMF determines the address of the target application server, and deletes information about an application server other than the address of the target application server in the first DNS response message, to generate the third DNS response message. That is, only the address of the target application server in the first DNS response message is reserved, to obtain the third DNS response message. The third DNS response message includes the address of the target application server. Case c2: When the first DNS response message includes addresses of a plurality of application servers, and the report message includes the address of the target application server (that is, the first user plane network element determines the address of the target application server), the SMF deletes, based on the address of the target application server, information about an application server other than the address of the target application server in the first DNS response message, to generate the third DNS response message. That is, only the address of the target application server in the first DNS response message is reserved, to obtain the third DNS response message. The third DNS response message includes the address of the target application server. Case c3: When the report message is a plurality of first DNS response messages, or the report message includes a plurality of first DNS response messages, the SMF determines the address of the target application server, selects, from the plurality of first DNS response messages, one first DNS response message including the address of the target application server, and uses the first DNS response message as the third DNS response message. The third DNS response message includes the address of the target application server. In an optional implementation, when the SMF performs step403, a specific method may be: The SMF determines, based on the information about the application server, a DNAI of the data network in which the application server is located, to determine the to-be-inserted local session anchor. Alternatively, the SMF determines the to-be-inserted local session anchor based on a DNAI corresponding to the information about the data network. 
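The determination of the to-be-inserted local session anchor described above can be sketched as follows, assuming a hypothetical SMF-side configuration that maps each DNAI to an address range and to a candidate local PSA. The table contents and function names are illustrative only.

```python
import ipaddress
from typing import Dict, Optional, Set

# Hypothetical SMF-side configuration: address range per DNAI, and a candidate
# local PSA (UPF instance) serving each DNAI.
DNAI_PREFIX: Dict[str, ipaddress.IPv4Network] = {
    "dnai-edge-west": ipaddress.ip_network("10.10.0.0/16"),
    "dnai-edge-east": ipaddress.ip_network("10.20.0.0/16"),
}
DNAI_LOCAL_PSA: Dict[str, str] = {
    "dnai-edge-west": "upf-lpsa-west-1",
    "dnai-edge-east": "upf-lpsa-east-1",
}


def dnai_for_server(server_address: str) -> Optional[str]:
    """Derive the DNAI of the data network in which the reported application
    server is located, from the server's IP address."""
    addr = ipaddress.ip_address(server_address)
    for dnai, prefix in DNAI_PREFIX.items():
        if addr in prefix:
            return dnai
    return None


def local_psa_to_insert(server_address: str,
                        inserted_dnais: Set[str]) -> Optional[str]:
    """Return the local PSA to insert for the reported server, or None when a
    local session anchor connected to that data network is already in place
    (or the server is not in any known local data network)."""
    dnai = dnai_for_server(server_address)
    if dnai is None or dnai in inserted_dnais:
        return None
    return DNAI_LOCAL_PSA.get(dnai)


if __name__ == "__main__":
    # The report carried the server address 10.10.3.7; no local anchor yet.
    print(local_psa_to_insert("10.10.3.7", inserted_dnais=set()))
```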
For example, when determining, based on the information about the application server, that there is no local session anchor connected to the data network corresponding to the application server (that is, no local session anchor connected to the data network is inserted), the SMF determines that the local session anchor connected to the data network needs to be inserted, and selects the local session anchor based on the DNAI of the data network. It should be noted that the DNAI corresponding to the information about the data network may be understood as that the information about the data network is the DNAI, or the information about the data network indicates the DNAI. For example, the information about the data network is the address range of the data network, and the SMF determines the DNAI of the data network based on the address range of the data network. In another optional implementation, the SMF obtains routing information of the anycast address. The routing information of the anycast address includes information about at least one network element that implements a user plane function and that corresponds to the anycast address, or a DNAI of a data network corresponding to the anycast address. The SMF determines the local session anchor based on access information of the terminal device and the routing information of the anycast address. Optionally, the SMF may further determine, based on the DNAI of the data network, the session anchor that needs to be inserted. For example, when determining, based on the access information of the terminal device and the routing information of the anycast address, that the data network that can be currently accessed by the terminal device supports the anycast address and no local session anchor connected to the data network is inserted, the SMF determines that the local session anchor needs to be inserted. The SMF may determine the local session anchor based on the DNAI corresponding to the data network. For example, the SMF may configure the routing information of the anycast address. In an optional implementation, when determining that no ULCL UPF is inserted into a PDU session of the terminal device, the SMF determines that the ULCL UPF needs to be inserted, and inserts the ULCL UPF. According to the communication method provided in this embodiment of this application, in a process of discovering the application server, insertion of the local session anchor can be triggered by using the first condition. Therefore, the application server near the access point of the terminal device is selected, so that a path between the terminal device and the application server is short. Based on the foregoing embodiments, the following describes the communication method provided in embodiments of this application in detail by using specific examples, for example, embodiments shown inFIG.5andFIG.6. In the following example, an example in which the terminal device is UE, the first user plane network element is a first UPF, the local session anchor is an L-PSA, and the data network is an MEC is used for description. FIG.5shows an example of a communication method. In this example, a scenario is as follows: UE has created a PDU session, and an SMF has not inserted a ULCL into the PDU session; or an SMF has inserted a ULCL, but has not inserted an L-PSA corresponding to a nearest application server that provides an application and that can be accessed by UE. Specifically, a specific procedure of this example may include the following steps. 
Step501: The SMF sends second information to a first UPF, where the second information indicates access information of the UE. Specifically, the access information of the UE may be a DNAI corresponding to an MEC that can be currently accessed by the UE, or an address range (or an address in the address range) of an MEC that can be currently accessed by the UE, for example, one or more of the following: subnet information (for example, a subnet address), an address list, an address segment (the first and last addresses), an address prefix/prefix length, or an address plus a subnet mask of the MEC. Alternatively, the access information may be address information of a UPF corresponding to a data network that can be accessed by the UE (for example, an interface address of the UPF, or an address that is configured in the UPF and that is used to provide communication for the UE, for example, an address used to perform NAT translation). Alternatively, the access information may be address information of a DNS server corresponding to a data network that can be accessed by the UE. Optionally, the SMF may send the access information of the UE to the first UPF by using the second information when the PDU session is established. For example, the second information may be included in an N4 session establishment request message. Alternatively, the SMF may send new access information to the first UPF by using the second information when the access information of the UE changes and needs to be updated. For example, the second information may be included in an N4 session modification request message. In an optional implementation, the SMF obtains location information (for example, a TAI) of the UE from an AMF, and determines the access information of the UE based on the location information of the UE, for example, the DNAI corresponding to the MEC that can be accessed by the UE. For example, the SMF may configure a correspondence between location information and a DNAI. The correspondence may be a DNAI corresponding to a location area (for example, a TA list), or a location area (for example, a TA list) served by a DNAI. To obtain the address range (for example, network segment information) of the MEC, the SMF may configure a correspondence between a DNAI and an address range; or an AF provides the address range of the MEC for a core network (and finally sends the address range to the SMF). The AF may be an MEC platform. The AF may send the correspondence between a DNAI and an address range. Alternatively, the AF sends only the address range, and the core network determines, based on the AF, a DNAI corresponding to the address range. In another optional embodiment, the SMF may determine, according to a PCC rule, the access information of the UE that is to be sent to the first UPF. The SMF sends only a DNAI included in the PCC rule or subnet information of an MEC corresponding to the DNAI. In an optional implementation, the second information further indicates priorities of the access information of the terminal device, so that the first UPF selects an application server for the UE based on the priorities of the access information of the UE. For example, the first UPF preferably selects the application server based on the access information. 
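A possible shape of the second information, as a set of prioritized access information entries for the UE, is sketched below. The field names are illustrative and are not real PFCP information elements; they only show the kind of content (DNAI, subnet, optional DNS server, priority) that the text above associates with the second information.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class AccessInfoEntry:
    """One piece of access information for the UE.  Typically one of the
    identifying fields would be present; all names here are illustrative
    rather than real PFCP information elements."""
    dnai: Optional[str] = None           # e.g. "dnai-edge-west"
    subnet: Optional[str] = None         # e.g. "10.10.0.0/16"
    dns_server: Optional[str] = None     # DNS server serving this MEC
    priority: int = 0                    # lower value = preferred


def build_second_information(entries: List[AccessInfoEntry]) -> dict:
    """Assemble the payload the SMF would convey to the first UPF in the
    second information (carried in an N4 session establishment or
    modification request in the text above)."""
    return {
        "access_information": sorted(entries, key=lambda e: e.priority),
        # Ask the UPF to handle UE DNS queries based on this access info.
        "dns_handling": "insert_access_info_or_redirect",
    }


if __name__ == "__main__":
    second_info = build_second_information([
        AccessInfoEntry(dnai="dnai-edge-east", subnet="10.20.0.0/16", priority=2),
        AccessInfoEntry(dnai="dnai-edge-west", subnet="10.10.0.0/16",
                        dns_server="10.10.0.53", priority=1),
    ])
    print(second_info)
```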
For example, when the UE can currently access a plurality of MECs, the SMF may indicate priorities of the plurality of MECs, and the first UPF selects, based on the priorities of the MECs, an MEC having a highest priority in MECs that support the application server, and selects an application server from the MEC. Herein, each MEC corresponds to one piece of access information of the UE. In an optional implementation, the second information further indicates that the first UPF can discover a DNS request message sent by the UE, and process the DNS request message based on the access information of the UE. For example, the processing may be the following two types. One processing is sending, based on the access information of the UE, the DNS request message to a DNS server that matches the access information of the UE. For example, the access information of the UE is the DNAI. The first UPF may obtain a mapping relationship between a DNAI and a DNS server address. When receiving the DNS request message, the first UPF determines the DNS server based on the access information of the UE, and sends the DNS request message to the determined DNS server. Another processing is inserting the access information of the UE into the DNS request message. For example, the access information of the UE may be the subnet information of the MEC that can be accessed by the UE. Step502: The SMF sends first information to the first UPF. The first information indicates the first UPF to send a report message to the SMF when receiving a first DNS response message that meets a first condition. Optionally, the first information may also be included in the N4 session establishment request message or the N4 session modification request message. It should be noted that the N4 session establishment/modification request message including the first information may include the second information, or may not include the second information. In this example, the first condition is that an IP address of an application server indicated by the first DNS response message belongs to the address range of the data network included in the first information. In this case, the first information includes an address range of an MEC where the report message needs to be sent. In this way, the SMF can indicate the first UPF to send the report message to the SMF only when the application server indicated by the first DNS response message is located in the MEC that has not been inserted. This reduces a quantity of report messages sent by the first UPF. Specifically, the SMF may send, to the first UPF, the first information including the information about the MEC where the first UPF needs to send the report message. For example, the information may be the DNAI corresponding to the MEC or the address range corresponding to the MEC. Herein, the information about the MEC where the report message needs to be sent is a subset of the access information of the UE (in the second information). In an optional implementation, the first information further indicates the first UPF to temporarily buffer the first DNS response message. Step503: The UPF receives the DNS request message sent by the UE. Step504: The first UPF processes the DNS request message, and sends a DNS request message 2. The first UPF may process the DNS request message according to an indication of the second information. 
Specifically, the first UPF checks a data packet sent by the UE, and performs the following processing on the DNS request message based on the access information of the UE if the data packet is the DNS request message. In a first implementation, the first UPF inserts the access information of the UE into the DNS request message. For example, the access information of the UE may be a subnet address or the DNAI corresponding to the MEC that can be currently accessed by the UE. Then, the first UPF uses, as a new DNS request message (denoted as the DNS request message 2 herein), the DNS request message into which the access information of the UE is inserted, and sends the new DNS request message to the DNS server, where the DNS server is a DNS server specified by the DNS request message of the UE. If there are a plurality of pieces of access information of the UE, the first UPF may generate, by inserting access information, one DNS request message for each piece of access information (which may be understood as generating a plurality of DNS request messages 2) based on the plurality of pieces of access information of the UE, and the UPF sends the plurality of new DNS request messages to the DNS server. Alternatively, the first UPF may insert the plurality of pieces of access information into the DNS request message, and send the DNS request message to the DNS server. In a second implementation, the UPF determines, based on the access information of the UE, the DNS server corresponding to the access information, and sends the DNS request message to the DNS server corresponding to the access information. In this method, the first UPF sends the DNS request message to a DNS server that is not requested by the UE; or an address of a DNS server requested by the UE is an anycast address, and the first UPF determines, based on the access information of the UE, that the DNS server corresponding to the anycast address is the DNS server corresponding to the access information. Certainly, when the determined DNS server corresponding to the access information is not the DNS server requested by the UE, the first UPF may also send the DNS request message to the DNS server requested by the UE. Alternatively, when the address of the DNS server requested by the UE is the anycast address, the first UPF may send the DNS request message to DNS servers corresponding to all the access information, and may further send the DNS request message to a DNS server that corresponds to the anycast address and that is located in a central data center DC. Optionally, when there are a plurality of pieces of access information of the UE, and each of the plurality of pieces of access information corresponds to a DNS server, the first UPF may send the DNS request message to each of the plurality of DNS servers. In this method, the UPF does not need to modify the DNS request message. The DNS server corresponding to the access information of the UE may be configured in the first UPF, or may be obtained by the first UPF by using another method, for example, sent by the SMF to the first UPF. In an optional implementation, the first UPF may preferably obtain an IP address of the DNS server in the MEC that can be accessed by the UE. If the application server requested by the UE does not exist in the MEC, the UPF attempts to select another application server. 
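The preference for the DNS server in the MEC, with fallback to another server, together with priority-ordered querying, can be sketched as follows; the fallback is elaborated in the next paragraph. The Resolver callback, the function resolve_with_fallback, and the example data are hypothetical and only illustrate the order of attempts.

```python
from typing import Callable, Dict, Optional

# A resolver callback stands in for actually sending a DNS query to a server
# and returning the application server address it answers with (None = the
# requested name has no application server behind that DNS server).
Resolver = Callable[[str, str], Optional[str]]   # (dns_server, qname) -> address


def resolve_with_fallback(qname: str,
                          requested_server: str,
                          dns_server_by_info: Dict[str, str],
                          priority: Dict[str, int],
                          resolve: Resolver) -> Optional[str]:
    """Try the DNS servers of the MECs the UE can access, highest priority
    first; if none of them knows an application server for the name, fall
    back to the DNS server the UE originally requested (e.g. the central DC)."""
    ordered = sorted(dns_server_by_info, key=lambda info: priority.get(info, 1 << 30))
    for info in ordered:
        address = resolve(dns_server_by_info[info], qname)
        if address is not None:
            return address
    return resolve(requested_server, qname)


if __name__ == "__main__":
    # Toy resolver: only the east-MEC DNS server knows the application server.
    answers = {("10.20.0.53", "app.example.com"): "10.20.9.9"}
    resolver: Resolver = lambda server, name: answers.get((server, name))
    print(resolve_with_fallback(
        "app.example.com",
        requested_server="198.51.100.53",
        dns_server_by_info={"dnai-edge-west": "10.10.0.53",
                            "dnai-edge-east": "10.20.0.53"},
        priority={"dnai-edge-west": 1, "dnai-edge-east": 2},
        resolve=resolver))
```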
For example, in the second implementation, if the first UPF does not obtain the IP address of the application server from the DNS server corresponding to the access information of the UE, the first UPF then sends the DNS request message to the DNS server requested by the UE (or the DNS server in the central DC). Optionally, the first UPF may determine, based on the priorities of the access information in the second information, an order of sending the DNS request message. For example, the first UPF first sends the DNS request message to a DNS server corresponding to access information having a highest priority.

Step505: The first UPF receives the first DNS response message.

Specifically, the first UPF receives the first DNS response message from the DNS server. If the first UPF sends a plurality of DNS request messages, a returned first DNS response message may indicate that no corresponding application server is found. If the first UPF inserts the access information of the UE into the DNS request message, optionally, the DNS server may indicate, in the first DNS response message, access information corresponding to the selected application server, that is, the application server is one that the DNS server selects based on the access information.

In an example, the UE has the plurality of pieces of access information. In this case, in the first implementation in step504, when the first UPF sends the plurality of new DNS request messages to the DNS server, the DNS server that receives the plurality of new DNS request messages may return one DNS response message for each DNS request message. In other words, in this case, there are a plurality of first DNS response messages. Correspondingly, the first UPF may obtain information about a plurality of application servers.

In another example, the UE has the plurality of pieces of access information. In this case, in the second implementation of step504, when the first UPF sends the DNS request message to each of the plurality of DNS servers, the DNS servers corresponding to the plurality of pieces of access information each return one DNS response message. In other words, in this case, there are a plurality of first DNS response messages. Correspondingly, the first UPF may obtain information about a plurality of application servers.

If the first UPF inserts the plurality of pieces of access information into one DNS request message, the first UPF receives one first DNS response message, and the first DNS response message may include information about one or more application servers. Optionally, the first DNS response message further indicates access information corresponding to the one or more application servers.

Step506: The first UPF sends the report message to the SMF when determining that the first DNS response message meets the first condition. The report message includes information about an application server indicated by the first DNS response message or information about a data network corresponding to the application server.

In a specific implementation, the information about the application server in the report message may be an IP address of the application server; and the information about the data network corresponding to the application server may be information about an MEC in which the application server is located, for example, a DNAI or network segment information of the MEC.
In an implementation, if the IP address of the selected application server in the first DNS response message is an IP address in one MEC corresponding to current access information of the UE, the first UPF sends the report message to the SMF. Specifically, the access information of the UE may be an address range of an MEC to which the UE can connect at a current access location. The first UPF may determine, based on the address range, whether the IP address of the application server in the first DNS response message belongs to the MEC. Alternatively, when the access information of the UE is the DNAI, the first UPF may obtain an address range of an MEC corresponding to the DNAI, and determine, based on the address range corresponding to the DNAI, whether the IP address of the application server corresponds to the DNAI. The first UPF may locally configure a correspondence between a DNAI and an address range corresponding to the DNAI, or the SMF may send the correspondence to the first UPF. In another implementation, the first UPF determines, based on the access information corresponding to the selected application server in the first DNS response message, whether a report needs to be sent to the SMF. To be specific, if the access information corresponding to the application server corresponds to an MEC specified by the SMF, the first UPF sends the report to the SMF. In this method, the first DNS response message includes the access information (namely, the information about the MEC) corresponding to the selected application server. Specifically, when the first UPF sends the report message to the SMF, the first UPF may temporarily buffer the first DNS response message. In an optional implementation, if there are a plurality of MECs that can be accessed (in other words, there are a plurality of pieces of access information of the UE), the first UPF may select the application server based on information about the plurality of MECs. If the first UPF obtains information about application servers in the plurality of MECs, for example, obtains addresses of application servers in an MEC1and an MEC2, optionally, the first UPF determines a target application server based on priorities (for example, obtained priorities of the MECs) of access information of access networks corresponding to the plurality of application servers. In this case, the report message includes information about the target application server or access network information corresponding to the target application server. Optionally, if the first UPF obtains IP addresses of application servers in a plurality of different MECs, the first UPF may also send information about the plurality of application servers (addresses of the servers or information about MECs corresponding to the servers) to the SMF. In other words, the report message includes a plurality of pieces of information about the application servers, or includes a plurality of pieces of information about the MECs corresponding to the application servers, and the SMF selects the target application server. In an optional implementation, the report message is the first DNS response message. To be specific, when the first condition is met, the first UPF sends the first DNS response message to the SMF, where the first DNS response message includes an address of the selected application server. 
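The first condition used in this example, namely that the IP address of the selected application server falls within an address range associated with the UE's current access information, can be checked with the standard ipaddress module, as in the following sketch. The range formats shown (a prefix and a first/last address pair) and the function names are illustrative assumptions.

```python
import ipaddress
from typing import Iterable, List, Tuple, Union

RangeSpec = Union[str, Tuple[str, str]]   # "10.10.0.0/16" or ("10.20.1.1", "10.20.1.50")


def expand_ranges(specs: Iterable[RangeSpec]) -> List[ipaddress.IPv4Network]:
    """Normalize the address ranges carried in the first information.  A range
    may be given as a prefix (prefix/length) or as a first/last address pair."""
    networks: List[ipaddress.IPv4Network] = []
    for spec in specs:
        if isinstance(spec, tuple):
            first, last = (ipaddress.ip_address(a) for a in spec)
            networks.extend(ipaddress.summarize_address_range(first, last))
        else:
            networks.append(ipaddress.ip_network(spec))
    return networks


def report_needed(server_address: str, specs: Iterable[RangeSpec]) -> bool:
    """First condition of the FIG.5 example: the IP address of the application
    server in the first DNS response belongs to the address range carried in
    the first information."""
    addr = ipaddress.ip_address(server_address)
    return any(addr in net for net in expand_ranges(specs))


if __name__ == "__main__":
    first_info_ranges = ["10.10.0.0/16", ("10.20.1.1", "10.20.1.50")]
    print(report_needed("10.10.3.7", first_info_ranges))    # True  -> send report
    print(report_needed("203.0.113.9", first_info_ranges))  # False -> no report
```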
In this manner, the information about the application server in the report message is the information about the application server in the first DNS response message, and the first UPF may not additionally send, in the report message, the information about the application server or the information about the data network corresponding to the application server. Certainly, optionally, the first UPF may alternatively include, in the report message, both the first DNS response message (where the first DNS response message includes the information about the application server) and the additional information about the application server or information about the data network corresponding to the application server. This is not limited in this application. Step507: The SMF inserts a local session anchor L-PSA based on the report message. Specifically, the SMF determines the to-be-inserted L-PSA based on the information about the application server or the information about the MEC in the report message. Specifically, the SMF may determine, based on the information about the application server, a DNAI of the data network in which the application server is located, to determine the to-be-inserted local session anchor. Alternatively, the SMF determines the to-be-inserted local session anchor based on a DNAI corresponding to the information about the data network. For example, when determining, based on the information about the application server, that the MEC corresponding to the application server has no L-PSA connected, the SMF inserts the L-PSA connected to the MEC. In an optional implementation, if no ULCL UPF is inserted into the PDU session of the UE, the SMF determines that the ULCL UPF needs to be inserted, and inserts the ULCL UPF. Certainly, if the ULCL UPF has been inserted, the SMF does not need to insert the ULCL UPF. In an optional implementation, the ULCL UPF inserted by the SMF and the L-PSA connected to the MEC may be co-located. In an optional implementation, when the first information indicates the first UPF to buffer the first DNS response message, the following steps508and509are performed after step507. Step508: The SMF sends third information to the first UPF, where the third information indicates the first UPF to send a second DNS response message to the UE. Optionally, if the report message sent by the first UPF to the SMF in step507includes the information about the plurality of application servers, the SMF may indicate the first UPF to select an application server, that is, the SMF determines the target application server. For example, the SMF may indicate an IP address of the selected target application server, or may indicate a DNAI corresponding to the application server. In this case, the third information includes the address of the target application server. Step509: The first UPF sends the second DNS response message to the UE. In an example, when the first UPF receives only one first DNS response message, where the first DNS response message includes only the address of the target application server, the second DNS response message is the first DNS response message buffered by the first UPF. 
In another example, if the first DNS response message includes addresses of a plurality of application servers, and the report message sent by the first UPF to the SMF includes information about the plurality of application servers, the first UPF deletes, based on the address of the target application server in the third information sent by the SMF, information about an application server other than the address of the target application server in the first DNS response message, to generate the second DNS response message. That is, only the address of the target application server in the first DNS response message is reserved, to obtain the second DNS response message. The second DNS response message includes the address of the target application server.

In another example, when the first DNS response message includes addresses of a plurality of application servers, the first UPF determines the target application server based on priorities of access information corresponding to the application servers, and the report message includes only the information about the target application server. In this case, the first UPF deletes information about an application server other than the address of the target application server in the first DNS response message, to generate the second DNS response message. That is, only the address of the target application server in the first DNS response message is reserved, to obtain the second DNS response message. The second DNS response message includes the address of the target application server.

In another example, when there are a plurality of first DNS response messages, the first user plane network element selects, from the plurality of first DNS response messages based on the address of the target application server included in the third information, one first DNS response message including the address of the target application server, and uses the first DNS response message as the second DNS response message. The second DNS response message includes the address of the target application server. In this example, the target application server may be determined by the first UPF, or may be determined by the SMF.

In another optional implementation, when the report message is the first DNS response message, or the report message includes the first DNS response message, the following step510 is performed after step507.

Step510: The SMF sends a third DNS response message to the UE.

In an example, when the first user plane network element receives only one first DNS response message, where the first DNS response message includes only the address of the target application server, the report message is the first DNS response message. In this case, the third DNS response message is the first DNS response message.

In another example, when the first DNS response message includes addresses of a plurality of application servers, that is, the report message includes the addresses of the plurality of application servers, the SMF determines the address of the target application server, and deletes information about an application server other than the address of the target application server in the first DNS response message, to generate the third DNS response message. That is, only the address of the target application server in the first DNS response message is reserved, to obtain the third DNS response message. The third DNS response message includes the address of the target application server.
In another example, when the first DNS response message includes addresses of a plurality of application servers, and the report message includes the address of the target application server (that is, the first user plane network element determines the address of the target application server), the SMF deletes, based on the address of the target application server, information about an application server other than the address of the target application server in the first DNS response message, to generate the third DNS response message. That is, only the address of the target application server in the first DNS response message is reserved, to obtain the third DNS response message. The third DNS response message includes the address of the target application server. In another example, when the report message is a plurality of first DNS response messages, the SMF determines the address of the target application server, selects, from the plurality of first DNS response messages, one first DNS response message including the address of the target application server, and uses the first DNS response message as the third DNS response message. The third DNS response message includes the address of the target application server. In the foregoing example, in a process of discovering the application server, when the application server indicated by the first DNS response message received by the first UPF is located in the data network specified by the SMF, insertion of the local session anchor is triggered. Therefore, an application server near an access point of the terminal device is selected, so that a path between the terminal device and the application server is short. FIG.6shows an example of another communication method. In this example, a scenario is as follows: UE has created a PDU session, and an SMF has not inserted a ULCL into the PDU session; or an SMF has inserted a ULCL, but has not inserted an L-PSA corresponding to a nearest application server that provides an application and that can be accessed by UE. In this example, an operator configures an anycast address for the application server. To be specific, a same anycast address may be configured for application servers that are located in different MECs and that provide a same service. After the UE obtains the anycast address of the application server, a data packet for sending a message includes the anycast address, and a network may select the nearest application server based on the anycast address. Specifically, a specific procedure of this example may include the following steps. Step601: The SMF sends first information to a first UPF. The first information indicates the first user plane network element to send a report message to the SMF when receiving a first DNS response message that meets a first condition. In this example, the first information further indicates an address range of an anycast address where the report message needs to be sent. Correspondingly, the first condition is that an address of an application server indicated by the first DNS response message is included in information about the anycast address. In an optional implementation, the first information indicates the first UPF to discover the first DNS response message and determine whether an IP address of the application server carried in the first DNS response message is included in an address range of an anycast IP address (anycast IP address) indicated by the first information. 
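A minimal sketch of the check and buffering behavior indicated by the first information in this example is shown below, assuming the anycast range is given as one or more prefixes (an anycast address list can be modeled as /32 entries). The AnycastWatcher class, its method names, and the report fields are hypothetical.

```python
import ipaddress
from typing import Dict, List, Optional


class AnycastWatcher:
    """Minimal sketch of the UPF behaviour in the FIG.6 example: when the
    application server address in a DNS response falls inside the anycast
    range indicated by the first information, buffer the response and produce
    a report message for the SMF."""

    def __init__(self, anycast_ranges: List[str]):
        # e.g. ["198.18.0.0/24"]; single anycast addresses can be given as /32.
        self.ranges = [ipaddress.ip_network(r) for r in anycast_ranges]
        self.buffered: Dict[str, dict] = {}   # qname -> buffered response

    def on_dns_response(self, qname: str, server_address: str) -> Optional[dict]:
        addr = ipaddress.ip_address(server_address)
        if not any(addr in net for net in self.ranges):
            return None                       # not an anycast address of interest
        # Buffer the (simplified) response and report the anycast IP to the SMF.
        self.buffered[qname] = {"qname": qname, "server_address": server_address}
        return {"report": "anycast_server_discovered",
                "server_address": server_address}


if __name__ == "__main__":
    watcher = AnycastWatcher(["198.18.0.0/24"])
    print(watcher.on_dns_response("app.example.com", "198.18.0.10"))     # report sent
    print(watcher.on_dns_response("other.example.com", "203.0.113.5"))   # ignored
```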
If the IP address of the application server carried in the first DNS response message is included in the address range of the anycast IP address indicated by the first information, the first information indicates the first UPF to send the report message to the SMF. Specifically, the first information may include that the SMF provides, for the first UPF, a range of anycast addresses that need to be discovered. The range of anycast addresses may be one or more of the following: an anycast address list, an anycast address segment, an anycast address prefix plus a prefix length, or an anycast address prefix plus a subnet mask. It should be noted that the first information may further indicate to send the report message when the address of the application server is any anycast address. In this case, the first information may not carry information about the anycast address that needs to be reported (that is, only indicates to send the report message when the address of the server is the anycast address), or carry information indicating any anycast address (that is, the first information includes the information indicating any anycast address). In an optional implementation, the first information further indicates the first UPF to buffer the first DNS response message when sending the report message to the SMF. In other words, the first information further indicates the first UPF to buffer the first DNS response message when finding that the IP address of the application server is the anycast IP address (or a specified anycast IP address). Step602: The UE sends a DNS request message to the first UPF, where the DNS request message includes information about a DNS server. Step603: The first UPF sends the DNS request message to the DNS server indicated by the DNS request message. Step604: The first UPF receives the first DNS response message sent by the DNS server. Step605: The first UPF sends the report message to the SMF when determining that the first DNS response message meets the first condition. The report message includes information about the application server indicated by the first DNS response message. Specifically, when determining that the address of the application server indicated by the first DNS response message is the anycast address indicated by the first information, the first UPF sends the report message to the SMF. For example, if the IP address of the application server in the first DNS response message is the anycast IP address, the SMF provides, in step601, a list (or a range) of anycast IP addresses that need to be discovered, and the IP address of the application server is in the list, the first UPF sends the report message to the SMF, where the information about the application server included in the report message is the anycast IP address of the application server. In an optional implementation, the first UPF buffers the first DNS response message when sending the report message to the SMF. In another optional implementation, the report message is the first DNS response message. To be specific, when the first condition is met, the first UPF directly sends the first DNS response message to the SMF. In this implementation, because the first DNS response message includes the information about the application server, the report message may not additionally carry the information about the application server. Step606: The SMF inserts an L-PSA based on the report message. 
Specifically, the SMF may determine, based on the anycast address of the application server included in the report message, whether to insert the L-PSA. In an optional implementation, the SMF obtains routing information of the anycast address. The routing information of the anycast address includes information about at least one UPF corresponding to the anycast address, or a DNAI corresponding to the anycast address. The SMF determines the local session anchor based on access information of the UE and the routing information of the anycast address. Optionally, if the routing information is the DNAI corresponding to the anycast address, the SMF may further determine, based on the DNAI, the session anchor that needs to be inserted. For example, when determining, based on the access information of the terminal device and the routing information of the anycast address, that an MEC that can be currently accessed by the terminal device supports the anycast address and no local session anchor connected to the MEC is inserted, the SMF determines that the L-PSA needs to be inserted. In this way, a data packet that is subsequently sent by the UE and whose destination address is the anycast address can be routed to the nearest MEC. If in the current access information of the UE, there are a plurality of corresponding MECs that can support the anycast address, the SMF may select one of the MECs, to insert an L-PSA connected to the MEC. To support this function, the SMF may obtain the MECs that support the anycast address. The information (namely, the MECs that support the anycast address) may be configured in the SMF. In an optional implementation, if no ULCL UPF is inserted into a PDU session path of the UE, the SMF determines that the ULCL UPF needs to be inserted, and inserts the ULCL UPF. In an optional implementation, when the first information indicates the first UPF to buffer the first DNS response message, the following steps607and608are performed after step606. Step607: The SMF sends third information to the first UPF, where the third information indicates the first UPF to send a second DNS response message to the UE. Specifically, the second DNS response message is the first DNS response message buffered by the first UPF. Step608: The first UPF sends the second DNS response message to the UE. In another optional implementation, when the report message is the first DNS response message, or the report message includes the first DNS response message, the following step609is performed after step606. Step609: The SMF sends a third DNS response message to the UE. The third DNS response message is the report message received by the SMF, namely, the first DNS response message. Subsequently, when the UE sends a data packet to the application server, a destination address of the data packet is the anycast IP address of the application server. The data packet may be sent by the ULCL/L-PSA to an application server in the MEC. In this example, if no L-PSA connected to an MEC supporting a nearby application server or no ULCL is inserted into the PDU session of the UE, the data packet is routed to a remote PSA even if an application server exists locally, resulting in route recurvation. In this example, insertion of the L-PSA may be triggered when the UE resolves the IP address of the application server through the DNS server, so that the data packet of the UE can be sent to the nearby application server. It should be noted that there may be another possible manner. 
The SMF indicates the first UPF to discover a data packet whose destination address is the anycast IP address. If the first UPF finds the data packet whose destination address is the anycast IP address, the first UPF sends the report message to the SMF, where the report message includes the anycast IP address. The SMF determines, based on the anycast IP address, whether the ULCL needs to be inserted. In other words, in this method, whether to insert the ULCL is not determined based on a DNS response message, but is determined based on the destination address of the data packet. A disadvantage of this method is that route recurvation occurs in the first data packet whose destination address is the anycast IP address, and the data packet may not be routed (because the first UPF does not have a route of the anycast IP address). Based on the foregoing embodiments, an embodiment of this application further provides a communication apparatus. Refer toFIG.7. The communication apparatus700may include a transceiver unit701and a processing unit702. The transceiver unit701is configured to receive information (a message or data) or send information (a message or data) by the communication apparatus700, and the processing unit702is configured to control and manage an action of the communication apparatus700. The processing unit702may further control a step performed by the transceiver unit701. For example, the communication apparatus700may be the SMF in the foregoing embodiment, and may be specifically a processor, a chip, a chip system, or a functional module in the SMF. Alternatively, the communication apparatus700may be the first user plane network element (for example, the first UPF) in the foregoing embodiment, and may be specifically a processor, a chip, a chip system, or a functional module in the first user plane network element. In an embodiment, when the communication apparatus700is configured to implement functions of the SMF in the embodiments shown inFIG.4toFIG.6, the communication apparatus700may specifically include: the transceiver unit701, configured to send first information to a first user plane network element, where the first information indicates the first user plane network element to send a report message to the SMF when receiving a first domain name server DNS response message that meets a first condition, and the report message includes information about an application server indicated by the first DNS response message or information about a data network corresponding to the application server; and the processing unit702, configured to insert a local session anchor based on the report message, where the transceiver unit701is further configured to receive the report message sent by the first user plane network element. In an optional implementation, the first condition is that an internet protocol IP address of the application server indicated by the first DNS response message belongs to an address range of the data network included in the first information. Alternatively, the first information further indicates information about an anycast address where the report message needs to be sent, and the first condition is that an address of the application server indicated by the first DNS response message is included in the information about the anycast address. In an implementation, the report message is the first DNS response message. In another implementation, the first information further indicates the first user plane network element to buffer the first DNS response message. 
In an optional implementation, the transceiver unit701is further configured to send third information to the first user plane network element after the processing unit702inserts the local session anchor, where the third information indicates the first user plane network element to send a second DNS response message to a terminal device. The second DNS response message indicates a target application server selected for the terminal device, and the second DNS response message is the first DNS response message, or the second DNS response message is determined based on the first DNS response message. In another optional implementation, the transceiver unit701is further configured to send a third DNS response message to a terminal device after the processing unit702inserts the local session anchor, where the third DNS response message indicates a target application server selected for the terminal device, and the third DNS response message is the first DNS response message, or the third DNS response message is determined based on the first DNS response message. In a specific implementation, the transceiver unit701is further configured to send second information to the first user plane network element. The second information indicates access information of the terminal device. The access information of the terminal device indicates a location of an access point that can be accessed by the terminal device. For example, the access information of the terminal device is a data network access identifier DNAI corresponding to a data network that can be accessed by the terminal device, or an address of a data network that can be accessed by the terminal device (where the address may be one or more addresses in an address range supported by the data network). Alternatively, the access information may be address information of a UPF corresponding to a data network that can be accessed by the terminal device (for example, an interface address of the UPF, or an address that is configured in the UPF and that is used to provide communication for the terminal device, for example, an address used to perform NAT translation). The access information of the terminal device is used by the first user plane network element to select the application server for the terminal device. The information about the application server is included in the first DNS response message. In an optional implementation, the transceiver unit701is further configured to obtain location information of the terminal device from an AMF before sending the second information to the first user plane network element. The processing unit702is further configured to determine the access information of the terminal device based on the location information of the terminal device. Specifically, the second information further indicates priorities of the access information of the terminal device, so that the first user plane network element selects the application server for the terminal device based on the priorities of the access information of the terminal device. Optionally, when the report message includes information about a plurality of application servers, the processing unit702is further configured to determine a target application server, where the third information includes an address of the target application server. 
In an example, when inserting the local session anchor based on the report message, the processing unit702is specifically configured to: determine, based on the information about the application server, a DNAI of the data network in which the application server is located, to determine the to-be-inserted local session anchor; or determine the to-be-inserted local session anchor based on a DNAI corresponding to the information about the data network. In another example, the processing unit702is further configured to obtain routing information of the anycast address, where the routing information of the anycast address includes information about at least one network element that implements a user plane function and that corresponds to the anycast address, or a DNAI of a data network corresponding to the anycast address; and determine the local session anchor based on access information of the terminal device and the routing information of the anycast address. In an embodiment, when the communication apparatus700is configured to implement functions of the first user plane network element (for example, the first UPF) in the embodiments shown inFIG.4toFIG.6, the communication apparatus700may specifically include: the transceiver unit701, configured to: receive first information from an SMF, where the first information indicates the first user plane network element to send a report message to the SMF when receiving a first domain name server DNS response message that meets a first condition, and the report message includes information about an application server indicated by the first DNS response message or information about a data network corresponding to the application server; and receive the first DNS response message; and the processing unit702, configured to determine that the first DNS response message meets the first condition, where the transceiver unit701is further configured to send the report message to the SMF when the processing unit702is configured to determine that the first DNS response message meets the first condition. In a specific implementation, the first information further includes an address range of a data network where the report message needs to be sent, and the first condition is that an IP address of the application server indicated by the first DNS response message belongs to the address range. Alternatively, the first information further indicates information about an anycast address where the report message needs to be sent, and the first condition is that an address of the application server indicated by the first DNS response message is included in the information about the anycast address. In an implementation, the report message is the first DNS response message. In another implementation, the first information further indicates the first user plane network element to buffer the first DNS response message. In an optional implementation, the transceiver unit701is further configured to: receive third information from the SMF, where the third information indicates the first user plane network element to send a second DNS response message to a terminal device; and send the second DNS response message to the terminal device, where the second DNS response message indicates a target application server selected for the terminal device, and the second DNS response message is the first DNS response message, or the first DNS response message is determined based on the second DNS response message. 
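The determination of the to-be-inserted local session anchor described above, either from the DNAI of the data network in which the application server is located or from the routing information of the anycast address combined with the access information of the terminal device, might look roughly like the following sketch. The lookup tables and their contents are assumptions made only for illustration.

```python
import ipaddress

# Assumed operator configuration: data-network prefix -> DNAI, and DNAI -> local UPF
# that could serve as the local session anchor (L-PSA). Values are illustrative only.
dn_prefix_to_dnai = {"10.1.0.0/16": "dnai-edge-1", "10.2.0.0/16": "dnai-edge-2"}
dnai_to_local_psa = {"dnai-edge-1": "upf-edge-1", "dnai-edge-2": "upf-edge-2"}

# Assumed routing information of an anycast address: the DNAIs behind that address.
anycast_routing_info = {"192.0.2.10": ["dnai-edge-1", "dnai-edge-2"]}

def dnai_for_app_server(app_server_ip):
    """Map the reported application-server address to the DNAI of its data network."""
    ip = ipaddress.ip_address(app_server_ip)
    for prefix, dnai in dn_prefix_to_dnai.items():
        if ip in ipaddress.ip_network(prefix):
            return dnai
    return None

def select_local_psa(app_server_ip, terminal_access_info):
    """Determine the to-be-inserted local session anchor from the report message."""
    if app_server_ip in anycast_routing_info:
        # Anycast case: combine the anycast routing information with the access
        # information of the terminal device.
        usable = [d for d in anycast_routing_info[app_server_ip] if d in terminal_access_info]
        dnai = usable[0] if usable else None
    else:
        dnai = dnai_for_app_server(app_server_ip)
    return dnai_to_local_psa.get(dnai)

print(select_local_psa("10.1.2.3", ["dnai-edge-1"]))    # upf-edge-1
print(select_local_psa("192.0.2.10", ["dnai-edge-2"]))  # upf-edge-2
```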
In a specific implementation, the transceiver unit701is further configured to receive second information from the SMF. The second information indicates access information of the terminal device. The access information of the terminal device indicates a location of an access point that can be accessed by the terminal device. For example, the access information of the terminal device is a data network access identifier DNAI corresponding to a data network that can be accessed by the terminal device, or an address of a data network that can be accessed by the terminal device (where the address may be one or more addresses in an address range supported by the data network). Alternatively, the access information may be address information of a UPF corresponding to a data network that can be accessed by the terminal device (for example, an interface address of the UPF, or an address that is configured in the UPF and that is used to provide communication for the terminal device, for example, an address used to perform NAT translation). The access information of the terminal device is used by the first user plane network element to select the application server for the terminal device. The information about the application server is included in the first DNS response message. For example, the second information further indicates priorities of the access information of the terminal device, so that the first user plane network element selects the application server for the terminal device based on the priorities of the access information of the terminal device. In an optional implementation, the processing unit702is further configured to add the access information of the terminal device to a DNS request message received from the terminal device, to obtain a new DNS request message. Alternatively, the processing unit702is further configured to determine a DNS server corresponding to the access information of the terminal device. The transceiver unit701is further configured to send the DNS request message to the DNS server corresponding to the access information of the terminal device. In an example, the access information of the terminal device is a plurality of pieces of access information. In this case, when adding the access information of the terminal device to the DNS request message received from the terminal device, to obtain the new DNS request message, the processing unit702is specifically configured to: add each piece of access information to the DNS request message, to obtain a plurality of new DNS request messages; or add the plurality of pieces of access information to the DNS request message. In another example, when determining the DNS server corresponding to the access information of the terminal device, the processing unit702is specifically configured to determine a DNS server corresponding to each of the plurality of pieces of access information. When sending the DNS request message to the DNS server corresponding to the access information of the terminal device, the transceiver unit701is specifically configured to send the DNS request message to a DNS server corresponding to each piece of access information. In an optional implementation, the processing unit702is further configured to: when obtaining information about a plurality of application servers, determine a target application server based on priorities of access information of access networks corresponding to the plurality of application servers. 
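The handling of the DNS request message described above, that is, adding the access information of the terminal device to the request to obtain one or more new DNS request messages, or sending the request to a DNS server selected for each piece of access information, can be sketched as follows. The dictionary-based message representation, the way the access information is attached, and the server mapping are assumptions for the example only and do not correspond to a specific DNS extension.

```python
# A DNS request is modeled here as a plain dictionary; in a real user plane element it
# would be a parsed DNS packet. Attaching the access information as an extra field is
# purely illustrative.
def add_access_info(dns_request, access_info_list, combine=False):
    """Produce new DNS request message(s) carrying the terminal device's access information.

    combine=False: one new request per piece of access information.
    combine=True : a single new request carrying all pieces of access information.
    """
    if combine:
        return [dict(dns_request, access_info=list(access_info_list))]
    return [dict(dns_request, access_info=[ai]) for ai in access_info_list]

# Alternative handling: select a DNS server per piece of access information (mapping
# assumed for illustration) and send the request to each selected server.
dns_server_for_access_info = {"dnai-edge-1": "10.10.0.53", "dnai-edge-2": "10.20.0.53"}

def fan_out_targets(access_info_list):
    return [dns_server_for_access_info[ai]
            for ai in access_info_list if ai in dns_server_for_access_info]

request = {"qname": "app.example.com", "qtype": "A"}
print(add_access_info(request, ["dnai-edge-1", "dnai-edge-2"]))
print(fan_out_targets(["dnai-edge-1", "dnai-edge-2"]))   # ['10.10.0.53', '10.20.0.53']
```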
The report message further includes information about the target application server or access network information corresponding to the target application server. Optionally, the first message includes an address of the target application server. The second DNS response message includes the address of the target application server. In an optional implementation, the processing unit702is further configured to buffer the first DNS response message when the transceiver unit701sends the report message to the SMF. It should be noted that, in embodiments of this application, division into the units is an example, and is merely logical function division. During actual implementation, another division manner may be used. Functional units in embodiments of this application may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in a form of hardware, or may be implemented in a form of a software functional unit. When the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, the integrated unit may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to a conventional technology, or all or some of the technical solutions may be implemented in a form of a software product. The computer software product is stored in a storage medium and includes several instructions for indicating a computer device (which may be a personal computer, a server, or a network device) or a processor to perform all or some of the steps of the method described in embodiments of this application. The foregoing storage medium includes: any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc. Based on the foregoing embodiments, an embodiment of this application further provides a communication apparatus. Refer toFIG.8. The communication apparatus800may include a transceiver801and a processor802. Optionally, the communication apparatus800may further include a memory803. The memory803may be disposed inside the communication apparatus800, or may be disposed outside the communication apparatus800. The processor802may control the transceiver801to receive and send information or data. Specifically, the processor802may be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP. The processor802may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field programmable gate array (FPGA), a generic array logic (GAL), or any combination thereof. The transceiver801, the processor802, and the memory803are connected to each other. Optionally, the transceiver801, the processor802, and the memory803are connected to each other by using a bus804. The bus804may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus may be classified into an address bus, a data bus, a control bus, and the like. 
For ease of representation, only one thick line is for representing the bus inFIG.8, but this does not mean that there is only one bus or only one type of bus. In an optional implementation, the memory803is configured to store a program and the like. Specifically, the program may include program code, and the program code includes computer operation instructions. The memory803may include a RAM, and may further include a non-volatile memory, for example, one or more magnetic disk memories. The processor802executes the application program stored in the memory803, to implement the foregoing function, so that a function of the communication apparatus800is implemented. For example, the communication apparatus800may be the SMF in the foregoing embodiment, or may be the first user plane network element (for example, the first UPF) in the foregoing embodiment. In an embodiment, when the communication apparatus800is configured to implement functions of the SMF in the embodiments shown inFIG.4toFIG.6, the communication apparatus800may specifically include: the transceiver801, configured to send first information to a first user plane network element, where the first information indicates the first user plane network element to send a report message to the SMF when receiving a first domain name server DNS response message that meets a first condition, and the report message includes information about an application server indicated by the first DNS response message or information about a data network corresponding to the application server; and the processor802, configured to insert a local session anchor based on the report message, where the transceiver801is further configured to receive the report message sent by the first user plane network element. In an optional implementation, the first condition is that an internet protocol IP address of the application server indicated by the first DNS response message belongs to an address range of the data network included in the first information. Alternatively, the first information further indicates information about an anycast address where the report message needs to be sent, and the first condition is that an address of the application server indicated by the first DNS response message is included in the information about the anycast address. In an implementation, the report message is the first DNS response message. In another implementation, the first information further indicates the first user plane network element to buffer the first DNS response message. In an optional implementation, the transceiver801is further configured to send third information to the first user plane network element after the processor802inserts the local session anchor, where the third information indicates the first user plane network element to send a second DNS response message to a terminal device. The second DNS response message indicates a target application server selected for the terminal device, and the second DNS response message is the first DNS response message, or the second DNS response message is determined based on the first DNS response message. 
In another optional implementation, the transceiver801is further configured to send a third DNS response message to a terminal device after the processor802inserts the local session anchor, where the third DNS response message indicates a target application server selected for the terminal device, and the third DNS response message is the first DNS response message, or the third DNS response message is determined based on the first DNS response message. In a specific implementation, the transceiver801is further configured to send second information to the first user plane network element. The second information indicates access information of the terminal device. The access information of the terminal device indicates a location of an access point that can be accessed by the terminal device. For example, the access information of the terminal device is a data network access identifier DNAI corresponding to a data network that can be accessed by the terminal device, or an address of a data network that can be accessed by the terminal device (where the address may be one or more addresses in an address range supported by the data network). Alternatively, the access information may be address information of a UPF corresponding to a data network that can be accessed by the terminal device (for example, an interface address of the UPF, or an address that is configured in the UPF and that is used to provide communication for the terminal device, for example, an address used to perform NAT translation). The access information of the terminal device is used by the first user plane network element to select the application server for the terminal device. The information about the application server is included in the first DNS response message. In an optional implementation, the transceiver801is further configured to obtain location information of the terminal device from an AMF before sending the second information to the first user plane network element. The processor802is further configured to determine the access information of the terminal device based on the location information of the terminal device. Specifically, the second information further indicates priorities of the access information of the terminal device, so that the first user plane network element selects the application server for the terminal device based on the priorities of the access information of the terminal device. Optionally, when the report message includes information about a plurality of application servers, the processor802is further configured to determine a target application server, where the third information includes an address of the target application server. In an example, when inserting the local session anchor based on the report message, the processor802is specifically configured to: determine, based on the information about the application server, a DNAI of the data network in which the application server is located, to determine the to-be-inserted local session anchor; or determine the to-be-inserted local session anchor based on a DNAI corresponding to the information about the data network. 
In another example, the processor802is further configured to obtain routing information of the anycast address, where the routing information of the anycast address includes information about at least one network element that implements a user plane function and that corresponds to the anycast address, or a DNAI of a data network corresponding to the anycast address; and determine the local session anchor based on access information of the terminal device and the routing information of the anycast address. In an embodiment, when the communication apparatus800is configured to implement functions of the first user plane network element (for example, the first UPF) in the embodiments shown inFIG.4toFIG.6, the communication apparatus800may specifically include: the transceiver801, configured to: receive first information from an SMF, where the first information indicates the first user plane network element to send a report message to the SMF when receiving a first domain name server DNS response message that meets a first condition, and the report message includes information about an application server indicated by the first DNS response message or information about a data network corresponding to the application server; and receive the first DNS response message; and the processor802, configured to determine that the first DNS response message meets the first condition, where the transceiver801is further configured to send the report message to the SMF when the processor802is configured to determine that the first DNS response message meets the first condition. In a specific implementation, the first information further includes an address range of a data network where the report message needs to be sent, and the first condition is that an IP address of the application server indicated by the first DNS response message belongs to the address range. Alternatively, the first information further indicates information about an anycast address where the report message needs to be sent, and the first condition is that an address of the application server indicated by the first DNS response message is included in the information about the anycast address. In an implementation, the report message is the first DNS response message. In another implementation, the first information further indicates the first user plane network element to buffer the first DNS response message. In an optional implementation, the transceiver801is further configured to: receive third information from the SMF, where the third information indicates the first user plane network element to send a second DNS response message to a terminal device; and send the second DNS response message to the terminal device, where the second DNS response message indicates a target application server selected for the terminal device, and the second DNS response message is the first DNS response message, or the first DNS response message is determined based on the second DNS response message. In a specific implementation, the transceiver801is further configured to receive second information from the SMF. The second information indicates access information of the terminal device. The access information of the terminal device indicates a location of an access point that can be accessed by the terminal device. 
For example, the access information of the terminal device is a data network access identifier DNAI corresponding to a data network that can be accessed by the terminal device, or an address of a data network that can be accessed by the terminal device (where the address may be one or more addresses in an address range supported by the data network). Alternatively, the access information may be address information of a UPF corresponding to a data network that can be accessed by the terminal device (for example, an interface address of the UPF, or an address that is configured in the UPF and that is used to provide communication for the terminal device, for example, an address used to perform NAT translation). The access information of the terminal device is used by the first user plane network element to select the application server for the terminal device. The information about the application server is included in the first DNS response message. For example, the second information further indicates priorities of the access information of the terminal device, so that the first user plane network element selects the application server for the terminal device based on the priorities of the access information of the terminal device. In an optional implementation, the processor802is further configured to add the access information of the terminal device to a DNS request message received from the terminal device, to obtain a new DNS request message. Alternatively, the processor802is further configured to determine a DNS server corresponding to the access information of the terminal device. The transceiver801is further configured to send the DNS request message to the DNS server corresponding to the access information of the terminal device. In an example, the access information of the terminal device is a plurality of pieces of access information. In this case, when adding the access information of the terminal device to the DNS request message received from the terminal device, to obtain the new DNS request message, the processor802is specifically configured to: add each piece of access information to the DNS request message, to obtain a plurality of new DNS request messages; or add the plurality of pieces of access information to the DNS request message. In another example, when determining the DNS server corresponding to the access information of the terminal device, the processor802is specifically configured to determine a DNS server corresponding to each of the plurality of pieces of access information. When sending the DNS request message to the DNS server corresponding to the access information of the terminal device, the transceiver801is specifically configured to send the DNS request message to a DNS server corresponding to each piece of access information. In an optional implementation, the processor802is further configured to: when obtaining information about a plurality of application servers, determine a target application server based on priorities of access information of access networks corresponding to the plurality of application servers. The report message further includes information about the target application server or access network information corresponding to the target application server. Optionally, the first message includes an address of the target application server. The second DNS response message includes the address of the target application server. 
In an optional implementation, the processor802is further configured to buffer the first DNS response message when the transceiver801sends the report message to the SMF. It should be noted that all the functions performed by the first user plane network element in the foregoing embodiment may be performed by the first network element, and a specific procedure performed by the first network element is not described in detail again. In this embodiment of this application, the first network element may be integrated into the first user plane network element (first UPF network element), or the first network element may be a network element independent of a user plane network element, or the first network element may be a local DNS resolver (local DNS resolver, LDNSR) or the like. As shown inFIG.9AandFIG.9B, this application further shows an example of another communication method. In this example, a scenario is as follows: UE has created a PDU session, and an SMF has not inserted a ULCL into the PDU session; or an SMF has inserted a ULCL, but has not inserted an L-PSA corresponding to a nearest application server that provides an application and that can be accessed by UE. In this example, the first network element is a network element that processes a DNS message, and the first network element and an anchor UPF corresponding to the PDU session are not a same network element. In this case, a DNS request message sent by the UE is sent to the anchor UPF network element, and then sent to the first network element by the anchor UPF network element. In this embodiment, if the anchor UPF network element supports NAT translation, when sending the DNS request message to the first network element, the anchor UPF network element replaces a source address (namely, an IP address of the UE, for example, a private IP address) in the data packet with a new IP address (for example, a public IP address). Optionally, the anchor UPF network element further replaces a source port number in the data packet, that is, replaces the source port number in the data packet with a new port number. In this embodiment, the anchor UPF network element may reserve, for the DNS message, an address and a port number that are obtained through NAT translation, and send the reserved address and port number to the SMF. The SMF sends the reserved address and port number to the first network element, so that the first network element can determine, based on the address and port number that are obtained through NAT translation, a PDU session corresponding to the DNS request. Specifically, a specific procedure of this example may include the following steps. Step901: The SMF sends an N4 session request message to the anchor UPF. Optionally, in the N4 session request message, the SMF requests the anchor UPF network element to reserve, for the terminal device, an IP address and an optional port number that are used for NAT translation. In an optional implementation, the SMF may indicate that the reserved IP address and port number are used only to perform NAT translation on the DNS message of the terminal device. In other words, the SMF requests the anchor UPF network element to reserve, for the DNS message of the terminal device, the IP address and the optional port number that are used for NAT translation. The N4 session request message may be sent by the SMF to a UPF in a PDU session establishment process. 
Optionally, if the SMF determines that the anchor UPF performs NAT translation on the session, the SMF requests the anchor UPF network element to allocate, to the DNS message of the terminal device, the IP address and the optional port number that are used for NAT translation. For example, the SMF may determine, based on a UE IP address of the session, that the anchor UPF is to perform NAT translation on the session. For example, the UE IP address is a private network address. Optionally, the anchor UPF may send, to the SMF, an indication indicating that the anchor UPF supports NAT translation, and the SMF determines, based on the indication, that the anchor UPF is to perform NAT translation on the session. In addition, there are other possible cases. This is not limited in this application. It should be noted that if the anchor UPF allocates only the IP address used for NAT translation, but does not allocate the port number, the IP address may also be actually used to perform NAT translation on a message other than the DNS message of the terminal device. Step902: The anchor UPF sends an N4 session response message to the SMF. In the N4 session response message, the anchor UPF sends, to the SMF, the IP address and the optional port number that are reserved for the DNS message and that are used for NAT translation. Optionally, when receiving an indication that the SMF requests the anchor UPF to reserve, for the terminal device, the IP address and the optional port number that are used for NAT translation, the anchor UPF reserves, for the DNS message, the IP address and the optional port number that are used for NAT translation. For step903, refer to related descriptions of step501. Different from step501, in step903, the second information further includes the IP address and the optional port number that are reserved for the terminal device by the anchor UPF and that are used for NAT translation. Step904is the same as step502. For details, refer to related descriptions of step502. Step905a: The anchor UPF receives the DNS request message. Specifically, the anchor UPF receives the DNS request message from the UE. For the DNS request message, the source address is the IP address of the UE (UE IP address), and the source port number is a port number allocated by the UE to the DNS request message. The anchor UPF performs NAT processing on the DNS request message by using the IP address and the optional port number that are obtained through NAT translation and that are reserved for the DNS message of the terminal device. For the DNS request message processed through NAT translation, the source address is the reserved IP address obtained through NAT translation, and the source port number is the reserved port number. For step905b, refer to related descriptions of step503. Different from step503, in step905b, the first network element matches the source address of the DNS request message with the second information, to determine access information of the terminal device corresponding to the DNS request message. For example, corresponding to the access information of the terminal device, an IP address obtained through NAT translation is an IP1, and a port number obtained through NAT translation is a port number1. If for the DNS request message, the source address is also the IP1, and the source port number is also the port number1, the access information is access information corresponding to the DNS request message. For step906and step907, refer to related descriptions of step504and step505. 
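A minimal sketch of the NAT processing in step905a and the matching in step905b follows. The dictionary-based packet representation, the form of the NAT binding, and the form of the second information are assumptions for illustration; a real anchor UPF and first network element would operate on actual packets and session state.

```python
# IP address and (optional) port reserved by the anchor UPF for NAT translation of the
# terminal device's DNS messages (values are illustrative).
nat_binding = {"ue_ip": "10.60.0.7", "ue_port": 40001,
               "nat_ip": "203.0.113.1", "nat_port": 50001}

def nat_dns_request(packet, binding):
    """Step905a: replace the source address (and optionally the source port) of the DNS
    request with the reserved IP address and port used for NAT translation."""
    return dict(packet, src_ip=binding["nat_ip"], src_port=binding["nat_port"])

# Second information held by the first network element: the NAT address/port mapped to
# the access information of the corresponding terminal device (assumed format).
second_information = {("203.0.113.1", 50001): ["dnai-edge-1", "dnai-edge-2"]}

def match_access_info(packet, info):
    """Step905b: match the source address and port of the translated DNS request against
    the second information to find the access information for this request."""
    return info.get((packet["src_ip"], packet["src_port"]))

request = {"src_ip": "10.60.0.7", "src_port": 40001, "qname": "app.example.com"}
translated = nat_dns_request(request, nat_binding)
print(match_access_info(translated, second_information))   # ['dnai-edge-1', 'dnai-edge-2']
```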
For step908, refer to related descriptions of step506. Different from step506, in step908, the report message may further include the IP address and the optional port number that are obtained through NAT translation. For step909, refer to related descriptions of step507. Different from step507, in step909, if the report message includes the IP address and the optional port number that are obtained through NAT translation, the SMF determines a corresponding PDU session based on the IP address and the optional port number that are obtained through NAT translation, and inserts the L-PSA into the PDU session. In step902, the SMF stores, in a PDU session context, the IP address and the optional port number that are obtained through NAT translation. For step910to step912, refer to related descriptions of step508to step510. A difference is as follows: In step911, the second DNS response message is sent to the anchor UPF network element. Then, the anchor UPF network element performs NAT processing on the data packet, that is, replaces a destination address (namely, the IP address obtained through NAT translation) in the data packet with the UE IP address, and optionally replaces a destination port number (namely, the port number obtained through NAT translation) in the data packet with the source port number in the original DNS request message sent by the UE. It should be noted that one or more first network elements may be deployed during actual deployment. In this case, when creating the PDU session for the terminal device, the SMF needs to select a first network element from the one or more first network elements, use the first network element as the first network element that serves the PDU session, use an address of the first network element as an address of a DNS server of the PDU session, and send the address to the terminal device, so that the DNS request message of the terminal device is sent to the selected first network element. In a possible implementation, the SMF receives a message from each of the one or more first network elements. The message includes at least one of a DNN, single network slice selection assistance information (S-NSSAI), and a service range that are supported by the first network element. Alternatively, the message includes an identifier of the first network element. The SMF obtains, based on the identifier of the first network element from a network element that implements a network repository function (NRF), at least one of a DNN, S-NSSAI, and a service range that are supported by the first network element. The SMF determines, based on at least one of the DNN and the S-NSSAI that correspond to the session, and information about the UPF selected for the PDU session, a first network element from the one or more first network elements, and uses the first network element as the first network element corresponding to the PDU session. The service range may be a tracking area list, a DNAI list, a UPF list, a service area identifier, or the like. That the SMF determines the first network element based on the information about the UPF includes: The SMF determines that the UPF is within the service range of the first network element. The SMF sends the first information and/or the second information to the first network element corresponding to the PDU session. After receiving a success response message of the first network element, the SMF uses the address of the first network element as the address of the DNS server, and sends the address to the terminal device. 
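The selection of a first network element for the PDU session described above can be illustrated with the following sketch. The candidate list, the representation of the service range as a list of UPF identifiers, and the matching rules are simplified assumptions made only for this example.

```python
# Candidate first network elements and the DNN, S-NSSAI and service range (here a list of
# UPF identifiers) that each one supports; values are illustrative.
candidates = [
    {"id": "ldnsr-1", "dnn": "internet", "s_nssai": "1-000001", "service_range": ["upf-a", "upf-b"]},
    {"id": "ldnsr-2", "dnn": "internet", "s_nssai": "1-000002", "service_range": ["upf-c"]},
]

def select_first_network_element(dnn, s_nssai, selected_upf):
    """Pick a first network element whose DNN and S-NSSAI match the session and whose
    service range contains the UPF selected for the PDU session."""
    for c in candidates:
        if c["dnn"] == dnn and c["s_nssai"] == s_nssai and selected_upf in c["service_range"]:
            return c
    return None

chosen = select_first_network_element("internet", "1-000001", "upf-b")
# The SMF would then send the first information and/or second information to the chosen
# element and, after a success response, give its address to the terminal device as the
# address of the DNS server.
print(chosen["id"] if chosen else None)   # ldnsr-1
```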
It should be noted that this embodiment may also be applied to a scenario corresponding to the embodiment shown inFIG.6. Corresponding to the scenario of the embodiment shown inFIG.6, in the embodiment shown inFIG.9AandFIG.9B, “refer to descriptions of step501” needs to be modified to “refer to descriptions of step601”; “refer to descriptions of step503” needs to be replaced with “refer to descriptions of step602”; “refer to descriptions of step504” needs to be replaced with “refer to descriptions of step603”; “refer to descriptions of step505” needs to be replaced with “refer to descriptions of step604”; “refer to descriptions of step506” needs to be replaced with “refer to descriptions of step605”; “refer to descriptions of step507” needs to be replaced with “refer to descriptions of step606”; “refer to descriptions of step508” needs to be replaced with “refer to descriptions of step607”; and “refer to descriptions of step510” needs to be replaced with “refer to descriptions of step609”. It should be noted that a function of the first network element in the embodiment shown inFIG.9AandFIG.9Bis integrated into the first UPF network element in the embodiment shown inFIG.5and the embodiment shown inFIG.6. The first network element in this embodiment corresponds to the first user plane network element in the embodiment shown inFIG.4. The first network element may not process any user plane data packet other than the DNS message. It should be noted that a NAT function may alternatively be performed by an independent network element instead of the anchor UPF network element. In this case, the anchor UPF network element in steps901and902is replaced with the independent network element. Herein, an entity performing the NAT function is referred to as a first NAT translation network element. The first NAT translation network element may be located in the anchor UPF, or may be an independent network element. An embodiment of this application further provides a communication method. The communication method is applicable to the communication system shown inFIG.2. Refer toFIG.10. A specific procedure of the method may include the following steps. Step1001: A first NAT translation network element obtains an IP address used for NAT translation and an optional port number used for NAT translation, where the IP address and the optional port number are reserved for a terminal device. In an implementation, the first NAT translation network element reserves, for a DNS message of the terminal device, the IP address used for NAT translation and the optional port number used for NAT translation. Step1002: The first NAT translation network element sends, to an SMF, the IP address used for NAT translation and the port number used for NAT translation. In an optional implementation, the first NAT translation network element receives a request message of the SMF, where the request message indicates the first NAT translation network element to reserve, for the terminal device, the IP address used for NAT translation and the port number used for NAT translation. In an optional implementation, the first NAT translation network element receives a request message of the SMF, where the request message indicates the first NAT translation network element to reserve, for the DNS message of the terminal device, the IP address used for NAT translation and the port number used for NAT translation. 

In an optional implementation, the first NAT translation network element receives the DNS message of the terminal device; and processes the DNS message based on the IP address used for NAT translation and the port number used for NAT translation. That the first NAT translation network element processes the DNS message specifically includes the following steps. The first NAT translation network element receives a DNS request message sent by the terminal device, and replaces a source address (namely, an IP address of the terminal device) in the DNS request message sent by the terminal device with the reserved IP address used for NAT translation. Optionally, the first NAT translation network element replaces a source port number in the DNS request message with the reserved port number used for NAT translation. The first NAT translation network element sends the DNS request message used for NAT translation. The first NAT translation network element receives a DNS response message, where a destination address in the DNS response message is the IP address used for NAT translation. The first NAT translation network element replaces the destination address in the DNS response message with the IP address of the terminal device. Optionally, the first NAT translation network element replaces a destination port number (the port number used for NAT translation) in the DNS response message with an original port number of the terminal device (namely, the source port number in the original DNS request message). Based on the foregoing embodiments, an embodiment of this application further provides a communication apparatus. Similarly, refer toFIG.7. For example, when the communication apparatus700is configured to implement a function of the first NAT translation network element in the embodiment shown inFIG.10, the communication apparatus700may specifically include: the processing unit702, configured to obtain an internet protocol IP address used for NAT translation and an optional port number used for NAT translation, where the IP address and the optional port number are reserved for a terminal device; and the transceiver unit701, configured to send, to an SMF, the IP address used for NAT translation and the port number used for NAT translation. In an optional implementation, the transceiver unit701is further configured to receive a request message of the SMF, where the request message indicates to reserve, for the terminal device, the IP address used for NAT translation and the optional port number used for NAT translation. In an optional implementation, the transceiver unit701is further configured to receive a DNS message of the terminal device; and the processing unit702is further configured to process the DNS message based on the IP address used for NAT translation and the optional port number used for NAT translation. Based on the foregoing embodiments, an embodiment of this application further provides a communication apparatus. Similarly, refer toFIG.8. 
For example, when the communication apparatus800is configured to implement a function of the first NAT translation network element in the embodiment shown inFIG.10, the communication apparatus800may specifically include: the processor802, configured to obtain an internet protocol IP address used for NAT translation and an optional port number used for NAT translation, where the IP address and the optional port number are reserved for a terminal device; and the transceiver801, configured to send, to an SMF, the IP address used for NAT translation and the optional port number used for NAT translation. In an optional implementation, the transceiver801is further configured to receive a request message of the SMF, where the request message indicates to reserve, for the terminal device, the IP address used for NAT translation and the optional port number used for NAT translation. In an optional implementation, the transceiver801is further configured to receive a DNS message of the terminal device; and the processor802is further configured to process the DNS message based on the reserved IP address used for NAT translation and the port number used for NAT translation. Based on the foregoing embodiments, an embodiment of this application provides a communication system. The communication system may include the SMF, the first user plane network element, and the like in the foregoing embodiments. An embodiment of this application further provides a computer-readable storage medium. The computer-readable storage medium is configured to store a computer program. When the computer program is executed by a computer, the computer may implement any communication method provided in the foregoing method embodiments. An embodiment of this application further provides a computer program product. The computer program product is configured to store a computer program. When the computer program is executed by a computer, the computer may implement any communication method provided in the foregoing method embodiments. An embodiment of this application further provides a chip. The chip includes a processor and a communication interface. The processor is coupled to a memory, and is configured to invoke a program in the memory, to enable the chip to implement any communication method provided in the foregoing method embodiments. A person skilled in the art should understand that embodiments of this application may be provided as a method, a system, or a computer program product. Therefore, this application may use a form of hardware only embodiments, software only embodiments, or embodiments with a combination of software and hardware. In addition, this application may use a form of a computer program product that is implemented on one or more computer-usable storage media (including but not limited to a disk memory, a CD-ROM, an optical memory, and the like) that include computer-usable program code. This application is described with reference to the flowcharts and/or block diagrams of the method, the device (system), and the computer program product according to this application. It should be understood that computer program instructions may be used to implement each process and/or each block in the flowcharts and/or the block diagrams and a combination of a process and/or a block in the flowcharts and/or the block diagrams. 
These computer program instructions may be provided for a general-purpose computer, a dedicated computer, an embedded processor, or a processor of another programmable data processing device to generate a machine, so that the instructions executed by the computer or the processor of the another programmable data processing device generate an apparatus for implementing a specific function in one or more procedures in the flowcharts and/or in one or more blocks in the block diagrams. These computer program instructions may alternatively be stored in a computer-readable memory that can indicate a computer or another programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory generate an artifact that includes an instruction apparatus. The instruction apparatus implements a specified function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams. The computer program instructions may alternatively be loaded onto a computer or another programmable data processing device, so that a series of operations and steps are performed on the computer or another programmable data processing device, to generate computer-implemented processing. Therefore, the instructions executed on the computer or the another programmable data processing device provide steps for implementing a specified function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams. Clearly, a person skilled in the art can make various modifications and variations to this application without departing from the protection scope of this application. In this way, if these modifications and variations to this application fall within the scope of the claims of this application and their equivalent technologies, this application is also intended to cover these modifications and variations.
11863520
DESCRIPTION OF EMBODIMENTS The following further describes this application in detail with reference to the accompanying drawings and embodiments. It can be understood that the embodiments described herein are merely configured for explaining the relevant invention and not for limiting the invention. In addition, it should be noted that, for ease of description, parts related to the relevant invention are shown in the accompanying drawings. It should be noted that, without conflicts, the embodiments and features in the embodiments in this application can be mutually combined. The following describes this application in detail with reference to the accompanying drawings and embodiments. FIG.1Ais a flowchart of steps of a data access method in accordance with one embodiment. In step S101, an RDMA control service assigns, based on user information and a corresponding connection relationship between a switch and a first instance defined by a user, an address segment to the first instance. In one embodiment, the RDMA control service can be understood as a RoCE-based RDMA communication function service for controlling instances of public clouds. Processes, applications, virtual machines, servers, and so on may be used to implement the RDMA control service. The user information may be identification information of the user, for example, an account of the user or a nickname of the user. The first instance can be understood as a core part of an ECS (Elastic Compute Service, elastic compute service) product, which is a server with a corresponding CPU, memory, system disk, and running operating system configuration. The first instance is a most basic resource of ECS. In one exemplary implementation, only based on the first instance can other resources such as networks, storage, and snapshots be used. The switch may be a RoCE-based switch. The corresponding connection relationship between the switch and the first instance defined by the user can be understood as a corresponding connection relationship between a RoCE-based RDMA network port of the first instance and a port of a RoCE-based switch defined by the user. The address segment can be understood as a continuous segment of addresses. The RoCE may be a DCB (Data Center Bridge, data center bridge) network. In one embodiment, before the RDMA control service assigns an address segment for the first instance, the method further includes: receiving, by the RDMA control service, a multicast packet carrying information about a network port of a second instance that is sent by the second instance through the network port; sending, by the RDMA control service, a query request to a machine deployment control service based on the information about the network port that is carried in the multicast packet, to query for deployment information of the second instance; and receiving, by the RDMA control service, the deployment information returned by the machine deployment control service based on the query request, and determining a corresponding connection relationship between the second instance and the switch based on the deployment information. The information about the network port may be port numbers of two ports of the RDMA network port of the second instance, or may be identification information of the RDMA network port of the second instance. The multicast packet may be a multicast packet based on LLDP (Link Layer Discovery Protocol, link layer discovery protocol). 
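As a rough illustration of the interaction just described (the second instance sends an LLDP multicast packet carrying information about its network port, the RDMA control service queries the machine deployment control service for the deployment information, and the resulting connection relationship is stored), consider the following Python sketch. The service interfaces, identifiers, and data formats are assumptions made only for this example.

```python
# In-memory stand-in for the relationship table kept by the RDMA control service.
relationship_table = {}

# Stand-in for the machine deployment control service: maps network-port identifiers to
# deployment information (instance identifier plus the switch and port it is cabled to).
deployment_records = {
    "port-a-01": {"instance_id": "i-123", "switch": "dcb-sw-1", "switch_port": 17},
    "port-b-01": {"instance_id": "i-123", "switch": "dcb-sw-2", "switch_port": 17},
}

def query_deployment_info(port_id):
    """Query request to the machine deployment control service (simulated here)."""
    return deployment_records.get(port_id)

def on_lldp_multicast(port_id):
    """Handle an LLDP multicast packet carrying information about a network port: look up
    the deployment information and record the instance-to-switch connection relationship."""
    info = query_deployment_info(port_id)
    if info is None:
        return
    relationship_table.setdefault(info["instance_id"], []).append(
        {"port": port_id, "switch": info["switch"], "switch_port": info["switch_port"]})

on_lldp_multicast("port-a-01")
on_lldp_multicast("port-b-01")
print(relationship_table["i-123"])   # both RDMA network ports of instance i-123 recorded
```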
The machine deployment control service can be understood as a service for controlling instance deployment. Processes, applications, virtual machines, servers, and so on may be used to implement the machine deployment control service. The specific meaning of the second instance is similar to the specific meaning of the first instance, and is therefore not elaborated herein. In one embodiment, the method further includes: storing, by the RDMA control service, the corresponding connection relationship between the second instance and the switch into a relationship table. In this way, by storing the corresponding connection relationship between the second instance and the switch in the relationship table, it is convenient to subsequently build an access control list for controlling access between different first instances defined by a same user. In one exemplary implementation, the machine deployment control service controls an adapter of the second instance through a management channel, and downloads a deployment image file from a designated deployment image storage service to the second instance. Specifically, the deployed image file can reach the second instance through the management channel or a path from the deployment image storage service to an Ethernet switch. The machine deployment control service then controls, through the management channel, the adapter of the second instance to start the deployment image file that has reached the second instance, causing the second instance to perform related deployment. Specifically, by controlling the two ports of the RDMA network port, the second instance sends LLDP multicast packets carrying information about the ports respectively to a DCB switch, such that the LLDP multicast packets are sent to the RDMA control service. After receiving the LLDP multicast packets, the RDMA control service sends a query request to the machine deployment control service, to query for deployment information of the second instance. The machine deployment control service returns the deployment information of the second instance to the RDMA control service. The RDMA control service then may establish the following correspondence based on the deployment information of the second instance: a corresponding connection relationship between the RDMA network port of the second instance and the DCB switch. In addition, the RDMA control service stores the corresponding connection relationship into a corresponding relationship table. The deployment image storage service can be understood as a service for storing deployment image files. Processes, applications, virtual machines, servers, and so on may be used to implement the deployment image storage service. In one embodiment, the method further includes: receiving, by the RDMA control service, an address assignment request sent by an instance control service based on the first instance; and querying, by the RDMA control service, for a corresponding connection relationship between the first instance and the switch based on the address assignment request. In this way, by querying for the corresponding connection relationship between the first instance and the switch based on the received address assignment request, it is convenient to subsequently build an access control list for controlling access between different first instances defined by a same user. The instance control service can be understood as a service for controlling first instances defined by a user. 
Processes, applications, virtual machines, servers, and so on may be used to implement the instance control service. In one exemplary implementation, when the RDMA control service queries for the corresponding connection relationship between the first instance and the switch based on the address assignment request, the RDMA control service queries, based on identification information of the first instance carried in the address assignment request, the relationship table storing the corresponding connection relationship between the second instance and the switch, to acquire the corresponding connection relationship between the first instance and the switch. In one embodiment, when the RDMA control service assigns, based on user information and a corresponding connection relationship between a switch and a first instance defined by the user, an address segment to the first instance, the RDMA control service invokes, based on the user information carried in the address assignment request and the corresponding connection relationship between the first instance and the switch that is acquired through the query, an address assignment service to assign the address segment to the first instance. In this way, the address segment is assigned to the first instance by invoking the address assignment service, thereby effectively resolving an issue of address assignment for different users accessing an RDMA network node. The address assignment service can be understood as a service for assigning addresses. Processes, applications, virtual machines, servers, and so on may be used to implement the address assignment service. In step S102, the RDMA control service builds an access control list based on the address segment assigned to the first instance. In one embodiment, the access control list is used for controlling access between different first instances defined by the user. In one exemplary implementation, in building the access control list, the access control list is built based on addresses of different first instances defined by the user and included in the assigned address segment, to allow access between the different first instances defined by the same user. Specifically, access between different first instances defined by the same user is allowed, and access between first instances defined by different users is forbidden. In one exemplary implementation, to create a first instance with a RoCE-based RDMA communication function, the user may create a data structure of an instance cluster using a web console or an external service API (application programming interface, application programming interface) of a product. The instance control service returns identification information of the instance cluster. The user further uses the identification information to select a specification from a corresponding elastic compute specification family, and invokes the API to create a specific instance according to the specification selected from the elastic compute specification family. The instance control service requests, based on the created specific instance, the RDMA control service to assign an address for the created specific instance, where parameters carried in the request may include identification information of the instance. 
The RDMA control service queries the corresponding connection relationship between the network port of the instance and the switch based on the identification information of the instance carried in the request, and invokes a DHCP (Dynamic Host Configuration Protocol, dynamic host configuration protocol) service to assign an address to the network port of the instance. Generally, for reliability, the two ports corresponding to the network port of the instance may be bound, and therefore one or two addresses may be assigned depending on an actual situation. The DHCP service can be understood as an IP address assignment service. In addition, the RDMA control service has already stored in the deployment phase a corresponding connection relationship between RDMA network ports of the instances and DCB switches. Therefore, the RDMA control service may query, based on the identification information of the instance carried in the request, a corresponding connection relationship between a DCB switch and an RDMA network port of an instance defined by a user, and create, based on the identification information of the user carried in the address assignment request and the corresponding connection relationship between the DCB switch and the RDMA network port of the instance defined by the user, an access control list for the instance defined by the user. The access control list acts on DCB switches corresponding to RDMA network ports of instances defined by the user and is configured on DCB switches through the switch control service, allowing the RDMA network ports of the instances defined by the same user to communicate with each other. In step S103, the RDMA control service sends the access control list to a switch control service, such that the switch control service configures the access control list for the switch. In one embodiment, the switch control service can be understood as a service for controlling RoCE-based switches. Processes, applications, virtual machines, servers, and so on may be used to implement the switch control service. The switch may be a DCB (Data Center Bridge, data center bridge) switch. In one embodiment, after the RDMA control service sends the access control list to the switch control service, the method further includes: returning, by the RDMA control service to the instance control service, the address segment assigned by the address assignment service for the first instance, such that the instance control service starts the first instance to acquire from an image service an image file matching a specification of the first instance, and configures, based on the image file, the address segment assigned by the address assignment service for the first instance. The image service can be understood as a service for storing image files. Processes, applications, virtual machines, servers, and so on may be used to implement the image service. In this way, the address segment assigned by the address assignment service for the first instance is returned to the instance control service, such that the instance control service can start the first instance and have it configure the assigned address segment. In one exemplary implementation, after the switch control service has configured the access control list for the DCB switch, the RDMA control service returns the address assigned by the address assignment service for the first instance to the instance control service. 
The instance control service starts the instance and designates the instance to acquire an image file for the specification of the instance from a corresponding image service. After the image file is transferred to the corresponding instance and is executed, the instance starts to configure the address requested by the instance control service for the instance on the RDMA network port of the instance. The user starts to use the instance that has a RoCE-based RDMA communication function. In one embodiment, the method further includes: receiving, by the RDMA control service, a second instance release request sent by the instance control service based on a first instance release request sent by a user; reclaiming, by the RDMA control service based on the second instance release request, an address of a to-be-released instance corresponding to instance identification information carried in the second instance release request, and configuring, through the switch control service, the switch connected to the to-be-released instance such that a network port of the switch connected to the to-be-released instance is disconnected from other network ports; and returning, by the RDMA control service to the instance control service, a release result of the to-be-released instance requested by the second instance release request, such that the instance control service returns the to-be-released instance to an inventory and returns a release success message to the user. In this way, a to-be-released instance requested by the user can be released through the received second instance release request. In one exemplary implementation, the user sends an instance release request to the instance control service through an API or a console. The instance control service receives the request and requests the RDMA control service to release the instance. The RDMA control service reclaims the address of the RDMA network port of the instance, configures, through the switch control service, the DCB switch corresponding to the RDMA network port of the instance to be disconnected from other network ports, and returns an instance release result to the instance control service. The instance control service continues to release other resources related to the instance, returns the instance to an inventory, and returns instance release success information to the user. In one exemplary implementation, when an instance created by a user through an API or a console is a bare metal instance, a data access procedure can be described in detail with reference to the data access system shown inFIG.1B. The structure of the data access system shown inFIG.1Bis as follows. An adapter of the bare metal instance can realize interconnection between the bare metal instance and elastic computing control, with two network ports connected to an Ethernet switch to form dual upstream connections and a management channel connected to a machine deployment control service. In addition, the bare metal instance further has two enhanced Ethernet-based RDMA network ports, which are connected to a DCB switch to form dual upstream connections, and the DCB switch is further connected to a DCB aggregation switch with dual upstream connections to avoid a single point of failure. Furthermore, the data access system has a highly available machine deployment control service, which is connected to the bare metal instance and an RDMA control service through the management channel.
The RDMA control service also has a highly available configuration to avoid system unavailability caused by a single server failure. The RDMA control service is connected to the machine deployment control service and a switch control service to control a configuration of the DCB switch. Moreover, the data access system also includes an elastic bare metal ECS (Elastic Compute Service) production control service, which provides the public with elastic compute services through external service APIs or web consoles. Next, the data access procedure is introduced in detail with reference to the data access system shown inFIG.1B. Specifically, the data access procedure includes a deployment phase, a user usage phase, and a user destruction phase. The deployment phase is as follows: The machine deployment control service controls an adapter of the bare metal instance through the management channel, and downloads a deployment image file from a designated deployment image storage service to the bare metal instance. Specifically, the deployment image file can reach the bare metal instance through the management channel or a path from the deployment image storage service to an Ethernet switch. The machine deployment control service then controls, through the management channel, the adapter of the bare metal instance to start the deployment image file that has reached the bare metal instance, causing the bare metal instance to perform related deployment. Specifically, by controlling the two ports (port-a and port-b) of the RDMA network port, the bare metal instance sends LLDP multicast packets carrying information about the ports respectively to a DCB switch, such that the LLDP multicast packets are sent to the RDMA control service. After receiving the LLDP multicast packets, the RDMA control service sends a query request to the machine deployment control service, to query for deployment information of the bare metal instance. The machine deployment control service returns the deployment information of the bare metal instance to the RDMA control service. The RDMA control service may then establish the following corresponding relationship based on the deployment information of the bare metal instance: a corresponding connection relationship between the RDMA network port of the bare metal instance and the DCB switch. In addition, the RDMA control service stores the corresponding connection relationship into a corresponding relationship table. In one embodiment, the user usage phase is as follows: To create a bare metal instance with a RoCE-based RDMA communication function, the user may create a data structure of a bare metal instance cluster through a web console or an external service API (application programming interface) of a product. The bare metal ECS instance control service returns identification information of the bare metal instance cluster. The user further uses the identification information to select a specification from a corresponding elastic compute specification family, and invokes the API to create a specific bare metal instance according to the specification selected from the elastic compute specification family.
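Returning briefly to the deployment phase described above, the following illustrative Python sketch shows one way the RDMA control service might populate its relationship table when LLDP packets carrying port information arrive; the names RELATIONSHIP_TABLE, query_deployment_info, and handle_lldp_packet are assumptions for the sake of the example, and the query to the machine deployment control service is stubbed out.

from typing import Dict, Tuple

RELATIONSHIP_TABLE: Dict[Tuple[str, str], str] = {}  # (instance_id, port) -> switch_id

def query_deployment_info(port_info: dict) -> dict:
    """Stand-in for the query sent to the machine deployment control service."""
    return {"instance_id": "bm-001", "switch_id": port_info["received_on_switch"]}

def handle_lldp_packet(port_info: dict) -> None:
    info = query_deployment_info(port_info)
    key = (info["instance_id"], port_info["port_name"])
    RELATIONSHIP_TABLE[key] = info["switch_id"]  # remember which port is wired to which switch

if __name__ == "__main__":
    handle_lldp_packet({"port_name": "port-a", "received_on_switch": "dcb-sw-1"})
    handle_lldp_packet({"port_name": "port-b", "received_on_switch": "dcb-sw-1"})
    print(RELATIONSHIP_TABLE)

The user usage phase then continues as described below.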
The bare metal ECS instance control service requests, based on the created specific bare metal instance, the RDMA control service to assign an address for the created specific bare metal instance, where parameters carried in the request may include identification information of the bare metal instance and the identification information of the user. The RDMA control service queries the corresponding connection relationship between the network port of the bare metal instance and the DCB switch based on the identification information of the bare metal instance carried in the request, and invokes a DHCP (Dynamic Host Configuration Protocol) service to assign an address to the network port of the bare metal instance. Generally, for reliability, the two ports corresponding to the network port of the bare metal instance may be bound, and therefore one or two addresses may be assigned depending on an actual situation. In addition, the RDMA control service has already stored the corresponding connection relationship between the RDMA network port of the bare metal instance and the DCB switch in the deployment phase. Therefore, the RDMA control service may query, based on identification information of a bare metal instance defined by a user, a corresponding connection relationship between a DCB switch and an RDMA network port of the bare metal instance defined by the user, and create, based on the identification information of the user carried in the address assignment request and the corresponding connection relationship between the DCB switch and the RDMA network port of the bare metal instance defined by the user, an access control list for the bare metal instance defined by the user. The access control list acts on the DCB switches corresponding to the RDMA network ports of the bare metal instances defined by the user and is configured on those DCB switches through the switch control service, allowing the RDMA network ports of the bare metal instances defined by the same user to communicate with each other. After the switch control service has configured the access control list for the DCB switch, the RDMA control service returns the address assigned by the address assignment service for the RDMA network port of the bare metal instance to the bare metal ECS instance control service. The bare metal ECS instance control service starts the bare metal instance and assigns the bare metal instance to a corresponding image service to acquire an image file for the specification of the bare metal instance. After the image file is transferred to the corresponding bare metal instance and is executed, the bare metal instance starts to configure an IP address requested by the bare metal ECS instance control service for the bare metal instance on the RDMA network port of the bare metal instance. The user starts to use the bare metal instance that has a RoCE-based RDMA communication function. The user destruction phase is as follows: The user sends a bare metal instance release request to the ECS instance control service through an API or a console. The ECS instance control service receives the request and requests the RDMA control service to release the bare metal instance.
The RDMA control service reclaims the address of the RDMA network port of the bare metal instance, configures the DCB switch through the switch control service such that the DCB switch corresponding to the RDMA network port of the bare metal instance is disconnected from other network ports, and returns a bare metal instance release result to the ECS instance control service. The ECS instance control service continues to release other resources related to the bare metal instance, returns the bare metal instance to an inventory, and returns bare metal instance release success information to the user. In one exemplary implementation, the bare metal instance can be understood as a server with only one set of corresponding configurations: CPU, memory, system disk, and running operating system. According to a data access method, an RDMA control service assigns, based on user information and a corresponding connection relationship between a switch and a first instance defined by a user, an address segment to the first instance; builds an access control list based on the address segment assigned to the first instance, where the access control list is used for controlling access between different first instances defined by the user; and sends the access control list to a switch control service, such that the switch control service configures the access control list for the switch. Compared with existing traditional methods, configuring the switch with the access control list that is built can effectively control access between different instances defined by the same user, thus effectively resolving the issue of access isolation for different users accessing an RDMA network node. It is appreciated that the data access methods described herein can be executed by any suitable device with a data processing capability, including but not limited to: a camera, a terminal, a mobile terminal, a PC, a server, an in-vehicle device, an entertainment device, an advertising device, a personal digital assistant (PDA), a tablet computer, a notebook computer, a handheld game console, smart glasses, a smart watch, a wearable device, a virtual display device or a display enhanced device (for example, Google Glass, Oculus Rift, HoloLens, or Gear VR), and the like. FIG.2Ais a flowchart of steps of a data access method in accordance with one embodiment. In step S201, an RDMA control service assigns, based on user information and a corresponding connection relationship between a switch and a first instance defined by a user, an address segment to the first instance. This step S201is similar to the foregoing step S101, and is not elaborated herein. In step S202, the RDMA control service receives an address assignment request sent by an instance control service based on the first instance. In one embodiment, information carried in the address assignment request includes at least one of the following: identification information of the user, identification information of an instance cluster defined by the user, and identification information of an instance. In step S203, the RDMA control service builds an access control list based on the address segment assigned to the first instance and identification information of a cluster to which the first instance belongs that is carried in the address assignment request. In one embodiment, the access control list is used for controlling access between different first instances in a same cluster.
In one exemplary implementation, in building the access control list, the RDMA control service performs, based on the identification information of the cluster to which the first instance belongs that is carried in the address assignment request, a filtering operation on addresses of different first instances defined by the user and included in the assigned address segment, to acquire addresses of different first instances belonging to a same cluster; and builds the access control list based on the addresses of the different first instances belonging to the same cluster, to allow access between the different first instances defined by the same user and belonging to the same cluster. Specifically, access between different first instances defined by the same user and belonging to the same cluster is allowed, access between first instances defined by different users is forbidden, and access between different first instances defined by the same user and belonging to different clusters is forbidden. In step S204, the RDMA control service sends the access control list to a switch control service, such that the switch control service configures the access control list for the switch. This step S204is similar to the foregoing step S103, and is not elaborated herein. In one exemplary implementation, when an instance created by a user through an API or a console is a virtual machine instance, a data access procedure can be described in detail with reference to the data access system shown inFIG.2B. The structure of the data access system shown inFIG.2Bis similar to the structure of the data access system shown inFIG.1B, and is not elaborated herein. The data access procedure with reference to the data access system shown inFIG.2Bis roughly similar to that with reference to the data access system shown inFIG.1B, and therefore is not further elaborated herein. A difference between the data access procedure for the virtual machine instance and the data access procedure for the bare metal instance lies in that the virtual machine instance runs a virtual machine monitor (hypervisor). In addition, in creating a virtual machine instance, an enhanced Ethernet-based RDMA network port of the virtual machine instance is directly connected to a virtual machine of the virtual machine instance, and the enhanced Ethernet-based RDMA network port of the virtual machine instance is configured on the virtual machine of the virtual machine instance. Direct connection methods include but are not limited to PF (physical function) direct connection, VF (virtual function) direct connection using SR-IOV technology, or other analogous solutions. The virtual machine instance can be understood as a core part of an ECS (Elastic Compute Service) product, which is a server configured with a corresponding virtual CPU and virtual memory.
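The cluster-scoped filtering of step S203/S204 described above can be sketched as follows. This is a hedged illustration under assumed data shapes, not product code: build_cluster_acl keeps only the addresses of the user's first instances that belong to the cluster named in the address assignment request, permits traffic among them, and leaves other clusters of the same user, and all other users, to the trailing deny rule.

from itertools import permutations
from typing import Dict, List, Tuple

def build_cluster_acl(instances: List[Dict[str, str]], cluster_id: str) -> List[Tuple[str, str, str]]:
    # Keep only addresses belonging to the requested cluster.
    addrs = [inst["address"] for inst in instances if inst["cluster_id"] == cluster_id]
    rules = [("permit", src, dst) for src, dst in permutations(addrs, 2)]
    # Other clusters of the same user, and all other users, hit the deny rule.
    rules.append(("deny", "any", "any"))
    return rules

if __name__ == "__main__":
    instances = [
        {"address": "10.0.0.2", "cluster_id": "cluster-1"},
        {"address": "10.0.0.3", "cluster_id": "cluster-1"},
        {"address": "10.0.0.4", "cluster_id": "cluster-2"},
    ]
    print(build_cluster_acl(instances, "cluster-1"))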
According to the data access method provided in one embodiment, an RDMA control service assigns, based on user information and a corresponding connection relationship between a switch and a first instance defined by a user, an address segment to the first instance; receives an address assignment request sent by the instance control service based on the first instance; builds an access control list based on the address segment assigned to the first instance and identification information of a cluster to which the first instance defined by the user belongs that is carried in the address assignment request, where the access control list is used for controlling access between different first instances belonging to the same cluster; and sends the access control list to a switch control service, such that the switch control service configures the access control list for the switch. Compared with existing traditional methods, the access control list that is built can effectively control access between the different instances belonging to the same cluster, thus effectively resolving the issue of isolation of access to an RDMA network node by different instance clusters defined by the same user. In one embodiment, a data access method can be executed by any suitable device with a data processing capability, including but not limited to: a camera, a terminal, a mobile terminal, a PC, a server, an in-vehicle device, an entertainment device, an advertising device, a personal digital assistant (PDA), a tablet computer, a notebook computer, a handheld game console, smart glasses, a smart watch, a wearable device, a virtual display device or a display enhanced device (for example, Google Glass, Oculus Rift, HoloLens, or Gear VR), and the like. FIG.3Ais a schematic structural diagram of a data access system in accordance with one embodiment. In one embodiment, the data access system provided includes: an RDMA control service301configured to assign, based on user information and a corresponding connection relationship between a switch and a first instance defined by a user, an address segment to the first instance, build an access control list based on the address segment assigned to the first instance, where the access control list is used for controlling access between different first instances defined by the user, and send the access control list to a switch control service; and the switch control service that configures the access control list for the switch based on the received access control list. In one embodiment, the first instance includes an adapter, and the adapter integrates a network port. The network port is an enhanced Ethernet-based RDMA network port. In this way, by integrating an enhanced Ethernet-based RDMA network port of an instance into an adapter of the instance, it is possible to save two Ethernet switches connected to the instance. In one exemplary implementation, when the instance created by the user through the API or the console is a bare metal instance, as shown inFIG.3B, an adapter of the bare metal instance integrates an enhanced Ethernet-based RDMA network port of the bare metal instance. In this way, two uplink Ethernet switches can be saved, and the enhanced Ethernet-based RDMA network port of the bare metal instance is not only used as a transmission channel of enhanced Ethernet-based RDMA, but also used as a transmission channel of ordinary Ethernet.
In one exemplary implementation, when the instance created by the user through the API or the console is a virtual machine instance, as shown inFIG.3C, an adapter of the virtual machine instance integrates an enhanced Ethernet-based RDMA network port of the virtual machine instance, and the RDMA network port integrated into the adapter is directly connected to a virtual machine of the virtual machine instance. In this way, two uplink Ethernet switches can be saved, and the enhanced Ethernet-based RDMA network port of the virtual machine instance is not only used as a transmission channel of enhanced Ethernet-based RDMA, but also used as a transmission channel of ordinary Ethernet. In one embodiment, the data access system can be used to implement the corresponding data access methods in the foregoing method embodiments, and has the same beneficial effects as the corresponding method embodiments, which are not repeated herein. FIG.4is a schematic structural diagram of a data access apparatus in accordance with one embodiment. In one embodiment, the data access apparatus includes: an assignment module401configured to assign, based on user information and a corresponding connection relationship between a switch and a first instance defined by a user, an address segment to the first instance; a building module402configured to build an access control list based on the address segment assigned to the first instance, where the access control list is used for controlling access between different first instances defined by the user; and a first sending module403configured to send the access control list to a switch control service, such that the switch control service configures the access control list for the switch. In one embodiment, the data access apparatus can be used to implement the corresponding data access methods in the foregoing method embodiments, and has the same beneficial effects as the corresponding method embodiments, which are not repeated herein. FIG.5is a schematic structural diagram of a data access apparatus in accordance with one embodiment. In one embodiment, the data access apparatus can include: an assignment module505configured to assign, based on user information and a corresponding connection relationship between a switch and a first instance defined by a user, an address segment to the first instance; a building module508configured to build an access control list based on the address segment assigned to the first instance, where the access control list is used for controlling access between different first instances defined by the user; and a first sending module509configured to send the access control list to a switch control service, such that the switch control service configures the access control list for the switch. 
Optionally, before the assignment module505, the apparatus further includes: a first receiving module501configured to receive a multicast packet carrying information about a network port of a second instance that is sent by the second instance through the network port; a second sending module502configured to send a query request to a machine deployment control service based on the information about the network port that is carried in the multicast packet, to query for deployment information of the second instance; and a determining module503configured to receive the deployment information returned by the machine deployment control service based on the query request, and determine a corresponding connection relationship between the second instance and the switch based on the deployment information. Optionally, the apparatus further includes: a storage module504configured to store the corresponding connection relationship between the second instance and the switch into a relationship table. Optionally, the apparatus further includes: a second receiving module506configured to receive an address assignment request sent by an instance control service based on the first instance; and a query module507configured to query for a corresponding connection relationship between the first instance and the switch based on the address assignment request. Optionally, the assignment module505is specifically configured to invoke, based on the user information carried in the address assignment request and the corresponding connection relationship between the first instance and the switch that is acquired through the query, an address assignment service to assign the address segment to the first instance. Optionally, the query module507is specifically configured to: query, based on identification information of the first instance carried in the address assignment request, the relationship table storing the corresponding connection relationship between the second instance and the switch, to acquire the corresponding connection relationship between the first instance and the switch. Optionally, the building module508is specifically configured to: build the access control list based on the address segment assigned to the first instance and identification information of a cluster to which the first instance belongs that is carried in the address assignment request, where the access control list is used for controlling access between different first instances belonging to a same cluster. Optionally, after the first sending module509, the apparatus further includes: a first returning module510configured to return, to the instance control service, the address segment assigned by the address assignment service for the first instance, such that the instance control service starts the first instance to acquire from an image service an image file matching a specification of the first instance, and configures, based on the image file, the address segment assigned by the address assignment service for the first instance.
Optionally, the apparatus further includes: a second receiving module511configured to receive a second instance release request sent by the instance control service based on a first instance release request sent by a user; a release module512configured to reclaim, based on the second instance release request, an address of a to-be-released instance corresponding to instance identification information carried in the second instance release request, and configure, through the switch control service, the switch connected to the to-be-released instance such that a network port of the switch connected to the to-be-released instance is disconnected from other network ports; and a second returning module513configured to return, to the instance control service, a release result of the to-be-released instance requested by the second instance release request, such that the instance control service returns the to-be-released instance to an inventory and returns a release success message to the user. In one embodiment, the data access apparatus can be used to implement the corresponding data access methods in the foregoing method embodiments, and has the same beneficial effects as the corresponding method embodiments, which are not repeated herein. FIG.6is a schematic structural diagram of an electronic device in accordance with one embodiment. The electronic device may include: one or more processors601; and a computer readable medium602, which may be configured to store one or more programs; where when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement a data access method. The method can be similar to the methods described herein (e.g., a method similar to those ofFIG.1AandFIG.2A). FIG.7shows a hardware structure of an electronic device in accordance with one embodiment. As shown inFIG.7, the hardware structure of the electronic device may include: a processor701, a communications interface702, a computer readable medium703, and a communications bus704. The processor701, the communications interface702, and the computer readable medium703communicate with each other through the communications bus704. Optionally, the communications interface702may be an interface of a communications module, for example, an interface of a GSM module. The processor701may be configured such that an RDMA control service assigns, based on user information and a corresponding connection relationship between a switch and a first instance defined by a user, an address segment to the first instance; the RDMA control service builds an access control list based on the address segment assigned to the first instance, where the access control list is used for controlling access between different first instances defined by the user; and the RDMA control service sends the access control list to a switch control service, such that the switch control service configures the access control list for the switch. The processor701may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), or may be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which can implement or execute the methods, steps, and logical block diagrams that are disclosed herein.
The general-purpose processor may be a microprocessor or any conventional processor. The computer readable medium703may be, but is not limited to, a random access memory (Random Access Memory, RAM), a read only memory (Read Only Memory, ROM), a programmable read-only memory (Programmable Read-Only Memory, PROM), an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), and the like. In one embodiment, the procedures described above with reference to the flowcharts can be implemented as a computer software program. For example, an embodiment can include a computer program product, which includes a computer program carried on a computer readable medium, and the computer program includes program code used for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communications part, and/or installed from a removable medium. When the computer program is executed by a processor, the foregoing functions defined in the methods herein can be implemented. It should be noted that the computer readable medium described in this application may be a computer-readable signal medium or a computer readable storage medium, or any combination thereof. The computer readable medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of the computer readable medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk drive, a random access memory (RAM), a read only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage medium, a magnetic storage medium, or any suitable combination thereof. In this application, the computer readable medium may be any tangible medium that contains or stores a program for use by or in combination with an instruction execution system, apparatus, or device. However, in this application, the computer-readable signal medium may include a data signal that is propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated signal may be in a variety of forms, including but not limited to, an electromagnetic signal, an optical signal, or any suitable combination thereof. The computer-readable signal medium may also be any computer readable medium other than the computer readable storage medium, and such a medium can send, propagate, or transport a program for use by or in combination with an instruction execution system, apparatus, or device. Program code contained in the computer readable medium may be transmitted using any appropriate medium, including but not limited to a wireless connection, an electrical cable, an optical cable, RF, or any suitable combination thereof. The computer program code for performing operations in this application may be written in one or more programming languages or a combination thereof.
The programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, and also include conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on a user's computer, partly on a user's computer, as a stand-alone software package, partly on a user's computer and partly on a remote computer, or entirely on a remote computer or server. In situations involving a remote computer, the remote computer may be connected to a computer of the user through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet with the help of an Internet service provider). The flowcharts and block diagrams in the drawings illustrate exemplary implementations of system architectures, functions, and operations of systems, methods, and computer program products according to the embodiments of this application. In one embodiment, each block in the flowcharts or the block diagrams may represent a module, a program segment, or part of code, and the module, the program segment or the part of code includes one or more executable instructions configured to realize a specified logical function. Specific sequential relations are present in the foregoing embodiments, but these sequential relations are merely examples. In specific implementations, fewer or more steps may be executed, or the execution order of these steps may be adjusted. In other words, in some alternative implementations, the functions marked in the blocks may be implemented in a different sequence than marked in the accompanying drawings. For example, two blocks shown in succession may actually be executed substantially concurrently, or may sometimes be executed in the reverse sequence, depending on the functions involved. It should be further noted that each block in the block diagrams and/or the flowcharts and a combination of the blocks in the block diagrams and/or the flowcharts may be implemented by a dedicated hardware-based system for executing a specified function or operation, or may be implemented by a combination of dedicated hardware and computer instructions, or the like. In one embodiment, the modules can be implemented in software or hardware. The described modules may alternatively be provided in the processor, which may be described as, for example, a processor including an assignment module, a building module, and a first sending module. The names of these modules do not constitute a limitation on the modules themselves in some circumstances. For example, the assignment module may alternatively be described as "a module configured to assign, based on user information and a corresponding connection relationship between a switch and a first instance defined by a user, an address segment to the first instance". In one embodiment, a computer readable medium stores a computer program, and when the computer program is executed by a processor, a method similar to the methods described herein (e.g., a method similar to those ofFIG.1AandFIG.2A) can be implemented. In addition, this application further provides a computer readable medium. The computer readable medium may be included in an apparatus (e.g., similar to the apparatus described in the foregoing embodiment, etc.); or may exist alone without being assembled into the apparatus.
The computer readable medium can carry one or more programs, and when the one or more programs are executed by the apparatus, the apparatus is caused to have an RDMA control service assign an address segment to a first instance defined by a user based on user information and a corresponding connection relationship between the first instance and a switch; have the RDMA control service build an access control list based on the address segment assigned to the first instance, where the access control list is used for controlling access between different first instances defined by the user; and have the RDMA control service send the access control list to a switch control service, such that the switch control service configures the access control list for the switch. The expressions "first", "second", "the first", or "the second" used in various embodiments of this disclosure can modify various components regardless of order and/or importance, but these expressions do not limit the corresponding components. The foregoing expressions are used for the purpose of distinguishing one element from another. For example, first user equipment and second user equipment represent different user equipment, although the two are both user equipment. For example, without departing from the scope of this disclosure, a first element may be referred to as a second element, and similarly, a second element may be referred to as a first element. When an element (for example, a first element) is referred to as being "(operably or communicatively) linked to" or "(operably or communicatively) coupled to" another element (for example, a second element), or "connected to" another element (for example, a second element), it should be understood that the element may be directly connected to the other element, or may be indirectly connected to the other element through a further element (for example, a third element). On the contrary, it can be understood that when an element (for example, a first element) is referred to as being "directly connected" or "directly coupled" to another element (a second element), no element (for example, a third element) is inserted between the two elements. The above description includes a preferred embodiment and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of invention involved in this application is not limited to the technical solutions defined by the specific combinations of the above technical features, but should also cover other technical solutions defined by any combinations of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions defined by replacement between the above features and technical features having similar functions disclosed in this application (without limitation).
11863521
DESCRIPTION OF EXAMPLE EMBODIMENTS
Overview
This disclosure describes techniques for automating the provisioning of network devices by bringing the network devices up into an L2 network, converting the L2 network into an L3 network, and pushing configurations to the network devices. The techniques may include a method for automated device deployment in a hierarchical order in a network of devices. The method may include booting up a first network device and causing ports of the first network device to enter an initialization mode. In the initialization mode, the ports are unable to transmit Dynamic Host Configuration Protocol (DHCP) packets that have been generated locally on the first network device (e.g., CPU generated DHCP packets). The method may further include determining that a second network device has at least one of (i) been given a first Internet Protocol (IP) address or (ii) been configured by a controller associated with the network. In some instances, the second network device is upstream from the first network device in the network. Further, the method may include causing a first port of the ports to enter a forwarding mode in which the first port is able to transmit DHCP packets to the second network device, and transmitting, from the first network device and using the first port, one or more first DHCP packets to prompt a server to offer the first network device a second IP address. Additionally, the method may include receiving, at the first network device, one or more second DHCP packets that include the second IP address given to the first network device. In some examples, the method may further include, using the second IP address, sending, from the first network device, a request to the controller to be configured, and receiving, at the first network device, configuration data usable to configure the first network device. Additionally, the techniques described in this disclosure may be performed as a method and/or by a system having non-transitory computer-readable media storing computer-executable instructions that, when executed by one or more processors, perform the techniques described above.
Example Embodiments
This disclosure describes automated techniques for converting network devices from a Layer 2 (L2) network into a Layer 3 (L3) network in a hierarchical manner. The network devices may be configured to boot up in an automated-provisioning program where each port of the network devices is started in an initialization mode in which the ports are unable to transmit locally generated DHCP packets. When a network device detects that a neighbor device, or peer device, has acquired an IP address or has been configured by a network controller, the port on which the neighbor device is detected can then be transitioned from the initialization mode into a forwarding mode. In the forwarding mode, the port can then be used to transmit DHCP packets in order to obtain an IP address. In this way, the network devices are converted from L2 devices to L3 devices in a hierarchical order such that upstream devices are discovered and converted into L3 devices before downstream devices are discovered and converted. Generally, network devices that are configured for automated provisioning in a network have software, such as an agent, pre-installed to perform various operations for automating the provisioning of the network devices. The agent may be an embedded software component that is present in the network devices and supports simplified deployment architecture.
Traditionally, the software agents may run on the network devices in order to attempt to discover a server with which they can communicate, and once that server is found and a connection established, the software agent performs deployment related activities like configuration, image, license, and file updates by communicating with the server. The server may be a centralized server that encodes the logic of managing and distributing deployment information (images, configurations, files, and licenses) for the devices being deployed. The server communicates with the agent on the network devices that support the simplified deployment process using a specific deployment protocol. Traditionally, network devices would convert from L2 to L3 by obtaining IP addresses, and would then contact the centralized server (or network controller) to get configured. However, the controller generally needs to wait for the entire network to be discovered to complete configuration of the discovered part of the network. Additionally, new links detected afterwards often require human intervention to get configured by the controller. Thus, there is no solution where a network controller can configure the network devices it discovers (as it discovers them) and not wait for the entire network to be discovered. According to the techniques of this disclosure, the network devices that are configured with a software agent to perform the automated provisioning techniques may boot up such that all the ports on the devices are in an initialization mode. The initialization mode may be any mode in which the ports are unable to transmit locally generated (or CPU-generated) DHCP packets that are used to obtain IP addresses. Thus, rather than beginning to flood DHCP packets once a device has booted up, the network devices may place their ports in initialization mode and refrain from flooding the network with locally generated DHCP packets. Once the ports are placed in the initialization mode, a timer is started for each of the network devices and/or on each port of the network devices. While the timer is running, the network devices may use Layer 2 discovery protocols to detect neighboring devices (e.g., Link Layer Discovery Protocol (LLDP), Cisco Discovery Protocol (CDP), etc.). If networking devices detect neighbors that have IP addresses and/or have been configured by the network controller, the networking devices may transition those ports from the initialization mode to a forwarding mode and the timer will be stopped on those ports. In the forwarding mode, the ports may be used to flood packets, such as Layer 3 packets including DHCP packets. Additionally, because upstream networking devices are the first devices to be assigned IP addresses and be configured by the controller, the downstream devices will first detect upstream devices and transition those respective ports to forwarding mode. In some situations, the networking devices may not detect a neighboring device on a port before the timer has expired. That may indicate that peer devices on those ports are not configured with the software agent or are otherwise not communicating using a layer 2 discovery protocol. In such examples, the ports for those devices are transitioned from initialization mode into forwarding mode as well.
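A minimal Python sketch of the per-port logic just described, under assumed data structures: every port starts in an initialization mode, a timer runs, and a port moves to forwarding mode either when an L2-discovered neighbor already has an IP address (or has been configured by the controller) or when the timer expires without any discovery-protocol neighbor being seen. The names Port, Neighbor, and decide_port_mode are illustrative only; the block-mode case described in the next paragraph could be added as a third branch.

import time
from dataclasses import dataclass
from typing import Optional

INIT, FWD = "INIT", "FWD"

@dataclass
class Neighbor:
    speaks_l2_discovery: bool
    ip_address: Optional[str]
    configured_by_controller: bool

@dataclass
class Port:
    name: str
    mode: str = INIT
    timer_start: float = 0.0

def decide_port_mode(port: Port, neighbor: Optional[Neighbor], timeout_s: float) -> str:
    if neighbor and neighbor.speaks_l2_discovery and (
            neighbor.ip_address or neighbor.configured_by_controller):
        port.mode = FWD  # upstream device already discovered/configured
    elif neighbor is None and time.monotonic() - port.timer_start > timeout_s:
        port.mode = FWD  # no discovery-capable peer seen: forward anyway
    return port.mode

if __name__ == "__main__":
    p = Port("port-a", timer_start=time.monotonic())
    print(decide_port_mode(p, Neighbor(True, "10.0.0.1", True), timeout_s=30.0))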
In further examples, when a neighbor device is detected that can use the L2 discovery protocol, but the neighboring device is not discovered by the controller and/or does not have an IP address, the ports for those "undiscovered" devices are transitioned from the initialization mode into a block mode where locally generated DHCP packets are not transmitted on those blocked ports. Once the neighboring devices have been discovered by the controller and/or provided IP addresses, the ports may be transitioned from the block mode into the forwarding mode such that packets are sent over the ports to the now-discovered peer devices. After the automated provisioning process has been completed for the network devices, the state machines for the ports on the network devices may be destroyed and there are no longer restrictions on forwarding locally generated DHCP packets. In this way, networks may be deployed and provisioned such that network devices in the networks are brought up and discovered in a layer-by-layer, or hierarchical, manner directly as L3 links without the need for human intervention. As used herein, the term "network devices" may refer to any type of computing device such as one or more of a switch, a router, a server computer, a virtual machine, a virtual server, a gateway, a communication node, a backend node, a load balancer, and the like. Certain implementations and embodiments of the disclosure will now be described more fully below with reference to the accompanying figures, in which various aspects are shown. However, the various aspects may be implemented in many different forms and should not be construed as limited to the implementations set forth herein. The disclosure encompasses variations of the embodiments, as described herein. Like numbers refer to like elements throughout. FIG.1illustrates a system-architecture diagram100of a networked computing environment102in which switches are provisioned in an automated fashion and in a hierarchical order. Generally, the networked computing environment102may include devices that are housed or located in one or more data centers104that may be located at different physical locations. For instance, the networked computing environment102may be supported by networks of devices across data centers, in a public cloud computing platform, a private/enterprise computing platform, campus networks, and/or any other type of computing environment in which switches and/or other networking devices are deployed. The one or more data centers104may be physical facilities or buildings located across geographic areas that are designated to store networked devices that are part of the networked computing environment102. The data centers104may include various networking devices, as well as redundant or backup components and infrastructure for power supply, data communications connections, environmental controls, and various security devices. In some examples, the data centers104may include one or more virtual data centers, which are a pool or collection of cloud infrastructure resources specifically designed for enterprise needs, and/or for cloud-based service provider needs. Generally, the data centers104(physical and/or virtual) may provide basic resources such as processor (CPU), memory (RAM), storage (disk), and networking (bandwidth). However, in some examples the devices in the networked computing environment102may not be located in explicitly defined data centers104and, rather, may be located in other locations or buildings.
The networked computing environment102may include one or more networks implemented by any viable communication technology, such as wired and/or wireless modalities and/or technologies. The networked computing environment102may include any combination of Personal Area Networks (PANs), Local Area Networks (LANs), Campus Area Networks (CANs), Metropolitan Area Networks (MANs), extranets, intranets, the Internet, short-range wireless communication networks (e.g., ZigBee, Bluetooth, etc.), Wide Area Networks (WANs)—both centralized and/or distributed—and/or any combination, permutation, and/or aggregation thereof. The networked computing environment102may include devices, virtual resources, or other nodes that relay packets from one network segment to another in the computer network. In some examples, the networked computing environment102may be managed and/or controlled by a network controller106. The network controller106may comprise software, firmware, and/or hardware components that orchestrate network functions for the networked computing environment102. The network controller106serves as a centralized, programmable point of automation to manage, configure, monitor, and troubleshoot the networked computing environment102. In the illustrated example, the networked computing environment102may have at least a portion of a network being provisioned and/or deployed, such as a branch deployment of a data center104, a campus network, and/or other deployments. For instance, the networked computing environment102may be adding a data center104branch and the network controller106may be used to help automate the branch deployment. The networked computing environment102may include various types of network devices, such as switches110(1)-110(N), servers112(1)-112(N), DHCP server(s)114, routers, and/or other networking devices (where "N" is any integer greater than 1). The switches110may be any of different types of network switches that connect devices in the networked computing environment102using packet switching to receive and forward data with other devices (e.g., servers112). As shown, the networked computing environment102may include network devices that are deployed or provisioned in a hierarchical manner or order such that upstream devices are discovered and configured prior to downstream devices. As shown, several network devices, specifically, switches110(1)-110(3), have been discovered (and potentially configured) and are included in a discovered network108. Thus, the discovered network108may grow from upstream devices to downstream devices according to the techniques described herein. FIG.1depicts an example discovered-neighbor process118that includes multiple steps for a switch110(4) to discover the states of neighboring devices, and provision the switch110(4) in the networked computing environment102in a hierarchical manner. As shown, the switch110(4) may boot up in an automated-provisioning mode where software, such as a software-embedded agent, performs operations for provisioning the switch110(4). At "1," the software agent may run on the switch110(4) and may boot up ports of the switch110(4) into an initialization mode. As described herein, a port may generally refer to a communication endpoint. In terms of software, a port may be a logical construct that specifies a process or type of network service. Each port may be identified using a port number according to a transport protocol, such as the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP).
Further, each port number may be associated with an IP address of the switch110and a type of transport protocol. Each switch110may have multiple ports over which communications with different devices may be performed. Generally, when ports are in the initialization mode, they are unable to transmit locally generated DHCP packets (e.g., CPU generated). Thus, when all of the ports are booted into the initialization mode, the switches110are unable to flood or transmit any locally generated DHCP packets on any of their ports. At "2," the switch110(4) may determine that a neighbor switch110(2) has been discovered by the controller106(e.g., is in the discovered network108). For instance, the switch110(4) may determine, using an L2 discovery protocol such as LLDP or CDP, that the neighbor switch110(2) has been discovered. For instance, the L2 neighbor discovery protocol allows devices to advertise device information to their directly connected peers/neighbors. In this way, the switches110may advertise various device information to their peers/neighbors, such as an indication of IP addresses, indications that they have been configured, and so forth. Accordingly, at "2" the switch110(4) may receive data from the neighbor switch110(2) using an L2 discovery protocol that indicates the switch110(2) has been discovered and/or configured by the network controller106. At "3," the switch110(4) may transition port A from the initialization mode (INIT) into a forwarding mode (FWD) where the port A is able or allowed to transmit DHCP packets116to the switch110(2). Thus, DHCP packets116that are generated locally by the switch110(4) may be transmitted on port A, which is now in the forward mode (FWD), such that the switch110(4) is able to attempt to obtain an IP address in order to transition to layer 3. At "4," the switch110(4) may send DHCP packets116on port A to the switch110(2) due to the port A being in forward mode, such that the DHCP packets116would ultimately reach a DHCP server114and be provided an IP address. For instance, the DHCP packets116may include a DHCPDISCOVER packet as defined by RFC 1541. At "5," the switch110(4) may receive an IP address from the DHCP server114according to the DHCP standard protocol (RFC 1541) that allows the DHCP server114to distribute IP addressing and configuration information to devices in the networked computing environment102. For instance, the switch110(4) may receive a DHCPOFFER packet and/or DHCPACK packet according to the DHCP protocol that includes an IP address for the switch110(4). At "6," the switch110(4) may use the IP address to get configured by the network controller106. For instance, the switch110(4) may send a request to the network controller106for the controller106to push down configurations for the switch110(4). For instance, when the switch110(4) discovers the network controller106and establishes a connection with the network controller106, the software agent running on the switch110(4) may perform deployment related activities like configuration, image, license, and file updates by communicating with the controller106. The network controller106may be a centralized server that encodes the logic of managing and distributing deployment information (images, configurations, files, and licenses) for the devices being deployed. The network controller106communicates with the agent on the devices that support the simplified deployment process using a specific deployment protocol.
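A hedged Python sketch of steps "4" through "6" above: once a port is in forwarding mode, the switch sends locally generated DHCP discover packets, accepts the offered address, and then uses that address to request its configuration from the network controller. The functions dhcp_exchange and request_configuration are hypothetical stand-ins for the real DHCP exchange (DHCPDISCOVER/DHCPOFFER/DHCPACK) and for the controller's deployment protocol, respectively.

from typing import Dict

def dhcp_exchange(port: str) -> str:
    """Stand-in for the DISCOVER/OFFER/REQUEST/ACK exchange on a forwarding port."""
    return "10.1.2.34"  # address the DHCP server would hand out

def request_configuration(controller: str, my_ip: str) -> Dict[str, str]:
    """Stand-in for asking the controller to push configuration, image, and so on."""
    return {"hostname": "leaf-4", "image": "os-image-1", "source_ip": my_ip}

def provision_on_port(port: str, controller: str) -> Dict[str, str]:
    ip_address = dhcp_exchange(port)                        # steps "4" and "5"
    return request_configuration(controller, ip_address)    # step "6"

if __name__ == "__main__":
    print(provision_on_port("port-a", "controller.example.net"))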
In this way, the switch110(4) may obtain an IP address and become configured in a hierarchical order such that upstream switches110(1)-110(3) are discovered and configured before downstream devices such as the switch110(4). FIG.2illustrates a system-architecture diagram200of a networked computing environment102in which a switch determines that a downstream device is unable to automatically configure itself, and the switch begins forwarding traffic on a port that is connected to the downstream device. FIG.2illustrates a server-neighbor process202performed at least partly by the switch110(4). At "1," the software agent may run on the switch110(4) and may boot up ports of the switch110(4) into an initialization mode. Generally, when ports are in the initialization mode, they are unable to transmit locally generated DHCP packets (e.g., CPU generated). Thus, when all of the ports are booted into the initialization mode, the switches110are unable to flood or transmit any locally generated DHCP packets on any of their ports. At "2," the switch110(4) may start a timer that expires after a predefined period of time, and/or based on another threshold. For instance, the timer may expire after one or more timeouts for the L2 discovery protocol (e.g., 1 CDP timeout, 5 CDP timeouts, etc.). At "3," the switch110(4) may detect an end of the predefined period of time using the timer and/or detect a threshold number of CDP timeouts or LLDP timeouts (or timeouts of another protocol). At "4," the switch110(4) may determine that the device on port B of the switch110(4) is a server112(1) (or another device that is not configured with the software agent). Generally, the server112(1) may not have the software agent installed and running thereon, and thus the server112(1) may not perform the discovery techniques of the switches110. Therefore, because the server112(1) has not sent any discovery messages for the switch110(4) to detect, the switch110(4) determines that the device on port B is a server112(1) and/or another device that is not configured to perform the discovery techniques described herein. At "5," the switch110(4) may transition port B to a forward mode (FWD) based at least in part on determining that the predefined period of time has expired and/or another threshold has been met. In this way, the port B is in forwarding mode such that the switch110(4) communicates data packets to the server112(1). FIGS.3A and3Bcollectively illustrate a system-architecture diagram300of an example networked computing environment102in which a switch110(4) determines that a neighbor device has not been discovered by a controller, and blocks a port associated with that neighbor device. Once the neighbor switch has been discovered, the switch then transitions the port from being blocked to forwarding mode. The switch110(4) may perform an undiscovered-neighbor process302in which the switch110(4) determines that a neighboring device has not been discovered by the network controller106. At "1," the software agent may run on the switch110(4) and may boot up ports of the switch110(4) into an initialization mode. Generally, when ports are in the initialization mode, they are unable to transmit locally generated DHCP packets (e.g., CPU generated). Thus, when all of the ports are booted into the initialization mode, the switches110are unable to flood or transmit any locally generated DHCP packets on any of their ports. At "2," the switch110(4) may determine that the neighbor switch110(6) has not been discovered by the network controller106.
For instance, the switch110(4) may determine, using a discovery protocol such as CDP or LLDP, that there is no IP address in the discovery messages sent from the switch110(6). At "3," the switch110(4) may transition port C to a block (BLK) mode during which the switch110(4) is unable to transmit at least locally generated DHCP packets on the port C. In this way, the switch110(4) may refrain from sending DHCP packets116on port C to the switch110(6) such that downstream switches are configured and discovered in a hierarchical manner. As shown inFIG.3B, the switch110(4) may, at "4," determine that the neighbor switch110(6) has been discovered. For instance, the switch110(4) may detect a message, such as a discovery message sent using CDP, LLDP, or another L2 discovery protocol. The discovery message may include an IP address associated with the switch110(6), and/or another indication that the switch110(6) has been discovered by the network controller106and is in the discovered network108. At "5," the switch110(4) may transition port C from the block mode and into a forwarding mode such that packets are forwarded or sent to the switch110(6) on port C of the switch110(4). FIG.4illustrates an example state diagram400showing stages through which a network switch110transitions ports between initialization mode, forwarding mode, and/or blocking mode depending on the neighboring device on the respective ports. At402, the switch110may boot up such that the ports are put in an initialization mode. Generally, when ports are in the initialization mode, they are unable to transmit locally generated DHCP packets (e.g., CPU generated). Thus, when all of the ports are booted into the initialization mode, the switches110are unable to flood or transmit any locally generated DHCP packets on any of their ports. At404, the switch110may determine whether or not neighbor/peer devices have been discovered by the network controller106. For instance, the switch110may determine whether IP addresses are included in the L2 discovery messages sent by the neighboring devices. In instances where a neighbor device has been discovered by the controller106, the switch110may, at406, transition the port(s) on which the discovered device(s) are communicating to forwarding mode such that packets are forwarded to the discovered neighbor devices. Further, at408the switch110may send locally generated DHCP packets on ports that are in the forwarding mode and to the discovered devices listening on those ports. At410, the switch110may obtain an IP address from a DHCP server using the DHCP packets and the DHCP protocol. Using the IP address, the switch110may, at412, communicate with the controller106(e.g., input IP address in packets) in order to become configured. For instance, the controller may send configuration information to the switch110based on the switch110having an IP address that is usable by the switch110to communicate. In instances where neighbor device(s) have not been discovered by the network controller106, the switch110may, at414, determine whether the undiscovered devices are configured to use the discovery protocol. For instance, the switch110may determine whether L2 discovery messages are being sent from the undiscovered devices (e.g., CDP, LLDP, etc.).
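The decision points at404and414, together with the timer check at418described below, can be consolidated into a single per-port decision: a discovered neighbor yields forwarding mode, an undiscovered neighbor that still speaks the L2 discovery protocol yields blocking mode until it is discovered, and a silent peer yields forwarding mode once the timer expires. The following sketch is illustrative only; the enum values and the function signature are assumptions and not the disclosed implementation.

```python
# Illustrative sketch only: a consolidated view of the decision points of FIG.4.
from enum import Enum, auto


class PortMode(Enum):
    INIT = auto()
    FWD = auto()
    BLK = auto()


def next_port_mode(neighbor_discovered: bool,
                   neighbor_speaks_discovery_protocol: bool,
                   timer_expired: bool,
                   current: PortMode = PortMode.INIT) -> PortMode:
    """Decide the next mode for a port, mirroring decision points 404, 414, and 418:
      - discovered neighbor                       -> FWD (406)
      - undiscovered neighbor that sends CDP/LLDP -> BLK until it is discovered (416)
      - silent peer and expired timer             -> FWD (420); otherwise stay as-is."""
    if neighbor_discovered:
        return PortMode.FWD
    if neighbor_speaks_discovery_protocol:
        return PortMode.BLK
    if timer_expired:
        return PortMode.FWD
    return current


if __name__ == "__main__":
    print(next_port_mode(True, True, False))    # FWD: neighbor discovered
    print(next_port_mode(False, True, False))   # BLK: undiscovered but sends CDP/LLDP
    print(next_port_mode(False, False, True))   # FWD: silent peer and timer expired
    print(next_port_mode(False, False, False))  # INIT: keep waiting
```

The examples at the bottom of the sketch correspond, in order, to the branches at406,416, and420ofFIG.4.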
In such examples, the switch may, at416, transition the port(s) on which the undiscovered devices that can use the L2 discovery protocol are communicating to block mode such that locally generated DHCP packets116are not transmitted on those ports to the undiscovered devices. Once the devices become discovered and the IP address is in the header of the discovery messages, the switch110may then transition the blocked ports back to forwarding mode. However, if the switch110determines at414that the undiscovered devices are not configured with the discovery protocol or do not use the discovery protocol, the switch110may determine whether the timer has expired at418. The timer may have been started when the ports are started in initialization mode, and if the timer expires, then the switch110may transition the ports from initialization mode into forwarding mode at420. FIG.5illustrates a flow diagram of example methods that illustrate various aspects of the techniques of this disclosure. The logical operations described herein with respect toFIG.5may be implemented (1) as a sequence of computer-implemented acts or program modules running on a computing system and/or (2) as interconnected machine logic circuits or circuit modules within the computing system. The implementation of the various components described herein is a matter of choice dependent on the performance and other requirements of the computing system. Accordingly, the logical operations described herein are referred to variously as operations, structural devices, acts, or modules. These operations, structural devices, acts, and modules can be implemented in software, in firmware, in special purpose digital logic, or any combination thereof. It should also be appreciated that more or fewer operations might be performed than shown inFIG.5and described herein. These operations can also be performed in parallel, or in a different order than those described herein. Some or all of these operations can also be performed by components other than those specifically identified. Although the techniques described in this disclosure are described with reference to specific components, in other examples, the techniques may be implemented by fewer components, more components, different components, or any configuration of components. FIG.5illustrates a flow diagram of an example method500for transitioning ports through modes based on the states of neighboring devices such that a network is discovered in a hierarchical order. In some instances, the method500may be included in automated device deployment techniques performed in a hierarchical order in a network of devices. At502, a first network device may be booted up for provisioning or deployment in a network. The first network device may be a switch110and/or any other type of network device that is capable of automated provisioning in a network. The first network device may run a software agent that performs at least some techniques for automated provisioning. At504, the first network device may cause ports to enter an initialization mode. Generally, in the initialization mode the ports are unable to transmit locally generated Dynamic Host Configuration Protocol (DHCP) packets. At506, the first network device may determine that a second network device has at least one of (i) been given a first Internet Protocol (IP) address or (ii) been configured by a controller associated with the network. In some examples, the second network device is upstream from the first network device in the network.
For instance, the first network device may determine whether packets sent by the second network device include an IP address (e.g., management IP address). At508, the first network device may cause a first port of the ports to enter a forwarding mode in which the first port is able to transmit DHCP packets to the second network device. At510, the first network device may transmit, using the first port, one or more first DHCP packets to prompt a server to offer the first network device a second IP address. At512, the first network device may receive one or more second DHCP packets that include the second IP address given to the first network device. In some instances, the method500may further include using the IP address to send packets to a controller106to be configured according to configurations for the network. FIG.6illustrates a computer architecture diagram showing an example computer hardware architecture600for implementing a computing device that can be utilized to implement aspects of the various technologies presented herein. The computer hardware architecture600shown inFIG.6illustrates a switch (e.g., switch110) conventional server computer (e.g., server112), computing resource, network device (e.g., router, load balancer, data store, etc.), workstation, desktop computer, laptop, tablet, network appliance, e-reader, smartphone, or other computing device, and can be utilized to execute any of the software components presented herein. The computer600may, in some examples, correspond to a network device described herein, and may comprise networked devices such as servers, switches, routers, hubs, bridges, gateways, modems, repeaters, access points, etc. The computer600includes a baseboard602, or “motherboard,” which is a printed circuit board to which a multitude of components or devices can be connected by way of a system bus or other electrical communication paths. In one illustrative configuration, one or more central processing units (“CPUs”)604operate in conjunction with a chipset606. The CPUs604can be standard programmable processors that perform arithmetic and logical operations necessary for the operation of the computer600. The CPUs604perform operations by transitioning from one discrete, physical state to the next through the manipulation of switching elements that differentiate between and change these states. Switching elements generally include electronic circuits that maintain one of two binary states, such as flip-flops, and electronic circuits that provide an output state based on the logical combination of the states of one or more other switching elements, such as logic gates. These basic switching elements can be combined to create more complex logic circuits, including registers, adders-subtractors, arithmetic logic units, floating-point units, and the like. The chipset606provides an interface between the CPUs604and the remainder of the components and devices on the baseboard602. The chipset606can provide an interface to a RAM608, used as the main memory in the computer600. The chipset606can further provide an interface to a computer-readable storage medium such as a read-only memory (“ROM”)610or non-volatile RAM (“NVRAM”) for storing basic routines that help to startup the computer600and to transfer information between the various components and devices. The ROM610or NVRAM can also store other software components necessary for the operation of the computer600in accordance with the configurations described herein. 
The computer600can operate in a networked environment using logical connections to remote computing devices and computer systems through a network, such as a network624. The chipset606can include functionality for providing network connectivity through a Network Interface Controller (NIC)612, such as a gigabit Ethernet adapter. The NIC612is capable of connecting the computer600to other computing devices over a network. It should be appreciated that multiple NICs612can be present in the computer600, connecting the computer to other types of networks and remote computer systems. In some examples, the NIC612may be configured to perform at least some of the techniques described herein, such as packet redirects and/or other techniques described herein. The computer600can be connected to a storage device618that provides non-volatile storage for the computer. The storage device618can store an operating system620, programs622, and data, which have been described in greater detail herein. The storage device618can be connected to the computer600through a storage controller614connected to the chipset606. The storage device618can consist of one or more physical storage units. The storage controller614can interface with the physical storage units through a serial attached SCSI (“SAS”) interface, a serial advanced technology attachment (“SATA”) interface, a fiber channel (“FC”) interface, or other type of interface for physically connecting and transferring data between computers and physical storage units. The computer600can store data on the storage device618by transforming the physical state of the physical storage units to reflect the information being stored. The specific transformation of physical state can depend on various factors, in different embodiments of this description. Examples of such factors can include, but are not limited to, the technology used to implement the physical storage units, whether the storage device618is characterized as primary or secondary storage, and the like. For example, the computer600can store information to the storage device618by issuing instructions through the storage controller614to alter the magnetic characteristics of a particular location within a magnetic disk drive unit, the reflective or refractive characteristics of a particular location in an optical storage unit, or the electrical characteristics of a particular capacitor, transistor, or other discrete component in a solid-state storage unit. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this description. The computer600can further read information from the storage device618by detecting the physical states or characteristics of one or more particular locations within the physical storage units. In addition to the mass storage device618described above, the computer600can have access to other computer-readable storage media to store and retrieve information, such as program modules, data structures, or other data. It should be appreciated by those skilled in the art that computer-readable storage media is any available media that provides for the non-transitory storage of data and that can be accessed by the computer600. In some examples, the operations performed by the switches110and or any components included therein, may be supported by one or more devices similar to computer600. 
Stated otherwise, some or all of the operations performed by the switches110, and/or any components included therein, may be performed by one or more computer devices600. By way of example, and not limitation, computer-readable storage media can include volatile and non-volatile, removable and non-removable media implemented in any method or technology. Computer-readable storage media includes, but is not limited to, RAM, ROM, erasable programmable ROM ("EPROM"), electrically-erasable programmable ROM ("EEPROM"), flash memory or other solid-state memory technology, compact disc ROM ("CD-ROM"), digital versatile disk ("DVD"), high definition DVD ("HD-DVD"), BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information in a non-transitory fashion. As mentioned briefly above, the storage device618can store an operating system620utilized to control the operation of the computer600. According to one embodiment, the operating system comprises the LINUX operating system. According to another embodiment, the operating system comprises the WINDOWS® SERVER operating system from MICROSOFT Corporation of Redmond, Washington. According to further embodiments, the operating system can comprise the UNIX operating system or one of its variants. It should be appreciated that other operating systems can also be utilized. The storage device618can store other system or application programs and data utilized by the computer600. In one embodiment, the storage device618or other computer-readable storage media is encoded with computer-executable instructions which, when loaded into the computer600, transform the computer from a general-purpose computing system into a special-purpose computer capable of implementing the embodiments described herein. These computer-executable instructions transform the computer600by specifying how the CPUs604transition between states, as described above. According to one embodiment, the computer600has access to computer-readable storage media storing computer-executable instructions which, when executed by the computer600, perform the various processes described above with regard toFIGS.1-5. The computer600can also include computer-readable storage media having instructions stored thereupon for performing any of the other computer-implemented operations described herein. The computer600can also include one or more input/output controllers616for receiving and processing input from a number of input devices, such as a keyboard, a mouse, a touchpad, a touch screen, an electronic stylus, or other type of input device. Similarly, an input/output controller616can provide output to a display, such as a computer monitor, a flat-panel display, a digital projector, a printer, or other type of output device. It will be appreciated that the computer600might not include all of the components shown inFIG.6, can include other components that are not explicitly shown inFIG.6, or might utilize an architecture completely different than that shown inFIG.6. As described herein, the computer600may comprise one or more of a switch110or another network device (e.g., server computer, computing resource, router, etc.). The computer600may include one or more hardware processors604(processors) configured to execute one or more stored instructions. The processor(s)604may comprise one or more cores.
Further, the computer600may include one or more network interfaces configured to provide communications between the computer600and other devices, such as the communications described herein as being performed by the switches110, servers112, DHCP server114, network controller106, etc. The network interfaces may include devices configured to couple to personal area networks (PANs), wired and wireless local area networks (LANs), wired and wireless wide area networks (WANs), and so forth. For example, the network interfaces may include devices compatible with Ethernet, Wi-Fi™, and so forth. While the invention is described with respect to the specific examples, it is to be understood that the scope of the invention is not limited to these specific examples. Since other modifications and changes varied to fit particular operating requirements and environments will be apparent to those skilled in the art, the invention is not considered limited to the example chosen for purposes of disclosure, and covers all changes and modifications which do not constitute departures from the true spirit and scope of this invention. Although the application describes embodiments having specific structural features and/or methodological acts, it is to be understood that the claims are not necessarily limited to the specific features or acts described. Rather, the specific features and acts are merely illustrative of some embodiments that fall within the scope of the claims of the application.
39,826
11863522
DESCRIPTION OF EXAMPLE EMBODIMENTS Overview In a communications network, information may be transmitted through one or more network nodes before arriving at its destination. In some networks, the Border Gateway Protocol (BGP) routing protocol is utilized between certain network nodes to transmit information. More specifically, BGP is used to exchange routing and reachability information among autonomous systems (AS) on a network such as the Internet. In some situations, certain network nodes in a BGP network may become compromised. For example, an attacker may gain control of a node and direct traffic from the node to the attacker's computing device. In the event the attacker gains access to one or more network nodes, the attacker may tamper with the sensitive information transmitted through the compromised node. Example Embodiments To address these and other problems in networks that utilize BGP, embodiments of the disclosure provide apparatuses, systems, methods, and computer-readable media for applying attestation to BGP. In some embodiments, the attestation that is applied to BGP includes a token which may allow external entities to validate freshness of asserted data based on the state of internal counters within a Trusted Platform Module (TPM). The token or signed measurement may be referred to as a canary stamp (or simply "Stamp") since a token or signed measurement may provide authenticity similar to a stamp and may be used as an early indicator of trouble. In some embodiments, the attestation is applied to a BGP keepalive message by appending the canary stamp to the end of a typical BGP keepalive message. In some embodiments, the attestation is applied to a BGP update message by appending the canary stamp to a new attribute of a BGP update message. In both cases, the canary stamps may be transmitted to other network entities where they may be analyzed in order to determine whether the attesting node has been compromised. The advantages and features of certain embodiments are discussed in more detail below in reference toFIGS.1-5.FIG.1illustrates a network utilizing the Border Gateway Protocol (BGP) routing protocol.FIGS.2A-2Dillustrate example BGP signaling messages.FIGS.3A and3Billustrate example BGP signaling messages with added attestation.FIG.4illustrates an example method for applying attestation to BGP.FIG.5illustrates an example computer system. FIG.1illustrates a network100that utilizes BGP, according to certain embodiments. Network100includes multiple network elements120(e.g.,120a-h) and multiple routing domains or autonomous systems (e.g., ASes110a-d). ASes110a-dare interconnected by edge network nodes120(e.g.,120a-cand120f-g). Network elements120may be switches, routers, or any other appropriate network elements. ASes110are illustratively interconnected by edge network elements120a-cand120f-gvia point-to-point communication links140, such as frame relay links, asynchronous transfer mode links or other serial links. The edge network elements120a-cof AS110aare illustratively coupled to network elements120d-evia subnetworks, such as local area networks130. Communication among network elements120is typically effected by exchanging discrete data packets or messages in accordance with predefined protocols, such as the Transmission Control Protocol/Internet Protocol (TCP/IP). It will be understood by those skilled in the art that other protocols, such as the Internet Packet Exchange (IPX) protocol, may be advantageously used with the disclosed embodiments.
Routing decisions within each AS110may rely on a predetermined "interior" gateway routing protocol (IGP). An example of an IGP is a conventional link-state protocol, such as the Open Shortest Path First (OSPF) or Intermediate-System-to-Intermediate-System (ISIS) protocol. In addition, routing information may be exchanged among the ASes110a-dusing an "exterior" gateway protocol (EGP), such as BGP. To that end, the BGP-enabled network elements120(i.e., "BGP speakers") exchange routing information with other BGP speakers that are not in the same AS110(e.g.,120f-h) using an external form of BGP (eBGP), while BGP-enabled network elements120a-cwithin the same AS110exchange routing information with each other using an internal form of BGP (iBGP). BGP-enabled network elements120communicate information to other BGP-enabled network elements120using a BGP signaling message150. BGP signaling messages150may include an attestation160, as described in more detail below. In general, BGP-enabled network elements120(e.g., network elements120a-cand120f-hinFIG.1) apply attestation160to signaling messages150that are transmitted to other BGP-enabled network elements120(e.g., external BGP peers). Attestation160provides verifiable evidence of the trustworthiness of network elements120, thereby enabling external devices to ascertain if any network element120has been compromised (e.g., hacked or captured). This increases the security of network100and reduces or eliminates the possibility of sensitive information being stolen. FIGS.2A-2Dillustrate typical BGP signaling messages150a-d, according to certain embodiments. InFIG.2A, a BGP open message150ais illustrated. BGP open message150ahas a type code of "1". BGP open message150ais the first message that is sent by both peers (e.g., network elements120cand120f) after a connection (e.g., a TCP connection) has been established. If BGP open message150ais acceptable to a network element120, a BGP keepalive message150dis sent to confirm the BGP open message150a. BGP keepalive message150d, BGP update message150b, and BGP notification message150ccan be exchanged only after BGP open message150ahas been confirmed and the BGP connection has been established. As illustrated inFIG.2A, BGP open message150aincludes the following fields: a marker field (16 octets), a length field (2 octets), a type field (1 octet), a version field (1 octet), an AS field (2 octets), a hold time field (2 octets), a BGP ID field (4 octets), an optional length field (1 octet), and an optional field (7 octets). InFIG.2B, a BGP update message150bis illustrated. BGP update message150bhas a type code of "2". BGP update message150bis exchanged between network elements120to communicate incremental changes in a routing table. BGP update message150bincludes the following fields: a marker field (16 octets), a length field (2 octets), a type field (1 octet), an unfeasible routes length field (2 octets), a withdrawn routes field (variable), an attribute length field (2 octets), an attributes field (variable), and an NLRI field (variable). As discussed in more detail below in reference toFIG.3A, the attributes field of BGP update message150bmay be used to transport attestation160in certain embodiments.
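The generic UPDATE layout just described (common header, unfeasible-routes length, withdrawn routes, path-attribute length, attributes, and NLRI) can be assembled mechanically. The following is a minimal, illustrative sketch of that layout for an otherwise empty UPDATE; it is not the disclosed implementation, and field sizes follow the description above.

```python
# Illustrative sketch only: builds the generic BGP UPDATE layout described above.
import struct

BGP_HEADER_LEN = 19          # 16-octet marker + 2-octet length + 1-octet type
BGP_TYPE_UPDATE = 2


def build_bgp_update(withdrawn_routes: bytes = b"",
                     path_attributes: bytes = b"",
                     nlri: bytes = b"") -> bytes:
    """Assemble a BGP UPDATE: header, unfeasible-routes length plus withdrawn routes,
    total path-attribute length plus attributes, then the NLRI."""
    body = (struct.pack("!H", len(withdrawn_routes)) + withdrawn_routes +
            struct.pack("!H", len(path_attributes)) + path_attributes +
            nlri)
    marker = b"\xff" * 16                       # all-ones marker
    length = BGP_HEADER_LEN + len(body)         # total message length in octets
    header = marker + struct.pack("!HB", length, BGP_TYPE_UPDATE)
    return header + body


if __name__ == "__main__":
    msg = build_bgp_update()
    print(len(msg), msg[16:19].hex())   # 23 octets for an empty UPDATE; length and type bytes
```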
InFIG.2C, a BGP notification message150cis illustrated. BGP notification message150chas a type code of "3". BGP notification message150cis sent when an error or a special condition is detected. The BGP connection is terminated immediately when a BGP notification message150cis sent. BGP notification message150cincludes the following fields: a marker field (16 octets), a length field (2 octets), a type field (1 octet), an error code field (1 octet), an error sub-code field (1 octet), and a diagnostic data field (variable). InFIG.2D, a BGP keepalive message150dis illustrated. BGP keepalive message150dhas a type code of "4". BGP keepalive message150dmaintains the BGP connection between two BGP peers. BGP keepalive message150dis exchanged on a period of one-third of the hold time, but not less than one second (60 seconds by default). BGP keepalive message150dincludes the following fields: a marker field (16 octets), a length field (2 octets), and a type field (1 octet). As discussed in more detail below in reference toFIG.3B, BGP keepalive message150dmay include additional fields that may be used to transport attestation160in certain embodiments. FIGS.3A and3Billustrate novel BGP signaling messages150that may be used to apply attestation to BGP.FIG.3Aillustrates a BGP update message150bwith added attestation160, andFIG.3Billustrates a BGP keepalive message150dwith added attestation160, according to certain embodiments. As illustrated inFIG.3A, attestation160may be added to a typical BGP update message150busing the attributes field of BGP update message150b. As illustrated inFIG.3B, attestation160may be added to a typical BGP keepalive message150dby appending attestation160to the end of a BGP keepalive message150d. In general, BGP update message150bwith added attestation160and BGP keepalive message150dwith added attestation160may be utilized by network elements120(e.g., BGP-enabled network elements120a-c) to apply attestation to BGP. In some embodiments, the attestation160that is applied to BGP includes a token which may allow external entities to validate freshness of asserted data based on the state of internal counters within a TPM. In some embodiments, the attestation is provided by a TPM. Dedicated crypto-processors, such as a TPM, may take measurements necessary to attest the identity of a device and running binaries on the device. These measurements may include evidence that the device is in a known safe state. However, a receiver must be able to certify the evidence as fresh. Without a guarantee of freshness, an attacker may have an opening to inject previously recorded measurements, asserting what is replayed as being current. When sensitive information is being transmitted to a destination device through a network, network traffic should not be sent through compromised network nodes (e.g., hacked or captured nodes) to prevent leakage of or tampering with the sensitive information. However, traditional protections and link encryption are ineffectual to ensure that each router in the end-to-end path is not compromised, especially when an attacker gains root access to a device via some exploit.
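As a concrete sketch of the message-level change described above with reference toFIG.3B, a standard 19-octet keepalive can carry an appended canary stamp, with the length field grown by the token length (as noted later in the discussion ofFIG.3B). The stamp bytes below are placeholders, and this is an illustrative encoding rather than a standards-defined one.

```python
# Illustrative sketch only: the stamp is a placeholder; follows the FIG.3B description.
import struct

BGP_TYPE_KEEPALIVE = 4


def build_keepalive_with_stamp(attestation: bytes = b"") -> bytes:
    """A standard KEEPALIVE is 19 octets (marker, length, type); when an attestation
    token is appended, the length field becomes 19 plus the token length."""
    length = 19 + len(attestation)
    return b"\xff" * 16 + struct.pack("!HB", length, BGP_TYPE_KEEPALIVE) + attestation


def extract_stamp(message: bytes) -> bytes:
    """Receiver side: anything beyond the 19-octet base KEEPALIVE is read back as the
    appended attestation, using the length field to delimit it."""
    (length,) = struct.unpack("!H", message[16:18])
    return message[19:length]


if __name__ == "__main__":
    stamp = b"example-canary-stamp"            # placeholder token bytes
    msg = build_keepalive_with_stamp(stamp)
    assert extract_stamp(msg) == stamp
    print(len(msg))                             # 19 + len(stamp)
```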
In particular embodiments, a first network node (e.g., a network element120acting as a BGP speaker such as edge network element120f) may be configured to communicate using the BGP routing protocol. The first network node may receive a BGP signaling message (e.g., a BGP update message150bor a BGP keepalive message150d) that includes an attestation token160from a second network node (e.g., edge network element120a). The attestation token160may be for proving that the second network node is in a known safe state. The first network node may determine that the attestation token is valid for the second network node at a current time. The first network node may compute a trust level for the second network node based at least on the received attestation token160. The trust level for the second network node may be used by the network nodes in the network to compute a routing table of the network. For example, if an attestation token160for a particular network node indicates that the particular network node has been compromised, the routing table of the network may be updated to avoid sending traffic through the compromised node. As described herein, verifiable evidence of device tampering (e.g., canary stamps) may be appended to interactions based on existing communication protocols. This may give evidence receivers the option of evaluating trustworthiness of the network device and reacting accordingly. For example, the evidence receiver may determine that it no longer trusts the network device and adjusts network policy to mitigate possible damage or potential security threats. Dedicated crypto-processors such as a TPM may take necessary measurements to attest the identity of a device and its running binaries. These measurements may include detecting evidence which indicates that the device is in a known safe state. However, a receiver may need to certify the evidence as fresh because, without a guarantee of freshness, an attacker may inject previously recorded measurements to make the receiver assert what is replayed as being current. Traditional systems and methods may identify or detect the replaying of old evidence via a nonce. For example, a nonce could be a random number provided by the entity making the request. This nonce may be passed into the TPM which may generate results including a signature based on the nonce which could not have been generated without providing that nonce. However, the nonce-based method may rely on the transactional challenge/response interaction model. In other words, the nonce-based method may not work with unidirectional communications originating from the attesting device. For example, a nonce may not work with an asynchronous push, multicast, broadcast messages, etc. Particular embodiments of this disclosure may perform a unidirectional attestation which is applicable to, for example, an asynchronous push, multicast, broadcast messages, etc., for the authentication of the corresponding devices in conjunction with corresponding binaries. Particular embodiments may enable a communication platform to assess whether the peer platforms are trustworthy. For example, the system may perform a detection of invalid attestations that can trigger alarms/events, reduction of network access from a suspect device, or can become a part of Admission Control (e.g., IEEE 802.1X). The communication platforms may be capable of supporting the unidirectional attestation mechanism. As an alternative approach for attesting freshness, particular embodiments of the system may generate a token which may allow external entities to validate freshness of asserted data based on the state of internal counters within the TPM. The token may allow the replay attacks to be detected without a nonce and make attestation possible for asynchronous push, multicast, broadcast, etc. The token or signed measurement may be referred to as a canary stamp since a token or signed measurement may provide authenticity like a stamp and may be used as an indicator of an early sign of trouble.
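Returning to the routing-table example near the beginning of this passage, one way such a trust level could influence forwarding is by excluding routes whose next hop has failed validation. The sketch below is illustrative only; the data model (routes as prefix/next-hop pairs and a per-node trust mapping) is an assumption made for this example and is not the disclosed implementation.

```python
# Illustrative sketch only: excludes next hops whose attestation indicates compromise.
from typing import Dict, List, Tuple

Route = Tuple[str, str]   # (destination prefix, next-hop node id)

TRUST_MIN = 0
TRUST_MAX = 100


def usable_routes(candidate_routes: List[Route],
                  trust_levels: Dict[str, int],
                  threshold: int = TRUST_MIN + 1) -> List[Route]:
    """Keep only routes whose next hop has a trust level at or above the threshold,
    so traffic is not forwarded through nodes whose stamps failed validation."""
    return [(prefix, nh) for (prefix, nh) in candidate_routes
            if trust_levels.get(nh, TRUST_MIN) >= threshold]


if __name__ == "__main__":
    routes = [("203.0.113.0/24", "120a"), ("203.0.113.0/24", "120b")]
    trust = {"120a": TRUST_MIN, "120b": TRUST_MAX}   # 120a failed validation
    print(usable_routes(routes, trust))              # only the path via 120b remains
```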
Particular embodiments of the system may combine the token or signed measurement with TPM-integrated capabilities aimed at verifying that valid binary processes are running. The TPM-integrated capabilities may include, for example, but are not limited to, trusted execution environments (TEE) which may provide runtime malware protections and authenticated code modules (ACM) which may ensure that only digitally signed code modules can be loaded into a CPU. Particular embodiments of this disclosure may be implemented within a communication platform (e.g., a proprietary platform) and/or across multiple communication platforms (e.g., proprietary platforms and third-party platforms). Particular embodiments of the system provide an advantageous technical solution for communication platforms to attest authenticity and allow a common unidirectional attestation framework to be applied across existing networking hardware as well as virtual routers. Particular embodiments of this disclosure may be applicable to, for example, but not limited to, Cisco Secure Boot, Linux Integrity Measurement Architecture (IMA), Intel's Trusted Execution Technology (TXT), etc., and may enable these platforms to validate that a processor is running known software with a valid chain of binary signatures. Particular embodiments of the system may provide defining requirements for the placement of different types of signed measurements (e.g., token or stamps) while allowing receivers to evaluate potential trustworthiness of attested information. Particular embodiments of the system may support controller-based evaluation of signed measurement (e.g., token or stamps) which includes subscription-based mechanisms to incrementally push information/evidence to be verified and/or beachhead use cases and platforms. TPM functionality may be embedded in a wide variety of devices including mobile phones, PCs, routers, etc. While traditional TPM methods may enable a device to prove freshness in a response to a request, these methods may not support unidirectional attestation. Particular embodiments of this disclosure may provide mechanisms for verifiable unidirectional attestation by creating and distributing a token. This token may link counters on an attesting device with one or more globally verifiable characteristics or parameters (e.g., a counter on a controller, a RADIUS server, or a time authority). Upon its creation, the token may be distributed freely to any number of receivers/verifiers. Upon receiving the token, a receiver may accept subsequently attested information (e.g., stamps) from a remote TPM and attest asynchronous push, multicast, or broadcast communications of a device. It is notable that, in this disclosure, the term "TPM" may be used as an umbrella term for the necessary functionality. The mechanisms described may be supported by proprietary hardware and other hardware supporting the TPMv2 specification. In particular embodiments, the system may create the initial token by extracting current counters from an attestee's TPM (e.g., either the local network element120or another network element120), and hashing it with information from an external TPM. As a result, the system may generate a non-spoofable token which binds continuously incrementing counters on an attestee with some known external state. In particular embodiments, any resetting of the TPM counters may be visible in any subsequent TPM queries. Any restarting of the platform may be exposed in subsequent TPM queries.
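The token creation just described, binding the attestee's counters to externally verifiable state, can be sketched in simplified form. In the illustrative sketch below, a plain hash from hashlib stands in for TPM-signed data, and the counter names and external-state string are assumptions; a real implementation would rely on the TPM itself rather than this stand-in.

```python
# Illustrative sketch only: hashlib stands in for TPM-signed quotes; names are assumed.
import hashlib
import json
from dataclasses import dataclass, asdict


@dataclass
class TpmCounters:
    reset: int       # increments on TPM reset
    restart: int     # increments on platform restart
    time_tick: int   # continuously incrementing time counter


def create_canary_stamp(local: TpmCounters, external_state: str) -> str:
    """Bind the attestee's continuously incrementing counters to known external
    state (e.g., a counter from a controller, RADIUS server, or time authority),
    so the resulting token cannot be precomputed or replayed as current."""
    payload = {"local": asdict(local), "external": external_state}
    blob = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()


if __name__ == "__main__":
    counters = TpmCounters(reset=3, restart=12, time_tick=884211)
    print(create_canary_stamp(counters, external_state="controller-counter:42"))
```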
Within these bounds of reset and restart, the TPM's counter time-tick may keep continuous increments. Therefore, the push of attestee TPM information which includes these three counters may be known to have occurred subsequently to any previously received measurement. On the other hand, if the reset and restart counters have not changed, the incremental time since any previous measurement may also be known. In particular embodiments, the system may validate device information asserted from outside the TPM's platform configuration registers (PCRs). The majority of information needing to be trusted by network peers may not be contained within the TPM's PCRs. Particular embodiments of the system may provide indirect methods of validating that a device has not been compromised based on the data or processes of existing systems or platforms. Particular embodiments of the system may provide integration solutions with both STO's integrity verification (IV) solution and, where applicable, integrity measurement architecture (IMA). The system may provide combination solutions that enable validating that acceptable binaries are currently loaded on a peer communication system or platform. Particular embodiments of the system may allow the receiver to receive stamps and verify the information without supplementary evidence being sent with the stamp. For non-controller-based implementations, the system may not require that the verification steps occur on the receiver. A network may only be as secure as its weakest links. Information sent from a first device to a second device on the network may pass through multiple intermediary nodes or devices (e.g., routers, network controllers, etc.) before it reaches the target device. It is vitally important that said information, especially when it includes sensitive material, should not be sent through compromised network nodes (e.g., hacked or captured nodes) to prevent leakage of or tampering with the sensitive information. However, as network size and complexity increase, the potential number of attack vectors for an attacker to exploit also grows. It may be difficult to determine with certainty whether each individual node through which an arbitrary piece of information may pass is secured without having a dramatic effect on the performance of the network. Moreover, if an attacker gains root access to a device (e.g., via some previously undetected exploit), traditional protections and link (e.g., in-transit) encryption may prove ineffectual at protecting any sensitive information. Particular embodiments may apply attestation in the context of security management at a network level to determine in real-time whether a node in a network should be trusted. This disclosure introduces an asynchronous, unidirectional time-based variant of attestation that may allow other nodes in a network to reliably ascertain if a source of routing information has been compromised. As previously discussed, the token used in this variant of attestation may be referred to as a "canary stamp" as it positively marks data as it transitions through the network and can indicate on a front-line basis whether any security problems may exist within the network or within a given node. In particular embodiments, a network element120may be configured to validate attestation160received from other network elements120. The receiving network element120may be further configured to take action based on the status of the validation according to a specified policy provided to the network node.
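One part of that validation, the counter-based freshness reasoning described at the beginning of this passage, can be sketched as a simple comparison: within unchanged reset and restart counters, a larger time-tick shows that the measurement was produced after the previously received one. The field names below are assumptions, and the check is a simplification of the bounds discussed above.

```python
# Illustrative sketch only: a simplified freshness check over the three counters.
from dataclasses import dataclass


@dataclass
class StampCounters:
    reset: int
    restart: int
    time_tick: int


def is_subsequent(previous: StampCounters, received: StampCounters) -> bool:
    """Accept the received measurement as newer than the previous one only if the
    TPM has not been reset or restarted and its time-tick counter has advanced."""
    return (received.reset == previous.reset
            and received.restart == previous.restart
            and received.time_tick > previous.time_tick)


if __name__ == "__main__":
    prev = StampCounters(reset=3, restart=12, time_tick=884211)
    print(is_subsequent(prev, StampCounters(3, 12, 884500)))   # True: fresh
    print(is_subsequent(prev, StampCounters(3, 12, 880000)))   # False: replayed or older
    print(is_subsequent(prev, StampCounters(4, 12, 100)))      # False: a reset occurred
```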
As an example, network element120fmay validate a canary stamp received in a BGP signaling message150from a neighboring network element120a. The canary stamp may fail the validation and network element120fmay determine to refuse to acknowledge the neighboring network element120a, as it has determined that it is not trustworthy (e.g., it may be executing unexpected or unsigned code, or it may otherwise show evidence of tampering). Network element120fmay further advertise that it has determined that the neighboring network element120ais not trustworthy. This may reduce the likelihood of other nodes in the network sending sensitive information to the untrustworthy neighboring network element120a. In particular embodiments, a network element120such as network element120fmay compute a trust level for a network element120based on a received attestation token. As an example and not by way of limitation, network element120fmay set a maximum value to the trust level for network element120aif it determined that the attestation token for network element120ais valid at the current time. As another example and not by way of limitation, network element120fmay set a minimum value to the trust level for network element120aif it determined that the attestation token for network element120ais not valid at the current time. Although this disclosure describes computing a trust level in a particular manner, this disclosure contemplates computing a trust level for a link in any suitable manner. FIG.4illustrates an example method400for applying attestation to BGP. In some embodiments, method400may be performed by any apparatus of a BGP network such as network element120. Method400may begin at step410where an attestation token for the apparatus is received or otherwise accessed. In some embodiments, the attestation token may be attestation160as described herein. In some embodiments, the attestation token is generated by a crypto-processor of the apparatus and is accessed in step410from one or more memory devices of the apparatus. At step420, method400encodes the attestation token accessed in step410in a BGP signaling message. In some embodiments, the BGP signaling message is a BGP keepalive message and step420includes appending the attestation token to the BGP keepalive message after a type field of the BGP keepalive message. In some embodiments, the BGP signaling message is a BGP update message and step420includes appending the attestation token to the BGP update message in an attribute field of the BGP update message. At step430, method400sends the BGP signaling message with the encoded attestation token to a second apparatus of the BGP network. The second apparatus may be any appropriate BGP-enabled network element such as an external peer network element. After step430, method400may end. In some embodiments, a new BGP trust capability (i.e., capability for supporting attestation160) is negotiated in a BGP open message150aduring session establishment and prior to step410above. Based on local router policy, the BGP session may be terminated with a proper notification message if the trust capability is not properly received in an open message. If both peers negotiate the new BGP trust capability for attestation160, the attestation160provided by the trusted computing module of the local router may be appended to a BGP signaling message (e.g., a BGP periodic keepalive message150d) as described in method400above.
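Returning to the trust-level example earlier in this passage, the rule there (maximum trust when the attestation token is valid at the current time, minimum trust otherwise) can be written as a one-line mapping. The numeric scale and the validator hook in the sketch below are assumptions made for illustration, not part of the disclosure.

```python
# Illustrative sketch only: maps a token-validity result to a trust level.
import time
from typing import Callable

TRUST_MIN = 0
TRUST_MAX = 100


def compute_trust_level(attestation_token: bytes,
                        node_id: str,
                        now: float,
                        is_valid: Callable[[bytes, str, float], bool]) -> int:
    """Maximum trust if the peer's token validates at the current time, minimum otherwise."""
    return TRUST_MAX if is_valid(attestation_token, node_id, now) else TRUST_MIN


if __name__ == "__main__":
    def always_valid(token: bytes, node: str, now: float) -> bool:
        return True     # stand-in validator for a token that passes all checks

    def never_valid(token: bytes, node: str, now: float) -> bool:
        return False    # stand-in validator for a token that fails validation

    print(compute_trust_level(b"stamp", "120a", time.time(), always_valid))  # 100
    print(compute_trust_level(b"stamp", "120a", time.time(), never_valid))   # 0
```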
As illustrated inFIG.3B, the length of the keepalive messages may be modified to be 19 octets plus the length of attestation160. In some embodiments, the new BGP trust capability is sent in an optional capability parameter of a BGP open message. In these embodiments, the capability code of the open message may be assigned by IANA. In some embodiments, the capability length may be 1 octet. In some embodiments, a new BGP trust attribute may be defined in order to apply attestation to BGP. In general, BGP update messages150bcarry path attributes followed by a list of prefixes sharing those attributes. A new BGP attribute may be defined in order to carry canary stamps. In these embodiments, the type code may be assigned by IANA and the value is a set of Type-Length-Value (TLV) triplets. In a first example, a router ID type for trusted peering may include a value that is the router ID of the speaker followed by attestation160. In this example, a first router and a second router attest and validate the trustworthiness of updates by attaching canary stamps in all updates in the router ID TLV of the BGP trust attribute. In a second example, a next hop type TLV/MP reach type TLV is used for a trusted next hop. In this example, the value is the router ID of the speaker updating the next hop followed by attestation160. If a first router sets the next hop to itself, the first router updates the canary stamp in the next hop TLV of the BGP trust attribute and a second router validates the canary stamp if it uses the next hop in forwarding. The second router does not update the canary stamp of the next hop TLV if it is not modifying the next hop. A third router executes the same procedure as the second router. In a third example, a cluster list type TLV is used for trusted reflection. In this example, the value is an ordered list (router IDs, canary stamp). The order matches the cluster list attribute prepended by various speakers during reflection. In some embodiments, after step430above, the receiver of the BGP signaling message transmitted in step430of method400(e.g., BGP periodic keepalive message150d) may read the appended attestation160based on the length and then validate the attestation160with its trusted computing module in order to ascertain the trustworthiness of the peer that transmitted the BGP signaling message. If the peer is found to be compromised, then based on the local policy, the BGP session can be terminated with a proper notification message. In some embodiments, method400may additionally include attaching, during a typical BGP update generation, the attestation160provided by the trusted computing module of the local router as a BGP trust attribute and advertising the attestation in a prefix advertisement. In these embodiments, the receiving network element speaker may first recognize the BGP trust attribute and then validate the attestation160with its trusted computing module in order to ascertain the trustworthiness of the advertisements. In addition, the BGP best path selection may be modified to prefer the trusted path and further advertisement to peers. Particular embodiments may repeat one or more steps of the method ofFIG.4, where appropriate. Although this disclosure describes and illustrates particular steps of the method ofFIG.4as occurring in a particular order, this disclosure contemplates any suitable steps of the method ofFIG.4occurring in any suitable order.
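The Type-Length-Value triplets of the trust attribute described above, for example the router-ID TLV whose value is the router ID of the speaker followed by its canary stamp, can be serialized as in the following sketch. The type codes and the 2-octet length are assumptions made for illustration (the disclosure notes that codes would be assigned by IANA), and the stamp is a placeholder.

```python
# Illustrative sketch only: placeholder type codes; value layout follows the router-ID TLV example.
import socket
import struct

TLV_TYPE_ROUTER_ID = 1      # hypothetical code for the router-ID (trusted peering) TLV
TLV_TYPE_NEXT_HOP = 2       # hypothetical code for the trusted next-hop TLV


def encode_tlv(tlv_type: int, value: bytes) -> bytes:
    """Type (1 octet), Length (2 octets), Value."""
    return struct.pack("!BH", tlv_type, len(value)) + value


def router_id_tlv(router_id: str, attestation: bytes) -> bytes:
    """Value = 4-octet router ID of the speaker followed by its canary stamp."""
    return encode_tlv(TLV_TYPE_ROUTER_ID, socket.inet_aton(router_id) + attestation)


if __name__ == "__main__":
    trust_attr_value = router_id_tlv("192.0.2.1", b"example-canary-stamp")
    print(trust_attr_value.hex())
```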
Moreover, although this disclosure describes and illustrates an example method for applying attestation to BGP including the particular steps of the method ofFIG.4, this disclosure contemplates any suitable method for applying attestation to BGP including any suitable steps, which may include all, some, or none of the steps of the method ofFIG.4, where appropriate. Furthermore, although this disclosure describes and illustrates particular components, devices, or systems carrying out particular steps of the method ofFIG.4, this disclosure contemplates any suitable combination of any suitable components, devices, or systems carrying out any suitable steps of the method ofFIG.4. FIG.5illustrates an example computer system500. In particular embodiments, one or more computer systems500perform one or more steps of one or more methods described or illustrated herein. In particular embodiments, one or more computer systems500provide functionality described or illustrated herein. In particular embodiments, software running on one or more computer systems500performs one or more steps of one or more methods described or illustrated herein or provides functionality described or illustrated herein. Particular embodiments include one or more portions of one or more computer systems500. Herein, reference to a computer system may encompass a computing device, and vice versa, where appropriate. Moreover, reference to a computer system may encompass one or more computer systems, where appropriate. This disclosure contemplates any suitable number of computer systems500. This disclosure contemplates computer system500taking any suitable physical form. As example and not by way of limitation, computer system500may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, an augmented/virtual reality device, or a combination of two or more of these. Where appropriate, computer system500may include one or more computer systems500; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems500may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. As an example and not by way of limitation, one or more computer systems500may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems500may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate. In particular embodiments, computer system500includes a processor502, memory504, storage506, an input/output (I/O) interface508, a communication interface510, and a bus512. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement. 
In particular embodiments, processor502includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, processor502may retrieve (or fetch) the instructions from an internal register, an internal cache, memory504, or storage506; decode and execute them; and then write one or more results to an internal register, an internal cache, memory504, or storage506. In particular embodiments, processor502may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates processor502including any suitable number of any suitable internal caches, where appropriate. As an example and not by way of limitation, processor502may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory504or storage506, and the instruction caches may speed up retrieval of those instructions by processor502. Data in the data caches may be copies of data in memory504or storage506for instructions executing at processor502to operate on; the results of previous instructions executed at processor502for access by subsequent instructions executing at processor502or for writing to memory504or storage506; or other suitable data. The data caches may speed up read or write operations by processor502. The TLBs may speed up virtual-address translation for processor502. In particular embodiments, processor502may include one or more internal registers for data, instructions, or addresses. This disclosure contemplates processor502including any suitable number of any suitable internal registers, where appropriate. Where appropriate, processor502may include one or more arithmetic logic units (ALUs); be a multi-core processor; or include one or more processors502. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor. In particular embodiments, memory504includes main memory for storing instructions for processor502to execute or data for processor502to operate on. As an example and not by way of limitation, computer system500may load instructions from storage506or another source (such as, for example, another computer system500) to memory504. Processor502may then load the instructions from memory504to an internal register or internal cache. To execute the instructions, processor502may retrieve the instructions from the internal register or internal cache and decode them. During or after execution of the instructions, processor502may write one or more results (which may be intermediate or final results) to the internal register or internal cache. Processor502may then write one or more of those results to memory504. In particular embodiments, processor502executes only instructions in one or more internal registers or internal caches or in memory504(as opposed to storage506or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory504(as opposed to storage506or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple processor502to memory504. Bus512may include one or more memory buses, as described below. In particular embodiments, one or more memory management units (MMUs) reside between processor502and memory504and facilitate accesses to memory504requested by processor502. 
In particular embodiments, memory504includes random access memory (RAM). This RAM may be volatile memory, where appropriate. Where appropriate, this RAM may be dynamic RAM (DRAM) or static RAM (SRAM). Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. This disclosure contemplates any suitable RAM. Memory504may include one or more memories504, where appropriate. Although this disclosure describes and illustrates particular memory, this disclosure contemplates any suitable memory. In particular embodiments, storage506includes mass storage for data or instructions. As an example and not by way of limitation, storage506may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. Storage506may include removable or non-removable (or fixed) media, where appropriate. Storage506may be internal or external to computer system500, where appropriate. In particular embodiments, storage506is non-volatile, solid-state memory. In particular embodiments, storage506includes read-only memory (ROM). Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these. This disclosure contemplates mass storage506taking any suitable physical form. Storage506may include one or more storage control units facilitating communication between processor502and storage506, where appropriate. Where appropriate, storage506may include one or more storages506. Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage. In particular embodiments, I/O interface508includes hardware, software, or both, providing one or more interfaces for communication between computer system500and one or more I/O devices. Computer system500may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and computer system500. As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device or a combination of two or more of these I/O devices. An I/O device may include one or more sensors. This disclosure contemplates any suitable I/O devices and any suitable I/O interfaces508for them. Where appropriate, I/O interface508may include one or more device or software drivers enabling processor502to drive one or more of these I/O devices. I/O interface508may include one or more I/O interfaces508, where appropriate. Although this disclosure describes and illustrates a particular I/O interface, this disclosure contemplates any suitable I/O interface. In particular embodiments, communication interface510includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system500and one or more other computer systems500or one or more networks.
As an example and not by way of limitation, communication interface510may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. This disclosure contemplates any suitable network and any suitable communication interface510for it. As an example and not by way of limitation, computer system500may communicate with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, computer system500may communicate with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network, a Long-Term Evolution (LTE) network, or a 5G network), or other suitable wireless network or a combination of two or more of these. Computer system500may include any suitable communication interface510for any of these networks, where appropriate. Communication interface510may include one or more communication interfaces510, where appropriate. Although this disclosure describes and illustrates a particular communication interface, this disclosure contemplates any suitable communication interface. In particular embodiments, bus512includes hardware, software, or both coupling components of computer system500to each other. As an example and not by way of limitation, bus512may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination of two or more of these. Bus512may include one or more buses512, where appropriate. Although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect. Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such as, for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate. Herein, “or” is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. 
Therefore, herein, “A or B” means “A, B, or both,” unless expressly indicated otherwise or indicated otherwise by context. Moreover, “and” is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A and B” means “A and B, jointly or severally,” unless expressly indicated otherwise or indicated otherwise by context. The scope of this disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments described or illustrated herein that a person having ordinary skill in the art would comprehend. The scope of this disclosure is not limited to the example embodiments described or illustrated herein. Moreover, although this disclosure describes and illustrates respective embodiments herein as including particular components, elements, features, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, elements, features, functions, operations, or steps described or illustrated anywhere herein that a person having ordinary skill in the art would comprehend. Furthermore, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, or component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative. Additionally, although this disclosure describes or illustrates particular embodiments as providing particular advantages, particular embodiments may provide none, some, or all of these advantages.
43,522
11863523
DETAILED DESCRIPTION One skilled in the art appreciates that aspects of the present disclosure may be illustrated and described as pertaining to several patentable classes or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in a combined software and hardware implementation that may all generally be referred to herein as a “circuit,” “module,” “component,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon. The aspects of the present disclosure may use one or more computer readable media. The computer readable media may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would comprise the following: a portable computer diskette, a hard disk, a random access memory (“RAM”), a read-only memory (“ROM”), an erasable programmable read-only memory (“EPROM” or Flash memory), an appropriate optical fiber with a repeater, a portable compact disc read-only memory (“CD-ROM”), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium able to contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take a variety of forms comprising, but not limited to, electro-magnetic, optical, or a suitable combination thereof. A computer readable signal medium may be a computer readable medium that is not a computer readable storage medium and that is able to communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable signal medium may be transmitted using an appropriate medium, comprising but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present disclosure may be written in a combination of one or more programming languages, comprising an object oriented programming language such as JAVA®, SCALA®, SMALLTALK®, EIFFEL®, JADE®, EMERALD®, C++, C #, VB.NET, PYTHON® or the like, conventional procedural programming languages, such as the “C” programming language, VISUAL BASIC®, FORTRAN® 2003, Perl, COBOL 2002, PHP, ABAP®, dynamic programming languages such as PYTHON®, RUBY® and Groovy, or other programming languages. 
Unless specifically indicated otherwise, the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (“LAN”) or a wide area network (“WAN”), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider) or in a cloud computing environment or offered as a service such as a Software as a Service (“SaaS”). This disclosure includes flowchart illustrations and/or block diagrams of methods, apparatuses (e.g., systems or computers), and computer program products according to embodiments of the disclosure to reference aspects of the present disclosure. Each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, may be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable instruction execution apparatus, create a mechanism for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks, or otherwise described herein. These computer program instructions may also be stored in a computer readable medium that, when executed, may direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions, when stored in the computer readable medium, produce an article of manufacture comprising instructions which, when executed, cause a computer to implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, other programmable instruction execution apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatuses, or other devices to produce a computer implemented process, such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. While embodiments of the present disclosure may be described with respect to healthcare, banking, client service representatives, or other industries, one of ordinary skill in the art will appreciate that the methods, computers, and systems disclosed herein are useful in other industries, contexts, and applications. Specifically, the methods, computers, and systems described herein are not limited to application in the context of remote CSRs, and one of skill in the art will appreciate that broader applications fall within the scope of the present invention. Organizations and persons involved in certain industries, such as those in healthcare and banking, are faced with legal obligations to implement specific information security policies, and the penalty for failing to do so can be harsh. 
The duty imposed to protect sensitive information flows down in an organization from the highest corporate representative to customer service representatives (CSRs). Thus, the obligation to protect sensitive information permeates all levels of an organization. Complicated issues arise in adhering to the security obligations at all levels. For example, CSRs in the healthcare industry often require access to Personally Identifiable Information (PII) and Protected Health Information (PHI) to provide service to clients and patients. The same is true for other industries, such as banking and finance, where access to sensitive information is inherent in the day-to-day operation of the business. Traditionally, CSRs service clients from data stations in a processing or call center environment controlled by an organization. The traditional, physical processing center environment provides maximum control over sensitive information, and organizations implement a multitude of security protocols to protect PII and PHI of patients and clients as that information is available to and accessed by CSRs during the ordinary course of business. For example, a supervisor or team of supervisors may walk the floor or use cameras to monitor one or more CSRs to identify unauthorized viewing, copying, or other misappropriation of PII or PHI. In such controlled call center environments, a supervisor monitoring the CSRs can detect if a CSR is copying PII or PHI onto a piece of paper, or taking a picture with their cell phone or other image capturing device, for example. In addition to direct supervision within the call center, organizations may prohibit digital devices such as smart phones in areas where PII and PHI are accessible. Some may require CSRs to store all smart phones or other communication devices in a secured area so that they are inaccessible while the CSR is able to access PII or PHI. Similarly, organizations may prohibit writing utensils or scratch paper of any kind from areas where PII and PHI are accessible. Therefore, in the traditional processing center environment, the common practice of protecting sensitive information such as PII and PHI involves exercising control over and placing restrictions on the environment in which sensitive information can be accessed, and is enforced by an overseeing managing entity to ensure adherence to the security measures in the secure areas. But there is a trend toward phasing out traditional processing centers, and there are legitimate business incentives and forces prompting organizations to do so. Organizations may enable employees, agents, or others to access the private network or resources of the organization via a remote connection. Remote access to protected or private networks, servers, computing devices, peripheral devices, platforms, software, data, or other network resources is becoming increasingly important and necessary. For example, Virtual Private Networks (VPN), Virtual Machines (VM), and Remote Desktop Technology (RDT) provide different techniques for enabling a remote device, i.e., a device that is not within the private network, to access resources within the private network. One of skill in the art appreciates that there are many other technologies to facilitate remote access to network resources. 
In the context of the remote work environment, techniques for monitoring and protecting data accessed via a remote connection include monitoring the resources accessed by the remote device through its remote connection to the private network, enabling a supervisor to monitor a replication of a remote device's desktop, and/or recording the conversation between the remote agent and the client, as such tools and techniques were traditionally available in the context of the traditional processing center. However, the full range of security measures typically exercised in controlled processing centers is not available in the context of a remote environment, and the remote environment introduces new risks to sensitive data accessed via the remote connection. As a practical matter, for example, organizations cannot install stand-alone video monitoring equipment in dozens or even hundreds of CSRs' homes. Moreover, doing so would defeat some of the purpose of enabling remote work by limiting the CSR to a single remote work environment that could be monitored by the organization. But even if practical options existed to enable an organization to monitor all remote CSRs through live monitoring feeds, such monitoring does not protect against many risks to the remotely accessed data. For example, a remote CSR may keep their note taking device out of view of the camera, and thus the supervisor cannot see the CSR misappropriating the sensitive information, and such a breach may go undetected. Or, unauthorized individuals may be in the room, but out of view of the camera, such that the supervisor cannot see the unauthorized individuals and detect the misappropriation of the sensitive information the CSR is accessing or discussing with a client. There is currently no effective means for a computer network to monitor the environment from which the network resources are remotely accessed, and without the traditional processing center environment, there is a hole in data security introduced by providing remote access to sensitive network resources. Thus, there is a need for a system capable of automatic multidimensional detection and remediation of security risks to data accessed via a remote connection over a network. The systems and methods discussed herein provide such a solution and address the many issues and problems introduced by providing access to sensitive data over a remote connection. There are many inventive concepts within the scope of the present disclosure relating to leveraging the components of an agent device to enable the network to provide a multidimensional monitoring platform of the remote environment. A detailed description of several embodiments of systems and methods within the scope of the present disclosure follows. Referring toFIG.1, systems disclosed herein, such as System100, may comprise networked server devices115, client devices110, supervisor devices, and agent devices105. Server devices, such as server device125, may include servers, data stores, memories, databases, software, platforms or the like. In certain embodiments, the server devices115may comprise and provide access to Customer Relationship Management (CRM) software, a client database, a database or memory that stores information regarding clients or other relevant persons, or other information resources and software. In such embodiments, the server device may be referred to as a server resource or as comprising a server resource. 
There may be several server resources, such as a CRM software platform and its backend database structure. Where several server resources exist, one or more agent devices105may access different or the same server resource simultaneously. Continuing with the healthcare example, the server devices115may comprise a database or CRM software platform used to manage clients or client information, which in some embodiments comprises a server resource. Such information stored on the server devices115may be sensitive information such as personal health information or personally identifiable information that an enterprise has a legal obligation to protect. An enterprise that provides client support or maintains a network of agents may operate server devices, in some exemplary systems. Server devices115may include one or more server devices, such as server device125, communicating or hosted on a private or enterprise network, such as network140. It is typical in enterprise and private networks that a firewall or other similar protected point of entry exists to prevent unauthorized access to the components of the private network, and ultimately the data thereon. Thus, in certain examples, only authorized users or devices are able to access one or more server devices115. To facilitate access to private or protected server devices115, the server devices115may comprise one or more Dynamic Host Configuration Protocol (DHCP) servers or Virtual Private Network (VPN) servers or a server to provide remotely displayed Virtual Machines (VM) on remote devices. In such embodiments, a remote access manager or an access management server, which may be comprised of one or more server or supervisor devices, manages remote access and connections to server resources. Supervisor devices may comprise one or more personal computing devices such as laptop computers, desktop computers, smart phones, servers, or server devices115. In some embodiments, trusted supervisors use supervisor devices to access one or more server devices, for example, when such server devices are within a private, secured network. In some configurations, supervisor devices may be devices provisioned by the same entity that operates the server devices. In such an embodiment, the supervisor device may be a device with pre-established trust to one or more of the server devices115, and supervisor devices may be equipped with software and/or firmware that facilitates a secure and private connection with one or more of the server devices. Supervisor devices may be in a client-server relationship with one or more server devices. In other embodiments, the supervisor devices may access the server devices115via a connection through the World Wide Web or the Internet using a web browser. Within the scope of the present disclosure, the systems described herein may involve a plurality of supervisor devices. Agent devices, such as agent device120, may comprise personal computing devices such as laptop computers, desktop computers, smart phones, or other peripheral devices such as webcams or microphones associated with the remote agent. In some embodiments, agent devices105are used by trusted agents to access one or more server devices115, for example, when such server devices115are within a private, secured network. In some configurations, agent devices105may be devices provisioned by the same entity that operates the server devices115. 
In such an embodiment, the agent device may be a device with pre-established trust to one or more of the server devices, and agent devices may be equipped with software and/or firmware that facilitates a secure and private connection with one or more of the server devices. Agent devices may be in a client-server relationship with one or more server devices. In other embodiments, the agent devices may access the server devices via a connection through network140, for example the World Wide Web or the Internet, using a web browser. Within the scope of the present disclosure, the systems described herein may involve a plurality of agent devices105. Client devices110may include personal computing devices such as laptops, e.g., client device135, or smart phones associated with a client or other third-party. In some embodiments, client devices may include telephones networked via a telephone network such as the Public Switched Telephone Network (PSTN), as represented by client device130. One of skill in the art will appreciate that client devices110within the scope of the present disclosure comprise other similar devices that facilitate communication with a third-party client or protected party. In certain embodiments of the systems within the scope of the present disclosure, server devices115, supervisor devices, agent devices105, and client devices110may communicate via a network140such as the Internet. The connection between the server devices, supervisor devices, agent devices, and client devices may be in a star or mesh type configuration. In a star-type network configuration, the server devices may act as a central network point, such that connections and communications between the agent devices, supervisor devices, and the client devices must pass through one or more server devices, rather than the agent devices, supervisor devices, and client devices communicating directly with one another over the network. In other embodiments however, agent devices and supervisor devices may communicate directly with one another or client devices, thereby providing a mesh-type, point-to-point, or peer-to-peer connection between the networked devices. The systems and methods described herein may involve one or more server devices or supervisor devices establishing a connection with the first agent device. Such connections may comprise, in certain embodiments, the server device or supervisor device providing a remotely-displayed virtual machine at the agent device. In such embodiments, the agent device may access data stored on one or more server devices via the virtual machine. Providing a remotely-displayed virtual machine at the agent device facilitates the monitoring and identification of the data accessed through the virtual machine, because a server device facilitates the transfer of data to and from the remotely-displayed virtual machine. In other embodiments, the connection with the agent device may occur via a portal accessible through a web browser. Exchange of sensitive information may occur via the connection between the server device and the agent device. Thus, the connection is preferably secure. In certain embodiments, the server device or supervisor device may provide the agent device with remote access to the server device or supervisor device, or resources stored or available thereon. 
Providing remote access may refer to establishing a connection with the agent device, or may include copying or otherwise reproducing or presenting information from the server device or supervisor device to the remote device, or other similar means of providing the agent device access to the enterprise resources and sensitive information via the connection from a remote location, i.e., a remote environment. As mentioned above, remote access may be established through a direct connection between the server device and the agent device, or remote access may be established through an intermediary such as, for example, a supervisor device or another server device, or some other trusted intermediary. In certain configurations, the supervisor device may establish a connection with the agent device, wherein the supervisor device has a connection with one or more server devices and enables the agent device to access the server devices through the supervisor device acting as an intermediary. By establishing the connection with the agent device, the server device or supervisor device is able to control, monitor, and record the resources on the server devices accessible to the agent device. As will be discussed further, in certain embodiments, the systems and methods herein may comprise monitoring the connection with the agent device contemporaneous with a potential security risk to identify the potentially compromised data. The systems and methods described herein may also provide for obtaining remote environment data from one or more agent devices. In certain embodiments, one or more server or supervisor devices may obtain the remote environment data. Remote environment data may comprise data obtained from the agent device or peripheral devices coupled to the agent device. For example, referring toFIG.2, remote environment data220may be data collected or obtained through a camera, such as agent camera data230, microphone, such as agent mic data225, screen or other display, or peripheral device controller, such as agent PIC data235, within or coupled to the agent device. As discussed above, in certain embodiments, the agent device may be a microcomputing device such as a laptop or desktop personal computer. In such embodiments, the camera and microphone devices associated with the agent device may be built-in devices within the agent device, such as a built-in microphone and camera in a laptop computer. Thus, through the agent device, the systems and methods herein may collect and monitor data that represents the visual and auditory environment associated with the agent device, thereby providing a multidimensional stream of data representing the remote environment. In some embodiments, the remote environment data may comprise one of video, image, or audio data. The remote environment data may be transmitted in instances, snapshots, segments, or continuously, all of which will be referred to herein as a stream of data. In addition, the remote environment data220may comprise data obtained from a peripheral interface controller (PIC), such as agent PIC data235, to monitor input/output (I/O) from the agent device, thereby providing another dimension of remote environment data220. In such an embodiment, the remote environment data220comprises the data the agent device exchanges via I/O with other peripheral devices such as external displays, networking devices, storage devices and other memories, for example. 
Other peripheral I/O devices may include a keyboard and mouse, from which the remote agent's keystrokes or navigation inputs may comprise remote environment data. In certain configurations, remote desktop software may be utilized to capture the activity on the desktop of the agent device. As an example of a multidimensional stream of data representing the remote environment, consider the agent device is a personal computing device and the remote environment data comprises data obtained via the microphone and/or camera within the agent device, i.e., data obtained via the laptop's built-in camera and/or microphone. With a multidimensional stream of data representing the remote environment, enterprises can provide novel monitoring of the agent device and remote environment to address the problems introduced by enabling access to sensitive data via a remote connection. By way of example, consider a CSR who appears to not pose a risk of misappropriation to the sensitive information, i.e., the CSR is not using their smart phone or does not appear to be writing down sensitive information. The CSR is easily able to escape detection of misappropriation by placing a pad of paper or audio recording device, for example, out of view of the camera. Traditional systems offer no solution. However, an exemplary embodiment of the present disclosure provides a solution by leveraging the I/O devices at the remote device to provide a multifaceted and multidimensional data stream representing the remote environment associated with the agent device. In other words, in addition to data associated with the camera in the agent device, such as a video stream of the CSR, the multidimensional stream of remote environment data may comprise data associated with the agent device built-in microphone, for example. Thus, if a CSR is copying sensitive information out of view of the camera, the internal microphone in the agent device may detect the sound of a pen or pencil dragging across paper. Or, in the situation where an unauthorized third-party is in the vicinity, the internal microphone may capture the sound of that person talking in the background or making other noises. In addition to obtaining remote environment data from the agent device, which as described above may comprise a multidimensional stream of remote environment data in certain embodiments, the systems and methods herein may obtain the communication data exchanged between the agent device and a client device, such as communication data205. In some embodiments, a server device or supervisor device may obtain the data exchanged between the agent device and the client device. The data exchanged between the agent device and the client device could comprise data associated with a telephone call, Voice over IP (VoIP) session, videoconferencing session, or other communication session. In particular embodiments, the data exchanged between the agent device and the client device will comprise a multidimensional stream of data comprising both agent data210and client data215. For example, the exchanged data may include both agent audio data and client audio data, and/or agent video data and client video data. In other embodiments, the data exchanged between the agent and the client devices could be data such as emails, chat messages, or other text messages. In accordance with this disclosure, the data exchanged between the agent and the client may include communication data. 
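By way of illustration only, and not as part of the disclosed embodiments, the following minimal sketch shows one way the multidimensional streams described above could be grouped; the field names are hypothetical and simply mirror the camera, microphone, and peripheral-interface dimensions of the remote environment data, plus the agent and client sides of the communication data.

from dataclasses import dataclass, field
from typing import List


@dataclass
class RemoteEnvironmentData:
    # The dimensions of the remote environment obtained from the agent device.
    camera_frames: List[bytes] = field(default_factory=list)    # agent camera data
    mic_samples: List[bytes] = field(default_factory=list)      # agent microphone data
    peripheral_events: List[str] = field(default_factory=list)  # agent PIC / I/O events


@dataclass
class CommunicationData:
    # The data exchanged between the agent device and the client device, kept
    # as separate streams so each side can be analyzed individually.
    agent_stream: List[bytes] = field(default_factory=list)
    client_stream: List[bytes] = field(default_factory=list)

Keeping the agent and client streams separate in this hypothetical layout is what allows the individualized sentiment analysis discussed next.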
By obtaining a multidimensional stream of the exchanged data, the systems described herein may be able to perform separate sentiment or other audio analysis on the individualized agent or client data streams to detect changes in sentiment of the agent and/or client separately. To facilitate the automatic detection and remediation of risks to sensitive data, systems and methods herein may provide one or more monitoring environments. The systems and methods disclosed herein may provide a monitoring environment on one or more server devices. Monitoring environments may comprise a platform for presenting and processing a plurality of multidimensional monitoring objects. The monitoring environment may comprise a graphical user interface providing access to the plurality of multidimensional monitoring objects, as seen inFIGS.3-6. The systems and methods described herein may create or establish a plurality of multidimensional monitoring objects. The multidimensional monitoring object relates closely to the inventive concepts of the present disclosure as one of the underlying technologies enabling a network to adequately monitor the integrity of remotely accessed data by monitoring the remote environment. Referring toFIG.2, the combination of remote environment data220and the data exchanged between an agent and a client205in a multidimensional monitoring object200leverages the technical properties and configuration of a remote device to improve a computer, network, or server's ability to protect and monitor data transmitted via a remote connection by providing new functionality to the computer, server, or network enabling it to protect against threats to the integrity and privacy of sensitive data in the physical remote environment. A multidimensional monitoring object, such as monitoring object200, may be a data structure, function, method, variable, or the like comprising the remote environment data from the agent device, such as remote environment data220, and the data exchanged between the agent device and the client device, such as communication data205. The multidimensional monitoring object may be created or provided by aggregating the remote environment data into a package with the data exchanged between the client and the agent, thereby providing access to both data sets through one object. In other embodiments, a multidimensional monitoring object may be created by porting the multidimensional streams of remote environment data and data exchanged between the agent and client to a graphical user interface, such as the monitoring environment, wherein a user of the interface can selectively access one or more of the data streams. Thus, components of the multidimensional monitoring object, such as video or pictures from the camera at or within the agent device may be previewed in a monitoring environment, such as that depicted inFIGS.3-6. In certain embodiments, the systems and methods described herein may create a multidimensional monitoring object by defining a monitoring object class, where such class can be a combination of variables, functions, and other data structures. For example, the systems and methods may define a monitoring object class, which comprises a remote environment data structure and an agent-client communication data structure. Each of these data structures within the monitoring object class may themselves comprise variables, functions, or other data structures, as one of skill in the art would appreciate. 
In such examples, the monitoring object would need to be initialized, and a new instance of monitoring object may be initialized and created for each agent device. Initializing an instance of monitoring object may occur by initializing the values of the remote environment data and the data exchanged between the agent device and the client device with the remote environment data obtained from the agent device and the obtained data representing the communications between the client and the agent. The systems and methods may process the data comprising the multidimensional monitoring object via functions, global or private, accessible to the instances of the monitoring object. For example, the monitoring object may have access to a global function that performs sentiment analysis; therefore, each monitoring object could invoke a sentiment analysis function, or a thread or instantiation of that function, to run sentiment analysis on agent or client communication data obtained by the system. Similarly, the processes for determining changes in image data obtained via a camera at the agent device, for example, may be implemented in a function accessible to the monitoring objects so that each monitoring object may invoke the change function or a thread of the function to determine changes to the image data associated with each agent device. One of skill in the art would appreciate that the discussion above describing the multidimensional monitoring object as a data structure or data object is illustrative and exemplary, and one of skill in the art would recognize that a monitoring object can be created and provided through other common computer programming techniques. For example, the multidimensional monitoring object may comprise several objects, data structures, functions, methods, or it may comprise multiple layers or levels of data objects, structures, functions, or methods. In certain embodiments, the multidimensional monitoring object comprises the remote environment data from the agent device and the data exchanged between the agent and the client, or a pointer or reference to that data. The multidimensional monitoring object may provide access to the remote environment data and/or the data exchanged between the agent device and the client device. For example, a supervisor or other individual with access to the monitoring environment may access the multidimensional remote environment data and multidimensional data exchanged between the remote agent and the client devices by hovering over, clicking on, or otherwise selecting a monitoring object associated with an agent device. Thus, in such embodiments, the monitoring object may be represented by a window in a graphical user interface corresponding to an agent or agent device, such as monitoring objects305and310illustrated inFIG.3. In other similar embodiments, the monitoring objects could be presented in list or tabular format. In certain embodiments, several multidimensional monitoring objects may be accessible via the monitoring environment, for example, monitoring environment300, such that the monitoring environment provides access to a plurality of remote environment data associated with a plurality of agent devices, along with the corresponding data exchanged between each of the plurality of agent devices and respective corresponding devices. Thus, particular embodiments may provide for a multithreaded monitoring environment comprising a plurality of threads, each thread corresponding to an agent device. 
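As an illustration and not by way of limitation, a minimal sketch of a monitoring object class along the lines described above follows; all names are hypothetical, and the shared sentiment routine is a placeholder rather than an implementation of any particular analysis technique. Each instance packages the remote environment data and the agent-client communication data for one agent device and can invoke the shared routine on either communication stream.

from types import SimpleNamespace


def sentiment_score(samples):
    # Placeholder for a shared sentiment-analysis routine; a real system could
    # apply knowledge-based, statistical, or hybrid techniques here.
    return 0.0


class MonitoringObject:
    # Packages the remote environment data and the agent-client communication
    # data for a single agent device.
    def __init__(self, agent_id, environment_data, communication_data):
        self.agent_id = agent_id
        self.environment_data = environment_data
        self.communication_data = communication_data

    def agent_sentiment(self):
        return sentiment_score(self.communication_data.agent_stream)

    def client_sentiment(self):
        return sentiment_score(self.communication_data.client_stream)


# One monitoring object may be initialized for each connected agent device.
monitoring_objects = {
    agent_id: MonitoringObject(
        agent_id,
        environment_data=SimpleNamespace(
            camera_frames=[], mic_samples=[], peripheral_events=[]),
        communication_data=SimpleNamespace(agent_stream=[], client_stream=[]),
    )
    for agent_id in ("agent-120", "agent-121")
}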
The monitoring environments described herein may perform different levels of processing of remote environment data and data exchanged between remote agents and client devices to automatically detect and remediate risk to sensitive data. For example, the server devices and monitoring environment detect changes to the multidimensional monitoring objects by processing the remote environment data to detect significant changes or anomalies in the remote environment data. Such changes may correspond to, for example, changes in remote image data associated with the camera, changes in remote audio data associated with the microphone, or changes to the state of the agent device associated with the peripheral interface controller. In some embodiments, the server devices and the remote monitoring environment may monitor all three dimensions of the remote environment data discussed in this example simultaneously. In other embodiments, the server devices and monitoring environment may monitor only one dimension of the remote environment data, for example, remote environment data obtained through the camera at the agent device. As mentioned above, the processes described herein may be implemented in one or more functions capable of processing one or more monitoring objects and one or more threads or streams of data corresponding to a monitoring object. The monitoring environment and server devices may detect changes in remote environment data in several ways within the scope of the present disclosure. As one of skill in the art would appreciate, there are many known techniques for detecting changes to images, audio, and other data such as the peripheral device status of the agent device. An exemplary technique for doing so involves establishing a norm or acceptable state for each dimension of the remote environment data and using that as a baseline for detecting changes. In such systems, detecting changes or anomalies may involve calculating a delta value, which represents the difference between a current instance or set of instances from a predetermined or prior instance or set of instances, which are used as a baseline. For example, detecting a change in an image captured by the camera may be accomplished by processing images captured by the camera at the agent device and performing pixel analysis on a frame or set of frames. To provide an example, suppose the stream of remote environment data comprising images and/or video from the agent device begins and the monitoring environment obtains that data. For this example, the system processes fifteen frames of image data to determine whether a meaningful change in pixel arrangement occurred. After processing fifteen frames of image data, the system may start detecting changes in the image, i.e., changes in the pixel arrangement, by comparing the pixel sets of each set of fifteen frames to the fifteen frames before it. In some embodiments, the images being compared may not be consecutive. For example, the system may capture a first image at one point in time and capture a second image at a subsequent point in time and compare the second image to the first image, wherein the first and second images are not consecutive images captured by the camera. In addition, in certain embodiments, more than one image may be compared to prior images and the images being compared may not be consecutive sets of images captured by the agent camera. 
The system may also compare each set of frames to a predetermined set of frames as a check on gradually changing environment conditions against an accepted or approved baseline, for example, the first fifteen frames, or some other baseline model. Because it is highly unlikely the remote environment would not change whatsoever during the course of a monitoring session, a threshold change value can be set to differentiate a meaningful change from an inconsequential change. In such embodiments, the system determines a change value representing the difference in pixel arrangement, using a bitmap or raster technique, for example, and compares the change value to the threshold change value. It might be determined that a five or ten percent change in image arrangement is acceptable, and thus if the change value is less than that threshold, the system will not detect a change to the image data captured by the camera at the agent device. Other threshold techniques exist, such as thresholds for detecting change based on the number of different pixels, or thresholds that detect when groups of pixels of certain size or pattern change, for example. Regarding detecting a change in audio captured by the microphone associated with the agent device, a similar baseline or delta technique can be used. However, rather than image samples, the system establishes a baseline and detects changes in the composition of audio samples. For example, the system may detect a change to the audio data from the microphone at the agent device if the remote agent begins to write down sensitive information by detecting the noise generated by the pen or pencil dragging across a piece of paper, because comparing the audio data with the pencil scratching may amount to a change in the audio data from before the writing started. Detecting a change to the agent device via the peripheral interface controller may be even simpler because there is likely a very low tolerance, if any, for changes in the state of peripheral devices at the agent device. The system may detect a change anytime a removable storage device, display, voice recorder, keystroke logger, printer, or other peripheral device connects or plugs in to the agent device via one of its peripheral device interfaces or ports. A peripheral interface controller, or similar module, may be responsible for managing connections to peripheral devices. In some embodiments, a supervisor device participates in establishing the baseline for the remote environment data including the image data, the audio data, and the status of peripherals. To further explain, upon establishing a connection with the agent device, the monitoring environment and server devices may begin obtaining remote environment data from the agent device, and may make that data available via a monitoring object in the monitoring environment. A supervising entity may access the remote environment data through the monitoring object. The supervising entity may require certain authentication and may require the remote agent to manipulate the agent device, such as by turning the agent device so the monitoring agent can see the entire room to detect alarming conditions, or by speaking in a neutral tone for a specified period to establish a voice baseline, before approving a remote agent device to provide services to clients or access network resources. The system may utilize such approved remote environmental conditions as the initial or model baseline used to detect changes for a session. 
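The following minimal sketch illustrates the baseline-and-threshold approach described above, assuming grayscale frames represented as nested lists of pixel values; the function names, the ten percent default threshold, and the peripheral allow-list are illustrative assumptions rather than details taken from the disclosure.

def frame_delta(current_frames, baseline_frames):
    # Fraction of pixels whose value differs between two equally sized frame sets.
    changed = total = 0
    for current, baseline in zip(current_frames, baseline_frames):
        for current_row, baseline_row in zip(current, baseline):
            for current_px, baseline_px in zip(current_row, baseline_row):
                total += 1
                if current_px != baseline_px:
                    changed += 1
    return changed / total if total else 0.0


def detect_image_change(current_frames, baseline_frames, threshold=0.10):
    # Signal a change only when the change value exceeds the threshold change
    # value, so inconsequential variation in the remote environment is ignored.
    return frame_delta(current_frames, baseline_frames) > threshold


def detect_peripheral_change(current_devices, approved_devices):
    # Any newly attached peripheral (removable storage, recorder, printer, and
    # so on) is treated as a change, reflecting the low tolerance noted above.
    return bool(set(current_devices) - set(approved_devices))


# Example: two 2x2 frames differing in one of eight pixels (12.5%) exceed a 10% threshold.
baseline = [[[0, 0], [0, 0]], [[0, 0], [0, 0]]]
current = [[[0, 0], [0, 0]], [[0, 0], [0, 255]]]
print(detect_image_change(current, baseline))  # True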
In addition to processing the remote environment data to detect a change to the multidimensional monitoring object, the server devices and monitoring environment may also process the data exchanged between the agent device and client device to detect significant changes or anomalies in the speech of the agent or client. Known techniques of sentiment and speech analysis may be applied to the communication data of both the client and the remote agent to determine the attitude or emotion of the respective speaker. Known techniques for sentiment analysis may include knowledge-based techniques, statistical methods, or hybrid approaches combining the two. The systems and methods disclosed herein also contemplate a level of machine learning such that the system may establish and tune models for detecting changes to remote environment data. For example, the system may signal a change to the remote environment data when a remote agent sneezes and covers his or her nose and mouth, which may be accompanied by drastic and substantial bodily movement for some individuals. However, the system may receive input instructing the system to not alarm when such movement is detected, save and model the movement, and use the model in detecting changes to image data in the future. Continuing with the same example, the loud and disrupting noise accompanying a sneeze could trigger the system to signal a change to remote audio data from the remote microphone. The system may receive instructions that the sound of sneezing is not a risk, save and model the sound composition, and use that model in detecting changes to the audio data from the agent device microphone. With respect to detecting changes in the sentiment of the remote agent or the client, some remote agents may have harsh and loud voices, or high-pitched voices, which may trigger an angry or agitated tone in a traditional sentiment analysis. To teach the system and move the baseline, the system may receive an instruction that such sentiment is normal, and the system may adjust its sentiment analysis to account for the adjusted range of normality. In response to detecting a change to the multidimensional monitoring object, the systems and methods within the scope of the present disclosure may issue a risk alarm. If the agent device is accessing sensitive data via the connection with the server device, the risk alarm may represent a potential risk to that sensitive information. By detecting potential risks to sensitive data on at least two dimensions via the multidimensional monitoring object, the systems and methods disclosed herein are able to signal two types of alarms: environment alarms and speech alarms. Environment alarms correspond to potential risks presented by changes to the remote agent environment. Referring toFIG.4, exemplary environment alarms are illustrated by emphasis and shading pertaining to monitoring objects405,410, and415. Speech alarms correspond to issues related to mood or emotion of either the remote agent or the client. Referring toFIG.5, exemplary speech alarms are illustrated by overlaying or shading aspects of monitoring objects, as seen regarding monitoring objects505,525, and555. In some embodiments, the system signals environment alarms and speech alarms differently, as can be seen inFIG.5, for example, comparing the speech alarms (515and510) pertaining to monitoring object505and the environment alarms pertaining to monitoring objects550and545. 
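By way of illustration only, the sketch below shows one way the two alarm types described above could be derived from the change-detection and sentiment signals; the names, the tuple-based alarm format, the sentiment threshold, and the reason tags are hypothetical assumptions and not part of the disclosed embodiments.

ENVIRONMENT_ALARM = "environment"
SPEECH_ALARM = "speech"


def classify_alarms(image_changed, peripheral_changed, agent_sentiment,
                    client_sentiment, sentiment_threshold=0.5,
                    acceptable_reasons=frozenset()):
    # Map detected changes and sentiment scores to environment or speech alarms.
    alarms = []
    if image_changed:
        alarms.append((ENVIRONMENT_ALARM, "camera"))
    if peripheral_changed:
        alarms.append((ENVIRONMENT_ALARM, "peripheral"))
    # Agent and client streams are scored separately, so each side's sentiment
    # can be alarmed and displayed independently.
    if abs(agent_sentiment) > sentiment_threshold:
        alarms.append((SPEECH_ALARM, "agent"))
    if abs(client_sentiment) > sentiment_threshold:
        alarms.append((SPEECH_ALARM, "client"))
    # Reasons a supervisor has marked acceptable (for example, a modeled
    # sneeze) are suppressed rather than raised again.
    return [alarm for alarm in alarms if alarm[1] not in acceptable_reasons]


# Example: a camera change plus an agitated agent yields one alarm of each type.
print(classify_alarms(True, False, 0.8, 0.1))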
As a further example, consider a multidimensional monitoring object presenting a live stream of video data captured by the camera at the agent device. Upon detecting a change in the remote environment data captured by the camera at the agent device, the system may cause a graphics controller to emphasize a portion of a graphical user interface (GUI) associated with the monitoring object by drawing a box around that respective portion of the GUI, such as that illustrated with respect to monitoring objects540,545, for example. The visual alarm may be accompanied by an audible alarm as well. Similarly, upon detecting a change in the sentiment of the conversation between the remote agent and the client, the system may cause a graphics controller to emphasize the portion of the graphical user interface (GUI) associated with the monitoring object by overlaying or tinting a portion of the GUI corresponding to either the sentiment of the agent or the client, as shown for example regarding monitoring objects505,555, and525. AsFIG.5illustrates, certain embodiments of the systems and methods disclosed herein may differentiate between the sentiment of the agent and client, and thus perform a dual or multifaceted sentiment analysis. For example, referring toFIG.5, monitoring object505shows light shading or tinting regarding the sentiment of the agent515and the client510. In some embodiments, this could be a color overlay, such as green, to depict that sentiment of both agent515and client510are normal. Monitoring object525, however, provides an example of differentiated sentiment analysis and alarm, and shows that the sentiment of the agent530is darker than the sentiment of the client535, thereby indicating that the agent may be angry or upset, while the client is calm and normal. In other examples, such as that shown regarding monitoring object555, both sides of the sentiment analysis may be dark and emphasized, thereby indicating that both the client and the agent are angry, such as if they are in a disagreement or argument. In certain embodiments, a multidimensional monitoring object, window, or unit may be created or available for each agent device; however, every monitoring object or unit may not be visible or displayed in the monitoring environment graphical user interface. Such a configuration is ideal in a situation where there are dozens of active agent devices and the monitoring environment would be overcrowded if every monitoring object was displayed. Therefore, in some embodiments, such as that illustrated inFIG.6, the graphical user interface of the monitoring environment only displays objects of interest, such as the monitoring objects experiencing an alarm or change. In such embodiments, the processing and detecting of changes to the monitoring object may occur in the “background,” and the monitoring object may only be displayed when the system detects a change or receives an instruction to display all or a specific monitoring object. In certain embodiments, then, the system responds to detecting a change in the monitoring object by formatting the monitoring object or unit (and the data comprising it) for display or presentation in the graphical user interface on a supervisor device or one or more server devices. In such an embodiment, monitoring objects may appear and disappear from a monitoring environment on a supervisor's console as the system detects and resolves changes or issues. 
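As a minimal sketch of this selective display behavior, and not the disclosure's implementation, monitoring objects could be filtered so that only those with an active alarm are surfaced on the supervisor console; the names and placeholder data below are hypothetical.

def objects_to_display(monitoring_objects, active_alarms):
    # Only monitoring objects with at least one active alarm are surfaced;
    # the rest continue to be processed in the background.
    return {
        agent_id: obj
        for agent_id, obj in monitoring_objects.items()
        if active_alarms.get(agent_id)
    }


# Example usage with placeholder thumbnails and alarm lists.
objects = {"agent-120": "thumbnail-120", "agent-121": "thumbnail-121"}
alarms = {"agent-120": [("environment", "camera")], "agent-121": []}
print(objects_to_display(objects, alarms))  # only agent-120 is displayed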
For example, upon detecting a change in the remote environment data or the exchanged communication data, the system may cause a monitoring object associated with that agent to appear on a supervisor's console within the monitoring environment. The monitoring environment may comprise one or more monitoring objects experiencing or having recently experienced change, or for which an alarm was raised in response to detecting a change. Given the potential for overcrowding of monitoring objects displayed in the monitoring environment, the system may present a thumbnail of the data captured by the agent camera associated with each respective monitoring object of interest, at least until a selection or instruction to present other remote environment data or the communication data exchanged between a client and agent is received by the system. In such a system, the system may receive a user input selecting a particular monitoring object by using the graphical user interface. Upon selecting a particular monitoring object, the system may present all or some of the remote environment data or communication data associated with that monitoring object. For example, upon selecting a monitoring object, the system may present the communication data between the agent and client through a speaker or other audio device at the supervisor or server device, allowing a supervisor to “listen in.” After a predetermined amount of time, or upon receiving an instruction, the system may remove or hide a monitoring object from the graphical user interface. For example, the monitoring object may experience a change in remote environment data, causing the system to display it in the supervisor's monitoring environment. The supervisor may access the remote environment data or the data exchanged between the client and the agent, or he may not. In any event, if a remedial instruction is not received within a certain time after a change is no longer detected or the alarm expires, the thumbnail associated with the monitoring object may disappear from the monitoring environment because it is no longer experiencing change, and therefore is not a risk that warrants attention. In particular embodiments, the system may store the remote environment data and the data exchanged between agent device and the client device in response to issuing a risk alarm and may respond to the risk alarm according to a received instruction. Storing the remote environment data and the data exchanged between the agent device and the client device in response to the risk alarm enables the system to determine the precise data at risk of compromised integrity or misappropriation. In other embodiments, the system may replicate the desktop of an agent device upon detecting a change to the remote environment data. In such embodiments, the system may use this data to determine what sensitive data was accessed when the change occurred and/or the alarm issued. Cross referencing this data is one way to identify potentially misappropriated data. In certain embodiments, the systems and methods disclosed herein may obtain and monitor data associated with the first agent device and Customer Relationship Management (CRM) software, or other enterprise resources accessed via the remote connection. In such embodiments, enterprises may be able to cross reference the data the CSR accesses via the remote connection with the CRM software with contemporaneous detection of a potential data breach or security risk to identify the potentially compromised sensitive data. 
For example, if the systems and methods herein detect a change in the remote environment data, for example if a CSR raises their cell phone, and the system signals a security risk event, the system may also gather remote environment data representing the data that was viewable on the agent device's screen and corroborate that with the CRM software to identify which data is at risk of misappropriation. In such embodiments, the systems detect the potentially misappropriated data by cross referencing the data regarding the issued risk alert with the data that the CSR was accessing contemporaneous with the change in remote environment data that caused the alert to issue. The systems and methods within the scope of the present disclosure may use several different curative or remedial means for responding to a risk. In response to the alarm, the system may receive an instruction to establish a connection with the first client device, wherein the connection is between the server device or supervisor device and the client device. In certain embodiments, this may enable a supervisor at the supervisor device or server device to communicate with the client and client device. Another response to an alarm may include disabling the agent device, for example, by terminating a connection between the agent device and client device. Continuing with the example above, if sentiment analysis detects that both the client and the agent are speaking loudly and rapidly, the sentiment analysis software may determine that the agent and/or client are angry or arguing, and issue a risk alarm overlaying a red coloring over the monitoring object to signal the intensity of the conversation. In such a situation, the system may receive instructions to disable the remote agent device and/or establish a connection with the server or supervisor device so that the supervisor can intervene in the conversation to assuage the tension. Another response may include identifying the change in the remote environment data or the change in the data exchanged between the remote agent and the client device as an acceptable change or acceptable event. Continuing with the example of a sneeze discussed above, the system may receive an instruction that the risk alarm issued in response to detecting a user's violent bodily movement during a sneeze and/or the associated change in audio from the burst of sound, and that the behavior is acceptable and is not a risk. The instructions may be received as inputs from a supervisor selecting one or more options from a drop-down menu, for example. A further benefit of the systems and methods disclosed herein is that the server device or supervisor device may establish connections with several agent devices, and likewise create several multidimensional monitoring objects. Therefore, in particular embodiments, a monitoring environment, such as monitoring environments300,400, or550, or that illustrated inFIG.6, may be established on one or more server devices or supervisor devices comprising a matrix of multidimensional monitoring objects. This technology improves a server or computer's ability to protect and monitor the integrity of data exchanged over a remote connection by identifying and remediating potential and actual risks to the data introduced by factors in the remote environment which, prior to the systems and methods disclosed herein, were not adequately protected against. 
By providing a monitoring environment comprising a plurality of multidimensional monitoring objects, each associated with a different agent device, the computer and network are also improved, because the monitoring environment scales as the number of agent devices and clients increases, thereby providing a degree of scalability to the computer system that was previously unachievable. Therefore, certain embodiments of the systems and methods disclosed herein may comprise providing a second agent device remote access to a second server device, and creating a second multidimensional monitoring object associated with the second agent device. The flowcharts and diagrams inFIGS.1-6illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various aspects of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, may be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. The terminology used herein is for the purpose of describing particular aspects only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a,” “an,” and “the” are intended to comprise the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The corresponding structures, materials, acts, and equivalents of means or step plus function elements in the claims below are intended to comprise any disclosed structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. For example, this disclosure comprises possible combinations of the various elements and features disclosed herein, and the particular elements and features presented in the claims and disclosed above may be combined with each other in other ways within the scope of the application, such that the application should be recognized as also directed to other embodiments comprising other possible combinations. 
The aspects of the disclosure herein were chosen and described in order to best explain the principles of the disclosure and the practical application and to enable others of ordinary skill in the art to understand the disclosure with various modifications as are suited to the particular use contemplated.
58,911
11863524
DETAILED DESCRIPTION The following detailed description of example implementations refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. A virtual machine is a simulated computer that is simulated by physical computer resources (e.g., of a physical machine). As virtual machines are intended to accurately simulate individual computers, the virtual machines often have the same security vulnerabilities as physical computers. For example, virtual machines can be infected with malware and can suffer from other unauthorized accesses. To protect a virtual machine, and a computer simulating the virtual machine, from unauthorized access and infection, a firewall may be implemented. The firewall checks incoming and/or outgoing packets of data against an existing list of characteristics that determine whether the packets should be allowed to continue toward their destination or should be blocked. The firewall may be implemented on the computer simulating the virtual machine, may be implemented by a dedicated physical device, or may be a virtual firewall that is implemented on a virtual machine. A virtual firewall may be deployed as an untuned virtual firewall or a tuned virtual firewall. An untuned virtual firewall is a virtual firewall that uses existing hypervisor and virtual machine settings. However, these existing settings may not provide for optimal performance of the virtual firewall thereby resulting in reduced performance and/or higher latency relative to a tuned virtual firewall. A tuned virtual firewall is a virtual firewall that modifies existing hypervisor and virtual machine settings to optimize performance of the virtual firewall. However, tuning a virtual firewall is a complicated process that requires a user to determine characteristics of the virtual firewall, characteristics of the computing device on which the virtual firewall is to be implemented, characteristics of the hypervisor, and/or the like, to determine how the hypervisor and virtual machine settings are to be modified based on the determined characteristics, and to then modify the hypervisor and virtual machine settings. Because tuning a virtual firewall is a complicated process, a user may improperly, and/or fail to, modify one or more of the existing hypervisor and virtual machine settings thereby resulting in reduced performance and/or higher latency relative to a properly tuned virtual firewall and/or an untuned virtual firewall. This may also lead to consumption of computing resources to troubleshoot an improperly tuned virtual firewall in an attempt to improve performance. According to some implementations described herein, a host platform automatically tunes a virtual firewall. In some implementations, the host platform may receive an input associated with deploying a virtual firewall on a computing device. The host platform may determine a first set of characteristics associated with the virtual firewall and a second set of characteristics associated with a hypervisor associated with the computing device. The host platform may automatically tune the virtual firewall based on the first set of characteristics and the second set of characteristics. After tuning the virtual firewall, the host platform may deploy the virtual firewall on the computing device. In this way, the host platform optimizes a performance and/or a latency of the virtual firewall by automatically tuning the virtual firewall prior to the virtual firewall being deployed. 
Also, by automatically tuning the virtual firewall, the host platform may conserve computing resources that would have otherwise been used to troubleshoot an improperly tuned virtual firewall in an attempt to improve performance. FIGS.1A-1Fare diagrams of one or more example implementations100described herein. As shown inFIGS.1A-1F, a user may use an endpoint device (e.g., a laptop computer, a tablet computer, a handheld computer, a desktop computer, a mobile phone (e.g., a smart phone, a radiotelephone, etc.), a personal digital assistant, a network device (e.g., a router, a gateway, a firewall, a hub, a bridge, etc.), a telephone, and/or the like) to access a cloud computing service to cause a host platform (e.g., a server device, a collection of server devices, and/or the like) to automatically tune and/or deploy a virtual firewall. As shown inFIG.1A, and by reference number105, the host platform receives an input associated with deploying a virtual firewall. For example, a user may log in to a portal associated with a cloud computing service provided by a host platform to access a user interface for deploying a virtual firewall. The user interface may allow the user to input information indicating that the host platform is to implement a virtual firewall on a computing device. In some implementations, the input includes information indicating that the host platform is to automatically tune the virtual firewall prior to the virtual firewall being deployed. For example, the user interface may include a knob, a drop-down menu, a selectable icon, and/or the like that enables the user to input information indicating that the host platform is to automatically tune the virtual firewall. The host platform may automatically tune the virtual firewall prior to the virtual firewall being deployed based on the information input by the user. In some implementations, the host platform may deploy an untuned firewall and may automatically tune the deployed firewall, as described below in connection withFIG.1E. In some implementations, the host platform determines a computing device on which the virtual firewall is to be deployed. As shown inFIG.1B, the computing device includes a hardware layer, a hypervisor layer, a virtual machine layer, and/or the like. The hardware layer includes the physical hardware of the computing device such as physical network interface cards (NICs), central processing units (CPUs), memory, and/or the like. The hypervisor layer is provided on top of the physical layer and includes one or more hypervisors. A hypervisor manages and controls the physical resources of the computing device and creates and manages a guest virtual machine (e.g., a virtual firewall) implemented on the computing device. The hypervisor may be a Type 1 hypervisor that runs directly on the physical hardware of the computing device with no host operating system or a Type 2 hypervisor that runs on top of a host operating system. The virtual machine layer implements the virtual firewall. The virtual machine layer includes a guest operating system that may implement a routing module and a packet forwarding module. The routing module may include a management daemon (MGD) and a routing protocol daemon (RPD). The MGD may enable communication between processes associated with the virtual firewall, may provide an interface to a configuration database, and/or the like. The RPD may define how routing protocols select routes, maintain a forwarding table, and/or the like. 
The packet forwarding module may perform one or more security functions of the virtual machine. For example, the packet forwarding module may apply filters, routing policies, and/or other security features to data packets received by the virtual firewall. As shown inFIG.1B, the packet forwarding module includes an advanced services module, a flow processing module, a packet forwarding module, and a data plane development kit (DPDK) module. The advanced services module may include one or more security features relating to data packets received by the virtual firewall and/or connections established through the virtual firewall. For example, the advanced services module may apply an inbound rule to an incoming data packet (e.g. block all incoming data packets associated with a particular IP address), may apply an outbound rule to an outbound data packet (e.g., allow all outbound traffic associated with a particular device), and/or may apply a connection security rule (e.g., to require two peer computing devices to authenticate before establishing a connection). The flow processing module may control a flow of data packets through the virtual firewall. For example, the flow processing module may apply one or more filters to the input and/or the output of a virtual network interface to control the flow of data packets through the virtual firewall. The packet forwarding module may control the forwarding of data packets to a destination device. For example, the packet forwarding module may apply one or more routing policies to the input and/or the output of a virtual network interface to forward data packets processed by the virtual firewall toward a destination device. The DPDK module may perform one or more functions associated with data packet processing. For example, the DPDK module may include a set of data plane libraries and network interface controller drivers that may be used to accelerate packet processing workloads of the virtual firewall by implementing a lockless queue, pre-allocating fixed sized buffers, and/or the like. As shown inFIG.1C, and by reference number110, the host platform determines a set of characteristics associated with deploying the virtual firewall on the computing device. In some implementations, the host platform determines the set of characteristics based on information stored in a data structure in a memory associated with the host platform, information input by a user, information obtained from a device associated with a third party (e.g., a manufacturer associated with the physical hardware of the computing device, a manufacturer associated with the computing device, a manufacturer associated with the hypervisor, a manufacturer associated with the virtual firewall, and/or the like), and/or the like. As shown inFIG.1C, the characteristics include hardware characteristics, hypervisor/virtual machine characteristics, virtual firewall characteristics, and/or the like. The hardware characteristics may include one or more characteristics, properties, attributes, and/or the like associated with the hardware layer of the computing device. 
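Such characteristic sets might, for instance, be captured as simple records; the following Python sketch is illustrative only, and the particular fields shown are hypothetical examples rather than a required schema:

from dataclasses import dataclass

@dataclass
class HardwareCharacteristics:
    device_type: str        # e.g., "x86 server"
    cpu_cores: int
    cpu_speed_ghz: float
    ram_gb: int
    sriov_capable: bool     # whether the NIC supports SR-IOV

@dataclass
class HypervisorCharacteristics:
    hypervisor_type: str    # e.g., "Type 1"
    virtual_cpus: int
    dynamic_memory: bool
    numa_nodes: int

@dataclass
class FirewallCharacteristics:
    vendor: str
    software_version: str
    supported_interfaces: int
    max_filters: int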
For example, the hardware characteristics may indicate a type of the computing device (e.g., a server device, an ×86 server, a Linux server, and/or the like), a type of the CPU (e.g., an ×86 32-bit CPU, an ×86 64-bit CPU, and/or the like), a number of cores associated with the CPU (e.g., 1 core, 2 cores, 4 cores, and/or the like), a processor speed (e.g., a number of cycles per second at which the CPU operates and is able to process information) associated with the CPU (e.g., 1.8 GHz, 2.3 GHz, 2.8 GHz, and/or the like), an amount of random access memory (RAM) available to the CPU, an amount of available memory, whether the physical hardware (e.g., NIC) is able to support single-root input/output virtualization (SR-IOV) and/or multiple-root input/output virtualization (MR-IOV), a type and/or version of host operating system running on the computing device, and/or the like. The above-listed hardware characteristics are intended to be merely examples of types of hardware characteristics that may be used. In practice, the hardware characteristics may include any one or more of the above-listed hardware characteristics and/or one or more other types of hardware characteristics not listed above. The hypervisor/virtual machine characteristics may include one or more characteristics, properties, attributes, settings, and/or the like associated with the hypervisor running on the computing device and/or deployment of the virtual firewall. For example, the hypervisor/virtual machine characteristics may indicate a type of the hypervisor (e.g., Type 1, Type 2, VMware, Hyper-V, vSphere, and/or the like), a number of SCSI controllers to be associated with the virtual firewall, a boot order associated with the virtual firewall (e.g., an order in which boot devices (e.g., hardware interface, network adapter, hard drive, and/or the like) are checked to start the guest operating system), whether a secure boot feature is enabled, a total amount of memory to be made available to the virtual firewall, whether a dynamic memory feature will be enabled, a minimum amount of RAM memory to be made available to the virtual firewall, a maximum amount of RAM memory to be made available to the virtual firewall, a size of a memory buffer associated with an increase in dynamic memory allocation, a memory assignment priority to be assigned to the virtual firewall, a quantity of virtual CPUs associated with the virtual firewall, a minimum amount of physical CPUs that will be available to the virtual firewall, a quantity of non-uniform memory access (NUMA) nodes that will be associated with the virtual firewall, a quantity of sockets that will be associated with the virtual firewall, a NUMA topology to be associated with the virtual firewall, a maximum number of virtual CPUs that can be associated with a NUMA node, a maximum size of a NUMA node, whether IP forwarding is enabled/disabled, whether an Irqbalance is enabled/disabled (e.g., a process that balances the CPU load generated by interrupts across a set of CPUs), whether a security module associated with the hypervisor is enabled/disabled, whether a process for randomizing address space is enabled/disabled, and/or the like. The above-listed hypervisor/virtual machine characteristics are intended to be merely examples of types of hypervisor/virtual machine characteristics that may be used. 
In practice, the hypervisor/virtual machine characteristics may include any one or more of the above-listed hypervisor/virtual machine characteristics and/or one or more other types of hypervisor/virtual machine characteristics not listed above. The virtual firewall characteristics may include one or more characteristics, properties, attributes, and/or the like associated with implementing a virtual firewall on the computing device. For example, the virtual firewall characteristics may indicate a manufacturer associated with the virtual firewall, a brand associated with the virtual firewall, a software version associated with the virtual firewall, a quantity of interfaces supported by the virtual firewall, a volume of traffic that the virtual firewall is capable of supporting, a maximum quantity of filters that the virtual firewall is capable of supporting, a rate at which the virtual firewall is capable of processing traffic, and/or the like. The above-listed virtual firewall characteristics are intended to be merely examples of types of virtual firewall characteristics that may be used. In practice, the virtual firewall characteristics may include any one or more of the above-listed virtual firewall characteristics and/or one or more other types of virtual firewall characteristics not listed above. As shown inFIG.1D, and by reference number115, the host platform may determine configuration settings for tuning the virtual firewall. For example, the host platform may determine a set of configuration settings associated with increasing a performance of the virtual firewall and/or decreasing a latency associated with the virtual firewall relative to a virtual firewall deployed based on a current, or default, set of configuration settings associated with the hypervisor layer and/or the virtual machine layer of the computing device. In some implementations, the host platform determines the configuration settings based on information stored in a data structure. The data structure may include a plurality of entries. An entry, in the data structure, may be associated with a particular type of virtual firewall, a particular type of computing device, and/or a particular type of hypervisor. The host platform may determine a type of the virtual firewall based on the virtual firewall characteristics. The host platform may determine a type of the computing device based on the hardware characteristics. The host platform may determine a type of the hypervisor based on the hypervisor/virtual machine characteristics. The host platform may identify an entry in the data structure associated with the type of virtual firewall, the type of computing device, and the type of hypervisor. The entry may include information identifying configuration settings for automatically tuning the virtual firewall. In some implementations, the entry may include a plurality of sets of configuration settings. Each set of configuration settings may be associated with an additional virtual firewall characteristic, an additional hardware characteristic, and/or an additional hypervisor/virtual machine characteristic. For example, a set of configuration settings may be associated with the type of virtual firewall, the type of computing device having a CPU having a particular quantity of cores, and the type of hypervisor. 
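Purely as an illustration of this kind of lookup, the entries might be modeled as a dictionary keyed on the identified types; the key structure and setting names below are hypothetical and not taken from the disclosure:

# Hypothetical entries: (virtual firewall type, computing device type, hypervisor type) -> settings.
CONFIGURATION_ENTRIES = {
    ("VF1", "x86 server, 4 cores", "Type 1"): {"virtual_cpus": 4, "dynamic_memory": False},
    ("VF1", "x86 server, 8 cores", "Type 1"): {"virtual_cpus": 8, "dynamic_memory": False},
    ("VF2", "Linux server", "Type 2"): {"virtual_cpus": 2, "dynamic_memory": True},
}

def lookup_configuration_settings(firewall_type: str,
                                  device_type: str,
                                  hypervisor_type: str) -> dict:
    # Return the tuning settings for the identified combination, or an empty dict
    # if no entry exists (in which case the firewall might be deployed untuned).
    return CONFIGURATION_ENTRIES.get((firewall_type, device_type, hypervisor_type), {})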
Another set of configuration settings may be associated with the type of virtual firewall, the type of computing device, the type of hypervisor, and a particular version of software associated with the virtual firewall and/or the hypervisor. In some implementations, the host platform may use a machine learning model to determine the configuration settings, as described in more detail below. For example, the host platform may train the machine learning model based on one or more parameters associated with tuning a virtual firewall, such as one or more hardware characteristics, one or more hypervisor/virtual machine characteristics, one or more virtual firewall characteristics, and/or the like. The host platform may train the machine learning model, according to the one or more parameters, using historical data associated with determining the configuration settings. Using the one or more parameters as inputs to the machine learning model, the host platform may determine the configuration settings to be used to automatically tune the virtual firewall. As shown inFIG.1E, and by reference number120, the host platform may tune the virtual firewall by configuring the hypervisor and/or the virtual machine based on the configuration settings. For example, the hypervisor may have a quantity of virtual CPUs setting set to a default value. The host platform may modify the quantity of virtual CPUs setting to change the quantity of virtual CPUs setting from the default value to a value indicated by the configuration settings. In some implementations, the host platform may tune the virtual firewall based on a priority associated with the virtual firewall. For example, the user may input information, via the user interface, indicating that the virtual firewall is associated with a high priority relative to other virtual firewalls associated with the host platform. The host platform may tune the virtual firewall based on the virtual firewall being associated with the high priority. In some implementations, a high priority may be a default priority associated with the virtual firewall. For example, the host platform may associate each virtual firewall to be deployed by the host platform with a high priority unless a user provides an input (e.g., via the user interface) indicating that the virtual firewall is to be associated with a low priority. In some implementations, the host platform may prevent the virtual firewall from being tuned when the virtual firewall is associated with the low priority. For example, the virtual firewall may be one of a plurality of virtual firewalls being deployed by the host platform. The user may provide information identifying a particular virtual firewall as being associated with a high priority relative to the other virtual firewalls and/or may provide information identifying the other virtual firewalls as being associated with a low priority relative to the particular virtual firewall. The host platform may automatically tune the particular virtual firewall based on the particular virtual firewall being associated with the high priority. The host platform may not tune the other virtual firewalls and/or may deploy the other virtual firewalls based on the configuration settings determined for the particular virtual firewall based on the other virtual firewalls being associated with the low priority. In some implementations, the host platform may determine that the virtual firewall cannot be tuned. 
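A minimal sketch of the tuning step, assuming hypothetical inputs for the priority of the firewall, the user's modification privilege, and a caller-supplied callable that applies an individual hypervisor or virtual machine setting, might look like the following; it is not a definitive implementation:

from typing import Callable

def tune_virtual_firewall(settings: dict,
                          priority: str,
                          user_can_modify: bool,
                          apply_setting: Callable[[str, object], None]) -> bool:
    # Returns True if tuning was applied; False means the firewall would be deployed untuned.
    if priority == "low":
        # In this sketch, low-priority firewalls are simply not tuned.
        return False
    if not user_can_modify:
        # Mirrors the case where the user lacks the privilege to modify hypervisor or
        # virtual machine settings; a corrective action could be reported here instead.
        return False
    for name, value in settings.items():
        # e.g., change the "virtual_cpus" setting from its default to the determined value.
        apply_setting(name, value)
    return True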
For example, the host platform may determine that the user is not associated with a privilege that allows for the modification of the hypervisor settings and/or the virtual machine settings, that the host platform is unable to access a particular hardware component, and/or the like. The host platform may provide information identifying an issue preventing the host platform from tuning the virtual firewall, information identifying a corrective action to be taken by the user or the host platform to enable the host platform to tune the virtual firewall, and/or the like. In some implementations, the host platform may deploy the virtual firewall as an untuned firewall based on the host platform determining that the virtual firewall cannot be tuned. In some implementations, the host platform may determine that the user or the host platform has performed the corrective action and tune the virtual firewall, either prior to deploying the virtual firewall or after deploying the virtual firewall, based on the user or the host platform performing the corrective action. In some implementations, the host platform may perform a resource availability check after tuning the virtual firewall and/or after determining that the virtual firewall cannot be tuned. The resource availability check may be performed to determine that the computing device on which the virtual firewall is to be implemented satisfies certain minimum requirements for implementing the virtual firewall. For example, the resource availability check may determine whether NUMA socket/hyper-threading is enabled/disabled, whether the virtual firewall is to be implemented on a NUMA associated with a NIC port, and/or the like. If the resource availability check is not successfully performed (e.g., the host platform determines that the computing device does not satisfy the minimum requirements), the host platform may determine to deploy the virtual firewall on a different computing device, output information indicating that the computing device does not satisfy the minimum requirements, prevent the virtual firewall from being deployed on the computing device, and/or the like. As shown inFIG.1F, and by reference number125, the host platform may deploy the tuned virtual firewall on the computing device. In some implementations, the host platform may deploy the tuned virtual firewall based on successfully performing the resource availability check. The computing device may implement the virtual firewall based on the configured hypervisor and/or virtual machine. In this way, the host platform optimizes a performance and/or a latency of the virtual firewall by automatically tuning the virtual firewall prior to the virtual firewall being deployed. Further, by automatically tuning the virtual firewall, the host device may prevent the virtual firewall from having a reduced performance and/or increased latency relative to a tuned firewall as a result of a user improperly, and/or failing to, modify one or more of the existing hypervisor and/or virtual machine settings. As indicated above,FIGS.1A-1Fare provided merely as one or more examples. Other examples may differ from what is described with regard toFIGS.1A-1F. FIG.2is a diagram illustrating an example200of training a machine learning model. The machine learning model training described herein may be performed using a machine learning system. The machine learning system may include a computing device, a server, a cloud computing environment, and/or the like, such as the host platform. 
As shown by reference number205, a machine learning model may be trained using a set of observations. The set of observations may be obtained and/or input from historical data, such as data gathered during one or more processes described herein. For example, the set of observations may include data gathered from user interaction with and/or user input to the host platform and/or the endpoint device, as described elsewhere herein. In some implementations, the machine learning system may receive the set of observations (e.g., as input) from the host platform. As shown by reference number210, a feature set may be derived from the set of observations. The feature set may include a set of variable types. A variable type may be referred to as a feature. A specific observation may include a set of variable values corresponding to the set of variable types. A set of variable values may be specific to an observation. In some cases, different observations may be associated with different sets of variable values, sometimes referred to as feature values. In some implementations, the machine learning system may determine variable values for a specific observation based on input received from the host platform. For example, the machine learning system may identify a feature set (e.g., one or more features and/or corresponding feature values) from structured data input to the machine learning system, such as by extracting data from a particular column of a table, extracting data from a particular field of a form, extracting data from a particular field of a message, extracting data received in a structured data format, and/or the like. In some implementations, the machine learning system may determine features (e.g., variables types) for a feature set based on input received from the host platform and/or the endpoint device, such as by extracting or generating a name for a column, extracting or generating a name for a field of a form and/or a message, extracting or generating a name based on a structured data format, and/or the like. Additionally, or alternatively, the machine learning system may receive input from an operator to determine features and/or feature values. In some implementations, the machine learning system may perform natural language processing and/or another feature identification technique to extract features (e.g., variable types) and/or feature values (e.g., variable values) from text (e.g., unstructured data) input to the machine learning system, such as by identifying keywords and/or values associated with those keywords from the text. As an example, a feature set for a set of observations may include a first feature of a virtual firewall (VF) characteristic, a second feature of a hypervisor characteristic, a third feature of a host device (e.g., the computing device that the virtual firewall is to be implemented on) characteristic, and so on. As shown, for a first observation, the first feature may have a value of VF1 (e.g., a first type, brand, version, and/or the like of virtual firewall), the second feature may have a value of Type 1, the third feature may have a value of ×86 server, and so on. These features and feature values are provided as examples, and may differ in practice. For example, the feature set may include one or more of the following features: a particular virtual machine characteristic, a particular virtual firewall characteristic, a particular hypervisor characteristic, a particular host device characteristic, and/or the like. 
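For illustration only, an observation's feature set might be encoded as a simple mapping; the function and key names below are hypothetical:

def encode_observation(vf_characteristic: str,
                       hypervisor_characteristic: str,
                       host_characteristic: str) -> dict:
    # One observation corresponds to one feature set; categorical values would typically
    # be one-hot encoded or hashed before being fed to a model.
    return {
        "virtual_firewall": vf_characteristic,    # e.g., "VF1"
        "hypervisor": hypervisor_characteristic,  # e.g., "Type 1"
        "host_device": host_characteristic,       # e.g., "x86 server"
    }

observations = [
    encode_observation("VF1", "Type 1", "x86 server"),
    encode_observation("VF2", "Type 2", "Linux server"),
]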
In some implementations, the machine learning system may pre-process and/or perform dimensionality reduction to reduce the feature set and/or combine features of the feature set to a minimum feature set. A machine learning model may be trained on the minimum feature set, thereby conserving resources of the machine learning system (e.g., processing resources, memory resources, and/or the like) used to train the machine learning model. As shown by reference number215, the set of observations may be associated with a target variable type. The target variable type may represent a variable having a numeric value (e.g., an integer value, a floating point value, and/or the like), may represent a variable having a numeric value that falls within a range of values or has some discrete possible values, may represent a variable that is selectable from one of multiple options (e.g., one of multiple classes, classifications, labels, and/or the like), may represent a variable having a Boolean value (e.g., 0 or 1, True or False, Yes or No), and/or the like. A target variable type may be associated with a target variable value, and a target variable value may be specific to an observation. In some cases, different observations may be associated with different target variable values. The target variable may represent a value that a machine learning model is being trained to predict, and the feature set may represent the variables that are input to a trained machine learning model to predict a value for the target variable. The set of observations may include target variable values so that the machine learning model can be trained to recognize patterns in the feature set that lead to a target variable value. A machine learning model that is trained to predict a target variable value may be referred to as a supervised learning model, a predictive model, and/or the like. When the target variable type is associated with continuous target variable values (e.g., a range of numbers and/or the like), the machine learning model may employ a regression technique. When the target variable type is associated with categorical target variable values (e.g., classes, labels, and/or the like), the machine learning model may employ a classification technique. In some implementations, the machine learning model may be trained on a set of observations that do not include a target variable (or that include a target variable, but the machine learning model is not being executed to predict the target variable). This may be referred to as an unsupervised learning model, an automated data analysis model, an automated signal extraction model, and/or the like. In this case, the machine learning model may learn patterns from the set of observations without labeling or supervision, and may provide output that indicates such patterns, such as by using clustering and/or association to identify related groups of items within the set of observations. As further shown, the machine learning system may partition the set of observations into a training set220that includes a first subset of observations of the set of observations, and a test set225that includes a second subset of observations of the set of observations. The training set220may be used to train (e.g., fit, tune, and/or the like) the machine learning model, while the test set225may be used to evaluate a machine learning model that is trained using the training set220. 
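A minimal sketch of such a partition, assuming a random split with a configurable test fraction (both hypothetical choices rather than requirements of the disclosure), might look like the following:

import random

def partition(observations: list, test_fraction: float = 0.2, seed: int = 0):
    # Randomly assign observations to a training set and a test set,
    # e.g., an 80%/20% split as described above.
    shuffled = observations[:]
    random.Random(seed).shuffle(shuffled)
    split = int(len(shuffled) * (1.0 - test_fraction))
    return shuffled[:split], shuffled[split:]   # (training set, test set)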
For example, for supervised learning, the training set220may be used for initial model training using the first subset of observations, and the test set225may be used to test whether the trained model accurately predicts target variables in the second subset of observations. In some implementations, the machine learning system may partition the set of observations into the training set220and the test set225by including a first portion or a first percentage of the set of observations in the training set220(e.g., 75%, 80%, or 85%, among other examples) and including a second portion or a second percentage of the set of observations in the test set225(e.g., 25%, 20%, or 15%, among other examples). In some implementations, the machine learning system may randomly select observations to be included in the training set220and/or the test set225. As shown by reference number230, the machine learning system may train a machine learning model using the training set220. This training may include executing, by the machine learning system, a machine learning algorithm to determine a set of model parameters based on the training set220. In some implementations, the machine learning algorithm may include a regression algorithm (e.g., linear regression, logistic regression, and/or the like), which may include a regularized regression algorithm (e.g., Lasso regression, Ridge regression, Elastic-Net regression, and/or the like). Additionally, or alternatively, the machine learning algorithm may include a decision tree algorithm, which may include a tree ensemble algorithm (e.g., generated using bagging and/or boosting), a random forest algorithm, a boosted trees algorithm, and/or the like. A model parameter may include an attribute of a machine learning model that is learned from data input into the model (e.g., the training set220). For example, for a regression algorithm, a model parameter may include a regression coefficient (e.g., a weight). For a decision tree algorithm, a model parameter may include a decision tree split location, as an example. As shown by reference number235, the machine learning system may use one or more hyperparameter sets240to tune the machine learning model. A hyperparameter may include a structural parameter that controls execution of a machine learning algorithm by the machine learning system, such as a constraint applied to the machine learning algorithm. Unlike a model parameter, a hyperparameter is not learned from data input into the model. An example hyperparameter for a regularized regression algorithm includes a strength (e.g., a weight) of a penalty applied to a regression coefficient to mitigate overfitting of the machine learning model to the training set220. The penalty may be applied based on a size of a coefficient value (e.g., for Lasso regression, such as to penalize large coefficient values), may be applied based on a squared size of a coefficient value (e.g., for Ridge regression, such as to penalize large squared coefficient values), may be applied based on a ratio of the size and the squared size (e.g., for Elastic-Net regression), may be applied by setting one or more coefficient values to zero (e.g., for automatic feature selection), and/or the like. 
Example hyperparameters for a decision tree algorithm include a tree ensemble technique to be applied (e.g., bagging, boosting, a random forest algorithm, a boosted trees algorithm, and/or the like), a number of features to evaluate, a number of observations to use, a maximum depth of each decision tree (e.g., a number of branches permitted for the decision tree), a number of decision trees to include in a random forest algorithm, and/or the like. To train a machine learning model, the machine learning system may identify a set of machine learning algorithms to be trained (e.g., based on operator input that identifies the one or more machine learning algorithms, based on random selection of a set of machine learning algorithms, and/or the like), and may train the set of machine learning algorithms (e.g., independently for each machine learning algorithm in the set) using the training set220. The machine learning system may tune each machine learning algorithm using one or more hyperparameter sets240(e.g., based on operator input that identifies hyperparameter sets240to be used, based on randomly generating hyperparameter values, and/or the like). The machine learning system may train a particular machine learning model using a specific machine learning algorithm and a corresponding hyperparameter set240. In some implementations, the machine learning system may train multiple machine learning models to generate a set of model parameters for each machine learning model, where each machine learning model corresponds to a different combination of a machine learning algorithm and a hyperparameter set240for that machine learning algorithm. In some implementations, the machine learning system may perform cross-validation when training a machine learning model. Cross validation can be used to obtain a reliable estimate of machine learning model performance using only the training set220, and without using the test set225, such as by splitting the training set220into a number of groups (e.g., based on operator input that identifies the number of groups, based on randomly selecting a number of groups, and/or the like) and using those groups to estimate model performance. For example, using k-fold cross-validation, observations in the training set220may be split into k groups (e.g., in order or at random). For a training procedure, one group may be marked as a hold-out group, and the remaining groups may be marked as training groups. For the training procedure, the machine learning system may train a machine learning model on the training groups and then test the machine learning model on the hold-out group to generate a cross-validation score. The machine learning system may repeat this training procedure using different hold-out groups and different test groups to generate a cross-validation score for each training procedure. In some implementations, the machine learning system may independently train the machine learning model k times, with each individual group being used as a hold-out group once and being used as a training group k−1 times. The machine learning system may combine the cross-validation scores for each training procedure to generate an overall cross-validation score for the machine learning model. The overall cross-validation score may include, for example, an average cross-validation score (e.g., across all training procedures), a standard deviation across cross-validation scores, a standard error across cross-validation scores, and/or the like. 
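As a non-authoritative sketch of k-fold cross-validation with hyperparameter selection, assuming hypothetical train_fn and score_fn callables supplied by the caller, the procedure might be expressed as:

def k_fold_cross_validate(observations, k, train_fn, score_fn):
    # Split observations into k groups; each group serves once as the hold-out group.
    folds = [observations[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        hold_out = folds[i]
        training = [obs for j, fold in enumerate(folds) if j != i for obs in fold]
        model = train_fn(training)
        scores.append(score_fn(model, hold_out))
    return sum(scores) / len(scores)   # overall (average) cross-validation score

def select_hyperparameters(observations, hyperparameter_sets, k, make_train_fn, score_fn):
    # Pick the hyperparameter set with the best overall cross-validation score;
    # make_train_fn(hp) returns a training callable configured with that hyperparameter set.
    return max(hyperparameter_sets,
               key=lambda hp: k_fold_cross_validate(observations, k, make_train_fn(hp), score_fn))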
In some implementations, the machine learning system may perform cross-validation when training a machine learning model by splitting the training set into a number of groups (e.g., based on operator input that identifies the number of groups, based on randomly selecting a number of groups, and/or the like). The machine learning system may perform multiple training procedures and may generate a cross-validation score for each training procedure. The machine learning system may generate an overall cross-validation score for each hyperparameter set240associated with a particular machine learning algorithm. The machine learning system may compare the overall cross-validation scores for different hyperparameter sets240associated with the particular machine learning algorithm, and may select the hyperparameter set240with the best (e.g., highest accuracy, lowest error, closest to a desired threshold, and/or the like) overall cross-validation score for training the machine learning model. The machine learning system may then train the machine learning model using the selected hyperparameter set240, without cross-validation (e.g., using all of the data in the training set220without any hold-out groups), to generate a single machine learning model for a particular machine learning algorithm. The machine learning system may then test this machine learning model using the test set225to generate a performance score, such as a mean squared error (e.g., for regression), a mean absolute error (e.g., for regression), an area under the receiver operating characteristic curve (e.g., for classification), and/or the like. If the machine learning model performs adequately (e.g., with a performance score that satisfies a threshold), then the machine learning system may store that machine learning model as a trained machine learning model245to be used to analyze new observations, as described below in connection withFIG.3. In some implementations, the machine learning system may perform cross-validation, as described above, for multiple machine learning algorithms (e.g., independently), such as a regularized regression algorithm, different types of regularized regression algorithms, a decision tree algorithm, different types of decision tree algorithms, and/or the like. Based on performing cross-validation for multiple machine learning algorithms, the machine learning system may generate multiple machine learning models, where each machine learning model has the best overall cross-validation score for a corresponding machine learning algorithm. The machine learning system may then train each machine learning model using the entire training set220(e.g., without cross-validation), and may test each machine learning model using the test set225to generate a corresponding performance score for each machine learning model. The machine learning system may compare the performance scores for each machine learning model, and may select the machine learning model with the best (e.g., highest accuracy, lowest error, closest to a desired threshold, and/or the like) performance score as the trained machine learning model245. As indicated above,FIG.2is provided as an example. Other examples may differ from what is described in connection withFIG.2. For example, the machine learning model may be trained using a different process than what is described in connection withFIG.2. 
Additionally, or alternatively, the machine learning model may employ a different machine learning algorithm than what is described in connection withFIG.2, such as a Bayesian estimation algorithm, a k-nearest neighbor algorithm, an a priori algorithm, a k-means algorithm, a support vector machine algorithm, a neural network algorithm (e.g., a convolutional neural network algorithm), a deep learning algorithm, and/or the like. FIG.3is a diagram illustrating an example300of applying a trained machine learning model to a new observation. The new observation may be input to a machine learning system that stores a trained machine learning model305. In some implementations, the trained machine learning model305may be the trained machine learning model245described above in connection withFIG.2. The machine learning system may include a computing device, a server, a cloud computing environment, and/or the like, such as the host platform. As shown by reference number310, the machine learning system may receive a new observation (or a set of new observations), and may input the new observation to the machine learning model305. As shown, the new observation may include a first feature of a virtual firewall (VF) characteristic, a second feature of a hypervisor characteristic, a third feature of a host device characteristic, and so on, as an example. The machine learning system may apply the trained machine learning model305to the new observation to generate an output (e.g., a result). The type of output may depend on the type of machine learning model and/or the type of machine learning task being performed. For example, the output may include a predicted (e.g., estimated) value of target variable (e.g., a value within a continuous range of values, a discrete value, a label, a class, a classification, and/or the like), such as when supervised learning is employed. Additionally, or alternatively, the output may include information that identifies a cluster to which the new observation belongs, information that indicates a degree of similarity between the new observation and one or more prior observations (e.g., which may have previously been new observations input to the machine learning model and/or observations used to train the machine learning model), and/or the like, such as when unsupervised learning is employed. In some implementations, the trained machine learning model305may predict a set of configuration settings (shown as Setting Y) for the target variable of Configuration Setting for the new observation, as shown by reference number315. Based on this prediction, the machine learning system may perform an automated action and/or may cause an automated action to be performed (e.g., by instructing another device to perform the automated action), such as tuning a virtual firewall associated with the new observation based on the configuration settings determined for the new observation. In some implementations, the trained machine learning model305may classify (e.g., cluster) the new observation in a cluster associated with a type of virtual firewall, as shown by reference number320. The observations within a cluster may have a threshold degree of similarity. Based on classifying the new observation in the cluster, the machine learning system may provide a recommendation, such as recommending the configuration settings to be used to tune the virtual firewall and/or the like. 
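Purely as an illustration of applying a trained model to a new observation, the following sketch uses a hypothetical similarity-based stand-in for the trained model; it is not the specific model described above:

class LookupModel:
    # Illustrative stand-in for a trained model: it "predicts" the configuration
    # settings associated with the most similar stored observation.
    def __init__(self, observations: list[tuple[dict, dict]]):
        self.observations = observations   # (feature set, configuration settings) pairs

    def predict(self, new_observation: dict) -> dict:
        def similarity(features: dict) -> int:
            return sum(1 for key, value in features.items()
                       if new_observation.get(key) == value)
        _, settings = max(self.observations, key=lambda pair: similarity(pair[0]))
        return settings

model = LookupModel([
    ({"virtual_firewall": "VF1", "hypervisor": "Type 1", "host_device": "x86 server"},
     {"virtual_cpus": 4, "dynamic_memory": False}),
    ({"virtual_firewall": "VF2", "hypervisor": "Type 2", "host_device": "Linux server"},
     {"virtual_cpus": 2, "dynamic_memory": True}),
])
recommended = model.predict({"virtual_firewall": "VF1", "hypervisor": "Type 1",
                             "host_device": "x86 server"})   # -> settings used for tuning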
Additionally, or alternatively, the machine learning system may perform an automated action and/or may cause an automated action to be performed (e.g., by instructing another device to perform the automated action), such as deploying the virtual firewall onto a particular host device. In this way, the machine learning system may apply a rigorous and automated process to determine configuration settings for tuning a virtual firewall. The machine learning system enables recognition and/or identification of tens, hundreds, thousands, or millions of features and/or feature values for tens, hundreds, thousands, or millions of observations, thereby increasing an accuracy and consistency of tuning a virtual firewall relative to requiring computing resources to be allocated for operators to manually determine configuration settings for tuning a virtual firewall using the features or feature values. As indicated above,FIG.3is provided as an example. Other examples may differ from what is described in connection withFIG.3. FIG.4is a diagram of an example environment400in which systems and/or methods described herein may be implemented. As shown inFIG.4, environment400may include an endpoint device405, a host platform410implemented within cloud computing environment420, and a network425. Devices of environment400may interconnect via wired connections, wireless connections, or a combination of wired and wireless connections. Endpoint device405includes one or more devices capable of receiving and/or providing information over a network (e.g., network425), and/or capable of generating, storing, and/or processing information received and/or provided over the network. For example, endpoint device405may include a computing device, such as a laptop computer, a tablet computer, a handheld computer, a desktop computer, a mobile phone (e.g., a smart phone, a radiotelephone, etc.), a personal digital assistant, a network device (e.g., a router, a gateway, a firewall, a hub, a bridge, etc.), a telephone, or a similar device. Host platform410includes one or more computing resources assigned to support and/or automatically tune a virtual firewall. For example, host platform410may be a platform implemented by cloud computing environment420that may automatically tune a virtual firewall. In some implementations, host platform410is implemented by computing resources415of cloud computing environment420. Host platform410may include a server device or a group of server devices. In some implementations, host platform410may be hosted in cloud computing environment420. Notably, while implementations described herein may describe host platform410as being hosted in cloud computing environment420, in some implementations, host platform410may be non-cloud-based or may be partially cloud-based. Cloud computing environment420includes an environment that delivers computing as a service, whereby shared resources, services, and/or the like may be provided to host platform410and/or endpoint device405. Cloud computing environment420may provide computation, software, data access, storage, and/or other services that do not require end-user knowledge of a physical location and configuration of a system and/or a device that delivers the services. As shown, cloud computing environment420may include host platform410and computing resource415. Computing resource415includes one or more personal computers, workstation computers, server devices, or another type of computation and/or communication device. 
In some implementations, computing resource415may host host platform410. The cloud resources may include compute instances executing in computing resource415, storage devices provided in computing resource415, data transfer devices provided by computing resource415, and/or the like. In some implementations, computing resource415may communicate with other computing resources415via wired connections, wireless connections, or a combination of wired and wireless connections. As further shown inFIG.4, computing resource415may include a group of cloud resources, such as one or more applications (“APPs”)415-1, one or more virtual machines (“VMs”)415-2, virtualized storage (“VSs”)415-3, one or more hypervisors (“HYPs”)415-4, or the like. Application415-1includes one or more software applications that may be provided to or accessed by endpoint device405. Application415-1may eliminate a need to install and execute the software applications on endpoint device405. For example, application415-1may include software associated with host platform410and/or any other software capable of being provided via cloud computing environment420. In some implementations, one application415-1may send/receive information to/from one or more other applications415-1, via virtual machine415-2. Virtual machine415-2includes a software implementation of a machine (e.g., a computer) that executes programs like a physical machine. Virtual machine415-2may be either a system virtual machine or a process virtual machine, depending upon use and degree of correspondence to any real machine by virtual machine415-2. A system virtual machine may provide a complete system platform that supports execution of a complete operating system (“OS”). A process virtual machine may execute a single program and may support a single process. In some implementations, virtual machine415-2may execute on behalf of a user (e.g., endpoint device405), and may manage infrastructure of cloud computing environment420, such as data management, synchronization, or long-duration data transfers. Virtualized storage415-3includes one or more storage systems and/or one or more devices that use virtualization techniques within the storage systems or devices of computing resource415. In some implementations, within the context of a storage system, types of virtualizations may include block virtualization and file virtualization. Block virtualization may refer to abstraction (or separation) of logical storage from physical storage so that the storage system may be accessed without regard to physical storage or heterogeneous structure. The separation may permit administrators of the storage system flexibility in how the administrators manage storage for end users. File virtualization may eliminate dependencies between data accessed at a file level and a location where files are physically stored. This may enable optimization of storage use, server consolidation, and/or performance of non-disruptive file migrations. Hypervisor415-4provides hardware virtualization techniques that allow multiple operating systems (e.g., “guest operating systems”) to execute concurrently on a host computer, such as computing resource415. Hypervisor415-4may present a virtual operating platform to the “guest operating systems” and may manage the execution of the guest operating systems. Multiple instances of a variety of operating systems may share virtualized hardware resources. Network425includes one or more wired and/or wireless networks. 
For example, network425may include a cellular network (e.g., a long-term evolution (LTE) network, a code division multiple access (CDMA) network, a 3G network, a 4G network, a 5G network, another type of next generation network, and/or the like), a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a telephone network (e.g., the Public Switched Telephone Network (PSTN)), a private network, an ad hoc network, an intranet, the Internet, a fiber optic-based network, a cloud computing network, and/or the like, and/or a combination of these or other types of networks. The number and arrangement of devices and networks shown inFIG.4are provided as one or more examples. In practice, there may be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or differently arranged devices and/or networks than those shown inFIG.4. Furthermore, two or more devices shown inFIG.4may be implemented within a single device, or a single device shown inFIG.4may be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) of environment400may perform one or more functions described as being performed by another set of devices of environment400. FIG.5is a diagram of example components of a device500. Device500may correspond to endpoint device405, host platform410, and/or computing resource415. In some implementations, endpoint device405, host platform410, and/or computing resource415may include one or more devices500and/or one or more components of device500. As shown inFIG.5, device500may include a bus510, a processor520, a memory530, a storage component540, an input component550, an output component560, and a communication interface570. Bus510includes a component that permits communication among multiple components of device500. Processor520is implemented in hardware, firmware, and/or a combination of hardware and software. Processor520takes the form of a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a microprocessor, a microcontroller, a digital signal processor (DSP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or another type of processing component. In some implementations, processor520includes one or more processors capable of being programmed to perform a function. Memory530includes a random access memory (RAM), a read only memory (ROM), and/or another type of dynamic or static storage device (e.g., a flash memory, a magnetic memory, and/or an optical memory) that stores information and/or instructions for use by processor520. Storage component540stores information and/or software related to the operation and use of device500. For example, storage component540may include a hard disk (e.g., a magnetic disk, an optical disk, and/or a magneto-optic disk), a solid state drive (SSD), a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a cartridge, a magnetic tape, and/or another type of non-transitory computer-readable medium, along with a corresponding drive. Input component550includes a component that permits device500to receive information, such as via user input (e.g., a touch screen display, a keyboard, a keypad, a mouse, a button, a switch, and/or a microphone). 
Additionally, or alternatively, input component550may include a component for determining location (e.g., a global positioning system (GPS) component) and/or a sensor (e.g., an accelerometer, a gyroscope, an actuator, another type of positional or environmental sensor, and/or the like). Output component560includes a component that provides output information from device500(via, e.g., a display, a speaker, a haptic feedback component, an audio or visual indicator, and/or the like). Communication interface570includes a transceiver-like component (e.g., a transceiver, a separate receiver, a separate transmitter, and/or the like) that enables device500to communicate with other devices, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections. Communication interface570may permit device500to receive information from another device and/or provide information to another device. For example, communication interface570may include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, a Wi-Fi interface, a cellular network interface, and/or the like. Device500may perform one or more processes described herein. Device500may perform these processes based on processor520executing software instructions stored by a non-transitory computer-readable medium, such as memory530and/or storage component540. As used herein, the term “computer-readable medium” refers to a non-transitory memory device. A memory device includes memory space within a single physical storage device or memory space spread across multiple physical storage devices. Software instructions may be read into memory530and/or storage component540from another computer-readable medium or from another device via communication interface570. When executed, software instructions stored in memory530and/or storage component540may cause processor520to perform one or more processes described herein. Additionally, or alternatively, hardware circuitry may be used in place of or in combination with software instructions to perform one or more processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software. The number and arrangement of components shown inFIG.5are provided as an example. In practice, device500may include additional components, fewer components, different components, or differently arranged components than those shown inFIG.5. Additionally, or alternatively, a set of components (e.g., one or more components) of device500may perform one or more functions described as being performed by another set of components of device500. FIG.6is a flow chart of an example process600for automatically tuning a virtual firewall. In some implementations, one or more process blocks ofFIG.6may be performed by a device (e.g., host platform410). In some implementations, one or more process blocks ofFIG.6may be performed by another device or a group of devices separate from or including the device, such as an endpoint device (e.g., endpoint device405), and/or the like. As shown inFIG.6, process600may include receiving an input associated with deploying a virtual firewall on a computing device (block610). 
For example, the device (e.g., using computing resource415, processor520, memory530, storage component540, input component550, output component560, communication interface570, and/or the like) may receive an input associated with deploying a virtual firewall on a computing device, as described above. As further shown inFIG.6, process600may include determining a first set of characteristics associated with the virtual firewall and a second set of characteristics associated with a hypervisor associated with the computing device (block620). For example, the device (e.g., using computing resource415, processor520, memory530, storage component540, input component550, output component560, communication interface570, and/or the like) may determine a first set of characteristics associated with the virtual firewall and a second set of characteristics associated with a hypervisor associated with the computing device, as described above. As further shown inFIG.6, process600may include automatically tuning the virtual firewall based on the first set of characteristics and the second set of characteristics (block630). For example, the device (e.g., using computing resource415, processor520, memory530, storage component540, input component550, output component560, communication interface570, and/or the like) may automatically tune the virtual firewall based on the first set of characteristics and the second set of characteristics, as described above. As further shown inFIG.6, process600may include deploying the virtual firewall after tuning the virtual firewall (block640). For example, the device (e.g., using computing resource415, processor520, memory530, storage component540, input component550, output component560, communication interface570, and/or the like) may deploy the virtual firewall after tuning the virtual firewall, as described above. Process600may include additional implementations, such as any single implementation or any combination of implementations described below and/or in connection with one or more other processes described elsewhere herein. In a first implementation, automatically tuning the virtual firewall comprises: modifying a hypervisor setting based on the first set of characteristics and the second set of characteristics. In a second implementation, alone or in combination with the first implementation, automatically tuning the virtual firewall comprises: modifying a virtual machine setting based on the first set of characteristics and the second set of characteristics. In a third implementation, alone or in combination with one or more of the first and second implementations, the virtual firewall is a first virtual firewall, the method further comprising: determining to deploy a second virtual firewall; determining that a priority associated with the first virtual firewall is a higher priority relative to a priority associated with the second virtual firewall; and deploying the second virtual firewall based on the first set of characteristics and the second set of characteristics, based on the priority associated with the first virtual firewall being the higher priority. In a fourth implementation, alone or in combination with one or more of the first through third implementations, receiving the input comprises: receiving, via a user interface, an input indicating that the device is to automatically tune the virtual firewall.
In a fifth implementation, alone or in combination with one or more of the first through fourth implementations, process600includes performing a resource availability check to determine whether the virtual firewall is able to be deployed on the computing device, wherein the virtual firewall is deployed based on determining whether the virtual firewall is able to be deployed on the computing device. In a sixth implementation, alone or in combination with one or more of the first through fifth implementations, the device determines that the virtual firewall is not able to be deployed on the computing device, and the method further comprises: providing information identifying a group of settings, associated with the computing device, to be modified to enable the virtual firewall to be deployed on the computing device; determining that the group of settings has been modified; and deploying the virtual firewall based on the modified group of settings, wherein the virtual firewall is deployed based on performing the resource availability check. AlthoughFIG.6shows example blocks of process600, in some implementations, process600may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.6. Additionally, or alternatively, two or more of the blocks of process600may be performed in parallel. FIG.7is a flow chart of an example process700for automatically tuning a virtual firewall. In some implementations, one or more process blocks ofFIG.7may be performed by a device (e.g., host platform410). In some implementations, one or more process blocks ofFIG.7may be performed by another device or a group of devices separate from or including the device, such as an endpoint device (e.g., endpoint device405), and/or the like. As shown inFIG.7, process700may include receiving an input associated with deploying a virtual firewall (block710). For example, the device (e.g., using computing resource415, processor520, memory530, storage component540, input component550, output component560, communication interface570, and/or the like) may receive an input associated with deploying a virtual firewall, as described above. As further shown inFIG.7, process700may include performing a process to tune the virtual firewall based on the input and to configure a hypervisor associated with the virtual firewall based on one or more characteristics of the virtual firewall (block720). For example, the device (e.g., using computing resource415, processor520, memory530, storage component540, input component550, output component560, communication interface570, and/or the like) may perform a process to tune the virtual firewall based on the input, as described above. As further shown inFIG.7, process700may include deploying the virtual firewall after tuning the virtual firewall (block730). For example, the device (e.g., using computing resource415, processor520, memory530, storage component540, input component550, output component560, communication interface570, and/or the like) may deploy the virtual firewall after tuning the virtual firewall, as described above. Process700may include additional implementations, such as any single implementation or any combination of implementations described below and/or in connection with one or more other processes described elsewhere herein.
In a first implementation, process700includes determining whether non-uniform memory access (NUMA) socket/hyperthreading is enabled, wherein the virtual firewall is deployed further based on whether the NUMA socket/hyperthreading is enabled. In a second implementation, alone or in combination with the first implementation, process700includes configuring a virtual machine setting associated with the virtual firewall based on the one or more characteristics of the virtual firewall. In a third implementation, alone or in combination with one or more of the first and second implementations, process700includes determining that a plurality of virtual firewalls is to be deployed based on the input, wherein the plurality of virtual firewalls includes the virtual firewall; and enabling a user to set a priority setting for each virtual firewall, of the plurality of virtual firewalls, based on the input indicating that the plurality of virtual firewalls is to be deployed. In a fourth implementation, alone or in combination with one or more of the first through third implementations, process700includes determining that a priority setting associated with the virtual firewall is set to a highest priority setting relative to priority settings associated with other virtual firewalls included in the plurality of virtual firewalls, wherein the process to tune the virtual firewall is performed based on the priority setting associated with the virtual firewall being set to the highest priority setting. In a fifth implementation, alone or in combination with one or more of the first through fourth implementations, process700includes determining a failure of a resource availability check associated with deploying the virtual firewall; and modifying a priority setting associated with the virtual machine based on the failure of the resource availability check, wherein the virtual firewall is to be deployed based on the modified priority setting. In a sixth implementation, alone or in combination with one or more of the first through fifth implementations, process700includes causing the virtual firewall to utilize a multilayer virtual switch that enables virtual networking of virtual machines. AlthoughFIG.7shows example blocks of process700, in some implementations, process700may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.7. Additionally, or alternatively, two or more of the blocks of process700may be performed in parallel. FIG.8is a flow chart of an example process800for automatically tuning a virtual firewall. In some implementations, one or more process blocks ofFIG.8may be performed by a device (e.g., host platform410). In some implementations, one or more process blocks ofFIG.8may be performed by another device or a group of devices separate from or including the device, such as an endpoint device (e.g., endpoint device405), and/or the like. As shown inFIG.8, process800may include receiving an input associated with deploying a virtual firewall (block810). For example, the device (e.g., using computing resource415, processor520, memory530, storage component540, input component550, output component560, communication interface570, and/or the like) may receive an input associated with deploying a virtual firewall, as described above. As further shown inFIG.8, process800may include determining a type of the virtual firewall based on the input (block820).
For example, the device (e.g., using computing resource415, processor520, memory530, storage component540, input component550, output component560, communication interface570, and/or the like) may determine a type of the virtual firewall based on the input, as described above. As further shown inFIG.8, process800may include determining a configuration setting associated with the virtual firewall based on the type of the virtual firewall (block830). For example, the device (e.g., using computing resource415, processor520, memory530, storage component540, input component550, output component560, communication interface570, and/or the like) may determine a configuration setting associated with the virtual firewall based on the type of the virtual firewall, as described above. As further shown inFIG.8, process800may include automatically tuning the virtual firewall based on the configuration setting (block840). For example, the device (e.g., using computing resource415, processor520, memory530, storage component540, input component550, output component560, communication interface570, and/or the like) may automatically tune the virtual firewall based on the configuration setting, as described above. As further shown inFIG.8, process800may include deploying the virtual firewall after tuning the virtual firewall (block850). For example, the device (e.g., using computing resource415, processor520, memory530, storage component540, input component550, output component560, communication interface570, and/or the like) may deploy the virtual firewall after tuning the virtual firewall, as described above. Process800may include additional implementations, such as any single implementation or any combination of implementations described below and/or in connection with one or more other processes described elsewhere herein. In a first implementation, process800includes determining that a network interface card port associated with the virtual firewall is associated with a non-uniform memory access (NUMA) node associated with the virtual firewall, wherein the virtual firewall is to be deployed further based on the network interface card port being associated with the NUMA node associated with the virtual firewall. In a second implementation, alone or in combination with the first implementation, process800includes disabling hyper-threading functionality based on the type of the virtual firewall. In a third implementation, alone or in combination with one or more of the first and second implementations, process800includes causing a physical network interface card and the virtual firewall to be attached to a same non-uniform memory access node. In a fourth implementation, alone or in combination with one or more of the first through third implementations, process800includes utilizing machine learning to determine the configuration setting. In a fifth implementation, alone or in combination with one or more of the first through fourth implementations, process800includes identifying a non-uniform memory access (NUMA) node associated with the virtual firewall; and causing the virtual firewall to be associated with a virtual central processing unit (vCPU) associated with the NUMA node. AlthoughFIG.8shows example blocks of process800, in some implementations, process800may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.8. Additionally, or alternatively, two or more of the blocks of process800may be performed in parallel.
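The following is a minimal sketch of the flow of process800under stated assumptions: the firewall types, the type-to-settings table, and the tune/deploy helpers are hypothetical placeholders rather than a definitive implementation, and the NUMA and hyper-threading values simply echo the kinds of settings discussed above.

from dataclasses import dataclass

@dataclass
class ConfigSetting:
    vcpu_count: int
    numa_node: int
    hyperthreading: bool

# Hypothetical mapping from firewall type to a configuration setting (block 830).
SETTINGS_BY_TYPE = {
    "throughput-optimized": ConfigSetting(vcpu_count=8, numa_node=0, hyperthreading=False),
    "latency-optimized": ConfigSetting(vcpu_count=4, numa_node=0, hyperthreading=True),
}

def tune(setting: ConfigSetting) -> None:
    # Placeholder for tuning actions such as pinning vCPUs to the NUMA node
    # that also hosts the physical network interface card (block 840).
    print(f"tuning with {setting}")

def deploy(setting: ConfigSetting) -> None:
    # Placeholder for the actual deployment onto the host device (block 850).
    print(f"deploying with {setting}")

def process_800(deployment_input: dict) -> ConfigSetting:
    firewall_type = deployment_input["firewall_type"]   # block 820
    setting = SETTINGS_BY_TYPE[firewall_type]            # block 830
    tune(setting)                                        # block 840
    deploy(setting)                                      # block 850
    return setting

process_800({"firewall_type": "latency-optimized"})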
The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the implementations to the precise forms disclosed. Modifications and variations may be made in light of the above disclosure or may be acquired from practice of the implementations. As used herein, the term “component” is intended to be broadly construed as hardware, firmware, and/or a combination of hardware and software. Certain user interfaces have been described herein and/or shown in the figures. A user interface may include a graphical user interface, a non-graphical user interface, a text-based user interface, and/or the like. A user interface may provide information for display. In some implementations, a user may interact with the information, such as by providing input via an input component of a device that provides the user interface for display. In some implementations, a user interface may be configurable by a device and/or a user (e.g., a user may change the size of the user interface, information provided via the user interface, a position of information provided via the user interface, etc.). Additionally, or alternatively, a user interface may be pre-configured to a standard configuration, a specific configuration based on a type of device on which the user interface is displayed, and/or a set of configurations based on capabilities and/or specifications associated with a device on which the user interface is displayed. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware, firmware, or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code—it being understood that software and hardware can be designed to implement the systems and/or methods based on the description herein. Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various implementations includes each dependent claim in combination with every other claim in the claim set. No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, etc.), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. 
Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”).
In the drawings, like reference numbers generally indicate identical or similar elements. Additionally, generally, the left-most digit(s) of a reference number identifies the drawing in which the reference number first appears. DETAILED DESCRIPTION Provided herein are system, apparatus, device, method and/or computer program product embodiments, and/or combinations and sub-combinations thereof, for managing contact passlists and blocklists across multiple communication channels using customer relationship management (CRM) tools. Traditionally, CRM implementations have been built around customers as records, with the goal of gathering up and storing these contacts in furtherance of business goals (e.g., as leads for future business). However, this approach runs the risk of burying important customer and potential customer communications. These communications, if promptly addressed, could result in converting new or additional business. However, in a high traffic environment for communications such as sales or service, ensuring that communications from customers or potential customers are promptly reviewed and addressed is not necessarily a straightforward task. And while some tools have been developed in the context of email communications to prioritize certain emails, these tools lack functionality that allows for CRM integration and cross-channel communications. FIG.1illustrates a communication architecture100, in accordance with an embodiment. Various communication channels102are provided by which customers or potential customers may reach out to a business (e.g., via contact with sales or service relationship managers). These include, by way of non-limiting example, emails102a, voice calls102b, web forms102c, social postings102d(e.g., communications via social media platforms such as Facebook, Twitter, YouTube, Instagram, LinkedIn, Google+, Sina Weibo, etc.), and real-time channels102e(e.g., live agent chats, SMS, Facebook messages, WhatsApp, WeChat, Apple Business Chat, etc.). Channel communication stacks104provide software code configured to interface with each of channels102as necessary (e.g., via an application programming interface (API) for the channel). Channel communication stacks104are configured to read a variety of data fields for each type of communication (often unique to each channel102) and provide them to message handler108of CRM system106. Channel communication stacks104are connected during operation to each communication channel102source, such as an email server, social media API, or SMS service, by way of non-limiting example. Table 1 below illustrates example information that can be captured for an incoming communication by a possible contact for different channels102, in addition to the incoming communication itself:

TABLE 1
Channel Type            Information Captured for a Contact
Email                   Email address, last/first names, title/company from user signature
Social media posting    User name, user avatar
Facebook Messenger      User name, user avatar
WeChat                  User name, user avatar
Phone call              Phone number
Text (SMS)              Phone number
WhatsApp                Phone number, profile names, user avatar

One skilled in the relevant arts will recognize that these exemplary information fields for channels102are not limiting, and that additional information can be captured from a variety of channels102(e.g., header information) or derived (e.g., information in a signature block) from the communication itself.
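As a concrete illustration of the per-channel capture summarized in Table 1, the following is a minimal sketch of a normalized contact structure that a channel communication stack104might populate before handing the communication to message handler108. The field names and the raw-payload keys are hypothetical; an actual implementation would read these values from each channel's API.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Contact:
    channel: str
    email: Optional[str] = None
    phone: Optional[str] = None
    user_name: Optional[str] = None
    avatar_url: Optional[str] = None
    first_name: Optional[str] = None
    last_name: Optional[str] = None

def capture_contact(channel: str, raw: dict) -> Contact:
    # Populate only the fields that the given channel exposes (see Table 1).
    if channel == "email":
        return Contact(channel=channel, email=raw.get("from_address"),
                       first_name=raw.get("first_name"), last_name=raw.get("last_name"))
    if channel in ("phone", "sms"):
        return Contact(channel=channel, phone=raw.get("number"))
    if channel in ("social_posting", "facebook_messenger", "wechat"):
        return Contact(channel=channel, user_name=raw.get("user_name"),
                       avatar_url=raw.get("avatar"))
    if channel == "whatsapp":
        return Contact(channel=channel, phone=raw.get("number"),
                       user_name=raw.get("profile_name"), avatar_url=raw.get("avatar"))
    return Contact(channel=channel)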
With this information about the contact and the communication provided to message handler108, a message associated with the communication can be stored to messages110. With messages110, it is possible for a message reader (e.g., an email inbox application) to read messages110and present them for display to an end user. Waiting room handler112is able to receive the incoming communication and the captured information for the contact into a "waiting room," from which it can be determined whether the contact should be added to a passlist or a blocklist, by way of non-limiting example. These contacts can be stored as contacts114in CRM system106, either separately or together as needed. In one embodiment, contacts added to the passlist are added as CRM customer records, and a separate blocklist is maintained. A blocklist may also be termed an ignore list, or other similar name, referencing that communications from contacts added to the block list are omitted (ignored, or blocked). As illustrated further below, a messaging application (e.g., email client, or other message viewer) can cross-reference the CRM customer records of contacts114and messages110in order to prioritize messages110that are from CRM customer records in contacts114. For example, messages110may be prioritized by only displaying those messages that are from customers with CRM customer records in contacts114. In effect, when a contact communicates across any channel102, their messages (from messages110) are shown in the recipient's view of the messages (such as an inbox) if the contact has an established corresponding CRM customer record in contacts114. However, if the contact is not an existing contact, they are put into a "waiting room" by waiting room handler112, where the contact can be approved (passlisted) or skipped (blocklisted). When the contact is either approved or skipped, it is removed from the waiting room and handled as passlisted or blocklisted. When passlisted, a CRM customer record is created for them so that future communications across any channel from that contact (for which there is a match) are automatically permitted. As a result, users (e.g., sales and support relationship managers) can enjoy an inbox view (either for individual channels102a-eor a unified inbox across channels102) that shows only communications from passlisted contacts that are high value (i.e., they have corresponding CRM customer records in contacts114). When blocklisted, the contact is placed in the blocklist where communications from that contact are not delivered to the inbox. FIG.2is a flowchart200illustrating steps by which a waiting room handler, such as waiting room handler112ofFIG.1, handles communications from senders, in accordance with an embodiment. Initially, if an incoming communication is from a sender that has a matching CRM customer record, the incoming communication is passed along directly to the inbox at step202. This can be handled by, for example, cross referencing the incoming communication with a list of CRM customer records and only showing those communications that have such correspondence to an existing CRM customer record. In another example, the incoming communications may be included in an approved list only if the sender matches a CRM customer record. One skilled in the relevant arts will appreciate that the manner in which the sender is verified against the CRM customer records may vary by application, and these approaches are non-limiting.
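As one possible sketch of the cross-referencing performed at step202, and reusing the hypothetical Contact structure from the previous sketch, an incoming communication might be matched to existing CRM customer records on any overlapping identifier. The record fields and routing containers below are illustrative assumptions, not a prescribed design.

def matches_crm_record(contact, crm_records) -> bool:
    # A match on any overlapping identifier (captured on any channel) is
    # treated as correspondence to an existing CRM customer record.
    for record in crm_records:
        if contact.email and contact.email == record.get("email"):
            return True
        if contact.phone and contact.phone == record.get("phone"):
            return True
    return False

def route_incoming(contact, communication, crm_records, inbox, waiting_room):
    if matches_crm_record(contact, crm_records):
        inbox.append(communication)                    # step 202: deliver to the inbox
    else:
        waiting_room.append((contact, communication))  # step 204: hold for review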
In the case of incoming communications from an unknown sender, the communication is added to a waiting room list at step204. A user may display the waiting room list from which they may select whether to pass or block a given sender. The waiting room handler determines, at step206, whether the sender has been passlisted or blocklisted. If the sender is passlisted (e.g., by a user clicking a checkmark or thumbs-up symbol within the waiting room list display), the sender contact information is captured in a new CRM customer record at step208, and the inbox is updated to include communications (such as the incoming communication) from the new CRM customer record at step210. The sender is then removed from the waiting room list at step214. Alternatively, if the sender is blocklisted (e.g., by a user clicking an ‘X’ or thumbs-down symbol within the waiting room list display) at step206, the sender contact information is added to a blocklist contact list at step212. And, here as well, the sender is then removed from the waiting room list at step214. With a sender passlisted following steps206,208, and210, the new CRM customer record that is created for the sender simplifies further communication with that sender. In the event that an additional communication is received from that sender, it is matched to the CRM customer record at a further iteration of step202and automatically placed in the inbox. If the additional communication is received via a different channel from the original communication, overlapping information from Table 1 that is present in the CRM customer record can be used to determine that the new sender is the same as the previous sender for which the CRM customer record was created. For example, if the original communication that was passlisted and used to create the new CRM customer record was an email, a cell phone number might be retrieved from the sender's signature block in the email and included in the new CRM customer record. Subsequently, if a communication is received via SMS from that number, the SMS communication can be automatically moved to the inbox at step202based on the association with the CRM customer record bearing that number (as previously obtained from the email signature block). When a communication is placed in the waiting room, a user may call up a user interface (UI) to view unknown senders and their communication, and passlist or blocklist the individual senders, as discussed above.FIG.3is a user interface300for a waiting room, in accordance with an embodiment. In an embodiment, a side panel302for a communication channel (corresponding to, e.g., email102aofFIG.1) includes a link to a waiting room for that channel, and one or more message folders for viewing communications (e.g., inbox, sent, trash, etc.). In the example UI300, waiting room list display304is shown, containing communications304a,304b, and304cwhich have been received for the given channel. One skilled in the relevant arts will appreciate that a unified channel waiting room can be shown instead, with communications304a-cbeing drawn from across multiple channels102ofFIG.1. For a given communication, such as communication304c, UI300shows information useful for the purpose of determining whether to passlist or blocklist the given sender. The information shown varies depending on the channel from which the message was received, and can include information such as shown in Table 1 above. 
In the example of304c, the sender's name and email address are shown, sourced from an email communication corresponding to communication304c. Additionally, a user of UI300may optionally expand an interface element corresponding to communication304cin order to view the underlying communication (e.g., an email message). Upon reviewing the information for a given communication304a-c, the user may then select whether to passlist or blocklist the sender by clicking the checkmark (or other similar interface element, e.g., a thumbs up) to passlist the sender (corresponding to, e.g., elements206,208, and210ofFIG.2), or clicking the ‘X’ (or other similar interface element, e.g., a thumbs down) to blocklist the sender (corresponding to, e.g., elements206and212ofFIG.2). UI300additionally includes access to a blocked senders list306, in accordance with an embodiment. In exemplary UI300, a button is used to invoke blocked senders list306, and when selected can display an additional UI element listing all senders that have been blocklisted. From there, a user may select blocked senders for removal (unblocklisting) from the blocked senders list, which would add the senders to the passlist. In accordance with an embodiment, the addition of a sender to the passlist is done by creating a new CRM customer record corresponding to the contact (i.e., the passlist is the inclusion of the contact in the CRM customer record set). Further details are discussed below with respect toFIG.5. UI300also includes filter options308(illustrated as "VIP keywords"), in accordance with an embodiment. In exemplary UI300, a button is used to invoke a menu in which keywords can be entered to allow matching communications to be automatically passlisted. One skilled in the relevant arts will appreciate that other filtering criteria can be provided and used to automatically passlist communications. Further details are discussed below with respect toFIG.6. FIG.4is a user interface400for an inbox, in accordance with an embodiment. As shown in UI400, a side bar402allows for the selection of various folders (e.g., inbox, sent, trash) associated with a communication channel102ofFIG.1. Additionally, the side bar may allow for the selection of a "unified" inbox that shows received communications across all channels102, in accordance with an embodiment. UI400includes a communication list404, which shows available communications. The communications depicted in communication list404are each passlisted, and correspond to a CRM customer record within an underlying CRM system such as CRM system106ofFIG.1. In an embodiment, any communications received from a sender that does not correspond to a CRM customer record are omitted from communication list404. When a communication is selected from communication list404, a copy of the communication is shown in UI400for the purpose of reading the communication, responding to the communication, reading related communications (e.g., a threaded view), and carrying out other functions related to the selected communication. FIG.5is a flowchart500illustrating steps by which a contact may be removed from a sender blocklist, in accordance with an embodiment. As previously described, a user may call the functionality of flowchart500by requesting to view the sender blocklist, such as by the button corresponding to blocked senders list306ofFIG.3. In response, the sender blocklist is displayed at step502.
This blocklist includes all senders that have been previously blocklisted such as, e.g., by the process described inFIG.2at elements206and212. One skilled in the relevant art will appreciate that this list can be reviewed by a number of mechanisms, including by sorting the senders on any available field. Additionally, as shown at step502, a search function may be displayed to allow a user to search through the blocklist. If a user selects a sender from the blocklist for removal, such request is received at step504. Consequently, at step506, the sender is passlisted and thereby a corresponding new CRM customer record for the sender is created. At step508, the inbox is updated to include any communications corresponding to this new CRM customer record, consistent with passlisting as in steps206,208, and210ofFIG.2. While in the foregoing approach removing the sender from the blocklist includes adding the sender to the passlist, alternative approaches may be included. For example, in an alternative embodiment, the sender may be removed from the blocklist and instead added back to the waiting room list. From here, the user may return to the waiting room UI and passlist or blocklist the sender as per usual (e.g., step206ofFIG.2). However, by automatically adding the sender to the passlist upon removal from the blocklist, this step can be saved. As previously discussed with reference to button308ofFIG.3, adding a sender to the passlist can be automated by using, for example, filters.FIG.6is a flowchart600illustrating steps by which a contact may be passlisted using a filter, in accordance with an embodiment. One skilled in the relevant arts will appreciate that a filter may include any approach that allows for automatically matching a communication from a sender to filter criteria for the purpose of passlisting, such as, for example, rule-based processing, keyword matching, or even machine learning (ML) approaches. A filter panel is displayed at step602that allows a user the option to configure filters (e.g., rules, keywords, ML parameters, etc.). Regardless of how the filter is implemented, these options are received at step604and applied to incoming communications at step606. One skilled in the relevant arts will appreciate that filters may also be applied to automatically blocklist a sender, using a similar mechanism to that described here. However, since in the exemplary embodiment communications are only shown for passlisted senders, flowchart600automates only the inclusion of senders in the passlist. Therefore, at step608, a determination is made as to whether the sender is passlisted or not. If the sender is not passlisted, then at step614the sender is added to the waiting room list. From here, the waiting room list approach of flowchart200ofFIG.2can be used to passlist or blocklist the sender from step206ofFIG.2. If the filter automatically passlists the sender, then the sender contact information is captured into a new CRM customer record at step610and the inbox is updated to include communications corresponding to the new CRM customer record at step612(corresponding to passlisting as in steps206,208, and210ofFIG.2). Various embodiments may be implemented, for example, using one or more well-known computer systems, such as computer system700shown inFIG.7. One or more computer systems700may be used, for example, to implement any of the embodiments discussed herein, as well as combinations and sub-combinations thereof.
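Before turning to the details of computer system700, the following is a minimal sketch of the keyword-based automatic passlisting described in connection withFIG.6. The keyword set, record fields, and return values are hypothetical illustrations of one way the filter at steps606-614might be applied, not a definitive implementation.

# Hypothetical "VIP keyword" filter (see filter options 308 and FIG. 6).
VIP_KEYWORDS = {"renewal", "contract", "purchase order"}

def apply_filter(contact, communication, crm_records, waiting_room):
    text = communication.get("body", "").lower()
    if any(keyword in text for keyword in VIP_KEYWORDS):
        # Steps 610/612: capture the contact into a new CRM customer record,
        # which passlists the sender across all channels going forward.
        crm_records.append({"email": contact.email, "phone": contact.phone})
        return "passlisted"
    # Step 614: otherwise the sender is added to the waiting room list.
    waiting_room.append((contact, communication))
    return "waiting_room"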
Computer system700may include one or more processors (also called central processing units, or CPUs), such as a processor704. Processor704may be connected to a communication infrastructure or bus706. Computer system700may also include customer input/output device(s)703, such as monitors, keyboards, pointing devices, etc., which may communicate with communication infrastructure706through customer input/output interface(s)702. One or more of processors704may be a graphics processing unit (GPU). In an embodiment, a GPU may be a processor that is a specialized electronic circuit designed to process mathematically intensive applications. The GPU may have a parallel structure that is efficient for parallel processing of large blocks of data, such as mathematically intensive data common to computer graphics applications, images, videos, etc. Computer system700may also include a main or primary memory708, such as random access memory (RAM). Main memory708may include one or more levels of cache. Main memory708may have stored therein control logic (i.e., computer software) and/or data. Computer system700may also include one or more secondary storage devices or memory710. Secondary memory710may include, for example, a hard disk drive712and/or a removable storage device or drive714. Removable storage drive714may be a floppy disk drive, a magnetic tape drive, a compact disk drive, an optical storage device, a tape backup device, and/or any other storage device/drive. Removable storage drive714may interact with a removable storage unit718. Removable storage unit718may include a computer usable or readable storage device having stored thereon computer software (control logic) and/or data. Removable storage unit718may be a floppy disk, magnetic tape, compact disk, DVD, optical storage disk, and/or any other computer data storage device. Removable storage drive714may read from and/or write to removable storage unit718. Secondary memory710may include other means, devices, components, instrumentalities or other approaches for allowing computer programs and/or other instructions and/or data to be accessed by computer system700. Such means, devices, components, instrumentalities or other approaches may include, for example, a removable storage unit722and an interface720. Examples of the removable storage unit722and the interface720may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM or PROM) and associated socket, a memory stick and USB port, a memory card and associated memory card slot, and/or any other removable storage unit and associated interface. Computer system700may further include a communication or network interface724. Communication interface724may enable computer system700to communicate and interact with any combination of external devices, external networks, external entities, etc. (individually and collectively referenced by reference number728). For example, communication interface724may allow computer system700to communicate with external or remote devices728over communications path726, which may be wired and/or wireless (or a combination thereof), and which may include any combination of LANs, WANs, the Internet, etc. Control logic and/or data may be transmitted to and from computer system700via communication path726.
Computer system700may also be any of a personal digital assistant (PDA), desktop workstation, laptop or notebook computer, netbook, tablet, smart phone, smart watch or other wearable, appliance, part of the Internet-of-Things, and/or embedded system, to name a few non-limiting examples, or any combination thereof. Computer system700may be a client or server, accessing or hosting any applications and/or data through any delivery paradigm, including but not limited to remote or distributed cloud computing solutions; local or on-premises software (“on-premise” cloud-based solutions); “as a service” models (e.g., content as a service (CaaS), digital content as a service (DCaaS), software as a service (SaaS), managed software as a service (MSaaS), platform as a service (PaaS), desktop as a service (DaaS), framework as a service (FaaS), backend as a service (BaaS), mobile backend as a service (MBaaS), infrastructure as a service (IaaS), etc.); and/or a hybrid model including any combination of the foregoing examples or other services or delivery paradigms. Any applicable data structures, file formats, and schemas in computer system700may be derived from standards including but not limited to JavaScript Object Notation (JSON), Extensible Markup Language (XML), Yet Another Markup Language (YAML), Extensible Hypertext Markup Language (XHTML), Wireless Markup Language (WML), MessagePack, XML User Interface Language (XUL), or any other functionally similar representations alone or in combination. Alternatively, proprietary data structures, formats or schemas may be used, either exclusively or in combination with known or open standards. In some embodiments, a tangible, non-transitory apparatus or article of manufacture comprising a tangible, non-transitory computer useable or readable medium having control logic (software) stored thereon may also be referred to herein as a computer program product or program storage device. This includes, but is not limited to, computer system700, main memory708, secondary memory710, and removable storage units718and722, as well as tangible articles of manufacture embodying any combination of the foregoing. Such control logic, when executed by one or more data processing devices (such as computer system700), may cause such data processing devices to operate as described herein. Based on the teachings contained in this disclosure, it will be apparent to persons skilled in the relevant art(s) how to make and use embodiments of this disclosure using data processing devices, computer systems and/or computer architectures other than that shown inFIG.7. In particular, embodiments can operate with software, hardware, and/or operating system implementations other than those described herein. It is to be appreciated that the Detailed Description section, and not any other section, is intended to be used to interpret the claims. Other sections can set forth one or more but not all exemplary embodiments as contemplated by the inventor(s), and thus, are not intended to limit this disclosure or the appended claims in any way. While this disclosure describes exemplary embodiments for exemplary fields and applications, it should be understood that the disclosure is not limited thereto. Other embodiments and modifications thereto are possible, and are within the scope and spirit of this disclosure. 
For example, and without limiting the generality of this paragraph, embodiments are not limited to the software, hardware, firmware, and/or entities illustrated in the figures and/or described herein. Further, embodiments (whether or not explicitly described herein) have significant utility to fields and applications beyond the examples described herein. Embodiments have been described herein with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined as long as the specified functions and relationships (or equivalents thereof) are appropriately performed. Also, alternative embodiments can perform functional blocks, steps, operations, methods, etc. using orderings different than those described herein. References herein to "one embodiment," "an embodiment," "an example embodiment," or similar phrases, indicate that the embodiment described can include a particular feature, structure, or characteristic, but not every embodiment necessarily includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it would be within the knowledge of persons skilled in the relevant art(s) to incorporate such feature, structure, or characteristic into other embodiments whether or not explicitly mentioned or described herein. Additionally, some embodiments can be described using the expressions "coupled" and "connected" along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, some embodiments can be described using the terms "connected" and/or "coupled" to indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, can also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. The breadth and scope of this disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
DETAILED DESCRIPTION Server systems may utilize various techniques to detect and mitigate different types of network attacks. For example, in providing its services (e.g., web-based services), a server system may be subjected to various types of network attacks (e.g., SQL injection attacks, password spraying attacks, etc.) from malicious users. Accordingly, when a server system receives a request that it deems likely to correspond to a particular type of network attack, the server system may route that request to one or more “defense layers,” which, as used herein, refers to a network security mechanism that may implement one or more defensive operations in an effort to determine whether a given request is a network attack and, if so, take an appropriate mitigating action (e.g., denying the request). Once a request has been identified as potentially corresponding to a particular type of network attack, one common technique in prior systems is to route that request to a single, dedicated defense layer that is believed to be capable of accurately identifying and stopping that particular type of network attack. Such an approach presents various technical shortcomings, however, exposing the server system to increased risks. For example, while a server system's defenses to a particular type of network attack may initially be a “black box,” attackers often attempt to guess and map out a server system's defenses by sending multiple (and, possibly, many) malicious attack attempts to the server system in an effort to glean useful details about the defense layers being utilized by the server system. For instance, a malicious user may find code or output snippets to determine the backend defenses that the server system has put in place. As a non-limiting example, in an instance in which the server system uses a web application firewall (“WAF”) to prevent a particular type of network attacks, the malicious user may hit the server system with various payloads to determine the WAF blocking signature and, having done so, attempt to find ways to bypass this defense. Accordingly, using prior techniques in which a server system uses static defense layers, malicious users may engage in testing operations to determine the limitations of the defense layer so that these defense mechanisms can be overcome, presenting a significant security concern for the server system. In various embodiments, however, the disclosed techniques may address these technical problems by dynamically routing network traffic between various defense layers. That is, rather than using a static and single-threaded defensive approach that is predictable to malicious users, the disclosed techniques include dynamically shuffling the distribution of traffic between multiple different defenses, which makes it difficult for an attacker to predict the potential defense that a target system may utilize, and further improves the effectiveness of the server system's defenses as a whole. For example, in some embodiments, the disclosed techniques include using a traffic distribution module that is operable to dynamically distribute network traffic (previously identified as being indicative of a particular type of network attack) between multiple different defense layers based on a set of distribution weightage values. In some such embodiments, each of the different defense layers may utilize one or more (potentially different) defensive operations. 
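As an illustrative sketch of the weight-based distribution just described, suspect requests might be spread across defense layers by weighted random selection. The layer names and weight values below are hypothetical (a concrete 30%/70% split is discussed with FIG.1 below), and the corresponding weight update driven by outcome information is sketched at the end of this section; this is not the claimed distribution module.

import random

# Hypothetical distribution weightage values for one type of network attack.
distribution_weights = {"defense_layer_A": 0.3, "defense_layer_B": 0.7}

def route_suspect_request(request):
    layers = list(distribution_weights)
    weights = [distribution_weights[layer] for layer in layers]
    # Weighted random selection keeps the defense applied to any single
    # attack attempt unpredictable to the attacker.
    return random.choices(layers, weights=weights, k=1)[0]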
In various embodiments, based on outcome information indicative of an effectiveness of the defense layers to which the network traffic was routed, the disclosed techniques may update the set of distribution weightage values, thereby modifying the manner in which the traffic distribution module routes subsequent requests that have been identified as being indicative of the particular type of network attack. In various embodiments, the distribution weightage values may be updated based on a selected optimization goal, such as effectiveness of the defense layers, time-to-mitigation, accuracy, etc. The disclosed techniques may provide various technical benefits. For example, by dynamically distributing network traffic that is suspected to correspond to a particular type of network attack, multiple different defense layers may be used to handle an attacker's stream of attacks. In such a scenario, the attacker is no longer receiving consistent test results, making the defenses less predictable by the attacker and interfering with the attacker's ability to glean useful information about the server system's defense layers and to figure out potential weaknesses that the attacker may exploit. Additionally, since multiple different defense layers may be utilized simultaneously for the same type of network attack, the disclosed techniques may quickly identify which of these defense layers (and, within a defense layer, the particular defensive operations) are most effective at identifying and preventing the particular type of network attack. This, in turn, may allow the disclosed systems and methods to route more of the network traffic through defense layers (or defensive operations) that are more effective in preventing the particular type of network attack, improving the network security of the server system as a whole. Referring now toFIG.1, block diagram100depicts an example server system102that includes distribution module104(also referred to herein as “traffic distribution module104”), feedback module112, and analytics module114. (Note that, although shown as separate elements inFIG.1, distribution module104, feedback module112, and analytics module114may be combined in any suitable combination, as desired.) In various embodiments, distribution module104, feedback module112, and analytics module114are operable to dynamically route network traffic between multiple different defensive layers106. For example, as noted above, in the course of providing its service(s), server system102may receive, in addition to legitimate requests, requests that are associated with various types of network attacks. In various embodiments, requests150that have been identified as potential network attacks are directed to the distribution module104so that they may be routed to one of multiple different defense layers106. As shown inFIG.1, the disclosed techniques may utilize any suitable number of defense layers106A-106N to identify and mitigate various types of network attacks. In various embodiments, distribution module104is operable to determine the defense layer106to which to route the requests150that are identified as potential network attacks. For example, in the depicted embodiment, distribution module104includes (or has access to) weightage values120(also referred to herein as “distribution weightage values120”) that the distribution module104may use to determine how to distribute the requests150it receives. 
In the depicted embodiment, for instance, the weightage values120specify that, for a particular type of network attack, the distribution module104is to route 30% of the requests150to defense layer106A and 70% of the requests150to defense layer106B. Note, however, that this embodiment is depicted merely as one non-limiting example and, as described in more detail below, the weightage values120may start with any suitable initial values and may be modified as determined by analytics module114so as to improve the ability of the disclosed techniques to identify and mitigate network attacks. In various embodiments, in addition to determining the defense layer106to which a given request150is routed, the distribution module104may also control (either directly or indirectly) the particular defensive operation(s)108that are applied for a given request150. In the depicted embodiment, for example, defense layer106A has access to three defensive operations108A-108C and defense layer106B has access to one defensive operation108D. In the current example, from defense layer106A, 10% of the total requests150(associated with a particular type of network attack) are routed to each of defensive operations108A,108B, and108C, while the remaining 70% of the requests150are routed to defensive operation108D. Defense layers106and defensive operations108are described in more detail below with reference toFIG.4, according to some non-limiting embodiments. For the purposes of the present discussion, however, note that the defense layers106may be any of various suitable network security mechanisms implemented using hardware, software, or both. For example, in some embodiments, a defense layer106may be a proxy server (implemented within server system102or by a third party), an SSO, an Active Directory (“AD”), an email gateway, a firewall, an intrusion prevention system (“IPS”), etc. Further note that, in various embodiments, each defense layer106may implement any suitable number of defensive operations108, which may be any of various suitable defensive operations. For example, in some embodiments, defensive operations108may include applying WAF signatures, pattern-based blocks, block attempts, rate-limiting, AD password protection, email filtering, signature-based blocks, stateful and stateless packet filtering, custom signatures or custom signature groups or categories, etc. Additionally, note that, in various embodiments, the particular defense layers106and defensive operations108to which a given request150is routed may depend on the particular type of network attack that the request150is suspected of being. For example, in various embodiments, the disclosed techniques may be used to handle multiple different types of network attacks and, in some such embodiments, the disclosed techniques may dynamically route requests150that potentially correspond to multiple different types of network attacks to different defense layers106or defensive operations108depending on the particular type of network attack with which a given request150is potentially associated. For example, in some embodiments, a first subset of defense layers106(e.g., defense layers106A-106D) may be used for a first type of network attack (e.g., SQL injection attacks), a second subset of defense layers106(e.g., defense layers106E-106G) may be used for a second type of network attack (e.g., password spraying attacks), etc.
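As a purely illustrative, non-limiting sketch of the weighted selection described above, consider the following Python fragment; the names used here (e.g., WEIGHTAGE_VALUES, route_request) are assumptions made for this example and are not taken from the figures, and the weighted random choice is only one of many ways such a split could be realized.

import random
from collections import Counter

# Hypothetical weightage table for one attack type: layer -> (layer weight, {operation: weight}).
# Weights are expressed as fractions of the total traffic for this attack type.
WEIGHTAGE_VALUES = {
    "defense_layer_106A": (0.30, {"op_108A": 0.10, "op_108B": 0.10, "op_108C": 0.10}),
    "defense_layer_106B": (0.70, {"op_108D": 0.70}),
}

def route_request(request, weightage_values):
    """Pick a defense layer, then a defensive operation, by weighted random choice."""
    layers = list(weightage_values.keys())
    layer_weights = [weightage_values[layer][0] for layer in layers]
    layer = random.choices(layers, weights=layer_weights, k=1)[0]
    operations = weightage_values[layer][1]
    operation = random.choices(list(operations.keys()), weights=list(operations.values()), k=1)[0]
    return layer, operation

# Distribute 10,000 suspected requests and tally the resulting split (roughly 30%/70% by layer).
tally = Counter(route_request({"id": i}, WEIGHTAGE_VALUES) for i in range(10_000))
for (layer, operation), count in sorted(tally.items()):
    print(layer, operation, count)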
Further note that, in some such embodiments, such subsets of defense layers106may overlap such that the same defense layer(s)106(and, potentially, the same defensive operation(s)108) are used for multiple different types of network attacks. InFIG.1, once the defensive operation108has been applied for a given request150, outcome information109is provided to a supplemental mitigation module110. In various embodiments, the supplemental mitigation module110is operable to perform one or more additional mitigation operations (in addition to any mitigation operations that may have been performed as part of the defensive operations108) for a given request150. Note that the information included in outcome information109may vary depending, for example, on the type of network attack with which the request150is associated, the defense layer106or defensive operation108to which the request150was routed, etc. In some embodiments, for example, the outcome information109for a given request150may specify an outcome of the defensive operation108that was applied for the given request150, indicating whether the request was blocked by the defensive operation108. In some embodiments, the outcome information109may also include a status code, such as a Hypertext Transfer Protocol (“HTTP”) response status code (e.g., HTTP 403 (“forbidden”)), a custom code for a particular WAF used, a challenge result (e.g., a CAPTCHA challenge result), or any other suitable type of status code. The mitigation operations performed by supplemental mitigation module110(if any) may also vary depending, for example, on the type of potential network attack involved. Non-limiting examples of mitigation operations that may be performed by supplemental mitigation module110include adding an IP address of the client device from which the request originated to a block list, adding a device fingerprint for the client device to a block list, adding a password to a block list, forcing a password reset, limiting or restricting the user account(s) involved (e.g., customer or employee accounts), adding traffic patterns or components to a block list, setting lockout periods to temporarily suspend services to a particular IP or account, etc. Note that, instead of or in addition to performing an additional mitigation operation, the supplemental mitigation module110, in some embodiments, is operable to use one or more threat vectors associated with a given request150to identify other threat vectors that may not otherwise be detected by the server system102. This process, according to some non-limiting embodiments, is described in more detail below with reference toFIG.7. Further note that, in various embodiments, the particular supplemental mitigation operation selected may be designed to mitigate the threat itself, rather than simply blocking the specific request150. For example, by adding an IP address (or range of IP addresses) associated with the client device that sent the request150to a block list, the supplemental mitigation module(s)110may help prevent or mitigate future instances of this same network threat. In the depicted embodiment, the supplemental mitigation module110passes tracking information122, corresponding to the request150, to the feedback module112. Tracking information122is discussed in more detail below with reference toFIG.2.
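Before turning to the tracking information, the outcome information109and supplemental mitigation described above can be illustrated with the following non-limiting Python sketch; the class names (OutcomeInfo, SupplementalMitigation) and the block-list fields are assumptions made for this example only.

from dataclasses import dataclass, field

@dataclass
class OutcomeInfo:
    request_id: str
    status_code: int          # e.g., an HTTP response status code such as 403
    blocked: bool             # whether the defensive operation blocked the request

@dataclass
class SupplementalMitigation:
    ip_block_list: set = field(default_factory=set)
    fingerprint_block_list: set = field(default_factory=set)

    def apply(self, outcome: OutcomeInfo, client_ip: str, device_fingerprint: str) -> None:
        # If the defensive operation blocked the request, mitigate the threat itself
        # by blocking the source, rather than only the single request.
        if outcome.blocked:
            self.ip_block_list.add(client_ip)
            self.fingerprint_block_list.add(device_fingerprint)

# Example usage for a request blocked by a WAF-style defensive operation.
mitigation = SupplementalMitigation()
mitigation.apply(OutcomeInfo("req-1", 403, True), "203.0.113.7", "fp-abc123")
print(mitigation.ip_block_list)  # {'203.0.113.7'}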
For the purposes of the present discussion, note that the tracking information122, for a given request150, may include an identifier that uniquely identifies the request150as well as information indicating the defense layer106or defensive operation(s)108to which the request150was routed, the outcome information109, an indication of any additional mitigation operations applied by the supplemental mitigation module110, etc. InFIG.1, the tracking information122is then passed to the analytics module114, which, in various embodiments, is operable to determine updated weightage values124based on the tracking information122. The operation of analytics module114, according to some embodiments, is described in detail below with reference toFIG.3. Note, however, that in various embodiments the analytics module114may determine the updated weightage values124based on an optimization goal selected for the particular type of network attack. In various embodiments, the updated weightage values124may then be provided (e.g., via the feedback module112) to the distribution module104, which may use the updated weightage values124to modify the manner in which it routes requests150potentially associated with the particular type of network attack. Note that, in various embodiments, the process described above may be repeated as desired (e.g., periodically, after a predetermined number of requests have been analyzed, etc.), allowing the disclosed techniques to change the manner in which they distribute traffic to adapt to changes in network attacks. Further note that, in various embodiments, the disclosed techniques may be used to determine updated weightage values124corresponding to multiple different types of network attacks, allowing the disclosed system to customize its defensive approach to each of these different types of threats in a manner that matches a desired optimization goal. As one non-limiting example, assume that the optimization goal selected (e.g., by a security engineer associated with server system102) for a particular type of network attack (e.g., SQL injection attacks) is to improve the effectiveness of the applied defensive measures so that, in determining the updated weightage values124, the analytics module114may modify the manner in which the requests150are distributed such that more traffic is routed through the more effective defense layers106and defensive operations108. Further, in this example, assume that, by analyzing the tracking information122, analytics module114determines that defensive operation108C has been the most effective (e.g., strictest) in blocking requests150that are deemed to be SQL injection attacks. In this example, the analytics module114may generate the updated weightage values124so as to increase the percentage of the requests150that have been identified as potential SQL injection attacks routed to the defensive operation108C. For instance, updated weightage values124may specify that for subsequent requests150that are identified as potential SQL injection attacks, 80% of those requests150are to be routed to defense layer106A and 20% to defense layer106B, and that, of those requests routed to defense layer106A, 75% (60% of the total requests150identified as potentially being SQL injection attacks) are sent to defensive operation108C, 10% of the total requests150to defensive operation108A, and 10% of the total requests150to defensive operation108B (with the remaining 20% of the total requests150identified as potential SQL injection attacks being routed to defensive operation108D).
Using updated weightage values124, the distribution module104may then determine how to route subsequent requests150corresponding to potential SQL injection attacks. Turning now toFIG.2, block diagram200depicts an example distribution module104, according to some embodiments. As noted above, in various embodiments, the distribution module104is operable to route requests150that have been identified as potential network attacks between various defense layers106. In the depicted embodiment, for example, the distribution module104includes (or has access to) weightage value store202, which stores the weightage values120that the distribution module104may use to determine the defense layer106(or defensive operation108) to which to route the various requests150. As noted above, the manner in which the distribution module104distributes traffic may change over time as the analytics module114determines the updated weightage values124so as to improve the performance of the various defense layers106and defensive operations108relative to a selected optimization goal. In various embodiments, the weightage values120may be initialized using various suitable techniques. For example, in some embodiments, the weightage values120for a particular type of network attack may be initialized so as to evenly distribute the requests150across the available defense layers106or defensive operations108. Referring again to the embodiment depicted inFIG.1, as a non-limiting example, the weightage values120may initially be chosen such that requests150potentially associated with a particular type of network attack are distributed evenly between defensive operations108A-108D. Note, however, that this embodiment is provided merely as one non-limiting example. In other embodiments, for example, the weightage values120may be initialized so as to distribute the requests150across the defense layers106or defensive operations108using any suitable percentages. With reference again to the embodiment ofFIG.1, consider an instance in which defensive operation108A, defensive operation108B, and defensive operation108D are established defensive operations that have been tested and are known to be effective in detecting and preventing a particular type of network attack, while defensive operation108C is a new defensive operation whose effectiveness is not yet known. In such an embodiment, the weightage values120may be established so as to initially route only a small amount (e.g., 5%) of the network traffic for the particular type of network attack to the defensive operation108C. After some period of use, the analytics module114may determine updated weightage values124, increasing or decreasing the amount of traffic routed to defensive operation108C depending on its performance. Accordingly, in various embodiments, the disclosed techniques may be used to introduce new defensive operations108into production for testing and refinement without subjecting the server system102to undue levels of risk, allowing the server system102to track and evaluate the effectiveness of its defense layers106and defensive operations108. In the depicted embodiment, the distribution module104is shown routing three requests150between three different defense layers106(not shown separately, for clarity). More specifically, inFIG.2, the distribution module104is shown routing request150A to defense layer106A, request150B to defense layer106D, and request150C to defense layer106K.
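As a non-limiting illustration of the initialization approaches described above (even distribution, with a small initial share reserved for a new, untested defensive operation), consider the following Python sketch; the function name initialize_weights and the 5% canary share are assumptions made for this example.

def initialize_weights(operations, new_operations=(), canary_share=0.05):
    """Evenly distribute traffic across established operations, reserving a small
    canary share for each new, untested operation (names are illustrative)."""
    established = [op for op in operations if op not in new_operations]
    reserved = canary_share * len(new_operations)
    weights = {op: canary_share for op in new_operations}
    for op in established:
        weights[op] = (1.0 - reserved) / len(established)
    return weights

# Established operations 108A, 108B, and 108D share the bulk of the traffic;
# new operation 108C initially receives only 5%.
print(initialize_weights(["op_108A", "op_108B", "op_108C", "op_108D"],
                         new_operations=["op_108C"]))
# {'op_108C': 0.05, 'op_108A': 0.3166..., 'op_108B': 0.3166..., 'op_108D': 0.3166...}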
Note that, in the depicted embodiment, in addition to sending the requests150to the selected defense layers106, the distribution module104may include one or more items of additional information, such as routing information203. For example, as noted above, the distribution module104may also control the defensive operation108to which a given request150is routed. In some embodiments, the distribution module104may include routing information203so as to instruct the defense layer106as to the defensive operation108to use for the given request150. As a non-limiting example, for request150A that the distribution module104routes to defense layer106A, the routing information203A may instruct the defense layer106A to utilize defensive operation108A. Note that, in some embodiments, rather than instructing the defense layer106as to the particular defensive operation108to apply for each of the requests150, the distribution module104may instead provide routing information203(either periodically or with some (or all) of the requests150) instructing that defense layer106as to the manner in which to split the traffic between the defensive operations108to which that defense layer106has access (e.g., using one or more percentage values or using any other suitable format). Further note that, in the depicted embodiment, the distribution module104includes tracking information122along with the request150as the request150is routed to its selected path. For example, inFIG.2, the distribution module104includes tracker generation module204, which, in various embodiments, is operable to generate one or more items of tracking information122for the various requests150. In some such embodiments, this tracking information122will be routed with the request150as it flows through the defense layers106and defensive operations108. For example, in various embodiments, the tracker generation module204will generate, for each request150, a corresponding identifier value. Note that the identifier value for a given request150may be generated using any of various suitable techniques. In some embodiments, the manner in which the identifier value is generated may depend on the technique used by the distribution module104to split similar types of potential network attacks between the various defense layers106. For example, in instances in which similar streams of network traffic are split or grouped by signature (e.g., brute-forcing attacks, SQL injection attacks, etc.), then, in some embodiments, the entire stream of the network traffic may be given a single identifier. As a non-limiting example, in some embodiments, such an identifier may be formatted as follows: signatureType_<endpoint>_date. Further, in embodiments in which the distribution module104splits similar types of potential network attacks based on the IP address of its originating device (e.g., instances in which there are IP addresses that are relatively static throughout an attack and happen to share IP addresses with legitimate users such that the malicious IP address cannot simply be blocked), the identifier value for a given request150may include one or more temporal components. As a non-limiting example, in some embodiments, such an identifier may be formatted as IP_address+endpoint (e.g., “1.2.3.4.login,” “2.3.4.5.login,” etc.), though other formats may also be used. 
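The identifier formats described above can be illustrated with a short, non-limiting Python sketch; the function make_identifier, the request field names, and the new_tracking_info skeleton are assumptions made for this example, and further identifier formats are described below.

import datetime

def make_identifier(request, grouping):
    """Generate a tracking identifier following the illustrative formats above."""
    if grouping == "signature":
        # e.g., "sqlInjection_login_2024-01-31" (signatureType_<endpoint>_date)
        return f"{request['signature_type']}_{request['endpoint']}_{datetime.date.today().isoformat()}"
    if grouping == "ip":
        # e.g., "1.2.3.4.login" (IP_address+endpoint)
        return f"{request['ip']}.{request['endpoint']}"
    raise ValueError(f"unknown grouping: {grouping}")

def new_tracking_info(identifier):
    # Fields are filled in incrementally as the request flows through the system.
    return {"id": identifier, "defense_layer": None, "defensive_operation": None,
            "outcome": None, "supplemental_mitigation": None}

print(make_identifier({"signature_type": "sqlInjection", "endpoint": "login"}, "signature"))
print(make_identifier({"ip": "1.2.3.4", "endpoint": "login"}, "ip"))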
In instances in which the distribution module104splits similar types of potential network attacks based on patterns (e.g., URLs containing 100 re-used session keys), the identifier value for a given request150could be formatted as <sessionKey>_endpoint (e.g., “123456_login,” “098765_login,” etc.), though other formats may also be used. In instances in which the distribution module104splits similar types of potential network attacks based on types of traffic (e.g., web traffic, API traffic, etc.), the identifier values for a given request150may include one or more header values. In various embodiments, as a request150is routed through the disclosed system, one or more items of additional information may be appended or otherwise added to the tracking information. For example, in some embodiments, as a request150passes through the disclosed system, the tracking information122may be updated so as to identify one or more of the defense layer106to which the request150was routed, the defensive operation(s)108that were applied for the request150, the outcome information109for the defensive operation(s)108, etc. Stated differently, in various embodiments, the tracking information122may be incrementally constructed as a corresponding request150is processed such that the tracking information122is usable (e.g., by the analytics module114) to identify a request150, the defense layer106and defensive operation108that request150was routed through, and the corresponding outcome information109. In various embodiments, the tracking information122may be used by the analytics module114, as described in more detail below with reference toFIG.3, to evaluate the performance of the system for the request150so as to determine the updated weightage values124. In various embodiments, as the distribution module104receives the updated weightage values124from the analytics module114, the distribution module104may update the manner in which it distributes traffic, enabling the system to adapt to changes in network attacks and improve network security of the server system102. Referring now toFIG.3, block diagram300depicts an example analytics module114, according to some embodiments. InFIG.3, the analytics module114includes weightage value determination module302, which, in various embodiments, is operable to determine updated weightage values124based on tracking information122. For example, in the depicted embodiment, the analytics module114is shown receiving tracking information122(e.g., from feedback module112), which, as noted above, may include various items of information to identify a request150, the defense layer106to which it was routed, the defensive operation108that was applied to it, the outcome information109of that defensive operation108, whether any additional mitigation operations were performed (e.g., by supplemental mitigation module110), etc. Further, in the depicted embodiment, the analytics module114includes (or has access to) optimization goal information304, which specifies optimization goals for one or more types of network attacks. For example, in various embodiments, a user associated with the server system102(e.g., a security engineer) may select or specify an optimization goal for each (or some subset) of the different types of network attacks for which the disclosed techniques are operable to route traffic.
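As a minimal, non-limiting sketch of how the optimization goal information304might be represented, consider the following; the mapping, its goal names, and the goal_for helper are assumptions made for this example rather than a required data format.

# Hypothetical per-attack-type goal selection consulted by the analytics module.
OPTIMIZATION_GOALS = {
    "sql_injection": "effectiveness_of_defenses",
    "password_spraying": "time_to_mitigation",
    "ddos": "efficiency",
}

def goal_for(attack_type, default="effectiveness_of_defenses"):
    """Look up the optimization goal selected (e.g., by a security engineer) for an attack type."""
    return OPTIMIZATION_GOALS.get(attack_type, default)

print(goal_for("password_spraying"))  # time_to_mitigation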
Non-limiting examples of optimization goals include effectiveness of defenses, efficiency, time-to-remediation, comprehensiveness, reduction of abuse duration or volume, time-to-mitigation, accuracy, model resources used, etc. The “effectiveness of defenses” optimization goal, for example, may aim to minimize the number of successful attempts by an attacker. For example, if the attacker sent 100 abusive attempts, the “effectiveness of defenses” optimization goal may consider the percentage of those attempts that went through the various defenses of server system102without being blocked. The “efficiency” optimization goal, in some embodiments, may consider the amount of resources (e.g., hardware or software resources) or time required. Stated differently, the “efficiency” optimization goal may consider how to distribute traffic so as to achieve maximum efficiency while accounting for the various hardware or software limitations of the defense layers106or defensive operations108being utilized. As a non-limiting example, if a block list of a certain defensive path can only hold 1,000 IP addresses, the “efficiency” optimization goal may consider how to utilize this limited resource such that the list keeps the 1,000 most abusive IP addresses. The “time-to-remediation/mitigation” optimization goals, in some embodiments, consider the time period from when the system decides to impose a block to the actual time that the block is accomplished or performed. For example, different defense layers106or defensive operations108may have differing latencies and throughput (e.g., depending on the load, the time of day, etc.). Accordingly, in some embodiments, the “time-to-remediation/mitigation” optimization goals may take these latency and throughput considerations into account when determining how to adjust the weightage values so as to block potentially abusive traffic as quickly as possible. The “reduction in abuse duration” optimization goal, in some embodiments, considers the total duration of the abuse or attack and the available techniques that may be used to prevent these attacks. For example, by making the defenses stricter and reducing threshold limits to a minimum, the server system102may effectively make attackers' efforts futile and encourage the attackers to stop their operations. Note that, in some such embodiments, the extent to which the defenses are made stricter may be balanced against increased friction or impact on legitimate traffic. The “accuracy” optimization goal, in some embodiments, may focus on maximizing the percentage of abusive traffic that is blocked, while minimizing the percentage of legitimate traffic that is blocked. That is, the accuracy optimization goal may attempt to balance the two such that the overall accuracy score is the highest. For example, in an instance in which there is only a small percentage of abusive traffic in high-flowing legitimate traffic, the accuracy optimization goal may attempt to block out the abuse less than other optimization goals would. Note that, in various embodiments, the optimization goal selected may depend on the negative impacts associated with a particular type of network attack. For example, if each instance of abuse carries a corresponding financial penalty, the “reduction in abuse volume” or “effectiveness of defenses” optimization goals may be selected. If, for example, there are hardware or software limitations on the defenses, as may be the case in a DDoS attack, the “efficiency” optimization goal may be selected.
Further, if the particular type of network attack results in a leak of sensitive data, the “time-to-mitigation” optimization goal may be selected for that particular type of network attack, etc. Additionally, note that, in various embodiments, each different type of network attack may have its own distinct optimization goal while, in other embodiments, the same optimization goal may be shared between two or more different types of network attack. Further note that, in some embodiments, a given type of network attack may have more than one optimization goal (e.g., a primary optimization goal and a secondary optimization goal, a scale in which different amounts of emphasis are placed on multiple different optimization goals for a particular type of network attack, etc.). In various embodiments, the weightage value determination module302may determine updated weightage values124based on various factors, such as the particular type of network attack involved, the optimization goal for the particular type of network attack, effectiveness of the various defense layers106and defensive operations108, etc. For example, in various embodiments, the weightage value determination module302may analyze the tracking information122corresponding to multiple (and, potentially, many) requests150and, depending on the optimization goal and how well that goal is met using the existing weightage values120, the weightage value determination module302may modify the weightage values in an attempt to increase the success rate for the selected goal. As a non-limiting example, if the optimization goal is to reduce abuse volume, the weightage value determination module302may measure, for a tracked stream of traffic, how long the abuse persists or how much abusive traffic is sent before the specific abuse level subsides, which may be measured by the number of login failures per time period for a given identified list of users. Based on this information, the weightage value determination module302may generate the updated weightage values124so as to increase the percentage of traffic that is routed to the defense layers106and the defensive operations108that are deemed more successful in reducing the abuse volume. As a non-limiting example, consider an instance in which the optimization goal for a particular type of network attack is to minimize abuse volume and, using the existing weightage values120, 10% of the abusive traffic is passing through defensive operation108A unblocked, 20% of the abusive traffic is passing through defensive operation108B unblocked, and 30% of the abusive traffic is passing through defensive operation108C unblocked. In this non-limiting example, the analytics module114may calculate the updated weightage values124so as to route a higher percentage of the traffic to those defenses that are performing better (e.g., defensive operation108A) at that point in time and may dynamically adjust the weightage values if conditions change. For example, defensive operations108B and108C may consume fewer resources or may perform the blocks at a faster rate, though they may be less accurate.
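One possible, non-limiting way to express such a weight update is sketched below in Python; the function update_weights, the proportional-scoring rule, and the learning_rate parameter are assumptions made for this example rather than the actual logic of the analytics module114.

def update_weights(current_weights, unblocked_rates, learning_rate=0.5):
    """Shift weight toward operations that let less abusive traffic through.

    current_weights and unblocked_rates are dicts keyed by defensive operation;
    the result is renormalized so the weights sum to 1.0."""
    # Score each operation by its block rate (1 - unblocked rate).
    scores = {op: 1.0 - unblocked_rates.get(op, 0.0) for op in current_weights}
    total_score = sum(scores.values()) or 1.0
    target = {op: scores[op] / total_score for op in current_weights}
    # Move part of the way from the current weights toward the target distribution.
    updated = {op: (1 - learning_rate) * current_weights[op] + learning_rate * target[op]
               for op in current_weights}
    norm = sum(updated.values())
    return {op: w / norm for op, w in updated.items()}

current = {"op_108A": 1 / 3, "op_108B": 1 / 3, "op_108C": 1 / 3}
unblocked = {"op_108A": 0.10, "op_108B": 0.20, "op_108C": 0.30}
print(update_weights(current, unblocked))
# op_108A ends up with the largest share because it lets the least abusive traffic through.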
Note that, in various embodiments, one or more (or each) of the defense layers106or defensive operations108may have one or more threshold values, such as a maximum threshold value indicating a maximum amount of network traffic that may be routed to the defense layer106or defensive operation108or a minimum threshold value indicating a minimum amount of network traffic that is to be routed to the defense layer106or defensive operation108, or both. In various embodiments, such threshold values may be beneficial in balancing the load of traffic between the various defense layers106and defensive operations108so as to prevent the system from becoming predictable to malicious users that would aim to discover details about the server system102's network security. In some embodiments, the weightage value determination module302limits the manner in which it modifies the distribution weightage values so as to stay within the maximum or minimum threshold values for the various defense layers106or defensive operations108. Note, however, that, in some embodiments, the weightage value determination module302may also modify the threshold values for the various defense layers106or defensive operations108. For example, if the analytics module114determines that a particular defensive operation108is performing poorly for a particular type of network traffic (e.g., SQL injection attacks), it may modify the threshold value(s) for that defensive operation108such that requests150potentially belonging to that type of network attack are no longer routed to that particular defensive operation108. Accordingly, in various embodiments, the weightage value determination module302may determine the updated weightage values124so as to modify the distribution of requests150, to the various defense layers106and defensive operations108, in such a way that improves the system's performance relative to a selected optimization goal. Since the tracking information122indicates the success rate of the different defensive operations108, the weightage value determination module302may generate the updated weightage values124so as to steer an increased amount of requests150to the defensive operations108that are more successful relative to the selected optimization goal(s). Further, note that, in various embodiments, the disclosed system may continue to collect and monitor tracking information, allowing the analytics module114to iteratively refine the weightage values for various different types of network attacks over time. Turning now toFIG.4, block diagram400depicts an example system implementing the disclosed dynamic routing techniques, according to one non-limiting embodiment. In the depicted embodiment, distribution module104receives various requests150that have been identified as potential SQL injection attacks. InFIG.4, defense layers106E-106G are implemented using proxy servers, as one non-limiting example of potential defense layers106that may be used according to some embodiments. Note that, in various embodiments, one or more of the proxy servers may be internal to server system102, external to the server system102, or implemented using a third-party service. Further, inFIG.4, specific non-limiting examples of defensive operations108E-108G are shown. More specifically, in the depicted embodiment, Proxy Server 1 applies a first WAF signature, Proxy Server 2 applies a second WAF signature, and Proxy Server 3 applies a rate-limiting pattern block. 
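As a non-limiting sketch of one of the defensive operations mentioned above, the rate-limiting pattern block applied by Proxy Server 3 might be approximated by a sliding-window limiter such as the following; the class RateLimiter and its parameters are assumptions made for this example, not a description of any particular product.

import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window rate limiter, as one possible rate-limiting defensive operation."""
    def __init__(self, max_requests=5, window_seconds=60):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.history = defaultdict(deque)

    def allow(self, client_ip, now=None):
        now = time.monotonic() if now is None else now
        window = self.history[client_ip]
        # Drop timestamps that have fallen out of the sliding window.
        while window and now - window[0] > self.window_seconds:
            window.popleft()
        if len(window) >= self.max_requests:
            return False          # blocked; outcome information would record this result
        window.append(now)
        return True

limiter = RateLimiter(max_requests=3, window_seconds=60)
print([limiter.allow("203.0.113.7", now=t) for t in (0, 1, 2, 3)])  # [True, True, True, False]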
Note that, in some embodiments, each WAF provider may utilize different WAF signatures, which may or may not overlap. For example, some WAF signatures may be specific and targeted to a specific instance of a particular type of network attack (e.g., a WAF signature designed to cover specific types of SQL injection attacks) while other WAF signatures may be more generic (e.g., a WAF signature designed to cover all SQL injection attacks). Once the proxy servers have performed their respective defensive operations, the outcome information109is provided to the supplemental mitigation module110. InFIG.4, for example, Proxy Server 1 provides an HTTP status code of 403, Proxy Server 2 provides a custom WAF status code416, and Proxy Server 3 provides an HTTP status code of 200. As described above, once it receives this outcome information109, supplemental mitigation module110may perform one or more additional mitigation operations, such as adding a client's IP address or device fingerprint to a block list. The disclosed system may then, as described above, analyze the performance of the various defense layers106and defensive operations108and modify the weightage values as desired to improve the performance of the network security relative to the selected goal for the relevant type of attack (e.g., SQL injection attacks, in the current non-limiting example). Note that, in various embodiments, the disclosed techniques further include providing a workflow framework for network security work, such as investigations and findings, which may assist users (e.g., defense engineers) in understanding how many threats are caught by the server system102's existing defenses and in identifying those areas in which further defensive work is needed. In various embodiments, monitoring and maintaining such data may provide valuable insight by allowing the defense engineers associated with the server system102to learn from past defensive work. For example, using such data, in the next recurrence of an incident, insights gleaned from past incidents may be leveraged such that gaps in the defensive posture of the server system102may be anticipated and the probability of certain accompanying threats ascertained. The following is a non-limiting example in the context of incident response where incident notes (made, for example, by an analyst investigating a network attack incident) may be used to obtain data and build a data model relating to network attacks performed or attempted against the server system102. For example, in various embodiments, the disclosed techniques may include collecting data as to common threat indicators, the number of pertinent detection alerts for each of those threat indicators, the confidence levels for those threats based on the defensive strength of the server system102against the respective threat indicators (e.g., the number of relevant defenses against the specific threats), and corresponding counters of various types of incidents (e.g., coin mining, banking Trojan, credential stuffing, etc.) that match one or more of those threat indicators. Non-limiting examples of such threat indicators include: vulnerability scanning, keys or credentials exposed, accounts created, process tampered, malware downloaded, brute-forcing, availability affected, command and control (“C2”) traffic established, lateral movement, backup tainted, malicious email, C2 beaconing observed, virus spread, user access removed, remote code execution (“RCE”), encryption involved, data exfiltration, and admin compromised.
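A non-limiting sketch of how such threat-indicator data might be recorded and later queried is shown below; the ThreatIndicator structure, the prioritize helper, and its thresholds are assumptions made for this example only.

from dataclasses import dataclass, field

@dataclass
class ThreatIndicator:
    name: str                      # e.g., "lateral movement"
    alert_count: int               # pertinent detection alerts observed
    defense_confidence: float      # 0.0 (no relevant defenses) .. 1.0 (strong coverage)
    incident_counts: dict = field(default_factory=dict)  # e.g., {"credential stuffing": 4}

def prioritize(indicators, confidence_threshold=0.4, alert_threshold=5):
    """Return indicators with weak coverage and few alerts, ordered by how often
    they appear in recorded incidents (thresholds are illustrative)."""
    weak = [i for i in indicators
            if i.defense_confidence < confidence_threshold and i.alert_count < alert_threshold]
    return sorted(weak, key=lambda i: sum(i.incident_counts.values()), reverse=True)

indicators = [
    ThreatIndicator("lateral movement", 2, 0.2, {"banking Trojan": 3}),
    ThreatIndicator("brute-forcing", 40, 0.9, {"credential stuffing": 7}),
    ThreatIndicator("process tampered", 1, 0.3, {"coin mining": 5}),
]
for indicator in prioritize(indicators):
    print(indicator.name)  # "process tampered" then "lateral movement"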
In various embodiments, this data may be used (e.g., by a defense engineer) to identify the high threat indicators that have low defense-confidence levels and low alert numbers so as to prioritize more work to be done on these threat indicators to which the server system102is potentially vulnerable. In some such embodiments, for each new security incident that is investigated, the investigation steps taken and findings made by the security analyst may be logged. For example, in various embodiments, the disclosed techniques may provide a standardized format to capture the investigation steps and findings such that a base of valid threats is logged against different types and instances of incidents. Further, in some embodiments, this data may be used to train one or more machine learning models that may be used to further assist in detecting and mitigating potential network attacks. In various embodiments, these disclosed techniques may provide various technical benefits. For example, in various embodiments, the model may learn and be more accurate in predicting which threats would likely be present in any new incidents and prompt an analyst as to those threats that are potentially being missed. Further, in various embodiments, these techniques may further assist in automating the investigation and handling of security incidents and providing an up-to-date view of the detection coverage against incoming threats, including those areas in which further defensive work should be prioritized (e.g., to work on lateral movement and process tampering as top priority, then exposed keys/credentials, and so on). Additionally, in some embodiments, these disclosed techniques may reduce repeated mistakes such as underestimating the incident severity, or failing to verify or investigate certain areas. In various embodiments, these disclosed techniques for building a data model relating to network attacks may also be integrated with the disclosed techniques for dynamically routing network traffic between defense layers. For example, in some embodiments, the various threat indicators above could be used to identify potential network attacks, which could then be routed to the distribution module104or further divided into sub-tracks. For example, the threat indicator of “brute-forcing” could be subdivided into sub-tracks for “password spraying,” “credential stuffing,” and “dictionary attacks.” As a further example, the threat indicator of “vulnerability scanning” could be subdivided into sub-tracks for “port scanning” and “host scans.” Each of these tracks or sub-tracks may feed traffic into the distribution module104to determine the optimal manner in which to route these various sub-tracks of potential network attacks, which, in turn, may help increase the “defense confidence” levels in the model above. Additionally, note that, in various embodiments, the above model may also help identify those areas that are lacking in detection alerts or rules, which may be used to identify the types of threats before feeding into the distribution module104. Example Methods Referring now toFIG.5, a flow diagram illustrating an example method500for dynamically routing network traffic between various defense layers is depicted, according to some embodiments. In various embodiments, method500may be performed by one or more of distribution module104, feedback module112, and analytics module114ofFIG.1to dynamically route one or more requests150between defense layers106.
For example, server system102may include (or have access to) a non-transitory, computer-readable medium having program instructions stored thereon that are executable by one or more computing devices within the server system102to cause performance of the operations described with reference toFIG.5. InFIG.5, method500includes elements502-510. While these elements are shown in a particular order for ease of understanding, other orders may be used. In various embodiments, some of the method elements may be performed concurrently, in a different order than shown, or may be omitted. Additional method elements may also be performed as desired. At502, in the illustrated embodiment, the traffic distribution module104receives a first request150that has been identified as being indicative of a particular type of network attack. As a non-limiting example, the first request150may be one that has been identified as a potential password spraying attack. At504, in the illustrated embodiment, the traffic distribution module104routes the first request150to a selected one of a plurality of different defense layers106. In various embodiments, the selected defense layer106may include one or more on-premises network devices that are implemented within the server system102or may include one or more third-party systems that are implemented outside of the server system102. As noted above, in various embodiments, the distribution module104may determine the manner in which to route the first request150based on one or more weightage values120. For example, in some embodiments, the weightage values120may include values (e.g., provided as percentages or in any other suitable representation) that indicate the relative amount of traffic that should be distributed amongst the various defense layers106A-106N. Further, in some embodiments, for a given defense layer106, the weightage values120may specify the relative amount of the traffic to direct to individual ones of the defensive operations108available to that defense layer106. Additionally, in various embodiments, for network traffic that is identified as being indicative of a particular type of network attack, the set of distribution weightage values120may include an upper threshold value indicating a maximum percentage of the network traffic to route to a particular defense layer, and a lower threshold value indicating a minimum percentage of the particular type of network traffic to route to the particular defense layer. Note that, in some embodiments, the plurality of defense layers106may include a first defense layer (e.g., defense layer106A) that is operable to perform a first set of one or more defensive operations108, such as applying a WAF signature to determine whether the first request150is of the particular type of network attack. At506, in the illustrated embodiment, the feedback module112receives outcome information109indicative of an effectiveness of one or more defensive operations108performed, by the selected defense layer106, on the first request150. For example, as discussed above, the defensive operation(s)108applied for a given request150may produce outcome information109(e.g., an HTTP response status code) that is passed (e.g., as part of the tracking information122) to the feedback module112. At508, in the illustrated embodiment, based on the outcome information109, the analytics module114determines an updated set of distribution weightage values124.
In some embodiments, determining the updated set of distribution weightage values124includes determining, based on the outcome information109, that the first request was blocked by at least one of the one or more defensive operations108performed by the selected defense layer106, and generating the updated set of distribution weightage values124such that a higher percentage of network traffic is routed, by the distribution module104, to the selected defense layer106. Further, in some embodiments, the updated set of distribution weightage values124may indicate a first percentage of the subsequent requests150to route to individual ones of the plurality of different defense layers106and, for a given defense layer106that provides a plurality of defensive operations108, a second percentage of the subsequent requests150that are routed to the given defense layer106to route to individual ones of the plurality of defensive operations108. At510, in the illustrated embodiment, the traffic distribution module104routes subsequent requests150that are identified as being indicative of the particular type of network attack (e.g., password spraying attacks) based on the updated set of distribution weightage values124. Note that, in some embodiments, tracking information122may be created for, and routed with, the requests150. For example, in some embodiments, the traffic distribution module104may add a metadata value to the first request150, where the metadata value is used to track the first request150as it is processed by the selected defense layer106. In some such embodiments, the selected defense layer may select a particular defensive operation108for the first request150based on at least one of the set of distribution weightage values120and may update the metadata value to further identify the selected defense layer106and the particular defensive operation108by which the first request is processed. Note that, in some embodiments, the outcome information109may indicate that the first request150was blocked by the one or more defensive operations108applied by the selected defense layer106. In such embodiments, method500may further include performing one or more additional mitigation operations (e.g., by supplemental mitigation module110), such as adding an IP address of the client device that sent the first request150to a block list or adding, to a block list, a device fingerprint corresponding to the client device that sent the first request150. Turning now toFIG.6, a flow diagram illustrating an additional example method600for dynamically routing network traffic between various defense layers is depicted, according to some embodiments. In various embodiments, method600may be performed by one or more of distribution module104, feedback module112, and analytics module114ofFIG.1to dynamically route one or more requests150between defense layers106. For example, server system102may include (or have access to) a non-transitory, computer-readable medium having program instructions stored thereon that are executable by one or more computing devices within the server system102to cause the operations described with reference toFIG.6. InFIG.6, method600includes elements602-610. While these elements are shown in a particular order for ease of understanding, other orders may be used. In various embodiments, some of the method elements may be performed concurrently, in a different order than shown, or may be omitted. Additional method elements may also be performed as desired. 
At602, in the illustrated embodiment, the server system102implements a traffic distribution module104that is operable to distribute a particular type of network traffic across a plurality of different defense layers106. In the depicted embodiment, the plurality of different defense layers106include a first defense layer that is operable to perform a first set of one or more defensive operations, and a second defense layer that is operable to perform a second set of one or more defensive operations. In some embodiments, at least one of the first and second sets of one or more defensive operations includes applying a WAF signature. At604, in the illustrated embodiment, the traffic distribution module104receives a first plurality of requests150A-150J that have been identified as indicative of the particular type of network traffic. At606, in the illustrated embodiment, the traffic distribution module104routes the first plurality of requests150A-150J across the plurality of different defense layers106, where the routing is performed based on a set of distribution weightage values120. As a non-limiting example, in some embodiments the set of distribution weightage values120indicates a first percentage of the first plurality of requests150A-150J to route to the first defense layer106A and a second percentage of the first plurality of requests150A-150J to route to the second defense layer106B. In some such embodiments, method600may further include identifying the first plurality of requests150A-150J as being indicative of the particular type of network traffic based on a signature associated with the particular type of network traffic. For example, in some embodiments, the server system102may, prior to directing the requests150A-150J to the distribution module104, use one or more signatures associated with the particular type of network traffic to identify the requests150A-150J as potentially being associated with that particular type of network traffic and, in response to this identification, direct those requests150A-150J to the distribution module104. At608, in the illustrated embodiment, the analytics module114determines an updated set of distribution weightage values124based on an effectiveness of the plurality of different defense layers106in mitigating the particular type of network traffic. In some embodiments, the updated set of distribution weightage values124may be determined based on a particular optimization goal (e.g., reducing a time-to-mitigation) associated with the particular type of network traffic. Further, in some embodiments, determining the updated set of distribution weightage values at element608may include analyzing the effectiveness of the first and second defense layers106A-106B and, in response to determining that the second defense layer106B was more effective than the first defense layer106A in mitigating the particular type of network traffic for the first plurality of requests150A-150J, generating the updated set of distribution weightage values124such that, relative to the set of distribution weightage values120, a higher percentage of network traffic is routed to the second defense layer106B. At610, in the illustrated embodiment, the traffic distribution module104routes a second plurality of requests150K-150M across the plurality of different defense layers106based on the updated set of distribution weightage values124.
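As a purely illustrative, non-limiting sketch of one iteration of the routing-and-feedback loop described with reference toFIG.6, consider the following Python fragment; the function run_defense_cycle, the simple proportional update, and the example defense layers are assumptions made for this example rather than a description of the claimed method.

import random

def run_defense_cycle(requests, weights, defense_layers):
    """Route a batch of suspected-attack requests by the current weights, collect
    outcomes, and derive updated weights favoring the layers that blocked more traffic."""
    blocked = {layer: 0 for layer in weights}
    routed = {layer: 0 for layer in weights}
    for request in requests:
        layer = random.choices(list(weights), weights=list(weights.values()), k=1)[0]
        routed[layer] += 1
        blocked[layer] += 1 if defense_layers[layer](request) else 0
    # Effectiveness per layer; layers that saw no traffic keep their current weight as a score.
    scores = {layer: (blocked[layer] / routed[layer]) if routed[layer] else weights[layer]
              for layer in weights}
    total = sum(scores.values()) or 1.0
    return {layer: score / total for layer, score in scores.items()}

# Two hypothetical defense layers: one blocks most suspected attacks, one blocks fewer.
layers = {"layer_106A": lambda r: random.random() < 0.9,
          "layer_106B": lambda r: random.random() < 0.5}
weights = {"layer_106A": 0.5, "layer_106B": 0.5}
for _ in range(3):
    weights = run_defense_cycle([{"id": i} for i in range(1000)], weights, layers)
print(weights)  # layer_106A ends up with the larger share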
In some embodiments, the disclosed techniques may include using one or more threat vectors identified with regard to a particular request150to expand the manner in which the server system102identifies potential network attacks. For example, as requests are received by the server system102, that traffic may be analyzed to identify requests150that are deemed to potentially be network attacks. In various embodiments, the server system102may identify these potentially malicious requests150based on one or more indicators (also referred to herein as “threat vectors” or simply “vectors”) associated with the requests150. Non-limiting examples of threat vectors include IP address, device fingerprint, attack signature matching, pattern matching, etc. To identify web requests150that are potential network attacks, the requests may first pass through one of server system102's various content delivery network (“CDN”) nodes, which may be located at various geographic locations around the world. Once a request150is parsed by the CDN node, it may be passed to WAF filtering where various different signatures are used to identify the type(s) of network traffic to which the request relates. If a request150is identified (e.g., using one of these threat vectors) as potentially being a particular type of network attack, that request150(along with an identifier of the particular type of network attack with which it is potentially associated) may be routed to the distribution module104and other, legitimate traffic (that has not been identified by the various threat rules and vectors as potentially malicious) may then be routed to the appropriate services within the server system102. For potential SQL injection attacks, for instance, the server system102may compare the requests150to an attack signature for SQL injection attacks, which could include a regular expression (“RegEx”) pattern that includes one or more SQL keywords. If, in this scenario, the server system102determines that a given request150is potentially a SQL injection attack based on a match with the corresponding attack signature, that request150may be routed to the distribution module104. In various embodiments, such a process may be utilized to route requests150corresponding to (potentially many) different types of network attacks to the distribution module104, where the distribution module104may dynamically distribute the requests150across the various defense layers106and defensive operations108as described herein. Note, however, that server system102's ability to identify potentially malicious requests150and route them to the distribution module104depends, in at least some embodiments, on the efficacy of the threat vectors that the server system102uses. Accordingly, if the server system102is unaware of a particular threat vector for a given type of network attack, the server system102will be more susceptible to attacks of that type. Such is true even for embodiments in which machine learning-based anomaly detection algorithms are used to detect potential network threats, as attacks that are not identified as anomalous using such systems are not caught. In various embodiments, however, the disclosed techniques may be used to identify new threat vectors, which, in turn, may be used by the system to identify malicious traffic that may have otherwise gone undetected.
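Returning to the signature-based identification described above, the following non-limiting Python sketch illustrates flagging potential SQL injection attacks and handing them to the distribution module104; the regular expression is deliberately simplistic and, like the function names, is an assumption made for this example rather than a production WAF signature.

import re

# Illustrative (intentionally simplistic) attack signature containing SQL keywords.
SQL_INJECTION_SIGNATURE = re.compile(
    r"('|--|;)\s*(union|select|drop|insert|or\s+1=1)", re.IGNORECASE)

def classify_request(request):
    """Return the suspected attack type for a request, or None for legitimate traffic."""
    if SQL_INJECTION_SIGNATURE.search(request.get("payload", "")):
        return "sql_injection"
    return None

def handle(request, distribution_module, normal_service):
    attack_type = classify_request(request)
    if attack_type is not None:
        # Suspicious requests go to the distribution module with the suspected type attached.
        distribution_module(request, attack_type)
    else:
        normal_service(request)

handle({"payload": "name=' OR 1=1 --"},
       distribution_module=lambda r, t: print("routed to distribution module as", t),
       normal_service=lambda r: print("routed to service"))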
For example, in some embodiments, from an initial set of threat vectors that are caught by server system102's rules, in addition to (or instead of) one or more mitigation operations that may be taken by supplemental mitigation module110, the disclosed techniques may include expanding the initial set of threat vectors to query for more traffic patterns based on commonalities (e.g., IP address, device fingerprint, user-agent string, hostname, etc.). Stated differently, in some embodiments the disclosed techniques include identifying one or more vectors (e.g., IP address) associated with a particular type of attack and then using those one or more threat vectors as a profile to perform a broader search. In doing so, such embodiments may be used to detect other malicious activity performed by the malicious user(s) that the existing threat-detection rules may not be catching. For example, instead of blocking a user-agent that is linked to an exploit kit, the disclosed techniques may include using that user-agent as a threat vector to query a larger set of traffic logs and to obtain a larger pool of logs that are representative of the originating threat source. Consider, for instance, a situation in which a particular malicious user is hitting the server system102with multiple (e.g., seven) different attack vectors and the existing defenses are only blocking three of these attack patterns. Using the disclosed techniques, the server system102may take the three identified vectors and perform an expansion to find the remaining four vectors (or a subset thereof, which may be used to find the remaining vectors), allowing the server system102to now be capable of detecting all seven attack vectors. Referring now toFIG.7, a flow diagram illustrating an additional example method700for identifying new threat vectors and using these new threat vectors to detect potentially malicious network traffic is depicted, according to some embodiments. In various embodiments, method700may be performed by supplemental mitigation module110ofFIG.1to identify one or more threat vectors associated with a particular type of network attack. For example, server system102may include (or have access to) a non-transitory, computer-readable medium having program instructions stored thereon that are executable by one or more computing devices within the server system102to cause performance of the operations described with reference toFIG.7. InFIG.7, method700includes elements702-710. While these elements are shown in a particular order for ease of understanding, other orders may be used. In various embodiments, some of the method elements may be performed concurrently, in a different order than shown, or may be omitted. Additional method elements may also be performed as desired. At702, in the illustrated embodiment, the supplemental mitigation module110identifies one or more initial threat vector(s) from an initial set of threat-detection rules. For example, the disclosed techniques may take one or more threat sources (e.g., IP addresses) from the initial set of detection rules that are used by server system102to route requests150to the distribution module104. At704, in the illustrated embodiment, the supplemental mitigation module110queries those findings using a new set of traffic logs (e.g., network traffic, web traffic, API traffic, etc.) associated with the server system102to determine additional correlations.
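As a purely illustrative sketch of the correlation performed at element704(the function expand_threat_vectors and the log field names are assumptions made for this example), consider the following.

def expand_threat_vectors(known_vectors, traffic_logs,
                          fields=("ip", "device_fingerprint", "user_agent")):
    """Find additional values that co-occur with known threat vectors in the traffic logs.

    known_vectors: set of values already identified as malicious (e.g., {"198.51.100.9"}).
    traffic_logs:  iterable of dicts with the commonality fields named in `fields`."""
    new_vectors = set()
    for record in traffic_logs:
        values = {record.get(f) for f in fields if record.get(f)}
        if values & known_vectors:
            # Every other commonality on a matching record becomes a candidate vector.
            new_vectors |= values - known_vectors
    return new_vectors

logs = [
    {"ip": "198.51.100.9", "user_agent": "exploit-kit/2.1", "device_fingerprint": "fp-77"},
    {"ip": "203.0.113.4", "user_agent": "exploit-kit/2.1", "device_fingerprint": "fp-12"},
    {"ip": "192.0.2.5", "user_agent": "Mozilla/5.0", "device_fingerprint": "fp-90"},
]
print(expand_threat_vectors({"198.51.100.9"}, logs))
# The first pass links the user-agent and fingerprint; a second pass over the expanded
# set would also pick up 203.0.113.4.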
At706, in the illustrated embodiment, the supplemental mitigation module110determines criteria and logical filters to distinguish legitimate traffic from malicious traffic. As non-limiting examples, the following is a list of logical filters that may be applied in one example embodiment: the status codes of web responses (e.g., included in outcome information109) should not be more than 20% erroneous; the success rates of API calls should not be lower than 70% for payment or login requests; if an IP address is not leased or owned by a merchant or partner, the rate of requests/API calls should not exceed five requests per second within a 1-minute frame; an IP address should not hit more than 30 endpoints in a three-minute time period; the same type of request should not see more than 20 variations of payloads within a two-minute period; etc. At708, in the illustrated embodiment, the supplemental mitigation module110exports new threat vector findings as a new source of internal threat intelligence. For example, in various embodiments, the new threat vectors may be used for various purposes, such as IP address scoring, context for downstream systems, campaign attribution, etc. Further, in some embodiments, this intelligence may also be used as a loopback to help build a database of threat patterns to more accurately identify new threats. For example, at710, in the illustrated embodiment, the new threat vectors are used to identify potentially malicious traffic. In some embodiments, these new threat vectors can be integrated with existing threat-detection rules, allowing the system to identify previously undetected threats. For example, in some embodiments, the new threat vectors may be used to identify traffic received by the server system102as requests150that are potential network attacks and, accordingly, the server system102may route such requests150to the distribution module104, as described above. Note that, in some embodiments, the new threat vectors may also be used (e.g., as one of multiple factors) by the analytics module114in determining updated weightage values124. Example Computer System Referring now toFIG.8, a block diagram of an example computer system800is depicted, which may implement one or more computer systems, such as one or more computer systems within server system102ofFIG.1, according to various embodiments. Computer system800includes a processor subsystem820that is coupled to a system memory840and I/O interface(s)860via an interconnect880(e.g., a system bus). I/O interface(s)860is coupled to one or more I/O devices870. Computer system800may be any of various types of devices, including, but not limited to, a server computer system, personal computer system, desktop computer, laptop or notebook computer, mainframe computer system, server computer system operating in a datacenter facility, tablet computer, handheld computer, workstation, network computer, etc. Although a single computer system800is shown inFIG.8for convenience, computer system800may also be implemented as two or more computer systems operating together. Processor subsystem820may include one or more processors or processing units. In various embodiments of computer system800, multiple instances of processor subsystem820may be coupled to interconnect880. In various embodiments, processor subsystem820(or each processor unit within820) may contain a cache or other form of on-board memory.
System memory840is usable to store program instructions executable by processor subsystem820to cause system800to perform various operations described herein. System memory840may be implemented using different physical, non-transitory memory media, such as hard disk storage, floppy disk storage, removable disk storage, flash memory, random access memory (RAM—SRAM, EDO RAM, SDRAM, DDR SDRAM, RAMBUS RAM, etc.), read only memory (PROM, EEPROM, etc.), and so on. Memory in computer system800is not limited to primary storage such as system memory840. Rather, computer system800may also include other forms of storage such as cache memory in processor subsystem820and secondary storage on I/O devices870(e.g., a hard drive, storage array, etc.). In some embodiments, these other forms of storage may also store program instructions executable by processor subsystem820. I/O interfaces860may be any of various types of interfaces configured to couple to and communicate with other devices, according to various embodiments. In one embodiment, I/O interface860is a bridge chip (e.g., Southbridge) from a front-side to one or more back-side buses. I/O interfaces860may be coupled to one or more I/O devices870via one or more corresponding buses or other interfaces. Examples of I/O devices870include storage devices (hard drive, optical drive, removable flash drive, storage array, SAN, or their associated controller), network interface devices (e.g., to a local or wide-area network), or other devices (e.g., graphics, user interface devices, etc.). In one embodiment, I/O devices870includes a network interface device (e.g., configured to communicate over WiFi, Bluetooth, Ethernet, etc.), and computer system800is coupled to a network via the network interface device. The present disclosure includes references to “embodiments,” which are non-limiting implementations of the disclosed concepts. References to “an embodiment,” “one embodiment,” “a particular embodiment,” “some embodiments,” “various embodiments,” and the like do not necessarily refer to the same embodiment. A large number of possible embodiments are contemplated, including specific embodiments described in detail, as well as modifications or alternatives that fall within the spirit or scope of the disclosure. Not all embodiments will necessarily manifest any or all of the potential advantages described herein. Unless stated otherwise, the specific embodiments described herein are not intended to limit the scope of claims that are drafted based on this disclosure to the disclosed forms, even where only a single example is described with respect to a particular feature. The disclosed embodiments are thus intended to be illustrative rather than restrictive, absent any statements to the contrary. The application is intended to cover such alternatives, modifications, and equivalents that would be apparent to a person skilled in the art having the benefit of this disclosure. Particular features, structures, or characteristics may be combined in any suitable manner consistent with this disclosure. The disclosure is thus intended to include any feature or combination of features disclosed herein (either explicitly or implicitly), or any generalization thereof. Accordingly, new claims may be formulated during prosecution of this application (or an application claiming priority thereto) to any such combination of features.
In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims. For example, while the appended dependent claims are drafted such that each depends on a single other claim, additional dependencies are also contemplated, including the following: Claim3(could depend from any of claims1-2); claim4(any preceding claim); claim5(claim4), etc. Where appropriate, it is also contemplated that claims drafted in one statutory type (e.g., apparatus) suggest corresponding claims of another statutory type (e.g., method). Because this disclosure is a legal document, various terms and phrases may be subject to administrative and judicial interpretation. Public notice is hereby given that the following paragraphs, as well as definitions provided throughout the disclosure, are to be used in determining how to interpret claims that are drafted based on this disclosure. References to the singular forms such as “a,” “an,” and “the” are intended to mean “one or more” unless the context clearly dictates otherwise. Reference to “an item” in a claim thus does not preclude additional instances of the item. The word “may” is used herein in a permissive sense (i.e., having the potential to, being able to) and not in a mandatory sense (i.e., must). The terms “comprising” and “including,” and forms thereof, are open-ended and mean “including, but not limited to.” When the term “or” is used in this disclosure with respect to a list of options, it will generally be understood to be used in the inclusive sense unless the context provides otherwise. Thus, a recitation of “x or y” is equivalent to “x or y, or both,” covering x but not y, y but not x, and both x and y. On the other hand, a phrase such as “either x or y, but not both” makes clear that “or” is being used in the exclusive sense. A recitation of “w, x, y, or z, or any combination thereof” or “at least one of . . . w, x, y, and z” is intended to cover all possibilities involving a single element up to the total number of elements in the set. For example, given the set [w, x, y, z], these phrasings cover any single element of the set (e.g., w but not x, y, or z), any two elements (e.g., w and x, but not y or z), any three elements (e.g., w, x, and y, but not z), and all four elements. The phrase “at least one of . . . w, x, y, and z” thus refers to at least one element of the set [w, x, y, z], thereby covering all possible combinations in this list of options. This phrase is not to be interpreted to require that there is at least one instance of w, at least one instance of x, at least one instance of y, and at least one instance of z. Various “labels” may precede nouns in this disclosure. Unless context provides otherwise, different labels used for a feature (e.g., “first circuit,” “second circuit,” “particular circuit,” “given circuit,” etc.) refer to different instances of the feature. The labels “first,” “second,” and “third” when applied to a particular feature do not imply any type of ordering (e.g., spatial, temporal, logical, etc.), unless stated otherwise. Within this disclosure, different entities (which may variously be referred to as “units,” “circuits,” other components, etc.) may be described or claimed as “configured” to perform one or more tasks or operations.
This formulation—“[entity] configured to [perform one or more tasks]”—is used herein to refer to structure (i.e., something physical). More specifically, this formulation is used to indicate that this structure is arranged to perform the one or more tasks during operation. A structure can be said to be “configured to” perform some task even if the structure is not currently being operated. A “memory device configured to store data” is intended to cover, for example, an integrated circuit that has circuitry that performs this function during operation, even if the integrated circuit in question is not currently being used (e.g., a power supply is not connected to it). Thus, an entity described or recited as “configured to” perform some task refers to something physical, such as a device, circuit, memory storing program instructions executable to implement the task, etc. This phrase is not used herein to refer to something intangible. The term “configured to” is not intended to mean “configurable to.” An unprogrammed FPGA, for example, would not be considered to be “configured to” perform some specific function. This unprogrammed FPGA may be “configurable to” perform that function, however. Reciting in the appended claims that a structure is “configured to” perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112(f) for that claim element. Should Applicant wish to invoke Section 112(f) during prosecution, it will recite claim elements using the “means for [performing a function]” construct. The phrase “based on” is used to describe one or more factors that affect a determination. This term does not foreclose the possibility that additional factors may affect the determination. That is, a determination may be solely based on specified factors or based on the specified factors as well as other, unspecified factors. Consider the phrase “determine A based on B.” This phrase specifies that B is a factor that is used to determine A or that affects the determination of A. This phrase does not foreclose that the determination of A may also be based on some other factor, such as C. This phrase is also intended to cover an embodiment in which A is determined based solely on B. As used herein, the phrase “based on” is synonymous with the phrase “based at least in part on.” The phrase “in response to” describes one or more factors that trigger an effect. This phrase does not foreclose the possibility that additional factors may affect or otherwise trigger the effect. That is, an effect may be solely in response to those factors, or may be in response to the specified factors as well as other, unspecified factors. Consider the phrase “perform A in response to B.” This phrase specifies that B is a factor that triggers the performance of A. This phrase does not foreclose that performing A may also be in response to some other factor, such as C. This phrase is also intended to cover an embodiment in which A is performed solely in response to B. In this disclosure, various “modules” operable to perform designated functions are shown in the figures and described in detail (e.g., distribution module104, feedback module112, analytics module114, etc.). As used herein, a “module” refers to software or hardware that is operable to perform a specified set of operations. A module may refer to a set of software instructions that are executable by a computer system to perform the set of operations. A module may also refer to hardware that is configured to perform the set of operations. 
A hardware module may constitute general-purpose hardware as well as a non-transitory computer-readable medium that stores program instructions, or specialized hardware such as a customized ASIC. Accordingly, a module that is described as being “executable” to perform operations refers to a software module, while a module that is described as being “configured” to perform operations refers to a hardware module. A module that is described as “operable” to perform operations refers to a software module, a hardware module, or some combination thereof. Further, for any discussion herein that refers to a module that is “executable” to perform certain operations, it is to be understood that those operations may be implemented, in other embodiments, by a hardware module “configured” to perform the operations, and vice versa.
76,468
11863527
DETAILED DESCRIPTION Intra network devices, such as computer and notebook devices, and servers, enjoy free inter network device, such as a Layer 2 switch, port-to-port movement within a network. In certain modes of certain secure networks for robust secure communication, however, even if pre-authenticated, intra network devices may be prohibited from making a similar move, e.g., from an existing switch port to a new switch port (a local-to-local move), prior to reauthenticating at the new port. Analogously, a virtual inter network device, such as a virtual machine, may not be free to move from one virtual switch to another virtual switch (a local-to-remote move) without reauthentication at the latter switch despite pre-authentication at the former switch. In another virtualization context, a physical network device may be similarly limited in movement from a virtual switch port to another virtual switch port (a local-to-local move) even when pre-authorized at the former port without subsequent reauthorization at the latter port. Secure networks generally conform to network protocols of defined-industry standards, an example of which is the industry-adopted IEEE 802.1x and WIFI (with or without encryption) standards. Noteworthy, the 802.1x standard was in part adopted to enhance network security, such as in data center clouds, by preventing certain bad actor scenarios. The all too commonplace practice of Media Access Control (MAC) address spoofing, also known as a “denial of service attack”, exemplifies the incentive behind adopting the 802.1x protocol. Denial of service attacks cause collisions between distinct sets of intra network devices, such as virtual host machines, with a common MAC address and between inter network devices, such as physical and virtual switches, with a common MAC address on a common network switch. MAC address spoofing (or denial of service attack) is perhaps best appreciated by the following bad actor example. Suppose a business entity conference room is equipped with multiple switch ports, all routed to a common switch. The switch ports provide authorized conference room attendee devices, such as pre-authenticated computers, servers, and iPad devices, of conference room attendees with portable access to the remaining network devices of a shared network when attendee devices are plugged into the switch ports. To recognize an authenticated device, the switch, or a remotely located inter network device, may compare a MAC address, uniquely identifying the authenticated device, with a list of known MAC addresses, each uniquely identifying a respective authenticated device of the shared network. The device may be successfully authenticated based on a positive MAC address comparison outcome. Now suppose a bad actor, Mr. Deceitful, plugs his unauthorized notebook into a conference room switch port and spoofs Mr. Honest's authenticated laptop MAC address effectively disguising his notebook as Mr. Honest's laptop to gain unauthorized access to the network traffic intended for Mr. Honest. Mr. Deceitful is clearly engaging in wrongful and potentially dangerous interference with network traffic by spoofing an authenticated device MAC address. Pre-802.1x, fooled by the spoofed MAC address, the switch would have likely failed to notice that Mr. Deceitful's notebook is in fact not Mr. Honest's laptop, happily forwarding packets intended for Mr. Honest to Mr. Deceitful. 
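The MAC comparison in the conference-room example can be pictured with the following minimal Python sketch; the allowlist contents and helper name are hypothetical. As the spoofing scenario illustrates, a bare MAC-address comparison is exactly the check that a spoofed address defeats, which is the weakness the 802.1x-style measures discussed next are meant to address.

# Known MAC addresses of authenticated devices (hypothetical values).
AUTHENTICATED_MACS = {
    "aa:bb:cc:dd:ee:01",  # Mr. Honest's laptop
    "aa:bb:cc:dd:ee:02",  # a pre-authenticated conference-room server
}

def positive_mac_comparison(source_mac):
    # A positive comparison outcome is treated as successful authentication.
    return source_mac.lower() in AUTHENTICATED_MACS

# A frame carrying a spoofed copy of Mr. Honest's MAC passes the naive check,
# so packets intended for Mr. Honest would be forwarded to the impostor.
print(positive_mac_comparison("AA:BB:CC:DD:EE:01"))  # True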
Certain features of the 802.1x standard were adopted to prevent precisely this type of bad actor scenario, among others, by requiring inter network device re-validation with each inter network device movement. To guard against bad actor scenarios, some network security protocols, like IEEE 802.1x, require authentication of an inter network device before allowing the inter network device to reliably communicate with remaining inter network devices of a shared network. As previously noted, in certain modes, the 802.1x protocol requires the added security measure of reauthenticating an inter network device each time, without exception, the device desires to forward network traffic through a different physical or virtual switch port and further requires the added stringent measure each time an intra network device desires to forward network traffic through a different virtual switch despite prior authentication at a current virtual switch. Similarly, reauthentication is a prerequisite to forwarding network traffic when a physical device attempts to move from a current virtual switch to a different virtual switch. The network device is blocked from communicating with the remaining network devices of a shared local network through switch ports (physical or virtual) other than the pre-authenticated switch port. Intra network device movement as well as inter network device movement of a physical or a virtual device therefore both require device reauthentication despite preexisting authentication as a prerequisite to a successful movement. Host “authentication” on a physical or a virtual device port generally signifies: 1) packets from the host (e.g., server) are allowed onto the authenticated device port without experiencing packet drops, and 2) packets from the host are not allowed onto any other device port. That is, a “denial of access” is issued against all ports other than the authenticated port and a device attempting to move from port A to port B risks experiencing packet drops at port B prior to reauthenticating at port B. In a conventional switch, a moving device first disconnects from port A, authentication of the device at port A is thereafter terminated by the switch, and the device then moves from port A to port B. The device disconnection from port A may be noticed by the switch in one of three ways: a “link down” event on the port (physical disconnection of the device from the port), device “sign off” (software command-driven port disengagement), or a time out. Typically, when a device physically disconnects from a port—“link down”—the switch takes notice of the device disconnection and terminates the authentication that was granted prior to disengagement from the port. But in typical applications, an intermediary hub is generally positioned between the device and the switch port, a relatively effective hurdle to a direct device-to-port connection. With the hub effectively acting as a barrier to the moving device, the switch fails to notice a device link down event. Accordingly, even in the face of physical device disconnection from port A, the switch effectively does not notice a link down on port A and knows of no device authentication removal at port A. Consequently, device movement to port B results in failed traffic forwarding attempts. Similarly, the switch fails to take note of a sign off event because no sign off command is issued by the device to the switch to disengage—an expected device action in the context of a switch port disengagement.
The switch remains ignorant of the device disconnection and does not know to terminate the existing authentication session. In a timeout scenario, the switch presumes disengagement after the expiration of a predetermined time period of undetected device communication and terminates the existing authentication session assuming the device has disconnected. While a time out option for removing an existing device authentication is technically a viable authentication termination option, it is nonetheless an impractical one given the associated unreasonable delays for expiration of a time period, which, while in some cases configurable, can be 3600 seconds (1 hour). As they do with physical port-to-port network device movement, existing security protocol-compliant networks may constrain virtual switch-to-virtual switch device movement and host-to-host movement in virtualized environments. In accordance with the IEEE 802.1x protocol, in an ethernet virtual private network (EVPN) environment, a pre-validated virtual machine, at a virtual extensible local area network (VXLAN) network tunnel endpoint (VTEP), for example, cannot move from a current VTEP to a new VTEP and expect to resume or start reliable communication through the new VTEP before re-validating at the new VTEP. Similarly, a pre-validated virtual machine cannot move from a current multiprotocol label switching (MPLS) network to a new MPLS network before re-validating at the new MPLS network. The same can be said of other encapsulation methods, such as Generic Routing Encapsulation (GRE) and Control and Provisioning of Wireless Access Points (CAPWAP), to name a few examples. Existing networking practices, lacking in certain capabilities, fail to meet certain secure network protocol requirements. Take the case of a conventional virtual machine (VM) static entry into the new VTEP. The manual address configuration settings of static address entries grant static addresses one of the highest configuration priority rankings among their peers. While impressive, this very feature precludes statically addressed devices from protocol-compliant mobility because their priority ranking conflicts with address changes. Therefore, the priority ranking of static assignments coupled with stringent network authentication requirements restrict voluntary VM movement between and within VTEPs even in the face of pre-authentication. On the other hand, conventional dynamic entry into a new VTEP can all too freely accommodate VM mobility to the new VTEP, but it does so at the risk of violating certain security protocols—an unacceptable outcome to private enterprises competing to meet privacy law compliances and customer privacy concerns. In a specific 802.1x mode, for example, independent multi-device authentication is a prerequisite to any VM-initiated movement from an old VTEP to a new VTEP but the new VTEP lacks knowledge of the VM pre-authentication status at the old VTEP and is equally ignorant of the requisite 802.1x protocol reauthentication at the new VTEP. Consequently, VM network traffic attempts at the new VTEP result in failed traffic forwarding because VM-sourced network traffic packets at the new VTEP will fail to reach their intended destination. Dynamic entry is therefore too permissive to meet the requirements of certain secure protocol modes.
In summary, while static entries inherently lack the capability to accommodate proper virtual machine movement, dynamic entries are too permissive to meet some of the most robust security protocol requirements of certain network security protocol standards. In disclosed non-virtualized embodiments and methods, movement of an intra network device, such as a computer, a notebook, or a server, is facilitated between ports of an inter network device, such as switch, by re-authenticating the intra network device at a new switch port. Port-to-port inter and intra network device mobility proves compliant with certain robust secure network protocol measures. For example, when alerted, a physical or virtual network switch may re-authenticate a pre-authenticated inter network device (physical or virtual) at an existing switch port in part facilitated by an authentication agent executing on the switch, at a different switch port. In a non-virtualized network, for example, the authentication agent central to the switch, may be responsible, in large part, for the entire port-to-port authentication process. In a virtual switch device move, the authentication agent of the destination switch may be responsible for initiating reauthentication of the moving device at the destination switch. The switch may receive an acknowledgment or notification from an authentication host in response to successful completion of a new port reauthentication. In such cases, the switch may update the device-port association of the pre-authenticated network device at the old port (or switch) in a corresponding forwarding table with an association of the reauthenticated network device at the desired port (or switch). Instead of replacing the association, the switch device may remove the old association from the forwarding table entirely and add the new association to the forwarding table. Various embodiments and methods of the disclosure include a system for provisionally authenticating a device desirous of moving from one physical switch port to another physical switch port or from one physical switch to another physical switch. A software-based mechanism achieves port-to-port and switch-to-switch migration, sending packets to a different switch port or a different switch without packet loss risk. In some embodiments, namely, virtual device moves, the authentication (e.g., 802.1X) semantics are preserved across a wider overlay network with a combined EVPN environment and a network protocol authentication (e.g., 802.1X). For example, the system can extend EVPN to carry the notion of a “secure” MAC address—a pre-authenticated device address at a source device or port—between routers. In accordance with disclosed provisional authentication methods, an authentication agent of a destination device has added responsibilities relative to conventional approaches. In effect, the destination device authentication agent facilitates a software-based authentication procedure in lieu of a conventional hardware-based approach, such as physical unplugging (e.g., link down, sign off, and time out) procedures, as earlier discussed. Accordingly, no timeout expiration period for disconnecting with the moving device is awaited by the source device. In some embodiments, the authentication agent executing on a switch initiates a provisional authentication session (a new session) to effect reauthentication of the moving device at an unauthenticated switch port. 
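The forwarding-table maintenance described above (removing or replacing the old device-port association once reauthentication at the new port succeeds) can be sketched as follows. The dictionary layout and function name are assumptions for illustration; an actual switch would program hardware forwarding tables rather than a Python mapping.

# Hypothetical MAC-to-port forwarding table before the move.
forwarding_table = {"aa:bb:cc:dd:ee:01": "port A"}

def commit_reauthenticated_port(table, mac, new_port):
    # Remove the old association entirely and add the new association,
    # which is equivalent to replacing the entry in place.
    table.pop(mac, None)
    table[mac] = new_port

commit_reauthenticated_port(forwarding_table, "aa:bb:cc:dd:ee:01", "port B")
print(forwarding_table)  # {'aa:bb:cc:dd:ee:01': 'port B'}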
It is within this context that the embodiments ofFIGS.1and3-10are discussed subsequently below. In contrast to a non-virtualized port-to-port hop, in a virtualized network, orchestration of an intra network device-to-intra network device (e.g., virtual switch-to-virtual switch) authentication process may be a shared activity between the two devices. In effect, the engagement between the two intra network devices begins with the source intra network device assuming the lead role and ultimately giving it up to the destination intra network device in response to successful reauthentication at the destination intra network device. The existing authentication session at the source device remains opaque to the destination device to reduce additional and unnecessary destination device duties. For example, a host (e.g., a virtual machine) may desire to join a new hypervisor (destination hypervisor) connected to a destination switch device at which the host is unauthenticated; nevertheless, the host is initially authenticated at a source switch device and is a part of a source hypervisor. Initially, the host is physically moved from the source switch device to the destination switch device, but the two switches are not necessarily yet aware of the host move. The host forwards traffic—authentication packets—headed for the destination device, which may serve as notification from the host for an endpoint-to-endpoint host hop (e.g., from the source device to the destination device). The source device advertises (e.g., in compliance with border gateway protocol (BGP)) an authentication route (e.g., Type 2) and, in response to the advertised route, the destination device initiates a reauthentication session at the destination device. In response to the source device advertisement, the destination device may make a request of an independent authentication host, similar to the authentication process described above in relation to physical port-to-port hops, to authenticate the host at the destination device. The destination device takes over the moving host and informs the source device (e.g., via BGP) accordingly. In response, the source device terminates the existing authentication session, wholly releasing the host to the destination device, and updates its association maps accordingly. The device-to-device reauthentication process is therefore a shared responsibility between, and carried out promptly by, the source and the destination devices. In both the non-virtualized and the virtualized processes, the reauthentication process prevents packet loss exposure. The authentication session handoff from one physical switch port to another physical switch port, from one physical switch to another physical switch, from a virtual switch port to another virtual switch port, or from a virtual switch to another virtual switch, is effectively one continuous process in each scenario, with an initial authentication session continuing onto a subsequent session seamlessly and the initial session terminating only in the presence of a subsequent successful authentication.
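The shared source/destination orchestration just described can be summarized in a short, purely illustrative Python sketch. The class and method names below are hypothetical, and the route advertisement is reduced to a direct call; the point is only the ordering, in which the destination consummates reauthentication first and only then does the source terminate the existing session and release the host.

class SwitchDevice:
    # Toy stand-in for a source or destination switch/VTEP.
    def __init__(self, name):
        self.name = name
        self.authenticated_hosts = set()

    def advertise_move(self, host, destination):
        # Source side: advertise the pre-authenticated ("secure") host address,
        # e.g., as an EVPN Type 2 route carrying an authentication extension.
        destination.reauthenticate(host, source=self)

    def reauthenticate(self, host, source):
        # Destination side: consult an independent authentication host.
        if authentication_host_accepts(host):
            self.authenticated_hosts.add(host)
            source.release(host)  # the source lets go only after success

    def release(self, host):
        # Terminate the existing session and update association maps.
        self.authenticated_hosts.discard(host)

def authentication_host_accepts(host):
    return True  # stand-in for the independent authentication host's decision

source, destination = SwitchDevice("source VTEP"), SwitchDevice("destination VTEP")
source.authenticated_hosts.add("VM-1")
source.advertise_move("VM-1", destination)
print(source.authenticated_hosts, destination.authenticated_hosts)  # set() {'VM-1'}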
In some virtual device movement cases, due to an inherent network traffic delay, from the time a moving device is authenticated by an authentication host at the destination and the source switch port (or source switch) receiving an updated message from the destination switch port (or destination switch) but before traffic is blocked at the source, authentication at the source switch port and reauthentication at the destined switch port overlap and the network device static address configuration is morphed into a dynamic address configuration. In response to reauthentication, the switch blocks communication between the network device and local network devices through the source switch port and the network device address configuration becomes static again. In some embodiments, after a moving device has physically moved but before the moving device has been reauthenticated, a source network device advertises a route through a network, such as an EVPN with border gateway protocol (BGP), to a destination network device. The advertised route includes a payload with an authentication extension signaling an authentication type using a new extended community. The authentication extension is programmably extendable for universal accommodation of various industry standard protocols. A system and method for reauthenticating a host moving from one BGP router to another BGP router is disclosed. A host is initially authenticated at the first BGP router, for example, and free to communicate with other devices of the network to which the first BGP router belongs through the first BGP router but blocked from communicating with other devices of the network to which the second BGP router belongs through the second BGP router. The host is physically moved from the first BGP router to the second BGP router, a router to which the host is desirous to move but the two routers are unaware of the host move. This discovery is advertised to the second BGP router with a new extended community indicating authentication (or pre-authentication) of the host at the first BGP router. In response to the advertisement, an authentication session is consummated at the second BGP router. In response to a successful completion of the authentication session, the host is authorized to transmit network traffic on the second BGP router and subsequently blocked from doing the same at the first BGP router. Networks are generally required to maintain network element connection associations, such as associations between MAC addresses and forwarding address ports for proper packet routing procurement between the network elements. For example, in a non-virtual network, a switch maintains associations between inter network devices and switch ports to which the inter network devices may be linked. Egress and ingress network traffic between the inter network devices is typically facilitated by use of one or more tables. For example, a layer 2 switch of a typical local area network (LAN) may maintain a forwarding table of associations between uniquely identifying authenticated inter network device MAC addresses and corresponding switch port identifiers. In some embodiments, to facilitate a successful VM VTEP-to-VTEP move, respective VTEP forwarding tables, some of which may include an aggregate of software forwarding tables, are updated. 
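One way to picture the table update mentioned in the preceding sentence is the small Python sketch below, in which each VTEP's table maps a secure MAC address either to a local port or to a remote VTEP. The layout is an assumption for illustration only; the prose that follows describes the same local-to-remote and remote-to-local flip in more detail.

def complete_vtep_move(source_table, dest_table, mac, dest_local_port, dest_vtep_name):
    # At the source VTEP, the formerly "local" secure MAC becomes a "remote" entry
    # pointing at the destination VTEP; at the destination VTEP, the formerly
    # "remote" entry becomes a "local" entry bound to the newly authenticated port.
    source_table[mac] = ("remote", dest_vtep_name)
    dest_table[mac] = ("local", dest_local_port)

source_vtep_table = {"aa:bb:cc:dd:ee:01": ("local", "port 1")}
dest_vtep_table = {"aa:bb:cc:dd:ee:01": ("remote", "VTEP1")}
complete_vtep_move(source_vtep_table, dest_vtep_table, "aa:bb:cc:dd:ee:01", "port 7", "VTEP2")
print(source_vtep_table)  # {'aa:bb:cc:dd:ee:01': ('remote', 'VTEP2')}
print(dest_vtep_table)    # {'aa:bb:cc:dd:ee:01': ('local', 'port 7')}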
What starts out as a “local” secure MAC address, identifying the VM at a forwarding table entry of the authenticated source VTEP port, is ultimately the subject of a “remote” secure MAC address table entry of the forwarding table of the source VTEP, and what starts out as a “remote” secure MAC address, identifying the VM at a forwarding table entry of the unauthenticated destination VTEP port, is ultimately the subject of a “local” secure MAC address entry of the forwarding table of the destination VTEP—a MAC mobility feature. In an example embodiment, the MAC mobility feature is 802.1x-compliant. It is understood, however, that the MAC mobility feature may be compliant with other suitable networking authentication protocol standards. In virtual networks, similarly, each of the source and destination hosts, hypervisors, for example, may maintain a similar address-to-host topology and each hypervisor may modify a respective table as discussed above relative to switch port movements. Various disclosed embodiments and methods herein present reliable and robust networking authentication approaches and techniques to meet existing industry privacy concerns and adopted privacy governances. Be it in the physical space or the virtual space, some of today's strictest protocol authentication requirements are addressed by various disclosed embodiments and processes, with robust system reliability achieved by enforcement of risk-averse measures without undue reauthentication time delays. Reauthentication of a moving device is consummated at a desired destination device without packet loss. The system offers flexibility of movement from physical or virtual switch port-to-switch port and physical or virtual switch-to-switch by efficiently reauthenticating the moving device at each network stop. Features are built into the device movement process to avoid network device and architectural redesigns in favor of supporting legacy compatibility. Applications of various disclosed embodiments and methods are large in number and wide in scope. Nonlimiting network applications include a network device moving between switch ports of a physical switch, between switch ports of a virtual switch, between virtual switch ports, and between virtual switches. The network device movement may be within a network cloud, across network clouds, within a data center, across data centers, within a LAN, across LANs, within a WAN, across WANs, or among a heterogeneous combination of networks. In disclosed non-virtualized embodiments and methods, an inter network device may be a physical or a virtual switch, a router, a switch with router capability, or other suitable network devices capable of facilitating a network device movement with successful reauthentication at each network stop using reauthentication techniques of various embodiments disclosed herein. Nonlimiting examples of a moving network device are a computer, a notebook, a tablet, a smart device, a router, and a server. In disclosed virtualized embodiments and methods, a network device may make a move in an EVPN environment, for example, within a virtual switch (e.g., a VTEP) or across virtual switches and between hosts and servers. FIG.1is an illustrative example of a block diagram of a network system100implementing various disclosed inter network device reauthentication systems and methods.
In accordance with some embodiments, network system100includes a network switch102arranged in a networking configuration to facilitate implementation of various device reauthentication techniques disclosed herein. In accordance with provisional authentication practices, network switch100facilitates prompt host port-to-port transfers in the absence of packet loss to achieve network performance optimization and robustness. InFIG.1, switch102is presumed a part of a network such as, without limitation, a local area network (LAN) or a wide area network (WAN). It is understood that the embodiment ofFIG.1is merely a nonlimiting example of an inter network device with lossless port-to-port inter network device movement capability and other embodiments suitable for effecting similar port-to-port device transfers are contemplated. For example, switch102is an example of a network host device and may be replaced with any host device configurable with requisite features to effect successful and prompt device port transfers. Referring still toFIG.1, in accordance with various embodiments and methods of the disclosure, switch102includes switch ports104, security agent114, and forwarding table122. In the interest of simplicity of illustration, the example scenario ofFIG.1, as explained below, is carried to subsequent embodiments shown inFIGS.3-8. Switch102is an example of a network device and may be a layer 2 (“Layer 2”) type of switch although switch102need not be a Layer 2 switch and may be a Layer 3 switch with router capabilities. A network host device, such as switch102, implementing the authentication and provisioning functions of the disclosure, may operate at any suitable network layer. Switch ports104are shown to include various ports among which are switch ports104A and104B. While switch102is shown to include 9 switch ports in each ofFIGS.1and3-8, it is understood that switch102or a host in general is not restricted to a limited number (for example, 9) ports and may instead have any number of ports. Switch102maintains associations between intra network devices and switch ports104, in forwarding table122of Host1, to which the inter network devices may be linked for facilitating egress and ingress network traffic between the inter network devices. In some embodiments, table122may include or can be incorporated in or is a part of one or more other tables. In some embodiments, Host1 is an intra network device, a device outside of the network to which switch102belongs. In some embodiments, Host1 is an inter network device, positioned within a network common to switch102. In some embodiments, forwarding table122is a MAC address table, commonly referred to as a “Content Addressable Memory (CAM) table”, used by switch102to determine where to forward traffic on a corresponding network. For example, assuming switch102to be a Layer 2 switch of a LAN, switch102may maintain associations between uniquely identifying authenticated network device MAC addresses and corresponding uniquely identifying switch port identifiers in forwarding table122. In some embodiments, security agent114is a software program, code, or routine that when executed carries out certain security authentication and reauthentication processes disclosed herein. Agent114, when executed, typically manages all entries (or ports) of all inter network devices, such as switch102, and determines which intra/inter network device, such as Host1, is authenticated on which port. 
But in the various embodiments and methods disclosed herein, agent114of switch102, when executed by a switch processor, such as a central processor of switch102, has the added responsibility of implementing a temporary session—provisional authentication. For example, agent114may implement an 802.1x-compliant authentication session. Accordingly, agent114is not limited to executing processes for achieving compliance with the 802.1x protocol and can be programmed to carry out processes for meeting alternate networking authentication protocol requirements. Agent114is discussed herein primarily as a software program but agent114may, in part or in whole, carry out various authentication processes, as disclosed herein, in hardware. Still alternatively, agent114may direct another software- or hardware-based entity to carry out such processes. For the purpose of simplicity of illustration, switch102is presumed a multiport Layer 2 switch configured to use MAC addresses to forward data at the data link layer of the open systems interconnection (OSI) model https://en.wikipedia.org/wiki/Data_link_layer. As earlier indicated, it is understood that switch102may be, for example, a Layer 3 switch with incorporated routing functionality configured to forward data at the network layer. System100ofFIG.1is further shown to include a Host1 initially connected to switch port104A of switch102through link110. Link110connects the nodes of the network of which switch102is a participating component. In the embodiment ofFIG.1, link110is a physical link although, link116may be a virtual link. Host1 may be initially connected to any of the ports104of switch102. For purposes of discussion, Host1 is presumed connected initially to port104A and by way of example, with aspirations to disconnect from port104A and connect to port104B of switch102through a link116not yet established. Link116, when established, connects the port104B of switch102to Host1 in the example embodiment ofFIG.1. Analogously to link110, link116may be a physical or a virtual link. InFIG.1, for the purpose of discussion, link116is presumed a physical link. Host1 may be any network device suitably configurable to perform various processes and functions of provisional authentication at switch102port to which Host1 wishes to attach, forwarding network traffic upon completion of authentication reliably and without experiencing packet loss. InFIG.1, Host1 is labeled Host1106when attached via a link to port104A labeled Host1108when attached via a link to port104B. Host1 is similarly labeled inFIGS.3-8. In the example ofFIGS.1and3-8, Host1 is a server desirous to move from an existing authenticated connection to port104A, of switch102, to a new unauthenticated connection, at port104B of switch102. Initially and prior to making its move, Host1 is pre-authorized to communicate with switch102at port104A. In some embodiments, pre-authorization includes successful authentication of Host1 at port104A by an authentication server, such as without limitation, a Radius server. In some embodiments, initial authentication of Host1106, at port104A is accomplished through execution of agent114although authentication of Host1106at port104A may be achieved by execution of other agents or hardware implementation, or a combination thereof. Regardless of who successfully authenticates Host1106and how Host1106is authenticated at port104A, without proper authentication, switch102will not recognize packets sourced from Host1, at port104A. 
In some cases, Host1 can be authenticated at a single port of switch102at a given time. Switch102has hardware port entries (not shown), at each distinct corresponding port, programmable to block packet entry onto a corresponding port. In the configuration ofFIG.1, switch102has a hardware entry (not shown) at a location where port104B would meet a physically connected (or attached) network device, such as Host1. The hardware entry at port104B, marked by “X” inFIG.1, is programmed by switch102to block packets from Host1 to switch102at port104B. InFIGS.1and3-8, the letter “X” designates a respective blocked port entry. For example, inFIG.1, all but port104A of switch ports104are blocked to incoming packets—unauthenticated—from before the provisional authentication process begins until the provisional authentication process completes, as will be discussed below. But even if hardware entries were programmed to allow reliable packet access through port104B, conventional authentication processes would nevertheless fail because of packet drops. None of the three traditional mechanisms for breaking link110offers a practical and reliable authentication option, as previously noted; therefore, the existing authentication session carries on with no foreseeable new authentication session at port104B. In accordance with various disclosed mechanisms, a soft authentication process indeed facilitates a new authentication session (at port104B) with an end to the existing session (at port104A) only after the new session is successfully established. Switch102may be featured with the capability to program hardware port entries for programmably receiving or blocking traffic through a corresponding port. But typically, a port entry hardware mechanism is designed to punt authentication packets to an internal switch central processor. In various embodiments and methods of the disclosure, the switch authentication agent sits in a favorable position to intervene at this point, steering the authentication packets toward a software-driven new authentication session—a soft reauthentication approach—to prevent otherwise dropped packets. With continued reference to the above example, as further discussed relative toFIG.4, in accordance with some embodiments and methods of the disclosure, authentication packets, such as without limitation, 802.1x packets, are steered around the hardware blocked entry at port104B onto a provisional connection serving as a temporary tunnel to an independent authentication device. Pre-authentication, authentication packets are provided with an exclusive right of way at port104B while non-authentication packets are kept out and authentication at port104A remains uninterrupted. Successful authentication, as reported by the authentication server, triggers a Host1 move from port104A to port104B. In association with the embodiments ofFIGS.1and3-8, forwarding table122maintains correspondences between authenticated network devices of a network, such as Host1, and corresponding switch ports of switch102, such as ports104. In response to completion of the authentication process, switch102determines the table entry for port104A to be obsolete, removes the entry for port104A from forwarding table122, and adds an entry to forwarding table122for port104B corresponding to the destination and newly authenticated switch port.
In some embodiments, switch102may replace an existing device-to-port104A entry with the device-to-newly authenticated port entry. Switch102programs hardware port entries for ports104A and104B accordingly, removing the block at port104B to allow Host1 regular network traffic to the rest of the network through port104B and blocking regular network traffic through port104A. The provisional authentication process completes, and a successful make-before-break process is achieved. That is, link110remains in effect and Host1 remains authenticated at port104A until provisional tunneling and successful authentication of the authentication packets are consummated. It is only after successful software authentication at port104B that port104A is blocked. Accordingly, regular packet traffic from Host1 at port104B, through link116, to the remaining network elements of the corresponding network can begin. In summary, with continued reference toFIG.1, switch102provisionally authenticates Host1 through authentication agent114executing on a processor of switch102. But initially, pre-provisional authentication, Host1 is authenticated for regular traffic communication with remaining network devices of the network only at port104A through link110. A provisional authentication session starts when authentication packets from Host1 are intercepted and redirected by a software mechanism for authentication of Host1 at port104B using an independent authentication device. Indeed, all ports104with the exception of port104A are initially blocked to Host1, thereby preventing Host1 from forwarding network traffic to the remaining network devices. Agent114causes switch102to intercept authentication packets, e.g., 802.1x packets, sourced from Host1 and headed for port104B. The authentication packets are directed by switch102to an authentication host device for reauthenticating Host1 during a new authentication session while the existing session at port104A remains in effect for traffic flow to the remaining network devices. In some embodiments, as previously noted, device authentication at a particular switch port prevents authentication of the same device at another switch port. Stated differently, a device is authenticated at a single switch port at any given time and, in turn, only a single switch port may have control of a device MAC address at any given time. When receiving authentication packets from Host1, agent114facilitates soft authentication by causing switch102to send the received authentication packets to a remote authentication server. Only in response to receiving an acknowledgment of a successful authentication session from the independent authentication device can switch102program security policies on port104B. Accordingly, and in contrast to traditional authentication processes, a provisionally connected port is linked up even though communication from Host1 to any other host on the network, other than through the existing linked-up port, continues to be blocked. Accordingly, while not possible previously, in the various embodiments and processes disclosed herein, the conventional authentication agent is extended to understand the new concept of a new session, at port104B, taking over from the old session, at port104A, a seamless reauthentication session. The packets during a period between the old session and the new session are intercepted by the authentication agent and, instead of the process dropping the packets, the authentication packets are directed to an authentication server.
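The interception and make-before-break behavior described above can be condensed into the following illustrative Python sketch. The EtherType constant is the one commonly used for 802.1x (EAPOL) frames; everything else, including the data structures and the relay callback, is a hypothetical simplification rather than the disclosed implementation.

EAPOL_ETHERTYPE = 0x888E  # EtherType commonly used by 802.1x authentication frames

def handle_punted_frame(frame, port, switch, relay_to_auth_server):
    # Authentication frames get an exclusive right of way on the blocked port;
    # ordinary traffic on that port stays blocked until reauthentication succeeds.
    if frame["ethertype"] == EAPOL_ETHERTYPE:
        relay_to_auth_server(frame, port)
        return "relayed"
    return "forwarded" if port in switch["unblocked_ports"] else "dropped"

def on_authentication_acknowledgment(switch, mac, old_port, new_port):
    # Make-before-break: nothing at the old port is torn down until the
    # authentication server has acknowledged the new session.
    switch["unblocked_ports"].add(new_port)      # allow regular traffic on the new port
    switch["forwarding_table"][mac] = new_port   # record the new secure association
    switch["unblocked_ports"].discard(old_port)  # only now block the old port

switch_state = {
    "forwarding_table": {"aa:bb:cc:dd:ee:01": "104A"},
    "unblocked_ports": {"104A"},
}
relayed = []
print(handle_punted_frame(
    {"ethertype": EAPOL_ETHERTYPE, "src_mac": "aa:bb:cc:dd:ee:01"},
    port="104B",
    switch=switch_state,
    relay_to_auth_server=lambda f, p: relayed.append((p, f["src_mac"])),
))  # relayed
on_authentication_acknowledgment(switch_state, "aa:bb:cc:dd:ee:01", "104A", "104B")
print(switch_state["forwarding_table"], switch_state["unblocked_ports"])
# {'aa:bb:cc:dd:ee:01': '104B'} {'104B'}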
In some embodiments, a pre-session state of authentication agent114is used to transfer authentication packets from Host1108(FIG.1) to the authentication server. In response to receiving an authentication acknowledgment from the authentication server, agent114creates the requisite security policies. FIG.2shows a generalized embodiment of a network device (or “network element”)200. As depicted, network device200may be a router, a switch, and/or any other network device configured to receive network traffic from a first device and forward the network traffic to a second device, such as by performing an address lookup in a forwarding table. Network device200, in a virtual world, may be a virtual switch, such as, without limitation, a VTEP effecting reauthentication in a local move or a local-to-remote move, as further described relative to subsequent figures. Those skilled in the art will recognize that the switches of systems100and300-800ofFIGS.1and3-8, respectively, may be implemented as network device200. Network device200may receive network traffic (e.g., from Host1) via a network interface (e.g., link110or116), such as network interface210A, and provide the network traffic to control circuitry204, which includes processing circuitry206and storage208. While network device200is shown to include four network interfaces (e.g., network interfaces210A,210B,210C, and210D), this is merely illustrative, and it is contemplated that network device200may include any number of network interfaces, and that the network interfaces may be of any type of wired or wireless network interface, such as RJ45 ethernet ports, coaxial ports, logical ports, or wireless interfaces (e.g., 802.1x interfaces, WIFI, BLUETOOTH interfaces, cellular interfaces, etc.). Control circuitry204may be based on any suitable processing circuitry, such as processing circuitry206. As referred to herein, processing circuitry should be understood to mean circuitry based on one or more microprocessors, microcontrollers, digital signal processors, programmable logic devices, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), etc., and may include a multi-core processor (e.g., dual-core, quad-core, hexa-core, octa-core, or any suitable number of cores). In some embodiments, processing circuitry is distributed across multiple separate processors or processing units, for example, multiple of the same type of processing units (e.g., two INTEL CORE i7 processors) or multiple different processors (e.g., an INTEL CORE i5 processor and an INTEL CORE i7 processor). In some embodiments, control circuitry204executes instructions for provisional authentication and related operations, as described herein with reference toFIGS.1and2-8. Control circuitry204may further consummate route advertisement, such as discussed relative toFIGS.11-24, to other devices connected to network device200. Storage208may include volatile random-access memory (RAM)212, which does not retain its contents when power is turned off, and non-volatile RAM214, which does retain its contents when power is turned off. In some embodiments, storage208may be an electronic storage device that is part of control circuitry204.
As referred to herein, the phrase “electronic storage device” or “storage device” should be understood to mean any device for storing electronic data, computer software, instructions, and/or firmware, such as random-access memory, content-addressable memory, hard drives, optical drives, solid state devices, quantum storage devices, or any other suitable fixed or removable storage devices, and/or any combination of the same. In some embodiments, one or more forwarding tables122,322,422,522,622,722,822,1732,1832, and table entries ofFIGS.12A-12Fof respectiveFIGS.1,3-8,17,18, and12A-12Fare stored in storage208. In some embodiments, one or more forwarding tables122,322,422,522,622,722,822,1732,1832, and table entries ofFIGS.12A-12Fof respectiveFIGS.1,3-8,17,18, and12A-12Fmay be stored on a separate device and a link to forwarding tables122,322,422,522,622,722,822,1732,1832, and table entries ofFIGS.12A-12Fof respectiveFIGS.1,3-8,17,18, and12A-12Fmay be stored in storage208. In some embodiments, destination VTEP forwarding tables with depicted entries, such as shown relative to VTEP2 switch1604,1704,1804,1904,2004,2104, and2204ofFIGS.16-22, respectively, may be stored, in part or in whole, in storage208, for example volatile memory212. The circuitry described herein may execute instructions included in software running on one or more general purpose or specialized processors. Multiple circuits may be provided to handle simultaneous processing functions. In some embodiments, storage208may maintain authentication agent program code. For example, program code for authentication agents114,314,414,514,614,714,814,1610,1616,1710,1716,1810,1816,1910,1916,2010,2016,2110,2116,2010, and2016may be stored in non-volatile memory214of storage208. In an embodiment, the network switch, physical or virtual, includes a processor, for example, processing circuitry206, and memory, for example, memory214. The processor executes program code stored in memory to implement the authentication agent and carry out provisional authentication processes from the time when the switch device starts to receive authentication packets, or a notification, from a mobile inter or intra network device to after successful reauthentication of the network device or after blocking the old port or after both events occur. A virtual switch of a destination host (e.g., a hypervisor) starts to receive authentication packets or a notification from a mobile intra network device and may successfully assist with or cause reauthentication of the intra network device. FIGS.3-8are illustrative block diagrams of an example network system, in accordance with some embodiments of the disclosure. Each of theFIGS.3-8illustrates the network system in a distinct state during a reauthentication process.FIG.3shows the reauthentication process relative to a network system300,FIGS.4-8each show successive states of the reauthentication process relative to respective systems400-800. The steps of the reauthentication process, which is analogous to the authentication process discussed with reference toFIG.1, is now discussed relative toFIGS.3-8. FIGS.3-8each include a network device configured analogously to switch102ofFIG.1. For example, each of the switches302,402,502,602,702, and802of respectiveFIGS.3-8, is configured as switch102. 
Similarly, in each of the figures,FIGS.3-8, a host, Host1, is presumed to plan for migration from a pre-authenticated switch port of a corresponding switch to a different switch port of the same corresponding switch where the host is yet to be authenticated. InFIG.3, Host1 begins its journey at306, initially connected to port304A, migrating to308to connect to port304B; inFIG.4, Host1 starts at406and is initially connected to port404A, migrating to408to connect to port404B; inFIG.5, Host1 starts at506and is initially connected to port504A, migrating to508to connect to port504B; inFIG.6, Host1 starts at606and is initially connected to port604A, migrating to608to connect to port604B; inFIG.7, Host1 starts at706and is initially connected to port704A, migrating to708to connect to port704B; and inFIG.8, Host1 starts at806and is initially connected to port804A, migrating to808to connect to port804B. Additionally, the switch in each ofFIGS.3-8is presumed an example Layer 2 switch. Each switch includes a number of corresponding switch ports. For example, inFIG.3, switch302is equipped with switch ports304including switch ports304A and304B; inFIG.4, switch402is equipped with switch ports404including switch ports404A and404B; inFIG.5, switch502is equipped with switch ports504including switch ports504A and504B; inFIG.6, switch602is equipped with switch ports604including switch ports604A and604B; inFIG.7, switch702is equipped with switch ports704including switch ports704A and704B; and inFIG.8, switch802is equipped with switch ports804including switch ports804A and804B. In each ofFIGS.3-8, the corresponding switch is further shown to include a forwarding table, e.g., an association of authenticated (or secure) MAC addresses and ports, used for forwarding traffic, and an authentication agent to carry out provisional authentication processes for authenticating a new switch port. For example, switch302ofFIG.3is shown to include forwarding table322and authentication agent314; switch402ofFIG.4is shown to include forwarding table422and authentication agent414; switch502ofFIG.5is shown to include forwarding table522and authentication agent514; switch602ofFIG.6is shown to include forwarding table622and authentication agent614; switch702ofFIG.7is shown to include forwarding table722and authentication agent714; and switch802ofFIG.8is shown to include forwarding table822and authentication agent814. In some embodiments, the forwarding table in each ofFIGS.3-8is configured analogously to forwarding table122ofFIG.1, and the authentication agent in each ofFIGS.3-8is configured analogously to authentication agent114ofFIG.1. InFIG.3, Host1 is shown connected to port304A of switch302through a link310, analogous to link110ofFIG.1except that inFIG.3, an intermediary hub312is shown positioned between Host1306and port304A of switch302. Hub312therefore interferes with the switch302view into possible “link up” and “link down” events involving Host1 by virtue of its intermediary connection between port304A and Host1; these events are therefore opaque to switch302. For example, in a scenario where Host1 unplugs, switch302can remain unaware of the disconnection and continue to operate under the assumption that Host1 remains connected at port304A. A similar lack of transparency, given the intermediary position of hub312, is likely to occur in a “sign off” scenario as earlier discussed. But various disclosed methods and embodiments implement provisional authentication and bypass the intermediary hub obstacle.
In some embodiments, hub312controls traffic flow from Host1306to port304A of switch302. In some embodiments using Ethernet links, hub312connects multiple Ethernet devices together to make them act as a single network segment. Hub312may be any network hardware equipment that connects multiple network devices together to make them act as a single network segment. The reauthentication process starts atFIG.3where Host1306is shown authenticated at port304A of switch302and traffic flows from Host1 through link310and hub312to port304A of the switch, as previously discussed. In conventional methods, despite the desire to move from port304A to port304B, the hardware entry at port304B prevents Host1 from authenticating at port304B, and any attempts to forward traffic through a new port by Host1 fall short. Traffic sourced by Host1 never finds its way to port304B because port304B lacks proper authorization to receive the traffic. In accordance with various embodiments and techniques disclosed herein, the Host1 attempt to move to port304B is facilitated through software reauthentication implementing a provisional authentication session compliant with one or more network security protocols, such as the 802.1x protocol. Initially (inFIG.3), forwarding table322of switch302includes an entry associating the Host1 MAC address with port304A, and no entry exists for an association between Host1 and port304B. InFIG.4, Host1 generates and forwards authentication packets430intended for port404B of switch402but, as previously noted, the hardware entry at port404B is not receptive to authentication packets, processing of which is a task passed on to a central processor of switch402by hardware processes of switch402. In some embodiments, processing circuitry206ofFIG.2is configured as the central processor of switch402(and switches302,502,602,702, and802). In some embodiments, processing circuitry206ofFIG.2is configured as a central processor of a VTEP shown and disclosed herein, such as, without limitation, VTEP A and VTEP B ofFIG.11, VTEP1 switches1606,1706,1806,1906,2006,2106, and2206ofFIGS.16-22, respectively, and VTEP2 switches1604,1704,1804,1904,2004,2104, and2204ofFIGS.16-22, respectively. Authentication agent414is therefore afforded the opportunity to intercept and redirect authentication packets from Host1 away from port404B and instead toward an authentication device. In essence, agent414establishes a provisional tunnel for implementing authentication of authentication packets430by an authentication device, as shown inFIG.5. Regular network traffic from Host1406continues to flow to other network devices through port404A given the existing authentication session at port404A, but regular network traffic (non-authentication packets) remains blocked at port404B. InFIG.5, authentication packets530from Host1508are participants of a new session, a provisional authentication session, under the direction of the switch authentication agent. System500ofFIG.5is further shown to include an authentication server520, communicatively coupled to switch502for facilitating device authentication pursuant to an industry-adopted protocol standard. In some embodiments, server520is a centralized authentication server. For example, server520may be a remote-authentication dial-in user service (Radius) protocol-compliant server equipped to implement 802.1x authentication sessions. Server520may be any suitable host for carrying out authentication sessions in conformance with a network security protocol, such as, without limitation, the IEEE 802.1x network protocol.
Authentication agent514initiates a provisional authentication session by intercepting authentication packets530, sourced by Host1508and headed for port504B of switch502, and forwarding the packets instead to authentication server520as the authentication packets are received from Host1508. Authentication agent514effectively implements the provisional authentication session through a provisional connection536to authentication server520. In some embodiments, authentication agent514transmits an authentication request532to authentication server520to cause server520to start the new authentication session. If, in response to authentication request532, server520authenticates Host1 at port504B, Host1 is considered successfully authenticated at port504B (as shown inFIG.6), the new port to which Host1 desires to migrate, and traffic is allowed to flow between Host1508and other network devices through a link516and port504B. If the new authentication session is unsuccessful, port504B remains blocked to traffic from Host1. InFIG.6, server620transmits an authentication acknowledgement634to authentication agent614of switch602, and hardware processes of switch602, generally under the control of the central processor in switch602, cause the hardware port entry at port604B to be programmed to allow traffic sourced by Host1608. In some embodiments, an entry recording the association between the Host1 MAC (“secure”) address and port604B is added to forwarding table622, an indication that port604B is properly authorized. In the meantime, Host1 continues to remain authenticated at port604A to ensure the new port is properly authenticated before removing authentication at port604A, a make-before-break reauthentication process. Thus, for a brief moment, Host1 is authenticated at the old port, port604A, and at the new port, port604B. During this brief time period, which is generally due to an inherent delay in network traffic, Host1 is theoretically allowed to communicate on both ports, but practically, Host1 has moved to the new port and cannot actually communicate on both ports. InFIG.7, switch702terminates the initial authentication session at the old port, port704A, in favor of the new authentication session at the new port, port704B, and Host1 traffic is blocked at port704A whereas Host1 traffic freely flows at port704B, and the effect is the scenario shown inFIG.8. Provisional authentication is completed. In some embodiments, removing the association between the Host1 MAC address and port704A has the effect of terminating authentication at port704A. InFIG.8, a network838is shown to encompass switch802, Host1, Host20, Host3, and another Layer 2 (L2) device, in accordance with an example application embodiment. It is understood that while a total of 5 network elements are shown connected to the ports of switch802, as many network devices as there are available switch802ports may be connected to switch802. Additionally, the network devices shown connected to switch802are merely for illustrative purposes; any other suitable network device types may be connected to the ports of switch802. In some embodiments, the Layer 2 device may be, without limitation, another Layer 2 switch. Host1808is shown connected to port804B through a link816, which in the embodiment ofFIG.8is configured analogously to link116ofFIG.1.
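The make-before-break sequence ofFIGS.4-7may be summarized, purely as an illustrative sketch, by the following Python function. The names are assumptions, and the 802.1x exchange with the authentication server is abstracted behind a single authenticate() call.

    # Sketch of the make-before-break flow of FIGS. 4-7 (assumed names).
    def provisional_reauthentication(secure_ports, mac, old_port, new_port, auth_server):
        # secure_ports: dict mapping a MAC address -> set of authenticated ports.
        # The authentication agent intercepts authentication packets headed for the
        # blocked new port and relays an authentication request to the server instead.
        if not auth_server.authenticate(mac, new_port):
            # Failure: the old port stays authenticated, the new port stays blocked.
            return False
        # "Make": authorize the new port; for a brief moment both ports are authenticated.
        secure_ports.setdefault(mac, set()).add(new_port)
        # "Break": terminate the old session so traffic is blocked at the old port.
        secure_ports[mac].discard(old_port)
        return True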
Host3 communicates with other network devices, such as Host1, Host20, and L2 Device, through port804E of switch802, Host20 communicates with other devices in network838, such as Host1, Host3, and L2 device, through port804D of switch802, and L2 device communicates with other devices in network838, such as Host1, Host3, and Host20, through port804C of switch802. Host1 is shown to communicate with other network devices through the new port, port804B, while the old port, port804A, remains blocked to Host1. For example, Host1 may communicate with L2 device, Host20, and Host3. FIG.9depicts a flowchart of a method for authenticating a network device, in accordance with an embodiment of the invention. The process ofFIG.9is now described with reference to the embodiment ofFIG.1. It is understood that process900is not limited to the embodiment ofFIG.1and can be practiced by other network systems requiring authentication in response to a physical network device move from a switch port to a second switch port. InFIG.9, an authentication process900starts with a pre-authenticated intra network device at a first switch device port of a switch device. For example, Host1 ofFIG.1may be an intra network device that is pre-authenticated at port104A, in an old or existing (authentication) session. In some embodiments, the network device may be internal to the network to which switch102belongs, as previously noted. At step902, the network device is blocked from communicating at a second switch device port, the destination port (e.g., port104B inFIG.1), of a switch device (e.g., switch102) common to the first and the second switch device ports. Next, at step904, process900awaits a new authentication session for authenticating the network device at the new port, for example, port104B of switch102. In some embodiments, the new session begins when the switch authentication agent intercepts authentication packets from the network device in response to the switch hardware processes punting the packets. In some embodiments, the new authentication session may kick off in response to other events or detections. At steps906and908, the switch authentication agent (e.g., authentication agent114) causes the switch device to redirect the intercepted packets away from the second switch device port, where they are headed, toward an authentication server to effect completion of a new authentication session. That is, during provisioning, the authentication agent causes the switch to forward the intercepted authentication packets to an authentication server (e.g., server120) for authentication, essentially bypassing the hardware entry at the new port. The switch authentication agent transmits a request to the authentication server for authentication of the received packets. Next, at step910, if authentication by the authentication server is successful, process900continues to step1002(FIG.10), and if authentication by the authentication server fails, i.e., the authentication server fails to successfully authenticate the received packets, process900proceeds to and resumes from step902to give re-authentication at the switch another try. At step912, the network device remains authenticated at the first switch device port and blocked from access at the second switch device port, and process900ends. If at step910, authentication is determined to be successful, for example, the authentication server sends an authentication acknowledgment to the switch authentication agent, process900proceeds to step1002ofFIG.10.
FIG.10is a flowchart of a method for continuing the authentication process900ofFIG.9, after step910. At step1002, the second switch device port is authenticated while the first switch device port remains authenticated. In some embodiments, a table entry with a correspondence between the network device MAC address (e.g., Host1 MAC address) and the second switch device port is recorded into the switch device forwarding table, and at step1004, the entry associated with the network device MAC address and the first switch device port is removed. At the completion of step1004, regular traffic flow is blocked at the old port, the first switch device port, while the newly authenticated second switch device port allows traffic flow, and provisioning ends. Unlike some physical network devices, there is no link up or link down to indicate when virtual machines may attach to or break away from a switch. In some embodiments, for example, a virtual or a physical machine may make a move from a first virtual switch port to a second virtual switch port of a virtual switch in a BGP-compliant network environment. In some cases, the reauthentication process is applied to a network device configured with a virtual overlay. Movement by a virtual or physical network device, such as a virtual machine or a router, for example, between two ports of a common virtual host (local move), such as within the same VTEP, or across multiple virtual hosts (local-to-remote move), such as between two VTEPs, entails reauthentication of the moving virtual/physical network device at the destination host or host port, as the case may be, despite prior authentication of the device at the source host or host port, as previously discussed. In a local move, the moving device, as done in the case of physical port-to-port movement above, is re-authenticated at the new (destination) port of a virtual switch while the connection between the moving device and the current (source) port of the virtual switch remains intact. At the behest of the authentication agent executing at the virtual device, a reauthentication session is initiated by the virtual switch (an example of a host) to secure reauthentication at the virtual switch destination port while the moving device remains connected and can forward all traffic flow (authentication traffic and non-authentication traffic) at the source port of the virtual switch. The moving device is blocked, however, from forwarding non-authentication traffic through the destination (or new) port but unblocked from forwarding authentication traffic through the destination port. That is, by virtue of a software authentication implementation by the virtual switch authentication agent, the switch initiates and facilitates reauthentication of the moving device at the destination port. In response to successful completion of the reauthentication at the new port, the moving device establishes a connection with the new port for regular traffic flow through the new port, and ultimately the switch blocks traffic flow at the source port by programming corresponding port entries accordingly. An example of a local move in the virtual space is provided subsequently below.
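As a further illustration of why authentication traffic is handled differently from regular traffic at a port that is not yet authenticated, the following sketch classifies Ethernet frames by EtherType. The value 0x888E is the EtherType used by 802.1x EAPOL frames; the punt/drop policy shown is an assumption made for illustration only.

    # Sketch: at a not-yet-authenticated port, only authentication (EAPOL) frames are
    # punted to the authentication agent; everything else is dropped.
    EAPOL_ETHERTYPE = 0x888E

    def classify_at_unauthenticated_port(frame_ethertype):
        if frame_ethertype == EAPOL_ETHERTYPE:
            return "punt-to-authentication-agent"   # provisional authentication traffic
        return "drop"                               # regular traffic stays blocked

    # Example: an IPv4 frame (0x0800) is dropped, an EAPOL frame is punted.
    assert classify_at_unauthenticated_port(0x0800) == "drop"
    assert classify_at_unauthenticated_port(0x888E) == "punt-to-authentication-agent"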
In a local-to-remote embodiment of the disclosure, in a general context, to facilitate reauthentication at a new virtual switch while the new virtual switch remains effectively blocked to the moving device, the current pre-authenticated virtual switch advertises a route with a payload including a new community extension (an authentication extension) signifying a local “secure” MAC mobility address to the new virtual switch. The new extended community is intended to signify that the MAC address of the moving device is secure (or authenticated) pursuant to, for example, an industry-standard protocol (e.g., the 802.1x standard). In accordance with the new extended community and in contrast to a static or dynamic addressing type, the advertised route carries an authentication type to specifically signify the authorized MAC address to the destination virtual switch. In response, the new virtual switch acknowledges the advertised route to the currently authenticated virtual switch, triggering a reauthentication session. As in physical switch port-to-port movements, during reauthentication, the new virtual switch authentication agent intercepts the authentication packets sourced by the moving device, and the authentication packets are rerouted to an authentication host. In the meantime, and prior to the successful completion of authentication of the moving device using the authentication server, the existing authentication at the old (or current) virtual switch remains intact. While the moving device has made its physical move, the new virtual switch and the old virtual switch remain unaware of the device move. The new virtual switch posts a route to the old virtual switch including a secure (MAC mobility) authentication community extension. In response, the old virtual switch points the moving device's secure (but remote) address entry to the new virtual switch and terminates the moving device's existing authentication session, blocking traffic from the moving device to the old virtual switch. In some embodiments, the old and new virtual switches communicate through BGP. Various features of some disclosed embodiments and methods are premised on the above-described virtual authentication processes. Additionally, in a virtual application, EVPN is extended to carry the notion of a “secured” MAC address between the source and destination virtual devices. Traditionally, the “secured” MAC address is an intermediary level of “stickiness” between that of a pure dynamic address and that of a pure static address. In a disclosed method and embodiment, however, a “secured” MAC address indication of an authentication extension, carried by a route advertised by an EVPN source virtual device to an EVPN destination virtual device, signifies a corresponding authenticated (e.g., 802.1x-compliant) MAC address. In some embodiments, routes announced with the authentication extension are afforded a higher priority than routes announced without the authentication extension regardless of the MAC mobility sequence number. An authentication extension, as used in reference to and shown inFIGS.11-25, refers to an extension or an “extended community extension” or “extended community” as used herein. An example of an authentication extension is shown in an advertised route inFIG.18and an example authentication extension is shown inFIG.25.
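The stated preference, i.e., that routes announced with the authentication extension are preferred over routes announced without it regardless of the MAC mobility sequence number, may be illustrated by the following hypothetical comparator. The route representation is an assumption for illustration and not the BGP wire format.

    # Sketch of the preference rule described above (illustrative names only).
    def prefer(route_a, route_b):
        # Each route is a dict with 'authenticated' (bool) and 'seq' (int) keys.
        if route_a["authenticated"] != route_b["authenticated"]:
            # A route carrying the authentication extension wins outright.
            return route_a if route_a["authenticated"] else route_b
        # Otherwise, fall back to the higher MAC mobility sequence number.
        return route_a if route_a["seq"] >= route_b["seq"] else route_b

    # Example: the authenticated route wins even with a lower sequence number.
    best = prefer({"authenticated": True, "seq": 1}, {"authenticated": False, "seq": 7})
    assert best["authenticated"] is True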
In some embodiments, authentication extensions may refer to authentication in compliance with any industry-adopted network authentication protocols, such as, without limitation, the IEEE 802.1x protocol. Further details of the above virtual device reauthentication processes are now discussed with reference to the embodiments ofFIGS.11-25.FIG.11is an illustrative example of a networking system1100, in accordance with various embodiments and methods of the disclosure. InFIG.11, system1100is shown to include a first network including VTEP A communicatively coupled to VTEP B of a (different) second network. For example, VTEP A and VTEP B may be in an EVPN environment, communicating through BGP, a practical example of which is shown relative to the VTEP1 switch and the VTEP2 switch ofFIGS.16-22. InFIG.11, VTEP A is shown coupled to a host, Hypervisor 1, in the first network, via a physical Ethernet link, Eth1. Analogously, VTEP B is shown coupled to a host, Hypervisor 3, of the second network, via a physical Ethernet link, Eth1. Hypervisor 1 is shown to include a host machine, for example, a virtual machine (VM). The network components ofFIG.11may be configured as Layer 2 components; it is understood, however, that in various embodiments these components may be configured in accordance with other layers of the network model. Additionally, VTEP A and VTEP B may each be coupled to respective virtual machines through link types other than Ethernet links. While not shown, it is understood that each of VTEP A and VTEP B maintains a respective forwarding table of associations between host secure MAC addresses and corresponding ports (for port-to-port movement) and secure MAC addresses and corresponding hosts (for host-to-host movements). For example, VTEP A maintains a table (e.g., a forwarding table) cross-referencing authenticated MAC addresses to port (or host) entries, including an initial (pre-move) entry for the VM MAC address and a VTEP A port (through the Eth1 link to Hypervisor 1) because VM is initially authenticated at the VTEP A port, but the table does not initially include an entry for a correspondence between the VM MAC address and a port where VM is not authenticated. Similarly, the forwarding table of VTEP B initially does not have an entry corresponding to the VM MAC address. The embodiments of subsequent figures of the disclosure are presumed to include Layer 2 devices, and authentication is presumed pursuant to the 802.1x protocol standard, although, as previously indicated, devices of alternate embodiments may operate in layers other than Layer 2 and authentication may be performed pursuant to other suitable network authentication standards. By virtue of the inherent behavior of Hypervisors 1, 2, and 3, there is no link down event in response to a VM move; consequently, VM is not provided with the opportunity to link down or “sign off” when moving from a virtual device to another device, be it a local move or a local-to-remote move. A timeout disengagement, as previously described, is too lengthy and inefficient. In some embodiments, a physical device, such as, without limitation, a notebook or a laptop, instead of a VM, may be moving from a port on VTEP A to another port on VTEP A or from VTEP A to VTEP B. For simplicity of illustration, an example virtual machine is presumed to make a move in the embodiments ofFIGS.11-22.
With continued reference toFIG.11, pursuant to a local move, VM wishes to leave its currently authenticated connection with the VTEP A port, which is coupled to the VM through link Eth1, and establish a new connection through link Eth2 to another VTEP A port, connecting through link Eth2 when the link is established. But VM cannot reliably establish the new connection without successful authentication at the new port for proper connection through link Eth2 to the new VTEP port. The new port has no knowledge of the VM in the absence of a corresponding MAC address entry. The steps for effecting the VM local VTEP A move ofFIG.11are now described in relation to a process1300of the flow chart shown inFIG.13. FIG.13is a flow chart of a local device move reauthentication process, in accordance with some embodiments of the disclosure. InFIG.13, at step1304, process1300awaits a new authentication session. A new authentication session may be triggered by authentication packets forwarded from the new port of VTEP A in accordance with step1306ofFIG.13. As previously discussed, hardware processes controlling hardware port entries at VTEP A are expected to punt authentication packets to a central processor. Accordingly, the hardware port entries at the new port will not entertain opening the port to regular (non-authentication) network traffic. Instead, at step1308, a VTEP A authentication agent takes over and intercepts authentication packets from VM, requests authentication from an independent authentication host, and causes redirection of the authentication packets from VM to the dedicated authentication host. The process is then left up to the authentication host and the moving device to authenticate the latter at the new port of VTEP A. The authentication agent has effectively initiated a software-only authentication at the new port and bypassed conventional hardware-based packet processing. The VM MAC address entry of a VTEP A forwarding table remains the same, i.e., associated with the existing authenticated port, linked to VM through the link Eth1. Upon the completion of authentication of VM at the new port, the authentication agent may receive an acknowledgement from the independent authentication host and program the hardware entries of the new port on VTEP A, allowing regular network traffic to be forwarded to the new VTEP A port. The authentication agent may alternatively or additionally direct hardware processes of VTEP A to program the new port hardware entries. At step1310, successful reauthentication (or authentication at the new port) must occur before process1300proceeds to step1314, and in the event reauthentication is unsuccessful, process1300proceeds to step1312. At this juncture, VM authentication is implemented by VM authenticating itself at the new port of VTEP A using the authentication server. At step1312, VTEP A retains the forwarding table MAC address entry for VM at the existing authenticated port of Hypervisor 1 (linked through Eth1), VM remains blocked from connecting at the new port of VTEP A, and process1300ends. VM effectively is denied its desired move. At step1314, VTEP A removes the existing VM MAC address forwarding table entry and adds an entry to the table for the VM MAC address at the new VTEP A port; the new port connects VM to Hypervisor 2 through link Eth2. VM has successfully made its desired move and process1300ends.
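A condensed, illustrative sketch of process1300follows; the argument and field names are assumptions, and the authentication exchange with the independent authentication host is abstracted behind a single authenticate() call.

    # Condensed sketch of process 1300 (local move within VTEP A); names are assumed.
    def local_move(vtep_a_table, vm_mac, old_port, new_port, auth_host):
        # Steps 1304-1308: the VTEP A agent intercepts authentication packets from VM
        # and redirects them to the independent authentication host.
        if not auth_host.authenticate(vm_mac, new_port):
            # Step 1312: the entry stays at the old port; the new port remains blocked.
            return False
        # Step 1314: replace the old entry with the new port (Eth2 to Hypervisor 2).
        vtep_a_table[vm_mac] = {"port": new_port, "type": "802.1x"}
        return True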
Pursuant to a local-to-remote VM move, with continued reference toFIG.11, VM wishes to leave its currently authenticated connection to VTEP A, through link Eth1 to Hypervisor 1, and establish a new connection at VTEP B, through a link Eth1 to Hypervisor 3. But VM cannot reliably establish a connection at VTEP B without first undergoing successful authentication at VTEP B. The steps for effecting the VM local-to-remote VTEP move relative toFIG.11are now described in relation to process1400of the flow chart shown inFIG.14. FIGS.14-15are flow charts of a local-to-remote device move reauthentication process, in accordance with some embodiments of the disclosure. At step1402, VM is pre-authenticated at VTEP A and a local MAC address at VTEP A is advertised as a secure MAC address by VTEP A to VTEP B. At step1404, process1400awaits a new authentication session and, when one starts, process1400proceeds to step1406. The authentication session may be triggered in various ways. For example, hardware processes at VTEP B may punt authentication packets received from VM to a central processor at VTEP B and the central processor at VTEP B may call an authentication agent of VTEP B to facilitate the provisional authentication session. At step1406, the remote EVPN MAC address remains pointing to VTEP A in the forwarding table at VTEP B and the authentication agent at VTEP B allows packets from VM during the new authentication session. The remote EVPN MAC address pointing to VTEP A may be hardware-programmed in the forwarding table of VTEP B and updated with a local EVPN MAC address pointing to VTEP B when re-authentication of VM is completed. Next, at step1410, no change to the MAC addresses in the respective forwarding tables of VTEP A and VTEP B is made while the new authentication session is in progress. If the new authentication is successful, process1400proceeds to step1414, and if the new authentication is unsuccessful, process1400proceeds to step1412. At step1412, because VM is not re-authenticated, VTEP A retains its local secure MAC address table entry for VM pointing to the existing authenticated port at VTEP A and VM remains blocked from establishing a connection at Hypervisor 3, at VTEP B. At step1414, VTEP B replaces its remote MAC address table entry with a new local 802.1x-authenticated MAC address based on the new authentication session, thereby taking over the new secure MAC address, and process1400proceeds to step1502ofFIG.15. At step1502ofFIG.15, VTEP B may advertise the new secure MAC address to VTEP A over EVPN1102using a MAC mobility community (also referred to herein as a “MAC mobility extension”), with the advertised route including a sequence number equal to a prior sequence number associated with the advertised route incremented by one, and a secured host (new extended) community signifying 802.1x authentication protocol compliance. As discussed further below, in some embodiments, indication of 802.1x authentication may be implemented in a “type” field of the advertised route header. In some embodiments, processes1300and1400ofFIGS.13-14, respectively, and the operations of the first and second network devices ofFIGS.23-24may be executed by respective processing circuitry of the VTEP processors. For example, processing circuitry206ofFIG.2in each of VTEP A and VTEP B may execute processes1300and1400, as appropriate, and the operations of the first and second network devices in process2300may be executed by processing circuitry206ofFIG.2.
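The forwarding table transitions of steps1414and1504may be sketched, with assumed names and an assumed dictionary layout, as follows.

    # Sketch of the table hand-off of steps 1414 and 1504 (illustrative names only).
    def take_over_at_vtep_b(vtep_b_table, vm_mac, new_port):
        # Step 1414: the remote entry pointing at VTEP A becomes a local, newly
        # 802.1x-authenticated entry at VTEP B's new port.
        vtep_b_table[vm_mac] = {"where": new_port, "type": "802.1x"}

    def repoint_at_vtep_a(vtep_a_table, vm_mac):
        # Step 1504: on receiving VTEP B's advertisement, VTEP A replaces its local
        # secure entry with a remote entry pointing at VTEP B.
        vtep_a_table[vm_mac] = {"where": "VTEP B", "type": "EVPN remote authenticated"}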
Moreover, authentication agents of VTEP A and VTEP B and the first and second network devices ofFIGS.23-24may be implemented by processing circuitry206executing respective authentication agent program codes, stored in a respective storage208of VTEP A and VTEP B. Next, at step1504, in response to the EVPN advertised route of step1502, VTEP A replaces its local secure MAC address table entry with the new remote secure MAC address from VTEP B pointing to VTEP B, consistent with VM's move to VTEP B, and process1400ends. In some embodiments, routes, for example, at step1502ofFIG.15, may be encapsulated in accordance with a network tunneling protocol by an advertising network device. Examples of tunnel encapsulations that may be implemented in accordance with some embodiments of the disclosure include, without limitation, VXLAN, MPLS, Generic Routing Encapsulation (GRE), and Control and Provisioning of Wireless Access Points (CAPWAP). As earlier noted, in some embodiments, in a virtual networking environment, such as BGP, a virtual machine may break an established connection to a host source port to establish a new connection at a host destination port of a host common to both ports (an intra-network or local move) without the prerequisite to “sign off” or disconnect. In some embodiments, a virtual machine may break an established connection to a source host of a network to establish a new connection at a destination host of a different network (an inter-network or local-to-remote move) without a prerequisite to “sign off” or disconnect. In a local move embodiment, a local secure MAC table (referred to commonly herein as a “local forwarding table” or “forwarding table” or “host table”) maintains an association of authenticated virtual machines to local ports. In a local-to-remote move embodiment, each host maintains a local secure MAC table of associations between authenticated virtual machines and local ports. In some embodiments, an entry of a local secure MAC address table of, for example, a VTEP, specifies the MAC address of a corresponding VM and all the allowed (authenticated) local interface identifiers (e.g., Eth1). If the authenticated VM can be moved remotely, the secure MAC address table entry also specifies either a wildcard IP address, for example 255.255.255.255, or all the allowed remote VTEP IP addresses. In the embodiment ofFIG.11, (authenticated) VM may move to a (1) different local port as well as a (2) remote VTEP port without the requirement to “sign off” or disconnect itself from a server, for example and without limitation, from a Radius server. The MAC address table (“MAT”) of each VTEP includes the Layer 2 forwarding state on a corresponding VTEP switch. By way of example, the state of MAC address table entries for a VM move is presented in the tables ofFIGS.12A-12F. As discussed below, first, the VM makes a local move followed by a subsequent local-to-remote move within the virtual network environment ofFIG.11. In the tables ofFIGS.12A-12F, “M1” is presumed to be the MAC address of the moving device, i.e., VM; the Eth1 link to Hypervisor 1 is presumed to start at port “P1” of VTEP A, move to port “P2” of VTEP A, and move from “P2” of VTEP A to port “P10” of VTEP B. Given the foregoing presumption, VTEP A and VTEP B update their respective MAC address tables as the moving device moves from one port to another and from one virtual switch to another as follows. The row of each table ofFIGS.12A-12Frepresents the state of M1 for VTEP A and for VTEP B, as shown under a respective column.
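A minimal sketch of a local secure MAC table entry as just described, specifying the VM MAC address, the allowed local interface identifiers, and either a wildcard or the allowed remote VTEP IP addresses, is shown below; the field names are assumptions.

    # Sketch of a local secure MAC table entry (field names assumed for illustration).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SecureMacEntry:
        mac: str                                                      # MAC address of the VM, e.g., "M1"
        local_interfaces: List[str] = field(default_factory=list)    # allowed local interfaces, e.g., ["Eth1"]
        remote_vteps: List[str] = field(default_factory=list)        # allowed remote VTEP IPs, or the wildcard

    # Example: M1 may use Eth1 locally and may move to any remote VTEP (wildcard).
    m1 = SecureMacEntry(mac="M1", local_interfaces=["Eth1"], remote_vteps=["255.255.255.255"])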
Under each VTEP column, a port number and a type are shown, where applicable. “Type” represents the authentication type, for example, 802.1x. Initially, M1 is unknown to both VTEP A and VTEP B, as indicated by “Not Present” in the table ofFIG.12A. Next, VM connects to and successfully authenticates itself on port P1 of VTEP A. Accordingly, the state of M1, as shown in the table ofFIG.12B, is port P1 and type 802.1x at VTEP A and none (Not Present) at VTEP B because VM is not present or lacks an established connection at VTEP B. Next, VTEP A advertises M1 to VTEP B with an EVPN type 2 route using the new extended (or authentication) community to signify the existing authentication of VM to VTEP B. Therefore, M1, in the table ofFIG.12C, at VTEP A, remains the same, i.e., at P1, Type 802.1x, but M1 at VTEP B becomes VTEP A for the port number and “EVPN remote authenticated” for the authentication type. “EVPN remote authenticated” represents M1 as a MAC address remotely authenticated at VTEP A and is a part of the new extended community carried by the advertised route. Next, VM makes a local move from port P1 to port P2 of VTEP A but has yet to authenticate at P2 of VTEP A. The MATs of both VTEPs remain unchanged while the authentication of VM at P2 is in progress; therefore, no table corresponding to the unchanged M1 is shown. Assuming authentication of VM at P2 completes successfully, the MAT on VTEP A is updated to reflect the newly authenticated port and no change is made to the state of M1 on VTEP B. Accordingly, the table ofFIG.12Dshows the port under VTEP A as P2 and the type as 802.1x, and the state of M1 at VTEP B identical to that ofFIG.12C. Next, VM makes a local-to-remote move from port P2 of VTEP A to port P10 of VTEP B. VTEP B starts a new authentication session and no change is made to the MAT of either VTEP while the new authentication is in progress. Assuming the new authentication at P10 is successful, the VTEP B MAT is updated with P10 and the VTEP A MAT remains unchanged relative to the state shown at the table ofFIG.12D, as shown in the table ofFIG.12E. Notably, because an EVPN route has yet to be advertised by VTEP B to VTEP A, two ports experience overlapping authentication sessions and VTEP A and VTEP B are temporarily out of sync. Next, VTEP B advertises to VTEP A the VM move to P10 with another type 2 EVPN route using a MAC mobility header and the authenticated (extended) community, and VTEP A updates its MAT to match the advertised move. Therefore, as shown in the table ofFIG.12F, the state of M1, under the VTEP A column, is shown as VTEP B and the type is shown as EVPN remote authenticated. The state of M1 under the VTEP B column remains the same as that of the state of M1 in the tables ofFIGS.12D and12E. FIGS.16-22are illustrative block diagrams of example network systems, in accordance with some embodiments of the disclosure.FIG.16illustrates a block diagram of a network system1600, in accordance with an embodiment of the disclosure. System1600is shown to include an EVPN1602, a VTEP1 switch1606, a VTEP2 switch1604, a Hypervisor1, and a Hypervisor2. VTEP1 switch1606is shown to include an authentication agent1616and a VTEP1614, and VTEP2 switch1604is shown to include an authentication agent1610and a VTEP1608. VTEP1614is shown to include ports1622and VTEP1608is shown to include ports1624.
Subject to proper authentication, a VM of Hypervisor1 may physically connect to a port of ports1622of VTEP1614through a link and a VM of Hypervisor2 may physically connect to a port of ports1624of VTEP1608. For example, VMNof Hypervisor1 may connect to port1622A through a link1618and a VM of Hypervisor2 may connect to port1624B through a link1612. It is understood that a VM of Hypervisor1 may connect with any port of ports1622of VTEP1614subject to proper authentication, and a VM of Hypervisor2 may connect with any port of ports1624of VTEP1608subject to proper authentication. Hypervisor1 is shown to include an N number of VMs, “N” representing an integer value. While Hypervisor2 is shown void of any VMs, in some embodiments, Hypervisor2 may indeed include VMs. It is understood that any one of the structures shown inFIG.16may include components not shown inFIG.16. It is additionally understood that system1600is a conceptual block diagram of an example virtual system illustrated merely for the purpose of discussion and that embodiments may deviate by the number of components, links and connections, in addition to having a fewer or a greater number of components. It is understood that the components of system1600may be replaced with components suitably configured to execute the reauthentication processes of various embodiments disclosed herein. In some embodiments, each of the VTEP switches ofFIGS.16-22may be configured as the VTEPs ofFIG.11. Similarly, each of the hypervisors ofFIGS.16-22may be configured as the hypervisors ofFIG.11and links1612and1618may be configured as the Ethernet links (Eth1 and Eth2) ofFIG.11. In some embodiments, while not shown inFIGS.16-22, a hub may be connected between each hypervisor and a respective VTEP similar to the physical switch port or physical switch-to-physical switch embodiments ofFIGS.1-8. For example, a hub analogous to hub412ofFIG.4, but configured to be operational with virtual switches, may be connected between Hypervisor2 and VTEP2 and another may be connected between Hypervisor1 and VTEP1. System1600ofFIG.16illustrates a schematic of a VM move from one VTEP to another after the move operation is completed in accordance with the reauthentication embodiments and methods disclosed herein. Systems1700-2200ofFIGS.17-22, respectively, present the configuration outcome of the same network system after a reauthentication step is completed, as discussed below. Accordingly, common components acrossFIGS.16-22are referenced using like reference numbers. For example, EVPN1602inFIG.16is referenced as an EVPN1702inFIG.17, an EVPN1802inFIG.18, an EVPN1902inFIG.19, an EVPN2002inFIG.20, an EVPN2102inFIG.21, and an EVPN2202inFIG.22. Similarly, the VTEP1 switch1606inFIG.16is referenced as a VTEP1 switch1706inFIG.17, a VTEP1 switch1806inFIG.18, a VTEP1 switch1906inFIG.19, a VTEP1 switch2006inFIG.20, a VTEP1 switch2106inFIG.21, and a VTEP1 switch2206inFIG.22, and so on. Additionally, components and connections, as discussed herein relative to the embodiment ofFIG.16, are applicable to each of the embodiments ofFIGS.17-22. Also, while the discussions ofFIGS.16-22are centered around a virtual machine, VMN(or host-to-host), it is understood that any of the remaining VMs of Hypervisor1 may make a similar move to Hypervisor2 in accordance with some disclosed methods and embodiments. Additionally, VMNmay instead be a physical device moving from VTEP1 switch1606to VTEP2 switch1604.
In each ofFIGS.16-22, VMN, the Nth VM in Hypervisor1, is presumed to move from an existing connection at port1622A of VTEP1614, through link1618of Hypervisor1, where VMNis pre-authenticated, to port1624B of VTEP1608of Hypervisor2, a remote host relative to Hypervisor1, where VMNis not initially authenticated. In some embodiments, system1600and each of systems1700-2200are BGP networks. In the example embodiment ofFIGS.16-22, an EVPN is presumed. In some embodiments, the links physically connecting a VM to a VTEP port are Ethernet Layer 2 links. For example, existing link1618and an unestablished link1612may be Ethernet links. Each of VTEP1 switch1606and VTEP2 switch1604includes a forwarding table (not shown), also referred to herein as a “host table”, for forwarding traffic at authenticated local and remote nodes. For example, and as discussed relative to preceding figures, each VTEP forwarding table holds associations between secure MAC addresses and port numbers. In some embodiments, each forwarding table holds information in addition to MAC addresses and corresponding port numbers, to reliably effect the process of a virtual machine reauthentication movement to a destination host by establishing a handshake procedure between the source and destination hosts. In accordance with example embodiments, the handshake information includes a new extended community (or authentication extension) as previously discussed. For example, the new extended community may include authentication information related to a particular authentication protocol standard to advertise that the moving virtual machine is pre-authenticated at a source host. In an embodiment, in addition to MAC addresses and port numbers, a forwarding table entry may include the corresponding authentication type. In each of the embodiments ofFIGS.16-22, VMNis presumed pre-authenticated at VTEP1. In an embodiment, VMNis assumed authorized at a port of VTEP1 switch1606, specifically, port1622A of ports1622of VTEP1614, and linked to Hypervisor1 through link1618. For example, authentication agent1616of system1600may have facilitated a VMNpre-authentication process. VMNwishes now to move to VTEP2 and link with Hypervisor2 but is prevented from forwarding traffic at a port of VTEP2 switch1604without packet loss risks unless VMNis re-authenticated at a port of VTEP2 switch1604, because VTEP2 switch1604, by its very nature, remains ignorant of a VM link down and/or sign off event, as earlier discussed. Upon the completion of successful reauthentication, pursuant to an embodiment and method of the disclosure, as discussed in further detail with reference toFIGS.17-22, VMNmay move to VTEP2 switch1604and freely forward traffic, for example, through link1612, at port1624B of VTEP1608. FIGS.17-22each illustrate a block diagram of a network system, in accordance with select illustrative embodiments of the disclosure. More specifically,FIGS.17-22depict block diagrams of network systems1700-2200, respectively. As previously noted, an example procedure for a VM move, in a virtual (EVPN) environment, from VTEP1 switch1606to VTEP2 switch1604is shown in a series of steps usingFIGS.17-22. AtFIG.17, VMNis shown authenticated, with a reliable established connection, to port1722A of VTEP1714of VTEP1 switch1706. Port1722A is identified by port number “A”. VMNwishes to move to port1724B of VTEP1708of VTEP2 switch1704. Port1724B is identified by port number “B”.
VTEP1 switch1706is shown to house a forwarding table1732with entries cross-referenced by authenticated (or “secure”) MAC addresses and corresponding port numbers. In some embodiments, a forwarding table of various embodiments of the disclosure, such as, without limitation, forwarding table1732, may include an aggregation of one or more authentication tables with MAC addresses and corresponding authenticated ports. In some embodiments, one or more of the authentication tables may be implemented in software and incorporated into a corresponding forwarding table by hardware, software, or a combination implementation. The corresponding MAC address of VMNis characterized as a “local” MAC address in forwarding table1732ofFIG.17because VMNis authorized locally to VTEP1714for traffic forwarding, whereas upon completion of the VMNreauthentication and movement, as shown inFIG.22, the MAC address associated with VMNwill be a “remote” address to VTEP1 switch1706and a “local” address to VTEP2 switch1704. In the example ofFIGS.16-22, as shown inFIG.17, an entry of table1732includes an authenticated MAC address field for VMN, a corresponding port number field, and a (route) Type field. The Type field, in conventional techniques, typically identifies a characteristic of a corresponding advertised route, for example, as having a static or a dynamic address. Instead, in accordance with various embodiments and methods herein, the Type field shown inFIG.17includes authentication information corresponding to VMNto describe a corresponding advertised route in association with a particular type of authentication. In some embodiments, a value in the Type field of a forwarding table1732entry may identify a feature of a corresponding advertised route as static, dynamic, or authentication-conforming. The feature in the Type field in forwarding table1732may simply describe the corresponding authentication, for example, the 802.1x standard authentication. In the particular example ofFIG.17, the depicted entry of forwarding table1732identifies a local secure MAC address associated with VMNwith the value “VMN”, the port number onto which VMNconnects through link1718, “A”, and an 802.1x authentication type associated with the secure MAC address, “802.1x”. It is understood that each of the fields of forwarding table1732may include additional, fewer, or replacement information corresponding to the virtual machine VMN. InFIG.18, a route process of VTEP1 switch1806builds a payload1834based on the depicted entry of table1832for advertising a corresponding route through EVPN1802. In an example embodiment, payload1834is a Type 2 route. In some embodiments and methods, VTEP1 switch1806creates a route, based on payload1834, that includes a new extended community field1834B identifying VMNas denoted in an address field1834A of payload1834. VTEP1 switch1806may base, at least in part, the new extended community in field1834B on the information in the Type field of the depicted entry of table1832. In some embodiments and methods of the disclosure, the new extended community of payload1834includes information relating to the authenticated address in field1834A of payload1834. Fields1834A and1834B are generally a part of the header information of a corresponding advertised route and may be but a couple of fields of several fields of payload1834and a corresponding created route, not all of which are shown inFIG.18. Route1834may or may not include a MAC mobility community.
If VMNhas been previously learned somewhere else, route1834may include a MAC mobility community; otherwise, route1834may not include a MAC mobility community. In some embodiments, packets of an advertised route may be VXLAN-, MPLS-, GRE-, or CAPWAP-encapsulated. The encapsulated packets may be further encapsulated with the header information of payload1834, particularly field1834B for VMNauthentication messaging to VTEP2 switch1804. For example, a VXLAN header may include a VXLAN network identifier (VNI) used to uniquely identify a corresponding VXLAN. MPLS may encapsulate the advertised route packets based on a corresponding network protocol. An example of a new extended community is provided and discussed relative toFIG.25. In some embodiments, VTEP2 switch1804may receive the advertised route, through EVPN1802, into a corresponding border gateway protocol (BGP) process for processing. For example, the received route may undergo a best path process. The received route may undergo further suitable processes. VTEP2 switch1804may decapsulate the received route and generate a forwarding table entry based on the received BGP route, an entry corresponding to VMN. VTEP2 switch1804is made aware of VMNpre-authentication at VTEP1 switch1806by virtue of the received route carrying the new extended community; accordingly, VTEP2 switch1804knows VMNis 802.1x-compliant and a reauthentication process can begin. Prior to the start of a VMNreauthentication process, however, in some embodiments and methods of the disclosure, VTEP2 switch1904acknowledges that control of the (secure) MAC address corresponding to VMNlies with VTEP1 switch1906, treating the MAC address as a remote address. As shown inFIG.19, VTEP2 switch1904forwards an authentication acknowledgement (route)1936through EVPN1902to VTEP1 switch1906, accordingly. In some embodiments, route1936is a Type 2 route. All the while, VMNis blocked from reliable traffic forwarding on port1924B of VTEP1908of VTEP2 switch1904and remains authenticated at port1922A of VTEP1914of VTEP1 switch1906. VMNreauthentication is shown, in relevant part, inFIG.20where authentication agent2010of VTEP2 switch2004intercepts authentication packets2038, initially headed for port1924B of VTEP1908, from VMN, and re-directs the authentication packets2038to an authentication server2040, as described relative to prior figures. VMNthen authenticates with authentication server2040in a new authentication session. Without such reauthentication, hardware entries of port1924B on VTEP2 switch1904would reject non-authenticated packets from VMN. Upon completion of authentication at VTEP2 switch1904, however, the hardware entries recognize the authenticated packets from VMN. As previously noted, in an embodiment, a trigger for authentication of VMNat VTEP2 switch1904may stem from receiving authentication packets from VMN. Conventionally, VTEP hardware processes receptive to regular network traffic packets (non-authentication packets) instead pass authentication packets onto a central processor for processing. In some disclosed methods and embodiments, a VTEP authentication agent views this as an opportunity to perform soft authentication. With continued reference toFIG.20, authentication agent2010of VTEP2 switch2004intercepts and redirects the VMN-sourced authentication packets. But non-authenticated packets from VMNremain blocked.
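Because the advertised route packets may be VXLAN-encapsulated and a VXLAN header carries the VNI, a minimal sketch of the standard 8-byte VXLAN header encoding is shown below for context. This reflects the generic VXLAN header format and is independent of the authentication extension, which is carried in the advertised route rather than in the VXLAN header.

    # Minimal sketch: the standard 8-byte VXLAN header carrying a 24-bit VNI
    # (flags byte 0x08 sets the "VNI valid" bit; remaining bytes are reserved).
    import struct

    def vxlan_header(vni: int) -> bytes:
        if not 0 <= vni < (1 << 24):
            raise ValueError("VNI must fit in 24 bits")
        flags_and_reserved = 0x08 << 24          # flags byte followed by 24 reserved bits
        vni_and_reserved = vni << 8              # 24-bit VNI followed by 8 reserved bits
        return struct.pack("!II", flags_and_reserved, vni_and_reserved)

    # Example: VNI 10010 encodes to 08 00 00 00 00 27 1a 00.
    assert vxlan_header(10010).hex() == "0800000000271a00"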
In response to successful authentication of VMNat VTEP2 switch2004, inFIG.20, VTEP2 switch2004takes over the secure MAC address corresponding to VMNand communicates an update to VTEP1 switch2006accordingly. In some embodiments, VTEP2 switch2004may communicate the update by a route2142(FIG.21) back to VTEP1 switch2006, through EVPN2002, to announce the address takeover. This is shown inFIG.21with route2142originating from VTEP2 switch2104to VTEP1 switch2106. With continued reference toFIG.21, in some embodiments, route2142is a Type 2 route. Route2142may include the secure MAC address, which is now local to VTEP2 switch2104, “VMN”, a MAC mobility extension, and an authenticated community, as shown inFIG.21. In implementations using BGP type 2 routes, the routes include a VNI, a MAC address, and the VTEP used to communicate with the routes. The first type 2 route that is sent out for a given VNI and MAC address generally includes the foregoing information in addition to the authentication community, assuming such authentication has taken place. Any device that wants to take over a given VNI and MAC address must add the MAC mobility community at this point. In some embodiments, the payload in route2142may further include a sequence number generated by incrementing a prior sequence number by one. The prior sequence number corresponds to the last update to the MAC address accompanying the advertised route from VTEP2 switch2104to indicate an update (takeover) to the MAC address. In some cases, if route1834had a MAC mobility community, the sequence number of route2142would need to be higher than that of route1834, and if route1834did not have a MAC mobility community, the sequence number of route2142would be “1”. In response to receiving route2142through EVPN2102, VTEP1 switch2106disregards the old authentication, blocks port2122A at VTEP2114, and programs an entry (or replaces the old entry) for host (VMN) pointing the VMNsecure MAC address to VTEP2 switch2104. The VMNsecure MAC address is now remote to VTEP1 switch2206and local to VTEP2 switch2204, as shown inFIG.22. VMNhas completed its desired move from Hypervisor1 to Hypervisor2. Notably, and as previously discussed relative to the tables ofFIGS.12A-12F, VTEP1 switch2206maintains the VMNauthentication at port2122A until receipt and processing of route2142. Although, for a brief period, VTEP2 switch2204and VTEP1 switch2206are out of synchronization relative to the authentication of VMN, this momentary event is resolved when VTEP1 switch2206removes authentication at port2122A. FIGS.23-24are flow charts of a virtual machine move reauthentication process, in accordance with some embodiments of the disclosure. InFIG.23, process2300includes example process steps for achieving host-to-host virtual machine movement. In some embodiments, process2300is implemented by processing circuitry206ofFIG.2. Process2300starts at step2302where a reauthentication indication of a virtual machine at a second network device (e.g., a destination host) of a remote network is received by a first network device (e.g., a source host) of a virtual network of a local network. In an embodiment, the virtual machine ofFIG.11is an example of a virtual machine of process2300. At step2306, a payload, carried by an advertised route including an authentication (community) extension, is advertised through the virtual network to the second network device for authentication of the virtual machine at the second network device, such as discussed relative to the payload and advertised route ofFIG.18.
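Following the sequence number rule stated above (higher than that of route1834if route1834carried a MAC mobility community, otherwise "1"), a takeover advertisement such as route2142might be assembled as sketched below. The dictionary layout and names are assumptions for illustration and not the EVPN wire format.

    # Sketch of building a takeover route per the sequence number rule described above.
    def build_takeover_route(vni, vm_mac, new_vtep_ip, prior_route=None):
        prior_seq = prior_route.get("mac_mobility_seq") if prior_route else None
        seq = (prior_seq + 1) if prior_seq is not None else 1    # "1" when no prior mobility community
        return {
            "type": 2,                          # EVPN MAC/IP advertisement (Type 2)
            "vni": vni,
            "mac": vm_mac,
            "next_hop_vtep": new_vtep_ip,       # the VTEP now local to the secure MAC
            "mac_mobility_seq": seq,            # MAC mobility extension
            "authenticated_community": True,    # secure/authentication (extended) community
        }

    # Example: taking over a MAC first advertised without a mobility community.
    route = build_takeover_route(10010, "M1", "10.0.0.2", prior_route={"authenticated_community": True})
    assert route["mac_mobility_seq"] == 1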
Next, at step2308, a determination is made as to whether or not a new authentication (or reauthentication) session is to begin based, for example, on receipt of authentication packets from the virtual machine, as discussed with reference toFIGS.11-22. In response to the determination yielding a positive result, process2300proceeds to step2402ofFIG.24; otherwise, process2300proceeds to step2310. At step2310, the virtual machine remains authenticated at the first network device and blocked at the second network device and process2300ends. Essentially, the virtual machine movement attempt fails. At step2402ofFIG.24, an authentication acknowledgement is received from the second network device via an advertised route, such as route1936ofFIG.19. In some embodiments, the second network device initiates reauthentication by sending a request to an independent authentication server and waits for the virtual machine to re-authenticate itself at the second network device with the authentication server before step2404is implemented. At step2404, the first network device blocks the virtual machine from forwarding traffic to the first network device but network traffic is freely forwarded through the second network device, and at step2406, the first network device removes the local secure MAC address entry pointing to the first network device (corresponding to the virtual machine) and adds a remote secure MAC address entry pointing to the second network device. It is understood that although a particular order and flow of steps is depicted in each ofFIGS.9,10,13,14,15,23, and24, in some embodiments one or more of the steps of each process may be modified, moved, removed, or added, and the process flow depicted inFIGS.9,10,13,14,15,23, and24may be modified accordingly. FIG.25shows an example structure of an authentication extension, in accordance with various embodiments of the disclosure. In some embodiments, the extension ofFIG.25is an extension for an EVPN extended community. The extension structure ofFIG.25is specifically tailored for the 802.1x standard and is flexibly programmable. The extension may be conveniently applied to other industry-adopted authentication protocol standards. In some embodiments, the extension ofFIG.25is a header encapsulation of an advertised route, such as from a source host or a destination host. For example, the advertised route from VTEP1 switch1806to VTEP2 switch1804inFIG.18and the advertised route from VTEP2 switch1904to VTEP1 switch1906inFIG.19may carry such an extension. The new extended community includes various fields such as a Type field, a Sub-Type field, an Authentication field, and a Reserved field. As shown inFIG.25, the Type field is indicative of a community, for example, represented by the type value “6”. The Sub-Type field is indicative of the authentication type, for example, a sub-type value “0x13” represents the 802.1x standard authentication type. The Authentication field is indicative of an 802.1x-compliant (secure) corresponding MAC address accompanying the extension. The Reserved field is maintained for future attribute additions. It is understood that the Authentication field and the remaining fields of the new extended community may be arranged differently or may have different values representing different fields. For example, the Sub-Type field may be assigned a value to represent a WIFI authentication type compliance of a MAC address, and a value different than “0x13” may designate an 802.1x authentication compliance.
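Using the field values described forFIG.25(Type “6”, Sub-Type “0x13” for 802.1x), one possible 8-octet encoding of the authentication extension is sketched below. The widths chosen for the Authentication and Reserved fields are assumptions made for illustration, sinceFIG.25itself defines the exact layout.

    # Sketch of an 8-octet authentication extended community using the field values
    # described above (Type = 6, Sub-Type = 0x13 for 802.1x). The 2-byte Authentication
    # field and 4-byte Reserved field widths are illustrative assumptions.
    import struct

    TYPE_EVPN_COMMUNITY = 0x06
    SUBTYPE_8021X = 0x13

    def authentication_extension(auth_code: int = 0x0001) -> bytes:
        return struct.pack("!BBHI", TYPE_EVPN_COMMUNITY, SUBTYPE_8021X, auth_code, 0)

    # Example: 06 13 00 01 00 00 00 00
    assert authentication_extension().hex() == "0613000100000000"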
It will be apparent to those of ordinary skill in the art that methods involved in the present invention may be embodied in a computer program product that includes a computer-usable and/or -readable medium. For example, such a computer-usable medium may consist of a read-only memory device, such as a CD-ROM disk or conventional ROM device, or a random-access memory, such as a hard drive device or a computer diskette, having a computer-readable program code stored thereon. It should also be understood that methods, techniques, and processes involved in the present disclosure may be executed using processing circuitry. The processes discussed above are intended to be illustrative and not limiting. More generally, the above disclosure is meant to be exemplary and not limiting. Only the claims that follow are meant to set bounds as to what the present invention includes. Furthermore, it should be noted that the features and limitations described in any one embodiment may be applied to any other embodiment herein, and flowcharts or examples relating to one embodiment may be combined with any other embodiment in a suitable manner, done in different orders, or done in parallel. In addition, the systems and methods described herein may be performed in real-time. It should also be noted, the systems and/or methods described above may be applied to or used in accordance with other systems and/or methods.
104,114
11863528
DETAILED DESCRIPTION Techniques described and suggested herein include systems, methods, and processes for a reverse proxy service which assigns a destination IP address to a plurality of virtual computing resources and provides such destination IP address to be added into the firewall access control list (“ACL”), such as a firewall allowlist. More specifically, the reverse proxy service can associate a reverse proxy with a plurality of virtual computing resource endpoints, assign a destination IP address to the reverse proxy, and provide the destination IP address to be added into the firewall so that such destination IP address can be allowlisted. Once the destination IP address is added into the firewall allowlist, the client computer may communicate with the virtual computing resources without the need to constantly update the firewall as the virtual computing resources are added dynamically. To access the virtual computing resources of the resource service provider, the client computer may send a network packet including a destination identifier such as the domain name of the resource service provider. The Domain Name System (“DNS”) server receives the domain name of the packet and translates the domain name into the destination IP address. Thereafter, the destination IP address is evaluated to determine whether it is listed in the firewall allowlist or otherwise satisfies a set of security rules being applied to the firewall. If so, the packet can be forwarded to the reverse proxy service for further processing. If the destination IP address is not listed in the firewall allowlist, the packet can be blocked instead. Once the packet arrives through the firewall, the load balancer component of the reverse proxy service first determines the reverse proxy based on the destination IP address of the packet. As described above, the determined reverse proxy may include a plurality of virtual computing resources to which the packet can be forwarded. The load balancer component may then select a reverse proxy that can process the packet sent from the client computer and transmit the packet to the selected reverse proxy. The selected reverse proxy then obtains the domain name associated with the received packet and performs a DNS lookup to identify the actual IP address of the resource. In some embodiments, the selected reverse proxy may determine the domain name of the virtual computing resource, such as a fully qualified domain name (FQDN), then perform a DNS lookup to identify the actual IP address of the resource. Once the resource IP address is identified, the reverse proxy service transforms the destination IP address of the packet to the resource IP address and forwards the packet to the resource IP address. In addition, the reverse proxy may continue to dynamically update its egress rules based on another DNS, where the egress rules specify how the network packets are to be forwarded to the computing resources. In one example, a user builds a client-side vendor application which requires access to an item price database provided by a computer resource service provider. The vendor application first sends a packet containing the database query, which includes the domain name of the computer resource service provider, through the network. The DNS server first intercepts the packet and translates the domain name into a destination IP address.
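The end-to-end packet path described above (DNS translation, firewall allowlist check, load balancer selection, and reverse proxy address rewriting) may be sketched as follows; every name in the sketch is a placeholder rather than an element of the figures.

    # Sketch of the packet path described above; all names here are placeholders.
    def route_packet(packet, dns, firewall_allowlist, load_balancer, proxies):
        # 1. DNS translates the packet's domain name to the proxy-service destination IP.
        destination_ip = dns.resolve(packet["domain"])
        # 2. The firewall forwards only if the destination IP is allowlisted.
        if destination_ip not in firewall_allowlist:
            return None                                    # packet is blocked
        # 3. The load balancer selects a reverse proxy for this destination IP.
        proxy = load_balancer.select(proxies, destination_ip)
        # 4. The proxy resolves the resource's FQDN and rewrites the destination.
        resource_ip = proxy.dns_lookup(proxy.fqdn_for(packet))
        packet["destination_ip"] = resource_ip
        # 5. Forward the rewritten packet to the virtual computing resource.
        return proxy.forward(packet, resource_ip)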
The firewall then determines whether the destination IP address is within the firewall allowlist, and if so, forwards the packet to the reverse proxy service of the service provider. In response to receiving the packet through the firewall, the load balancer of the reverse proxy service selects a reverse proxy component that can forward the packet to a virtual computing resource that contains the item price database. The reverse proxy component of the reverse proxy service may then translate the domain name of the packet into the IP address of the virtual computing resource. Finally, the reverse proxy component may forward the packet to the virtual computing resource using the translated IP address. The database executing within the virtual computing resource can then provide a response to the database query back to the vendor application. As one skilled in the art will appreciate in light of this disclosure, certain embodiments may be capable of achieving certain advantages. For example, techniques of this disclosure simplify management of egress control to the Internet, especially when the techniques are employed with legacy infrastructure environments. Also, techniques disclosed herein improve the security of the computer networks by minimizing the number of IP addresses, ports, and domains listed under the firewall ACL. In addition, the reverse proxy service as illustrated in the embodiments may increase compatibility between legacy firewall systems and modern virtual computing resource systems, because adding a limited number of destination IP addresses to the firewall systems can account for the dynamic nature of virtual computing resources, which can be added or removed at any time. In the preceding and following description, various techniques are described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of possible ways of implementing the techniques. However, it will also be apparent that the techniques described below may be practiced in different configurations without the specific details. FIG.1illustrates an example diagram of a system100in which a client102communicates with a plurality of virtual computing resources110A/B/C through firewall106in accordance with an embodiment. To access the services of or otherwise communicate with resources110A/B/C, client102may generate a network packet through a web browser or through any client-side application. A network packet can be any request for data provided by resources110A/B/C. In some implementations, a network packet may include user credentials (e.g., a username, password, etc.) that need to be authenticated by resources110A/B/C. In other implementations, a network packet generated by client102may be an application programming interface (API) call which may contain several parameters to indicate which data the resources110A/B/C should provide. The network packet generated by client102may include a destination identifier, typically in the form of a domain name, although the destination identifier may also include several other characters, numbers, and/or strings indicative of a destination. For example, the destination identifier may include email addresses, accounts associated with the computing resources, or any other identification strings capable of being translated to an IP address to allow the network packets to reach the intended destination. 
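The example above walks the packet through four stages: client-side DNS resolution, the firewall allowlist check, reverse proxy selection by the load balancer, and the proxy's resolution of the actual resource IP address. A minimal Python sketch of that flow is shown below; all tables and names (FIREWALL_ALLOWLIST, DNS_RECORDS, PROXY_GROUPS, RESOURCE_RECORDS, send_query) are illustrative assumptions and not part of the disclosed implementation.

```python
# Minimal sketch of the end-to-end flow described above; all names and
# addresses are hypothetical stand-ins, not the disclosed system.

FIREWALL_ALLOWLIST = {"198.51.100.10"}                        # destination IPs added by the service provider
DNS_RECORDS = {"db.example-provider.com": "198.51.100.10"}    # public DNS: domain -> destination IP
PROXY_GROUPS = {"198.51.100.10": ["proxy-a", "proxy-b"]}      # destination IP -> reverse proxy group
RESOURCE_RECORDS = {"db.example-provider.com": "10.0.3.17"}   # internal DNS used by the proxy

def send_query(domain: str, payload: str) -> str:
    dest_ip = DNS_RECORDS[domain]                 # client-side DNS lookup
    if dest_ip not in FIREWALL_ALLOWLIST:         # firewall drops packets to unlisted IPs
        return "blocked by firewall"
    proxy = PROXY_GROUPS[dest_ip][0]              # load balancer picks a reverse proxy
    resource_ip = RESOURCE_RECORDS[domain]        # proxy resolves the actual resource IP
    return f"{proxy} forwarded {payload!r} to {resource_ip}"

print(send_query("db.example-provider.com", "SELECT price FROM items"))
```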
In several embodiments, the destination IP address is an IP address generated specifically by the service provider108to route network packets to any of the resources110A/B/C without exposing the IP addresses of such resources back to the client102. In one embodiment in which the destination identifier is a domain name, the client sends the domain name to the DNS, which associates the domain name with the appropriate destination IP address. The firewall106may reside on the client102, on the service provider108, or separately on the network104, and can intercept network packets and determine whether the packets are permitted or otherwise allowed to be transmitted. The firewall106may be a network security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules. In one embodiment, the firewall106examines each network packet transmitted by the client102and then passes the packet through to the other side unchanged, drops the packet entirely, or handles the packet itself in some way. In many implementations, the firewall106performs its operations based on IP source and destination addresses and port numbers of the endpoint devices. For example, the firewall106may block packets from the Internet side that claim a source address of a system on the internal network, block TELNET or RLOGIN connections from the Internet to the internal network, block SMTP and FTP connections to the Internet from internal systems not authorized to send email or move files, act as an intermediate server in handling SMTP and HTTP connections in either direction, or require the use of an access negotiation and encapsulation protocol to gain access to the Internet, to the internal network, or both. In some instances, the firewall106can be a protocol endpoint which may implement a “safe” subset of the protocol, perform extensive protocol validity checks, use an implementation methodology designed to minimize the likelihood of bugs, and/or run in an insulated, “safe” environment. In several embodiments, the firewall106may adopt different firewall models, including a blocklist or allowlist model. In the blocklist model, the firewall106may permit all network traffic except for a subset of IP addresses that are blocked. For systems that require enhanced security, however, the firewall106may implement an allowlist model, where communications are blocked by default and only a subset of IP addresses is allowed. In both instances, the firewall106may include a set of security rules that determine the egress behavior of the network packets entering through the firewall. If the set of security rules is applied and satisfied, the firewall106may forward the network packet to the intended destination. In an embodiment, the service provider108may be an integrating service, a web services provider, a cloud computing platform, an application server, an infrastructure-as-a-service (IaaS) platform, or any other appropriate network-based service provider. The service provider108may allocate network packets to a plurality of computing resources, which can be incorporated into one or more client-end or back-end applications that provide other services such as a personal assistant voice service, a calendar service, a shopping service, an email or messaging service, a navigation service, or any other appropriate service hosted on a public or private network. 
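To make the two firewall models concrete, the sketch below contrasts an allowlist check (blocked by default) with a blocklist check (allowed by default) over a packet's destination address and port. It is a simplified illustration only; the Packet fields, rule sets, and addresses are assumptions introduced here, not rules taken from the disclosure.

```python
# Hedged sketch of the allowlist and blocklist models discussed above;
# rule contents and packet fields are illustrative only.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int

ALLOWLIST = {("198.51.100.10", 443)}   # (destination IP, port) pairs explicitly allowed
BLOCKLIST = {("203.0.113.99", 23)}     # e.g., block TELNET to a known-bad host

def permit(packet: Packet, model: str = "allowlist") -> bool:
    key = (packet.dst_ip, packet.dst_port)
    if model == "allowlist":
        return key in ALLOWLIST        # blocked by default, allowed only if listed
    return key not in BLOCKLIST        # allowed by default, blocked only if listed

print(permit(Packet("192.0.2.5", "198.51.100.10", 443)))   # True under the allowlist model
print(permit(Packet("192.0.2.5", "203.0.113.99", 23), model="blocklist"))   # False
```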
The service provider108can receive network packets transmitted through the firewall106and determine the destination IP address of the network packets. Based on the destination IP address, the service provider108may select a reverse proxy with which the destination IP address is associated and forward the network packet to the selected reverse proxy. In some implementations, the service provider108may access a database which stores the mapping between the destination IP address and the reverse proxies. Through the selected reverse proxy, the service provider108may forward the network packet to one or more appropriate computing resources, including resource110A, resource110B, and resource110C. Resources110A/B/C may be virtual machine images or instances which receive the network packets, perform any tasks requested in the network packets, and transmit a response back to the client102. Examples of resources110A/B/C may include, but are not limited to, compute resources (e.g., physical and/or virtual computer systems), storage resources (e.g., physical and/or virtual disks, logical data containers, data objects, databases, database records, etc.), identities, policies, and/or other resources that may be offered by a computing resource service provider. Each resource of the resources110A/B/C may be associated with its own domain name and IP address which are generated as the resources are instantiated by the service provider108. For example, the domain name of the resource110A can be a string of characters which indicates a top-level domain (TLD), a second-level domain (SLD), and any other lower-level domains identifying the region in which the resources were instantiated. FIG.2illustrates an example diagram of a system200including a load balancing component and reverse proxy component of the service provider in accordance with an embodiment. In an embodiment, the service provider208includes a load balancer212and a reverse proxy214. The client202may be the client102discussed above in connection withFIG.1. The service provider208may be the service provider108discussed above in connection withFIG.1. Firewall206may be the firewall106discussed above in connection withFIG.1. The resources210A/B/C may be the resources110A/B/C discussed above in connection withFIG.1. In one embodiment, the client202generates a network packet that needs to be transmitted to resources210A/B/C. In some embodiments, the destination IP address associated with the resources210A/B/C is not yet known to the client202. In such an example, the client202first examines its cache to determine whether the destination IP address corresponding to the domain name is available. If not, the client202instead provides the domain name to the DNS server218, which resolves the domain name and provides the destination IP address associated with such domain name. In some embodiments, the DNS server218may identify that the domain space associated with the domain name resides at a different server, and accordingly refers the request of the client202to one or more name servers until the destination IP address is resolved. In response to the destination IP address being identified, the client202may generate a network packet containing the destination IP address. In several embodiments, the client202or any other computing system may configure a routing table of the DNS server218in which the domain names may continue to be updated with the corresponding destination IP addresses. 
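The client-side resolution step described above can be sketched as a local cache consulted first, with a fallback to a DNS query on a miss. The cache, the pre-seeded entry, and the function name below are hypothetical, and socket.gethostbyname stands in for whatever resolver the client actually uses.

```python
# Sketch of client-side resolution: check a local cache, then fall back to DNS.
import socket

_dns_cache = {}   # domain name -> previously resolved destination IP

def resolve_destination(domain: str) -> str:
    if domain in _dns_cache:              # cache hit: reuse the known destination IP
        return _dns_cache[domain]
    ip = socket.gethostbyname(domain)     # cache miss: ask the configured DNS server
    _dns_cache[domain] = ip
    return ip

# Pre-seeding the cache lets the example run without network access.
_dns_cache["db.example-provider.com"] = "198.51.100.10"
print(resolve_destination("db.example-provider.com"))   # 198.51.100.10
```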
The network packet may contain a Hypertext Transfer Protocol (HTTP) request which permits the client202to transmit information to one or more of the resources210A/B/C. For example, examples of information provided in a HTTP request include source port, proxies, destination IP address, destination port, host, protocols, requesting methods and content, user agents, referring pages, cookies, connection controls, cash controls, authorizations and the like. In another example, a network packet may contain a File Transfer Protocol (FTP) request which allows larger files to be transferred between client202and one or more of the resources210A/B/C. It must be noted that although the present disclosure mainly associates network packets with HTTP request, it is contemplated that embodiments are not limited to HTTP or FTP protocols; rather, as used herein, the network packet is contemplated to be generated from any types of internet protocol request that may represent application data. For example, the data in the network packet may be of any type and may transit in any fashion appropriate to the implementation, For example, the data may transit as traffic over a network, and may be transacted via one or more network protocols at any layer or other level of abstraction. Examples include application layer protocols such as Border Gateway Protocol (“BGP”), Dynamic Host Configuration Protocol (“DHCP”), Authentication, Authorization, and Accounting (“AAA”), Authentication, Authorization, and Accounting with Secure Transport (“AAAS”), Domain Name System (“DNS”), File Transfer Protocol (“FTP”), Hypertext Transfer Protocol (“HTTP”), Internet Message Access Protocol (“IMAP”), Lightweight Directory Access Protocol (“LDAP”), Media Gateway Control Protocol (“MGCP”), Network News Transfer Protocol (“NNTP”), Network Time Protocol (“NTP”), Post Office Protocol (“POP”), Open Network Computing (“ONC”), Remote Procedure Call (“RPC”), RADIUS, Real-Time Transport Protocol (“RTP”), Real Time Streaming Protocol (“RTSP”), Routing Information Protocol (“RIP”), Session Initiation Protocol (“SIP”), Simple Mail Transfer Protocol (“SMTP”), Simple Network Management Protocol (“SNMP”), Secure Shell (“SSH”), Terminal Access Controller Access Control System (“TACACS”), Telnet, Transport Layer Security (“TLS”), Secure Sockets Layer (“SSL”), Extensible Messaging and Presence Protocol (“XMPP”), and the like. Other examples include transport layer protocols such as Transmission Control Protocol (“TCP”), User Datagram Protocol (“UDP”), Datagram Congestion Control Protocol (“DCCP”), Stream Control Transmission Protocol (“SCTP”), Resource Reservation Protocol (“RSVP”), and the like. Yet other examples include Internet layer protocols, such as Internet Protocol (“IP”) (including IPv4 and IPv6), Internet Control Message Protocol (“ICMP”) (including ICMPv6, ECP, IGMP, IPsec, and the like. Still other examples include link layer protocols such as Address Resolution Protocol (“ARP”), Neighbor Discovery Protocol (“NDP”), Open Shortest Path First (“OSPF”), Layer 2 Tunneling Protocol (“L2TP”), Point-to-Point Protocol (“PPP”), Medium Access Control (“MAC”), and the like. In some embodiments, the data may be transmitted as a series of packets or other quanta, such as network packets, that may conform with one or more of network protocols, such as one of the network protocols enumerated immediately above. The attributes of such quanta (e.g., length, format, metadata) may be defined by one or more of the network protocols used. 
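As one concrete illustration of a destination identifier carried in an application-layer request, the sketch below builds an HTTP request whose host and path name the intended destination. The URL and headers are made up for illustration, and urllib.request is used only as a convenient stand-in for whatever client library actually generates the packet.

```python
# Sketch of an HTTP request carrying a destination identifier (host + path).
import urllib.request

req = urllib.request.Request(
    "http://xx.example.com/accounts",           # domain name a DNS server would resolve
    headers={"User-Agent": "vendor-app/1.0"},   # user agent, one of the request fields listed above
    method="GET",
)
print(req.host, req.selector)                   # xx.example.com /accounts
# urllib.request.urlopen(req)                   # would perform the DNS lookup and send the request
```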
The firewall206receives the network packet communicated through the network204. Once received, the firewall206examines the destination IP address (and port number, if necessary) as provided in the IP header of the network packet. If the firewall determines that the destination IP address is in the allowlist216, the firewall forwards the network packet to the service provider208. The service provider208receives the network packet transmitted through the firewall206, identifies the destination IP address of the network packet, and determines one or more reverse proxies214associated with the resources210A/B/C. In several embodiments, the load balancer212is a computing system or a component thereof that distributes workload (e.g., network packets) across multiple computing resources, such as computers, a computer cluster, network links, central processing units, reverse proxies, or disk drives. In one embodiment, the load balancer212can be configured to listen for network packets transmitted through a network port (e.g., port80). If the network packet is detected in the network port, the load balancer212determines a reverse proxy from a plurality of reverse proxies214based on the destination IP address. In some embodiments, the load balancer212selects a reverse proxy214from a reverse proxy group associated with the destination IP address based on availability of such reverse proxy. In those embodiments, the selection of the reverse proxy214can be based on a round-robin balancing method in which successive network packets can be distributed equally among the reverse proxies214in the reverse proxy group. In other embodiments, the round-robin balancing method can be weighted towards a first reverse proxy, so that more network packets can be transmitted as compared to the remaining reverse proxies in the group. In yet other embodiments, the load balancer212may select a reverse proxy214based on the availability of the virtual computing resources that will receive the network packet. In this implementation, the load balancer212may ping (e.g., TELNET ping) each reverse proxy214within the reverse proxy group and sends the network packet to the reverse proxy214that provides a response to the ping. The load balancer212then sends the network packet to the selected reverse proxy214. In some embodiments, the service provider208may include a deep packet inspection (“DPI”) component (not shown) that may examine the application data (e.g., data or code payloads) of the packet to ensure that the network packet can be forwarded to another entity if necessary. As another method of packet filtering in addition to the firewall206, the DPI of the service provider208may detect vulnerabilities that can be caused by the network packet even if the destination IP address indicates that the network packet can pass through the firewall206. For example, a network packet having the allowlisted destination IP address may contain an SQL injection code which may alter or delete data stored in resources210A/B/C. To prevent such events, the DPI can be configured to parse SQL statements in the network packet and perform one or more security actions, such as dropping the packet, if the parsed SQL statements may include suspicious SQL syntax. If the DPI determines that the network packet is safe to proceed, the service provider208forwards the network packet to the selected reverse proxy214. 
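For the selection step described above, the disclosure mentions plain round-robin, weighted round-robin, and availability-based selection among the reverse proxies in a group. The following is a brief sketch of the first two strategies under assumed proxy names and weights; it is illustrative only and not the load balancer's actual implementation.

```python
# Sketch of round-robin and weighted round-robin proxy selection.
import itertools

PROXY_GROUP = ["proxy-a", "proxy-b", "proxy-c"]
_round_robin = itertools.cycle(PROXY_GROUP)

def pick_round_robin() -> str:
    return next(_round_robin)                 # successive packets spread evenly across the group

WEIGHTS = {"proxy-a": 3, "proxy-b": 1, "proxy-c": 1}   # proxy-a is favored
_weighted = itertools.cycle([p for p, w in WEIGHTS.items() for _ in range(w)])

def pick_weighted() -> str:
    return next(_weighted)

print([pick_round_robin() for _ in range(4)])   # ['proxy-a', 'proxy-b', 'proxy-c', 'proxy-a']
print([pick_weighted() for _ in range(5)])      # proxy-a appears three times as often as the others
```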
The reverse proxy214receives the network packet and forwards the network packet to the one of the resources210A,210B, or210C based on the domain name of the network packet. In several embodiments, the reverse proxy214is a type of proxy server that retrieves data on behalf of a client (e.g., client202) from one or more servers (e.g., resources210A/B/C). These data are then returned to the client as if they originated from the service provider (e.g., service provider208) itself and does not expose the IP addresses or FQDN of the resources. In one embodiment, the reverse proxy214may submit the domain name of the network packet and/or the FQDN associated with the destination IP address to the DNS server220to identify the IP address of the corresponding resource210A,210B, or210C. Similar to the DNS server218above, the DNS server220may refer to other name servers to resolve the domain name to an IP address of the resource. In some implementations, both the client202and the reverse proxy214may perform the DNS lookup through the same DNS server, which may include DNS server218or DNS server220. In several embodiments, the DNS server220may include a routing table in which the domain name of the network packet or any other FQDN associated with resources210A,210B, or210C can be routed to the appropriate resource IP address. The routing table can be periodically updated so that any new or existing domain information can be assigned with a different resource IP address. As a result of the updates, the DNS server220may control the egress rules of the reverse proxy214which may accommodate the rather transient nature of virtual computing resources, which can be instantiated or removed at any time depending on the client needs or scalability. In other implementations, the reverse proxy214may configure its egress rules to forward any packets with a first domain name to another destination identifier such as FQDN associated with resources210A/B/C. For example, the reverse proxy214may listen to port443and perform a proxy pass function to forward network packets having domain name http://xx.example.com to a FQDN of the resource which is instead xx.us-west-2.exampleresourceservice.com:443. Once the domain name of the resource210A,210B, or210C is identified, the reverse proxy214may submit the FQDN to the DNS server so that the IP address of the resource domain is determined. As a result of the resource domain being determined, the reverse proxy214forwards the network packet to one or more corresponding resources210A/B/C, which in turn processes any application data within the network packet and generates a response for further processing. FIG.3illustrates an example sequence diagram300for a reverse proxy service ofFIG.2in accordance with an embodiment. Initially, the client302generates a network packet (packet)324to communicate with resource310. Packet324contains destination IP address326, which is an IP address associated with the domain name as provided by the service provider308. As previously indicated above, the destination IP address326can be determined based on a DNS lookup performed by the DNS server, including DNS server218and/or220ofFIG.2. The client then transmits the packet324through a network, such as network104ofFIG.1or network204ofFIG.2. The network packet, before, during, or after being transmitted through the network, can be intercepted by the firewall306. 
The firewall306parses the packet324to identify the destination IP address326and determine whether the packet324should continue to be transmitted or dropped. In one example, the firewall306may be configured with an allowlist which provides a list of IP addresses, ports, and domain names, in which case the firewall306determines that the packet324should be allowed to be forwarded if the destination IP address matches an IP address provided in the allowlist of the firewall306. If found on the allowlist, the firewall306allows the packet324to be transmitted to the service provider308. If not found on the allowlist, the firewall306drops, blocks, or otherwise denies the packet324from further processing. In another example, the firewall306may be configured with a blocklist which provides a list of IP addresses, ports, and domain names for which the firewall306determines that the packet324should be blocked. In such an example, the firewall306may block the incoming network packet if its destination IP address is found in the blocklist. In response to the firewall306permitting the network packet to continue to be transmitted, the load balancer312of the service provider308receives the packet324and then determines a reverse proxy314that is associated with the destination IP address326. As described herein, the load balancer312may submit a query to a database (not shown) to retrieve a reverse proxy group corresponding to the destination IP address326, after which a reverse proxy314can be selected. Once the reverse proxy314is determined, the load balancer312forwards the packet324to the reverse proxy314. In some implementations, the packet324forwarded from the load balancer312may be identical to the packet324initially generated by the client302. In other implementations, the packet324from the load balancer312may be modified by adding or substituting the destination IP address with another destination IP address328, which may indicate the IP address of the reverse proxy314or the IP address of the resource310. The reverse proxy314receives the network packet324and first obtains the domain name associated with the packet324. In some embodiments, the domain name can be obtained through processing the destination IP address326or the IP address328. The reverse proxy314then determines the IP address330of the resource310based on the identified domain name. As described above, the determination of the resource IP address330can be performed by the reverse proxy314submitting the obtained domain name to the DNS server218or220. After the resource IP address330is determined, the reverse proxy314substitutes IP address328with the resource IP address330and then forwards the packet324to resource310. In several embodiments, resource310may generate a response based on the data payload of the packet324, which can be transmitted back to the client302. It should be noted that service provider308may be service provider208discussed above in connection withFIG.2, client302may be client202ofFIG.2, firewall306may be firewall206ofFIG.2, load balancer312may be load balancer212ofFIG.2, reverse proxy314may be reverse proxy214ofFIG.2, and resource310may be any of resource210A,210B, or210C. It should be appreciated that alternate embodiments may have numerous variations from that described above. For example, in an embodiment, the reverse proxy314may perform the load balancing operations, in which case the destination IP addresses are directly associated with resources310. 
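The sequence just described amounts to two rewrites of the packet's destination address: the allowlisted destination IP address326may be replaced at the load balancer (328) and is replaced again at the reverse proxy with the resolved resource IP address330. The sketch below illustrates this with a plain dictionary standing in for the packet header; all field names and addresses are assumptions made for illustration.

```python
# Sketch of the address substitutions in the sequence of FIG. 3.
packet = {"payload": "db query", "dst_ip": "198.51.100.10"}   # 326: allowlisted destination IP

def at_load_balancer(pkt, proxy_ip="10.0.1.5"):
    return dict(pkt, dst_ip=proxy_ip)         # 328: optionally rewritten to the proxy's address

def at_reverse_proxy(pkt, resource_ip="10.0.3.17"):
    return dict(pkt, dst_ip=resource_ip)      # 330: rewritten to the resolved resource IP

print(at_reverse_proxy(at_load_balancer(packet)))   # payload unchanged, dst_ip now the resource IP
```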
In such embodiment, the load balancer312can be a switch of a router which simply forwards the packet324to the reverse proxy314. Numerous other variations are within the spirit of the present disclosure. FIG.4illustrates an example configuration400of a load balancer412of a service provider in accordance with an embodiment. Load balancer412may be load balancer312discussed in connection withFIG.3above, firewall406may be firewall306discussed in connection withFIG.3, and reverse proxy414may be reverse proxy314in connection withFIG.3. In an embodiment, the network packet (such as packet324inFIG.3) is generated by a client device (such as client302inFIG.3) and is forwarded through the firewall406. In an embodiment, the network packet includes a destination IP address (such as IP address326) which may be identified through a DNS server lookup function. After the firewall406confirms that the destination IP address is in the allowlist, the network packet is forwarded to the load balancer412. The load balancer412includes a plurality of destination IP addresses428A/B/C/D, each of which is associated with corresponding resources. For example, destination IP address428A may be associated with a first set of resources (such as resource210A ofFIG.2), destination IP address428B may be associated with a second set of resources (such as resource210B ofFIG.2), destination IP address428C may be associated with a third set of resources (such as resource210C ofFIG.2), and destination IP address428D may be associated with a fourth set of resources (not shown). The load balancer412in certain time intervals may communicate with firewall406to add newly assigned destination IP addresses, so that any future requests made by the client to the new resources will not be dropped by the firewall406. In several embodiments, a set of resources assigned to the destination IP address428A,428B,428C, or428D may provide the same service. In other implementations, however, each resource in the set of resources can perform a different function. The load balancer412receives the network packet transmitted through the firewall406. The load balancer then determines whether the destination IP address in the network packet matches one of the destination IP addresses428A,428B,428C, or428D. If so, the load balancer selects such destination IP address in the load balancer, in this case destination IP address428C (194.xxx.x.x). The load balancer412may then select a reverse proxy414that is associated with the destination IP address428C. In some embodiments, the load balancer412first determines a reverse proxy group associated with the destination IP address428C, and then selects the reverse proxy414that is available to respond to the client network packet. In other embodiments, the load balancer will simply select a reverse proxy414without determining a reverse proxy group. Once the reverse proxy414is determined, the load balancer forwards the network packet to reverse proxy414for further processing. The selected reverse proxy414includes a set of resource IP addresses430A,430B,430C, and430D associated with the reverse proxy414, at which the reverse proxy414determines to which resource IP address the network packet should be forwarded. The process in determining the appropriate resource IP address is further described herein below. FIG.5illustrates an example configuration500of a reverse proxy514of a service provider in accordance with an embodiment. 
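Before turning to the reverse proxy configuration ofFIG.5, the matching step ofFIG.4can be summarized as follows: the load balancer keeps a table from destination IP addresses to reverse proxy groups, picks a proxy from the matching group, and periodically pushes newly assigned destination IP addresses to the firewall so future packets are not dropped. The table contents, addresses, and helper names in the sketch below are assumptions for illustration only.

```python
# Sketch of the destination-IP matching and firewall sync described for FIG. 4.
DESTINATION_TABLE = {
    "192.0.2.10": ["proxy-a1", "proxy-a2"],   # 428A -> first set of resources
    "193.0.2.10": ["proxy-b1"],               # 428B
    "194.0.2.10": ["proxy-c1", "proxy-c2"],   # 428C (the match in the example above)
    "195.0.2.10": ["proxy-d1"],               # 428D
}

def route(dst_ip: str):
    group = DESTINATION_TABLE.get(dst_ip)
    if group is None:
        return None                           # unknown destination: nothing to forward to
    return group[0]                           # pick an available proxy from the group

def sync_allowlist(firewall_allowlist: set) -> None:
    firewall_allowlist.update(DESTINATION_TABLE)   # periodically add new destination IPs

print(route("194.0.2.10"))                    # proxy-c1
allowlist: set = set()
sync_allowlist(allowlist)
print(sorted(allowlist))                      # the four destination IPs now allowlisted
```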
Reverse proxy514may be reverse proxy314in connection withFIG.3and/or reverse proxy414in connection withFIG.4, resource IP addresses530A/B/C/D may be resource IP addresses430A/B/C/D in connection withFIG.4, and resource510may be resource310in connection withFIG.3. As described herein above, the load balancer (such as load balancer412ofFIG.4) may forward the network packet (such as packet324ofFIG.3) to the reverse proxy514once the load balancer determines that the reverse proxy514is associated with the destination IP address (such as destination IP address428C ofFIG.4). The reverse proxy514then receives the network packet and forwards the packet to the resource510. In one embodiment, the reverse proxy514parses the network packet to identify its domain name and then translates the domain name to the resource IP address, in this case resource IP address530C. The reverse proxy514translates the domain name (or, in other embodiments, the FQDN associated with the destination IP address) by submitting the domain name to a DNS server (such as DNS server220ofFIG.2). For example, the reverse proxy514may generate and submit a DNS command such as “nslookup example.com”, and in response the DNS server may provide a number of resource IP addresses such as resource IP address530C. In another example, the reverse proxy514may parse the network packet to determine that the domain name associated with the packet is “example.com/accounts” and then submit another type of lookup command such as “ping example.com/accounts.” In response, the DNS server may respond with another resource IP address such as IP address530B. In another embodiment, the reverse proxy514may access a database instead of the DNS server to identify the resource IP address530C based on the domain name. For example, the resource IP address and the corresponding domain name can be inserted into the table of a database which can be hosted by the reverse proxy514or a separate server (not shown). In response to receiving the network packet from the load balancer, the reverse proxy514may submit a query statement containing the domain name of the network packet (such as “SELECT IP_address FROM resources WHERE domain_name=‘example.com’”) and retrieve the result of the query, which will be the IP address of the resource associated with the domain name. In yet another embodiment, the reverse proxy514may already store the resource IP addresses530A,530B,530C, and530D in a datastore. In this embodiment, the reverse proxy514does not perform a DNS lookup. Rather, the reverse proxy514may implement a proxy pass function which maps a domain name path to one of the resource IP addresses530A/B/C/D. For example, consider the reverse proxy514as an NGINX web server. The reverse proxy514may include configuration directives such as “location /path1/ {proxy_pass ‘resource IP address530A’;} location /path2/ {proxy_pass ‘resource IP address530B’;} location /path3/ {proxy_pass ‘resource IP address530C’;} location /path4/ {proxy_pass ‘resource IP address530D’;}.” In other words, the reverse proxy514may parse the domain name of the network packet and determine to which resource IP address the packet should be forwarded based on the domain extension paths. In this example, the reverse proxy514determines that the domain name associated with the packet is “example.com/path3” and then utilizes the proxy pass function to determine resource IP address530C based on the “path3” path. 
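As a sketch of the proxy-pass style mapping just described, the snippet below maps the path portion of the packet's domain name to a resource IP address. The paths and addresses are placeholders, and the helper is a hypothetical illustration rather than the NGINX directive itself.

```python
# Sketch of a path-based proxy-pass mapping from domain paths to resource IPs.
from urllib.parse import urlparse

PROXY_PASS = {
    "/path1/": "10.0.3.11",   # stands in for resource IP address 530A
    "/path2/": "10.0.3.12",   # 530B
    "/path3/": "10.0.3.13",   # 530C
    "/path4/": "10.0.3.14",   # 530D
}

def proxy_pass(url: str):
    path = urlparse(url).path                 # extract the path portion of the domain name
    for prefix, resource_ip in PROXY_PASS.items():
        if path.startswith(prefix):           # first matching location prefix wins
            return resource_ip
    return None                               # no mapping: packet cannot be forwarded

print(proxy_pass("http://example.com/path3/orders"))   # 10.0.3.13
```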
In response to determining the resource IP address530C, the reverse proxy514may forward the network packet to the resource510which corresponds to the resource IP address530C. The resource510can accept the packet, strip all headers, and process the data payload in the packet. After processing the data payload, the resource510may generate a response, including any data requested by the client (such as client302ofFIG.3) or a status code (e.g., HTTP 200 OK status response). FIG.6illustrates an example flowchart for generating a destination IP address for the firewall (such as firewall206ofFIG.2) in accordance with an embodiment. The process600ofFIG.6is performed in response to a trigger event such as the service provider (such as service provider208ofFIG.2) generating at least one computing resource (such as resource210A ofFIG.2) that may be utilized by a client (such as client202ofFIG.2). In one example, the client requires interaction with the virtual computing resources to perform a series of tasks such as retrieving a set of data from a database hosted in the virtual computing resources. In another example, the client requires interaction with a third-party service provider which provides its services through the virtual computing resources. For example, a user may make a spoken request to a personal assistant device to add a new event to their online calendar, where the calendar is provided by a third-party service. In another example, a user may use a graphical interface on a mobile device (e.g., a smart phone) to interact with an online shopping service. In both of these examples, interaction with a third-party service (e.g., the online calendar service, the online shopping service, etc.) may require the client to communicate with, i.e., send network packets, to virtual computing resources though the firewall may block any client network packets from being transmitted to the computing resources. In yet another example, the user may host a client-side application which requires communication with the computing resources provided by service provider (such as service provider208ofFIG.2) to ensure scalability of the computing capacity. The client-side application, however, may also run a secure firewall which only allows small list of endpoints to communicate with said application. In any of these examples, the service provider may perform process600to ensure that clients may fully communicate with the computing resources without its packets being blocked by firewall, as well as providing only a small number of IP addresses to minimize any risks associated with creating a big hole in the firewall, i.e., a firewall with too many exceptions. At step602, the service provider determines a reverse proxy. In one implementation, a reverse proxy can be a virtual application server (e.g., NGINX) that can be instantiated and be associated with a plurality of resources. The service provider then associates a plurality of resources to the reverse proxy (step604). In several embodiments, the IP addresses and/or domain names of each resource can be associated with the reverse proxy. The association of the resource IP addresses with the domain names can be submitted to the DNS server to allow a later DNS lookup. In another implementation, the association of the resource IP addresses with the domain names can be stored in a separate database table to enable a database query to be executed. 
In yet another implementation, the reverse proxy may construct a proxy pass function in which the domain name can be passed into a conditional statement that produces the corresponding resource IP address when the condition is satisfied. At step606, the service provider determines a destination IP address. In several embodiments, the destination IP address may be in IPv4 or IPv6 format. The determination of the destination IP address may occur before, in parallel with, or after step602, and/or may occur before, in parallel with, or after step604. After the destination IP address is determined, the service provider assigns the destination IP address to the reverse proxy (step608). In some implementations, the service provider may assign the destination IP address to a reverse proxy group that includes at least one reverse proxy associated with a plurality of resources that can process a request sent from a client computer. At step610, the service provider may then provide the destination IP address to the firewall (such as firewall206inFIG.2). In response, the firewall may add the destination IP address to the firewall allowlist, so that network packets addressed to the destination IP address can be allowed for transmission to the service provider. FIG.7illustrates an example flowchart of a firewall (such as firewall206ofFIG.2) in accordance with an embodiment. The process700ofFIG.7is performed in response to a trigger event such as the client (such as client202ofFIG.2) generating a network packet and then providing such packet to the firewall. At step702, the firewall receives the packet. The firewall then parses the network packet to determine the destination IP address of the network packet (step704). In one embodiment, the firewall may inspect the header of the packet and extract the data associated with the destination address field. The firewall then determines whether the destination IP address is in the firewall allowlist (step706). In another implementation in which the firewall deploys a blocklist model, the firewall determines whether the destination IP address is not in the firewall blocklist. In other words, the firewall determines whether the destination IP address can be allowed for further transmission. If the destination IP address is not in the firewall allowlist (“No” path from step706), the firewall may block the packet from further transmission (step708). The firewall may generate a response message back to the client indicating that the packet was blocked from further transmission. For example, a response message can be an HTTP response code403, which indicates that the destination IP address is forbidden from further access. If the destination IP address is in the firewall allowlist (“Yes” path from step706), the firewall can forward the network packet to the service provider (step710). In one implementation, the firewall forwards the allowed network packet to the load balancer (such as load balancer212ofFIG.2) of the service provider. Thereafter, the firewall terminates process700. FIG.8illustrates an example flowchart of a service provider (such as service provider208ofFIG.2) forwarding network packets to computing resources (such as resources210A/B/C/D ofFIG.2) in accordance with an embodiment. The process800ofFIG.8is initiated by the service provider receiving a packet from the firewall (step802). At step804, the load balancer component of the service provider (such as load balancer212ofFIG.2) determines a reverse proxy based on the destination IP address of the received packet. 
In some embodiments, the load balancer searches whether the IP address in the packet matches one of the stored destination IP addresses. For example, the load balancer submits a query in the routing table of the load balancer to determine whether there is a matching destination IP address. At step806, the load balancer of the service provider transmits the packet to the selected reverse proxy. At the reverse proxy component (such as reverse proxy214ofFIG.2) of the service provider, domain name associated with the packet is obtained (step808). In some implementations, the domain name obtained by the reverse proxy may be the same with the domain name initially submitted by the client (such as client202ofFIG.2) when generating the network packet. In other implementations, the domain name obtained by the reverse proxy may be different from the domain name of the original network packet. At step810, the reverse proxy determines the resource IP address based on the obtained domain name. As described herein above, the determination of the resource IP address may be based on submitting the domain name of the packet in a DNS server to resolve the resource IP address. After the resource IP address is identified, the reverse proxy may transform the destination IP address to the resource IP address then attach the new IP address to the packet (step812). At step814, the reverse proxy forwards the packet to the resource IP address. The process800terminates thereafter. FIG.9illustrates aspects of an example system900for implementing aspects in accordance with an embodiment. As will be appreciated, although a web-based system is used for purposes of explanation, different systems may be used, as appropriate, to implement various embodiments. In an embodiment, the system includes an electronic client device902, which includes any appropriate device operable to send and/or receive requests, messages, or information over an appropriate network904and convey information back to a user of the device. Examples of such client devices include personal computers, cellular or other mobile phones, handheld messaging devices, laptop computers, tablet computers, set-top boxes, personal data assistants, embedded computer systems, electronic book readers, and the like. In an embodiment, the network includes any appropriate network, including an intranet, the Internet, a cellular network, a local area network, a satellite network or any other such network and/or combination thereof and components used for such a system depend at least in part upon the type of network and/or system selected. Many protocols and components for communicating via such a network will not be discussed herein in detail. In an embodiment, communication over the network is enabled by wired and/or wireless connections and combinations thereof. In an embodiment, the network includes the Internet and/or other publicly addressable communications network, as the system includes a web server906for receiving requests and serving content in response thereto, although for other networks an alternative device serving a similar purpose could be used as would be apparent to one of ordinary skill in the art. 
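Referring back to the provisioning flow ofFIG.6, the service provider determines a reverse proxy, associates resources with it, assigns a destination IP address, and provides that address to the firewall allowlist. The sketch below mirrors those steps (602 through 610) under assumed names; the ReverseProxy class, the chosen address, and the allowlist set are illustrative assumptions only.

```python
# Sketch of the provisioning flow of FIG. 6 (steps 602-610); names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ReverseProxy:
    destination_ip: str = ""
    resources: dict = field(default_factory=dict)   # resource domain -> resource IP

def provision(resource_map: dict, firewall_allowlist: set) -> ReverseProxy:
    proxy = ReverseProxy()                           # step 602: determine a reverse proxy
    proxy.resources.update(resource_map)             # step 604: associate resources with it
    proxy.destination_ip = "198.51.100.10"           # steps 606-608: assign a destination IP
    firewall_allowlist.add(proxy.destination_ip)     # step 610: provide the address to the firewall
    return proxy

allowlist: set = set()
proxy = provision({"db.example.com": "10.0.3.17"}, allowlist)
print(proxy.destination_ip in allowlist)             # True: future packets can pass the firewall
```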
In an embodiment, the illustrative system includes at least one application server908and a data store910and it should be understood that there can be several application servers, layers or other elements, processes or components, which may be chained or otherwise configured, which can interact to perform tasks such as obtaining data from an appropriate data store. Servers, in an embodiment, are implemented as hardware devices, virtual computer systems, programming modules being executed on a computer system, and/or other devices configured with hardware and/or software to receive and respond to communications (e.g., web service application programming interface (API) requests) over a network. As used herein, unless otherwise stated or clear from context, the term “data store” refers to any device or combination of devices capable of storing, accessing and retrieving data, which may include any combination and number of data servers, databases, data storage devices and data storage media, in any standard, distributed, virtual or clustered system. Data stores, in an embodiment, communicate with block-level and/or object level interfaces. The application server can include any appropriate hardware, software and firmware for integrating with the data store as needed to execute aspects of one or more applications for the client device, handling some or all of the data access and business logic for an application. In an embodiment, the application server provides access control services in cooperation with the data store and generates content including, but not limited to, text, graphics, audio, video and/or other content that is provided to a user associated with the client device by the web server in the form of HyperText Markup Language (“HTML”), Extensible Markup Language (“XML”), JavaScript, Cascading Style Sheets (“CSS”), JavaScript Object Notation (JSON), and/or another appropriate client-side or other structured language. Content transferred to a client device, in an embodiment, is processed by the client device to provide the content in one or more forms including, but not limited to, forms that are perceptible to the user audibly, visually and/or through other senses. The handling of all requests and responses, as well as the delivery of content between the client device902and the application server908, in an embodiment, is handled by the web server using PHP: Hypertext Preprocessor (“PHP”), Python, Ruby, Perl, Java, HTML, XML, JSON, and/or another appropriate server-side structured language in this example. In an embodiment, operations described herein as being performed by a single device are performed collectively by multiple devices that form a distributed and/or virtual system. The data store910, in an embodiment, includes several separate data tables, databases, data documents, dynamic data storage schemes and/or other data storage mechanisms and media for storing data relating to a particular aspect of the present disclosure. In an embodiment, the data store illustrated includes mechanisms for storing production data912and user information916, which are used to serve content for the production side. The data store also is shown to include a mechanism for storing log data914, which is used, in an embodiment, for reporting, computing resource management, analysis or other such purposes. 
In an embodiment, other aspects such as page image information and access rights information (e.g., access control policies or other encodings of permissions) are stored in the data store in any of the above listed mechanisms as appropriate or in additional mechanisms in the data store910. The data store910, in an embodiment, is operable, through logic associated therewith, to receive instructions from the application server908and obtain, update or otherwise process data in response thereto and the application server908provides static, dynamic, or a combination of static and dynamic data in response to the received instructions. In an embodiment, dynamic data, such as data used in web logs (blogs), shopping applications, news services, and other such applications are generated by server-side structured languages as described herein or are provided by a content management system (“CMS”) operating on, or under the control of, the application server. In an embodiment, a user, through a device operated by the user, submits a search request for a certain type of item. In this example, the data store accesses the user information to verify the identity of the user, accesses the catalog detail information to obtain information about items of that type, and returns the information to the user, such as in a results listing on a web page that the user views via a browser on the user device902. Continuing with example, information for a particular item of interest is viewed in a dedicated page or window of the browser. It should be noted, however, that embodiments of the present disclosure are not necessarily limited to the context of web pages, but are more generally applicable to processing requests in general, where the requests are not necessarily requests for content. Example requests include requests to manage and/or interact with computing resources hosted by the system900and/or another system, such as for launching, terminating, deleting, modifying, reading, and/or otherwise accessing such computing resources. In an embodiment, each server typically includes an operating system that provides executable program instructions for the general administration and operation of that server and includes a computer-readable storage medium (e.g., a hard disk, random access memory, read only memory, etc.) storing instructions that, if executed (i.e., as a result of being executed) by a processor of the server, cause or otherwise allow the server to perform its intended functions. The system900, in an embodiment, is a distributed and/or virtual computing system utilizing several computer systems and components that are interconnected via communication links (e.g., transmission control protocol (TCP) connections and/or transport layer security (TLS) or other cryptographically protected communication sessions), using one or more computer networks or direct connections. However, it will be appreciated by those of ordinary skill in the art that such a system could operate in a system having fewer or a greater number of components than are illustrated inFIG.9. Thus, the depiction of the system900inFIG.9should be taken as being illustrative in nature and not limiting to the scope of the disclosure. The various embodiments further can be implemented in a wide variety of operating environments, which in some cases can include one or more user computers, computing devices or processing devices which can be used to operate any of a number of applications. 
In an embodiment, user or client devices include any of a number of computers, such as desktop, laptop or tablet computers running a standard operating system, as well as cellular (mobile), wireless and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols, and such a system also includes a number of workstations running any of a variety of commercially available operating systems and other applications for purposes such as development and database management. In an embodiment, these devices also include other electronic devices, such as dummy terminals, thin-clients, gaming systems and other devices capable of communicating via a network, and virtual devices such as virtual machines, hypervisors, software containers utilizing operating-system level virtualization and other virtual devices or non-virtual devices supporting virtualization capable of communicating via a network. In an embodiment, a system utilizes at least one network (such as network204ofFIG.2) that would be familiar to those skilled in the art for supporting communications using any of a variety of commercially-available protocols, such as Transmission Control Protocol/Internet Protocol (“TCP/IP”), User Datagram Protocol (“UDP”), protocols operating in various layers of the Open System Interconnection (“OSI”) model, File Transfer Protocol (“FTP”), Universal Plug and Play (“UPnP”), Network File System (“NFS”), Common Internet File System (“CIFS”) and other protocols. The network, in an embodiment, is a local area network, a wide-area network, a virtual private network, the Internet, an intranet, an extranet, a public switched telephone network, an infrared network, a wireless network, a satellite network, or any combination thereof. In an embodiment, a connection-oriented protocol is used to communicate between network endpoints such that the connection-oriented protocol (sometimes called a connection-based protocol) is capable of transmitting data in an ordered stream. In an embodiment, a connection-oriented protocol can be reliable or unreliable. For example, the TCP protocol is a reliable connection-oriented protocol. Asynchronous Transfer Mode (“ATM”) and Frame Relay are unreliable connection-oriented protocols. Connection-oriented protocols are in contrast to packet-oriented protocols such as UDP that transmit packets without a guaranteed ordering. In several embodiments, the system may further utilize a firewall (such as firewall206ofFIG.2) to control access of network packets being transmitted through the at least one network. For example, the protocol headers of the network packets can be inspected by the firewall and evaluated against a set of security rules in the ACL of the firewall to determine whether the network packet should be blocked or allowed through the network. In an embodiment, the system utilizes a web server that runs one or more of a variety of server or mid-tier applications, including Hypertext Transfer Protocol (“HTTP”) servers, FTP servers, Common Gateway Interface (“CGI”) servers, data servers, Java servers, Apache servers, and business application servers. 
In an embodiment, the one or more servers are also capable of executing programs or scripts in response to requests from user devices, such as by executing one or more web applications that are implemented as one or more scripts or programs written in any programming language, such as Java®, C, C# or C++, or any scripting language, such as Ruby, PHP, Perl, Python or TCL, as well as combinations thereof. In an embodiment, the one or more servers also include database servers, including without limitation those commercially available from Oracle®, Microsoft®, Sybase®, and IBM® as well as open-source servers such as MySQL, Postgres, SQLite, MongoDB, and any other server capable of storing, retrieving, and accessing structured or unstructured data. In an embodiment, a database server includes table-based servers, document-based servers, unstructured servers, relational servers, non-relational servers, or combinations of these and/or other database servers. In an embodiment, the system includes a variety of data stores and other memory and storage media as discussed above which can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In an embodiment, the information resides in a storage-area network (“SAN”) familiar to those skilled in the art and, similarly, any necessary files for performing the functions attributed to the computers, servers or other network devices are stored locally and/or remotely, as appropriate. In an embodiment where a system includes computerized devices, each such device can include hardware elements that are electrically coupled via a bus, the elements including, for example, at least one central processing unit (“CPU” or “processor”), at least one input device (e.g., a mouse, keyboard, controller, touch screen, or keypad), at least one output device (e.g., a display device, printer, or speaker), at least one storage device such as disk drives, optical storage devices, and solid-state storage devices such as random access memory (“RAM”) or read-only memory (“ROM”), as well as removable media devices, memory cards, flash cards, etc., and various combinations thereof. In an embodiment, such a device also includes a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device, etc.), and working memory as described above where the computer-readable storage media reader is connected with, or configured to receive, a computer-readable storage medium, representing remote, local, fixed, and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information. In an embodiment, the system and various devices also typically include a number of software applications, modules, services, or other elements located within at least one working memory device, including an operating system and application programs, such as a client application or web browser. In an embodiment, customized hardware is used and/or particular elements are implemented in hardware, software (including portable software, such as applets), or both. In an embodiment, connections to other computing devices such as network input/output devices are employed. 
In an embodiment, storage media and computer readable media for containing code, or portions of code, include any appropriate media used in the art, including storage media and communication media, such as, but not limited to, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules or other data, including RAM, ROM, Electrically Erasable Programmable Read-Only Memory (“EEPROM”), flash memory or other memory technology, Compact Disc Read-Only Memory (“CD-ROM”), digital versatile disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices or any other medium which can be used to store the desired information and which can be accessed by the system device. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims. Other variations are within the spirit of the present disclosure. Thus, while the disclosed techniques are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific form or forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention, as defined in the appended claims. The use of the terms “a” and “an” and “the” and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. Similarly, use of the term “or” is to be construed to mean “and/or” unless contradicted explicitly or by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. The term “connected,” when unmodified and referring to physical connections, is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein and each separate value is incorporated into the specification as if it were individually recited herein. The use of the term “set” (e.g., “a set of items”) or “subset” unless otherwise noted or contradicted by context, is to be construed as a nonempty collection comprising one or more members. Further, unless otherwise noted or contradicted by context, the term “subset” of a corresponding set does not necessarily denote a proper subset of the corresponding set, but the subset and the corresponding set may be equal. 
The use of the phrase “based on,” unless otherwise explicitly stated or clear from context, means “based at least in part on” and is not limited to “based solely on.” Conjunctive language, such as phrases of the form “at least one of A, B, and C,” or “at least one of A, B and C,” (i.e., the same phrase with or without the Oxford comma) unless specifically stated otherwise or otherwise clearly contradicted by context, is otherwise understood with the context as used in general to present that an item, term, etc., may be either A or B or C, any nonempty subset of the set of A and B and C, or any set not contradicted by context or otherwise excluded that contains at least one A, at least one B, or at least one C. For instance, in the illustrative example of a set having three members, the conjunctive phrases “at least one of A, B, and C” and “at least one of A, B and C” refer to any of the following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}, and, if not contradicted explicitly or by context, any set having {A}, {B}, and/or {C} as a subset (e.g., sets with multiple “A”). Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of A, at least one of B and at least one of C each to be present. Similarly, phrases such as “at least one of A, B, or C” and “at least one of A, B or C” refer to the same as “at least one of A, B, and C” and “at least one of A, B and C” refer to any of the following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}, unless differing meaning is explicitly stated or clear from context. In addition, unless otherwise noted or contradicted by context, the term “plurality” indicates a state of being plural (e.g., “a plurality of items” indicates multiple items). The number of items in a plurality is at least two, but can be more when so indicated either explicitly or by context. Operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. In an embodiment, a process such as those processes described herein (or variations and/or combinations thereof) is performed under the control of one or more computer systems configured with executable instructions and is implemented as code (e.g., executable instructions, one or more computer programs or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. In an embodiment, the code is stored on a computer-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors. In an embodiment, a computer-readable storage medium is a non-transitory computer-readable storage medium that excludes transitory signals (e.g., a propagating transient electric or electromagnetic transmission) but includes non-transitory data storage circuitry (e.g., buffers, cache, and queues) within transceivers of transitory signals. In an embodiment, code (e.g., executable code or source code) is stored on a set of one or more non-transitory computer-readable storage media having stored thereon executable instructions that, when executed (i.e., as a result of being executed) by one or more processors of a computer system, cause the computer system to perform operations described herein. 
The set of non-transitory computer-readable storage media, in an embodiment, comprises multiple non-transitory computer-readable storage media and one or more of the individual non-transitory storage media of the multiple non-transitory computer-readable storage media lack all of the code while the multiple non-transitory computer-readable storage media collectively store all of the code. For example, a first non-transitory computer-readable storage medium includes instructions to be executed by a load balancer of the service provider (such as load balancer212ofFIG.2), a second non-transitory computer-readable storage medium includes instructions to be executed by a reverse proxy of the service provider (such as reverse proxy214ofFIG.2), and a third non-transitory computer-readable storage medium includes instructions to be executed by a firewall (such as firewall206ofFIG.2). In an embodiment, the executable instructions are executed such that different instructions are executed by different processors; for example, a non-transitory computer-readable storage medium stores instructions and a main CPU executes some of the instructions while a graphics processing unit executes other instructions. In an embodiment, different components of a computer system have separate processors and different processors execute different subsets of the instructions. Accordingly, in an embodiment, computer systems are configured to implement one or more services that singly or collectively perform operations of processes described herein and such computer systems are configured with applicable hardware and/or software that enable the performance of the operations. Further, a computer system that implements an embodiment of the present disclosure is, in one embodiment, a single device and, in another embodiment, a distributed computer system comprising multiple devices that operate differently such that the distributed computer system performs the operations described herein and such that a single device does not perform all operations. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments of the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention. Embodiments of this disclosure are described herein, including the best mode known to the inventors for carrying out the invention. Variations of those embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventors expect skilled artisans to employ such variations as appropriate and the inventors intend for embodiments of the present disclosure to be practiced otherwise than as specifically described herein. Accordingly, the scope of the present disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the scope of the present disclosure unless otherwise indicated herein or otherwise clearly contradicted by context. 
All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.
68,599
11863529
DETAILED DESCRIPTION The present invention relates generally to networking and more particularly to the use of private cloud networks. The following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a patent application and its requirements. Various modifications to the embodiments and the generic principles and features described herein will be readily apparent to those skilled in the art. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features described herein. The term “Client” is interchangeable with “Smart Device Client” throughout discussion in the context. The term “router” is in general interchangeable with “gateway”, “access point” and/or “NAT” (network address translation) in the discussion. A system and method in accordance with the present invention addresses the following challenges in a consumer oriented environment for a Smart Device Client in a WAN to be able to obtain services from a Private Cloud Storage Server (PCSS) or any Private Cloud Server (PCS):1. Access the Private Cloud Server (PCS) at anytime from anywhere.2. Access the PCS behind the firewall with fixed or dynamic IP address.3. Require no public cloud based routing server in the WAN.4. Require no additional router setup in the LAN.5. Authenticate with the PCS.6. Establish a secure communication channel with the PCS. If such challenges can be met and resolved, the deployment of the Private Cloud Server or service will increase exponentially, due to plug and play simplicity and availability. The technical and business concern will also be removed by not utilizing a public cloud based routing server. The Private Cloud Server being utilized for storage, remote desktop service and Internet of Things (IoT) becomes very affordable and ubiquitous in the private cloud infrastructure. In the private cloud environment, if there are more than one private cloud servers or services co-existing at the same time, it is advantageous to separate out the functions of Private Cloud Server into two functional blocks including Private Cloud Routing Service (PRS) and Private Network Service (PNS). The PNS is designed to be managed and accessed on the private network environment, be it wired or wireless, by the Smart Device Client. Examples of a PNS include application program server to provide remote desktop protocol (RDP), VNC, office tools, media player, and other user specific applications. The PNS may also function as a storage server that contains multiple terabytes of storage serving the private cloud. Functions of the PRS of the multiple Private Cloud Routing Servers can then be aggregated together into just one Private Cloud Routing Server (PCRS). The PCRS can generally be referred to as a Private Cloud Router. A system and method in accordance with the present invention addresses the following challenges in the consumer oriented environment for utilizing the Smart Device Client in the WAN to be able to manage and access Private Network Service (PNS) from a Private Cloud Routing Server (PCRS):1. Access the Private Cloud Routing Server (PCRS) at anytime from anywhere.2. Access the PCRS behind the firewall with fixed or dynamic IP address.3. Require no outside or public cloud based routing server in the WAN.4. Require no additional router setup in the LAN.5. Authenticate with the PCRS.6. 
Establish a secure communication channel with the PNS to manage and access. If the PCRS can fulfill the above-mentioned challenges, heterogeneous Private Cloud Servers from different manufacturers and vendors can then be broken down into simpler Private Network Services, removing the complexity of private cloud setup, configuration and access. The purpose of a system and method in accordance with the invention is to provide a Private Cloud Routing Server (PCRS), Private Network Service and Client architecture without utilizing a routing server. The system and method in accordance with the present invention addresses the above-identified challenges to allow a Client to access the Private Network Service (PNS) from anywhere at any time. The system and method also accesses the PNS behind a firewall with a fixed or dynamic IP, requires no additional router setup and no public cloud based routing server in the WAN, authenticates with the PCRS, and establishes a secure communication channel directly with the PNS. As shown inFIG.1, a cloud network infrastructure includes a public cloud100, a public cloud server113, a public routing server112, a VPN routing server114, a Smart Device Client101in the WAN, a Router_P102and a Router_S103. The Router_S103connects between a LAN105and the Internet in public cloud100. The Router_P102connects between a LAN104and the Internet in public cloud100. Behind the LAN104, there are Smart Device Clients106,107and a Private Cloud Server (PCS)108. Behind the LAN105, there are Smart Device Clients109,110and111. The Smart Device Client can be a PC, notebook, tablet, eBook reader, GPS, smart TV, set top box, MP3 player, or any networkable embedded device. They are denoted in the Cloud Network Infrastructure as101,106,107,109,110, and111. Any one of the Smart Device Clients above is interchangeable in the context and discussion. The focus of this discussion is the Smart Device Client109, as the representative in this context. Physically, there are three scenarios in which a Smart Device Client101,107or109can connect to the Private Cloud Server108. First, a Smart Device Client107determines whether the target is in the locally accessible LAN104and decides to connect to the Private Cloud Server108directly. Second, the Smart Device Client101determines the target is not in the locally accessible LAN104and decides to connect through the WAN to the public cloud100. The WAN locates the Router_P102and LAN104, and then connects to the Private Cloud Server108. Third, the Smart Device Client109determines the target is not in the locally accessible LAN105and decides to pass through LAN105, Router_S103, and connects to the public cloud100in the WAN. The Smart Device Client109then locates Router_P102, LAN104and connects to the Private Cloud Server108. The first and the second scenarios are two special cases and derivatives of the third scenario. Therefore, it is beneficial to focus on the third scenario, which is broader in scope and complexity. The routing server message box (not shown) or client message box215can be hosted inside an email server, text message server, web server, or any kind of server that can host secure messages for information exchange between the Private Cloud Routing Server208, and the Private Cloud Call-Back Server216, as a server, and the Smart Device Client206,207,209,210,211,201,221, as a client. 
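The message box arrangement described above can be pictured with a short sketch. This is a minimal illustration only, assuming an in-memory queue stands in for the hosted email, text message or web server; the class and method names (MessageBox, post, poll) are hypothetical and are not taken from the specification.

```python
# Minimal sketch (not the patented implementation) of the message-box idea:
# the server side (PCRS/PCCBS) and a Smart Device Client never contact a public
# routing server; each drops messages into a mailbox the other side polls.
import queue
from typing import Optional


class MessageBox:
    """Stands in for an email/text/web message server hosting a secure mailbox."""

    def __init__(self):
        self._messages = queue.Queue()

    def post(self, sender: str, payload: dict) -> None:
        # In the real infrastructure this would be an authenticated write to the
        # hosted mailbox account; here it is just an in-memory queue.
        self._messages.put({"from": sender, "payload": payload})

    def poll(self) -> Optional[dict]:
        try:
            return self._messages.get_nowait()
        except queue.Empty:
            return None


# Illustrative exchange: a client asks to connect, the server answers via the boxes.
client_box = MessageBox()    # stands in for the Client Message Box
server_box = MessageBox()    # stands in for the Call-Back Server Message Box

server_box.post("smart_device_client", {"request": "connect", "reply_to": "client_box"})
request = server_box.poll()
if request and request["payload"]["request"] == "connect":
    client_box.post("pccbs", {"status": "invitation", "detail": "join VLAN"})
```

The point of the sketch is the design choice itself: neither side needs a public routing server in the WAN, only a mailbox that both the server side and the Smart Device Client can reach and control.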
The Call-Back Server Message Box (not shown) or Client Message Box message_box_S215is accessible and under the secure and private control of either the Private Cloud Routing Server208and the Private Cloud Call-Back Server216, as a server, or the Smart Device Client206,207,209,210,211,201,221, as a client. The security and business model of the message box is well understood and expected in the industry by the user. If for any reason either message box is down, it can be replaced or redeployed immediately without jeopardizing the communication between the server and the client in the private cloud infrastructure. FIG.2shows a block diagram of a first embodiment of a Cloud Network Infrastructure for a secure connection mechanism between the Private Cloud Routing Server, the Private Cloud Call-Back Server, and the Smart Device Client for exploring and accessing a Private Network Service across the public cloud. The Smart Device Client201,211and221, through the communication path222,224and223respectively, are able to locate the Private Cloud Routing Server208with the mechanism disclosed inFIG.5through15. The Private Cloud Routing Server208and the Private Cloud Call-Back Server216then build a virtual LAN, VLAN,240and a virtual LAN, VLAN,2400, allowing the authorized Smart Device Clients201,211and221to join in as members of the virtual LAN, VLAN,240and the virtual LAN, VLAN,2400. The Smart Device Client201through the installed program can initiate a private and secure communication as a host. The Smart Device Client211or221through the installed program can receive the communication invitation as a guest and join the private and secure communication session with the host Smart Device Client201. As shown inFIG.2, when a Smart Device Client201wants to start a communication session as a host, the program installed on the host Smart Device Client first locates and logs in to the Private Cloud Call-Back Server (PCCBS)216through the communication path222. After the Private Cloud Call-Back Server216locates the Private Cloud Routing Server208, it joins the virtual LAN, VLAN,240. The Smart Device Client commits to join chat communication as a host201. The program allows the Smart Device Client201to create and host a communication session. The program broadcasts the host session to invite communication guest221. Afterwards, the program starts scanning for recognizable guest221. Once the guest is authenticated, the Smart Device Client201can start private and secure communication as a host with the authenticated guest Smart Device Client221. The private and secure communication includes video, audio, text or application. The application can be a program, utility, operation or remote desktop that is recognizable by both host and guest. If the Smart Device Client211or221wants to join a communication session as a guest, the program installed on the guest Smart Device Client first locates and logs in to the Private Cloud Call-Back Server (PCCBS)216through the communication path224or223respectively. After the Private Cloud Call-Back Server216locates the Private Cloud Routing Server208, it joins the virtual LAN, VLAN,240under the server. The Smart Device Client commits to join chat communication as a client. The program waits for a communication invitation. Once it receives a communication invitation, the Smart Device Client211or221may join a communication session as a guest. The program then starts scanning for recognizable host. 
Upon identifying the host, the program goes through the communication log-in authentication prompted by the host. Once authenticated, the Smart Device Client can join the communication session. The Smart Device Client211,221starts private and secure communication as a guest with the host Smart Device Client201. The private and secure communication includes video, audio, text or application. The application can be a program, utility, operation or remote desktop that is recognizable by both host and guest. In another embodiment of the present invention, the Smart Device Client can establish a private and secure communication with any service that is reachable on the physical LAN, LAN1250or virtual LAN, VLAN,240and virtual LAN, VLAN,2400under the Private Cloud Routing Server and the Private Cloud Call-Back Server. As shown inFIG.2, once the Smart Device Client201,211or221locates and logs-in to the Private Cloud Call-Back Server216, it may access any Private Network Service228that is reachable on the physical LAN, LAN1250, LAN2260, and virtual LAN, VLAN,240and virtual LAN, VLAN,2400under the Private Cloud Routing Server and the Private Cloud Call-Back Server through a secure communication path225. The Private Network Service includes audio, video contents, live or archived information, and execution of applications, social media, messaging, email, storage, backup, calendar, contact, synchronization, sharing, remote desktop, Internet of Things (IoT) and others. In an embodiment, the communication path225between the Private Cloud Routing Server (PCRS), the Private Cloud Call-Back Server (PCCBS) and the Smart Device Client may include several sets of commands:1. Initialize and Provision a PCRS (by an Admin from a PCRS LAN)2. Initialize and Provision a PCCBS (by an Admin from WAN)3. Create a PCRS Client (by the Admin from a PCRS LAN)4. Register to a PCCBS (by a PCCBS Client from WAN)5. Connect to a PCCBS (by a PCCBS Client from WAN)6. View a PCCBS Client (by the administrator from WAN)7. Reset a PCCBS peer-to-peer password and status (by the administrator from the WAN)8. Change the PCCBS peer-to-peer password (by the PCCBS Client through a virtual private network (VPN) from WAN) A number of entities are introduced to allow for the secure communication path225including but not limited to: Administrator, Admin Device, PCRS Utility, PCCBS Utility, PCRS Device Client, PCCBS Device Client, Invitee and Invitee Device. These entities are defined herein below. Utility is a utility running in the Private Cloud Routing Server. Admin Device is a device that administrator uses to configure the PCRS. PCRS Device Client is a device that an Invitee uses to communicate with the PCRS. Invitee is a physical party invited by the Admin to access the PCRS service and resources. Invitee Device is a Smart Device Client that the Invitee uses to communicate with the PCRS. A number of terms are introduced including Access_Code, Code_Expiration, Address_Invitee, Address_PCRS_Client, Hash_Password_PCRS_P2P, Password_PCRS_P2P_Expiration, and Status in PCRS Client database. These terms are defined hereinbelow. Access_Code is an invitee access code sent by Admin through PCRS via message box216. Code_Expiration is an expiration date/time of the access code for security purpose. Address_Invitee is a message box address of the invitee. Address_PCRS_Client is a message box address of the PCRS Client which may be different from the invitee. Hash_Password_PCRS_P2P is a hashed password for the PCRS peer-to-peer communication. 
It is stored in the PCRS Client database. The actual password Password_PCRS_P2P is never stored in PCRS for security consideration. The Password_PCRS_P2P_Expiration is the expiration of the Password_PCRS_P2P. The Status is the Active, Inactive or Deleted status of the PCRS Client record in the PCRS Client database. Other terms not associated with the PCRS client database are: Address_PCRS, Password_PCRS, Password_PCRS_Client and Virtual LAN subnet. They are defined herein below. Address_PCRS and Password_PCRS are used to configure the message box account of the PCRS. They are used only once during initialization and provisioning of PCRS and is never stored for security purpose. Address_PCRS_Client and Password_PCRS_Client are used to configure the message box account of the PCRS Client. They are used only once during creation of PCRS Client in the database. While the Address_PCRS_Client is stored in the database, the Password_PCRS_Client is never stored for security purpose. Virtual LAN subnet is the subnet setting of the VPN (virtual private network). It is configurable and changeable to specify the private subnet for security purpose. As shown inFIG.2, the Private Cloud Routing Server (PCRS)208contains a PCRS_Utility270, which in turn contains a PCRS Client database271and a Routing Server Message Box utility272. The PCRS Client database271contains the registered list of PCRS clients. The message box utility272is able to communicate with the Call-Back Server Message Box (not shown). The Admin Device273is itself a Smart Device Client207. It contains an application utility PCRS_App274, which in turn contains a PCRS Server database275and a Client Message Box utility276. The PCRS Server database275contains the registered list of PCRS servers. The message box utility276is able to communicate with the Client Message Box215. The PCCBS Device Client201is itself a Smart Device Client. It contains an application utility PCCBS_App278, which in turn contains a PCCBS Server database279and a Client Message Box utility280. The PCCBS Server database279contains the registered list of PCCBS servers. The message box utility280is able to communicate with the Client Message Box215. The Invitee Device281is itself a Smart Device Client221. It contains a Client Message Box utility282. The message box utility282is able to communicate with the Client Message Box215. The administrator uses the utility PCRS_App274to initialize and provision the PCRS208, as shown inFIG.5, from Admin Device207. The Admin Device207is located on the same physical LAN204as that of PCRS208, in order to conduct configuration for security purpose to avoid hacking exposure on Internet or WAN. The administrator first configures the PCRS Routing server message box credentials by setting its account name and password. The PCRS Routing server message box credentials are then sent to PCRS Utility270in the PCRS208. The Private Cloud Call-Back Server (PCCBS)216contains a PCCBS_Utility2700, which in turn contains a PCCBS Client database2710and a Routing Server Message Box utility2720. The PCCBS Client database2710contains the registered list of PCCBS clients. The message box utility2720is able to communicate with the Call-Back Server Message Box (not shown). The utility PCCBS_Device_App278is also used by the administrator277to create a PCCBS Client account, as shown inFIG.6. The administrator277, which is itself a PCCBS device client201, then sets the Invitee notification address in PCCBS_Device_App605. 
It then asks the PCCBS to send a connection invitation through the Call-Back Server Message Box utility2720, to Call-Back Server Message Box (not shown), through Client Message Box215, and eventually to the Invitee Device281, and its Client Message Box Utility282. Note that Call-Back Server Message Box (not shown) and Client Message Box215are both hosted inside message box servers, such as email servers, web servers and message servers. Both Call-Back Server Message Box and Client Message Box can logically be the same or different. After the invitee receives the invitation620, it retrieves PCCBS_Device_App from the PCCBS App link621and installs PCCBS_App on a desired PCCBS Device Client201. The Invitee device281is not necessarily the same physical device as the PCCBS Device Client201. The administrator has to know the invitee's message box address Address_Invitee605, in order to send out the invitation. On the desired PCCBS Device Client201, the invitee launches the PCCBS_Device_App700and proceeds to register to a PCCBS701as shown inFIG.7. The invitee's role at this point changes to a PCCBS Client on the PCCBS Device Client201. The PCCBS Client then configures its Client Message Box credentials by setting its account name and password and registers the credentials to the Client Message Box215. The previously received Address_PCCBS and Access_Code are then retrieved from the Invitee Device281and sent along with the Client Message Box account Address_PCCBS_Client to PCCBS710via740. After authentication by the PCCBS Utility2700inside PCCBS216, a set of peer-to-peer connection credentials including Password_PCCBS_P2P are generated714. The actual Password is sent to the Invitee Device281through the Client Message Box215. The hashed password, along with other client credentials, is stored in the PCCBS Client database. The actual client P2P password is never stored in PCCBS216for security reasons. However, its hashed value is stored instead for comparison in authentication716. As soon as the PCCBS Device Client201receives acknowledgement from the PCCBS216for registration707, it records the PCCBS identity Address_PCCBS in the PCCBS server database279in the PCCBS_Device_App278. There are a total of four commands provided in the PCCBS_Device_App for the Admin Device: “Initialize and Provision”, “Create a Client”, “View PCCBS Client” and “Reset PCCBS P2P Password/Edit Attributes”, as shown inFIGS.6,9and10. Whenever the Admin operation is involved, only the access to the PCCBS from the PCCBS virtual LAN, VLAN, (be it physical or virtual) is allowed for security reasons. Due to the limited access, network traffic sniffing and hacking are avoided by conducting setting and configuration of PCCBS solely on the PCCBS virtual LAN, VLAN. There are three commands provided in the PCCBS_Device_App for the PCCBS Device Client: “Register to a PCCBS”, “Change P2P Password” and “Connect to PCCBS”, as shown inFIGS.7,8and11. In the case of the “Register to a PCCBS” command, the PCCBS Device Client is able to run PCCBS_Device_App and connect to the PCCBS Utility from either the WAN or the PCCBS virtual LAN, VLAN, because the communication exchange between the PCCBS Device Client and the PCCBS Utility for registration to PCCBS is through Client Message Box215and Call-Back Server Message Box (not shown), as shown inFIG.7. 
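The invitation handling described above (Access_Code, Code_Expiration, Address_Invitee) can be illustrated with a short sketch. It assumes the Access_Code is a short random token paired with an expiration timestamp, which the specification does not mandate; the helper names create_invitation and access_code_is_valid are illustrative only and do not appear in the specification.

```python
# Sketch of generating and validating an invitee access code with an expiration,
# in the spirit of the "Create a Client" and registration flows described above.
import secrets
from datetime import datetime, timedelta, timezone


def create_invitation(address_invitee: str, valid_hours: int = 24) -> dict:
    """Build an invitation entry with an Access_Code and its Code_Expiration."""
    return {
        "Access_Code": secrets.token_urlsafe(8),
        "Code_Expiration": datetime.now(timezone.utc) + timedelta(hours=valid_hours),
        "Address_Invitee": address_invitee,
        "Status": "Active",
    }


def access_code_is_valid(entry: dict, presented_code: str) -> bool:
    """Authenticate a presented Access_Code and check its Code_Expiration."""
    return (
        secrets.compare_digest(entry["Access_Code"], presented_code)
        and datetime.now(timezone.utc) < entry["Code_Expiration"]
    )


invite = create_invitation("invitee@example.com")
print(access_code_is_valid(invite, invite["Access_Code"]))  # True until expiration
```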
In the case of “Change P2P Password” command, the PCCBS Device Client has to run PCCBS_Device_App on PCCBS virtual LAN, VLAN, after secure VPN connection from WAN, because the P2P Password can only be reset on the PCCBS virtual LAN, VLAN, for security reason. The only way for the PCCBS Device Client to connect to PCCBS virtual LAN, VLAN, is through a secure VPN connection to the virtual LAN of PCCBS, as shown inFIG.11. In the case of “Connect to PCCBS” command, the PCCBS Device Client has yet to connect to the PCCBS from anywhere either on WAN or PCCBS virtual LAN, VLAN. The consequence of this command on the PCCBS_App is the prerequisite for any secure and private connection between the PCCBS Device Client and the PCCBS, as is shown inFIG.8. The private cloud call-back server216acts as a middleman to relay communication between the smart device client221,201,211and the private cloud routing server218. It will call back the private cloud routing server on demand based on the smart device client request. FIG.3shows a block diagram of a second embodiment of the invention. The Private Cloud Routing Server308connects to the LAN of a Router_P302, in a manner similar to the way Private Cloud Routing Server208connects to the LAN of a Router_P202inFIG.2. The PCRS308also has a physical LAN LAN2360connecting downstream. A Private Network Service336and a Smart Device Client335are connected downstream. The Private Network Service336is accessible through the communication path326, connecting through LAN334to Private Cloud Routing Server308. As long as the virtual LAN340, the physical LAN LAN1350, and physical LAN LAN2360are all explorable and accessible by the Smart Device Clients311,310,309,301,321,306, and335across the cloud through the Private Cloud Call-Back Server316, and the Private Cloud Routing Server308, all Private Network Service328,336, and Smart Device Client306,335become accessible. FIG.4shows a block diagram of a third embodiment of the invention. The Private Cloud Routing Server408connects to the cloud and has a public_IP_P417. The PCRS408also has a physical LAN LAN2460connecting downstream. A Private Network Service436, and a Smart Device Client435are connected downstream. The Private Network Service436is accessible through the communication path426, connecting through LAN434to Private Cloud Routing Server408. As long as the virtual LAN440, the physical LAN LAN2460are all explorable and accessible by the Smart Device Clients411,410,409,401,421, and435across the cloud through the Private Cloud Call-Back Server416, and the Private Cloud Routing Server408, all Private Network Service436, and Smart Device Client435become accessible. FIG.5shows the communication flow of the Initializing and Provisioning of the Private Cloud Routing Server by the PCRS Admin in accordance with the present invention. As shown inFIG.5, from the PCRS Admin Device standpoint, first connect the PCRS Admin device to the PCRS network on LAN, via step500. Then, open PCRS_Device_App from PCRS LAN, via step501. Thereafter, discover and select PCRS Address_PCRS on LAN, via step502. Then the “Initialize and Provision” command on PCRS_Device_App is selected, via step503. Thereafter, the PCRS is configured by setting address, password (Address_PCRS, Password_PCRS) as its identity, via step504. Then the PCRS is logged in with Admin credentials (“Initialize and Provision”, Admin_name, Admin_password, Address_PCRS, Password_PCRS), via step505. The credentials are sent to PCRS Utility510, via step540. 
Thereafter, the Admin waits for PCRS authentication, via step506. Then the Virtual LAN subnet and PCRS App link are configured, via step507. This information is sent to the PCRS Utility514, via step542. Thereafter, the PCRS is joined to the existing access point router as a client, if desired, via step508. Thereafter this information is sent to PCRS Utility516, via step543. From PCRS Utility standpoint, accept PCRS Admin credentials (“Initialize and Provision”, Admin_name, Admin_password, Address_PCRS, and Password_PCRS), via step510. Thereafter, the Admin credentials (Admin_name, Admin_password) are authenticated, via step511. Thereafter the credentials are sent to Admin Device506, via step541. Then (Address_PCRS, Password_PCRS) are stored as the identity for PCRS, via step512. Then (Address_PCRS, Password_PCRS) are registered to a Routing Server Message Box, via step513. Thereafter, the Virtual LAN subnet and PCRS App link are stored, via step514. Thereafter the PCRS_Profile file is generated and saved including interface protocol, certificates and keys, via step515. Finally, an existing access point router as a client is joined, if desired, via step516. FIG.6shows the communication flow of creating a client for Private Cloud Call-Back Server by the PCCBS Admin in accordance with the present invention. From PCCBS Admin Device201standpoint, first the PCCBS_Device_App from WAN is opened, via step600. Next, a PCCBS216at Address_PCCBS is discovered and selected, via step601. Thereafter a “Create a Client” command on PCCBS_Device_App is selected, via step602. Thereafter an Invitee notification address Address_Invitee is set, via step603. Then the PCCBS216is logged in with Admin credentials (“Create a Client”, Admin_name, Admin_password, Address_Invitee), via step604. Thereafter the credentials are sent to a PCCBS_Device Utility, via step640. Thereafter the administrator277waits for PCCBS authentication, via step605. From the PCCBS_Device Utility standpoint, first the PCCBS Admin credentials (“Create a Client”, Admin_name, Admin_password, Address_Invitee) are accepted, via step610. Thereafter, the Admin credentials (Admin_name, Admin_password) are authenticated, via step611. Then the credentials are sent to the Admin Device via step641. Next, an Access_Code and Code_Expiration for Access_Code are generated, via step612. Thereafter, (Access_Code, Code_Expiration, Address_Invitee) is stored into entry (Access_Code, Code_Expiration, Address_Invitee, Address_PCCBS_Device_Client, Hash_Password_PCCBS_Device_P2P, Password_PCCBS_Device_P2P_Expiration, Status) in PCCBS_Device Client database, via step613. Then an Invitation to Invitee notification address Address_Invitee with (PCCBS_Device app link, Address_PCCBS_Device, Access_Code and Code_Expiration) is sent, via step614. The invitation is sent to the Invitee620, via step642. From Invitee Device standpoint, accept invitation on Address_Invitee with PCCBS_Device app link, Address_PCCBS_Device, Access_Code and Code_Expiration, via step620. Then PCCBS_Device_App is retrieved from PCCBS_Device app link, via step621. Finally the PCCBS_Device_App is installed on the PCCBS Device Client201,209,210or211, via step622. FIG.7shows the communication flow of Registering to a Private Cloud Call-Back Server by a PCCBS Device Client in accordance with the present invention. From the PCCBS Device Client standpoint, the PCCBS_Device_App from the WAN or the PCRS LAN is opened, via step700. Next, the PCCBS_Device Client address (Address_PCCBS_Device_Client) is created, if necessary (not shown). 
Next, “Register a Private Cloud Call-Back Server” command on the PCCBS_Device_App is selected, via step701. Next, if the PCCBS_Device Client is not yet configured, the Address_PCCBS_Device_Client and the Password_PCCBS_Device_Client are set, via step702, where the Password_PCCBS_Device_P2P is the message box password associated with message box (not shown) address for client at the Address_PCCBS_Device_Client for peer-to-peer communication. Next, the Address_PCCBS_Device_Client and the Password_PCCBS_Device_Client are registered to Client Message Box, via step702. The Address_PCCBS_Device and the Access_Code are then retrieved from Invitee, via step703. The information is originally received by the invitee device620. Next, the Address_PCCBS_Device and the Access_Code are sent to the PCCBS through client message box with the Client credentials (“Register a Private Cloud Call-Back Server”, Address_PCCBS_Device, Address_PCCBS_Device_Client, Access_Code), via step704. Then the Address_PCCBS_Device and the Access_Code are sent to the PCCBS Device710, via step740. Next, the PCCBS Device Client waits for the PCCBS authentication through client message box, via step705. Then the PCCBS Device Client waits for the PCCBS registration complete acknowledgement through client message box, via step706. Next, the Address_PCCBS_Device entry in the PCCBS_Device Server database is registered on the PCCBS_Device_App if it is a new entry, via step707. From the PCCBS_Device Utility standpoint, the PCCBS_Device Client credentials (“Register a Private Cloud Call-Back Server”, Address_PCCBS_Device, Address_PCCBS_Device_Client, Access_Code) are accepted, via step710. Verification is made to check if the Address_PCCBS_Device_Client is in the PCCBS_Device Client database, via step712. If so, Invitee's designated PCCBS_Device Client address (Address_PCCBS_Device_Client) is acknowledged with the PCCBS_Device address (Address_PCCBS_Device), via step719, then return. Otherwise, the Access_Code is authenticated, via step712. Next, the Code_Expiration on Access_Code is authenticated in the PCCBS_Device Client database, via step713. Next, the Code_Expiration on the Access_Code is sent to the PCCBS Device Client705via741. Next, (Password_PCCBS_Device_P2P, Password_PCCBS_Device_P2P_Expiration, Status) associated with (Access_Code, Code_Expiration, Address_Invitee, Address_PCCBS_Device_Client) are generated, via step714. Next, the hashed value of the Password_PCCBS_Device_P2P is saved as Hash_Password_PCCBS_Device_P2P715. Next, (Address_PCCBS_Device_Client, Hash_Password_PCCBS_Device_P2P, Password_PCCBS_Device_P2P_Expiration, Status) are stored into entry (Access_Code, Code_Expiration, Address_Invitee, Address_PCCBS_Device_Client, Hash_Password_PCCBS_Device_P2P, Password_PCCBS_Device_P2P_Expiration, Status) in the PCCBS_Device Client database, via step716. Next, the Password_PCCBS_Device_P2P is sent to Invitee notification address at Address_Invitee, via step717. Next, the Password_PCCBS_Device_P2P is sent to Invitee720, via step743. Next, the Password_PCCBS_Device_P2P is cleared, via step718. Next, Invitee's designated PCCBS_Device Client address (Address_PCCBS_Device_Client) is acknowledged with PCCBS_Device address (Address_PCCBS_Device), via step719. Next, Invitee's designated PCCBS_Device Client address is sent to the PCCBS Device Client706, via step744. From Invitee Device point of view, the Password_PCCBS_Device_P2P is accepted and saved for future use, via step720. 
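A minimal sketch of the hashed peer-to-peer password handling in the registration flow above follows. The salted SHA-256 scheme is an assumption chosen for illustration; the specification only requires that the actual Password_PCCBS_Device_P2P is never stored and that its hashed value is kept for later comparison. The helper names register_client and authenticate are hypothetical.

```python
# Sketch: the server keeps only Hash_Password_PCCBS_Device_P2P plus a salt; the
# plain P2P password is generated, handed out once, and never stored server-side.
import hashlib
import os
import secrets

client_db = {}  # keyed by Address_PCCBS_Device_Client


def register_client(address_client: str) -> str:
    """Generate the P2P password, store only its salted hash, return the password
    so it can be delivered to the Invitee via the message box and then cleared."""
    password = secrets.token_urlsafe(12)
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    client_db[address_client] = {
        "Hash_Password_PCCBS_Device_P2P": digest,
        "salt": salt,
        "Status": "Active",
    }
    return password  # delivered to Address_Invitee, not retained by the server


def authenticate(address_client: str, presented_password: str) -> bool:
    """Check a presented P2P password against the stored hash."""
    entry = client_db.get(address_client)
    if entry is None or entry["Status"] != "Active":
        return False
    digest = hashlib.sha256(entry["salt"] + presented_password.encode()).hexdigest()
    return secrets.compare_digest(entry["Hash_Password_PCCBS_Device_P2P"], digest)


p2p_password = register_client("client@example.com")
print(authenticate("client@example.com", p2p_password))  # True
```

As in the flow above, the plain password exists only long enough to be delivered to the Invitee through the message box; later commands authenticate by hashing the presented password and comparing it with the stored hash.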
FIG.8shows the communication flow of Connection from the PCCBS Device Client to the Private Cloud Call-Back Server by a PCCBS_Device Client in accordance with the present invention. From the PCCBS Device Client standpoint, the PCCBS_VPN_App is opened from the WAN, via step800. Next, an Address_PCCBS_VPN is selected from the registered PCCBS_VPN database, via step801. Next, “Connect to PCCBS_VPN” command is selected on the PCCBS_VPN_App, via step802. Next, peer-to-peer connection request is sent to the Address_PCCBS_VPN, via step803. Next, the peer-to-peer connection request is sent to the PCCBS_VPN Utility810, via step840. Next, peer-to-peer negotiation starts using the Address_PCCBS_VPN_Client to communicate with the PCCBS_VPN at Address_PCCBS_VPN, via step804. Next, the PCCBS Device Client communicates with the PCCBS_VPN Utility811, via step841. Next, the PCCBS_VPN_Profile file is accepted to start the Smart VPN connection with the PCCBS_VPN at the Address_PCCBS_VPN, via step805. Next, peer-to-peer connection is established between the PCCBS_VPN and the Device Client, via step806. Next, the PCCBS Device Client communicates with the PCCBS_VPN Utility813, via step843. Next, the PCCBS_VPN is logged in with the Client credentials (“Connect to PCCBS_VPN”, Address_PCCBS_VPN, Address_PCCBS_VPN_Client, Password_PCCBS_VPN_P2P), via step807. Next, the Client credentials are sent to the PCCBS_VPN Utility814, via step844. Next, the PCCBS Device Client waits for authentication, via step808. Next, secure peer-to-peer communication starts, via step809. Next, the PCCBS Device Client communicates with the PCCBS_VPN Utility817, via step846. Next, the PCCBS Device Client securely connects to the virtual private LAN on the PCCBS_VPN, via step820. From PCCBS_VPN Utility standpoint, peer-to-peer connection request is accepted from the Address_PCCBS_VPN_Client, via step810. Next, peer-to-peer negotiation starts using the Address_PCCBS_VPN to communicate with the PCCBS_VPN Client at the Address_PCCBS_VPN_Client, via step811. Next, the PCCBS_VPN Utility communicates with the PCCBS Device Client804, via step841. Next, the PCCBS_VPN_Profile file is sent to the Address_PCCBS_VPN_Client to start the Smart VPN connection, via step812. Next, the PCCBS_VPN_Profile file is sent to the PCCBS Device Client805, via step842. Next, peer-to-peer connection is established between the PCCBS_VPN and the Device Client, via step813. Next, the PCCBS_VPN Utility communicates with the PCCBS Device Client806, via step843. Next, the PCCBS_VPN Client credentials (“Connect to PCCBS_VPN”, Address_PCCBS_VPN, Address_PCCBS_VPN_Client, Password_PCCBS_VPN_P2P) are accepted, via step814. Next, entry list based on the Address_PCCBS_VPN_Client in the PCCBS_VPN Client database (Access_Code, Code_Expiration, Address_Invitee, Address_PCCBS_VPN_Client, Hash_Password_PCCBS_VPN_P2P, Password_PCCBS_VPN_P2P_Expiration, Status) is searched, via step815. Next, existing peer-to-peer (P2P) password is authenticated by checking if the hashed value matches the Hash_Password_PCCBS_VPN_P2P entry based on the Address_PCCBS_VPN_Client in the PCCBS_VPN Client database, via step816. Next, existing peer-to-peer (P2P) password is sent to the PCCBS Device Client808, via step845. Next, secure peer-to-peer communication starts, via step817. Next, the PCCBS_VPN Utility communicates with the PCCBS Device Client809, via step846. Next, the PCCBS_VPN Utility calls back to PCRS and starts peer-to-peer communication with the PCRS818. 
Next, the PCCBS Device Client securely connects to virtual private LAN on PCRS820, via step847. Next, the PCCBS_VPN Utility establishes a peer-to-peer communication channel between the PCCBS Device Client and the PCRS Device Client or another PCCBS Device Client819. Next, the PCCBS Device Client starts connecting to the PCRS Device Client or another PCCBS Device Client821, via step848. FIG.9shows the communication flow of Viewing Client of the Private Cloud Call-Back Server by PCCBS Admin in accordance with the present invention. From the Admin Device standpoint, the PCCBS_Device_App is opened from the WAN, via step900. Next, an Address_PCCBS_Device is selected from the registered PCCBS_Device database, via step901. Next, “View PCCBS_Device Client” command is selected on the PCCBS_Device_App, via step902. Next, a View entry in the PCCBS_Device Client database is selected as a look-up index, via step903. Next, the PCCBS is logged in with the Admin credentials (“View PCCBS_Device Client”, Admin_name, Admin_password, View entry), via step904. Next, the Admin credentials are sent to the PCCBS_Device Utility910, via step940. Next, the Admin Device waits for the PCCBS authentication, via step905. Next, entry list in the PCCBS_Device Client database (Access_Code, Code_Expiration, Address_Invitee, Address_PCCBS_Device_Client, Hash_Password_PCCBS_Device_P2P, Password_PCCBS_Device_P2P_Expiration, and Status) is displayed based on the look-up index, via step906. From PCCBS_Device Utility standpoint, the PCCBS_Device Client credentials (“View PCCBS_Device Client”, Admin_name, Admin_password, View entry) are accepted, via step910. Next, the Admin credentials (Admin_name, Admin_password) are authenticated, via step911. Next, the Admin credentials are sent to the Admin Device905, via step941. Next, the View entry is used as a look-up index, and a reply is generated from the entry list in the PCCBS_Device Client database (Access_Code, Code_Expiration, Address_Invitee, Address_PCCBS_Device_Client, Hash_Password_PCCBS_Device_P2P, Password_PCCBS_Device_P2P_Expiration, Status) based on the look-up index, via step912. Next, the reply is sent to the Admin Device906, via step942. FIG.10shows the communication flow of Resetting peer-to-peer password and editing attributes of a PCCBS Device Client by PCCBS Admin in accordance with the present invention. From the Admin Device standpoint, the PCCBS_Device_App is opened from the WAN, via step1000. Next, an Address_PCCBS_Device is selected from the registered PCCBS_Device database, via step1001. Next, “Reset P2P Password/Edit Attributes” command is selected on the PCCBS_Device_App, via step1002. Next, the Invitee notification address Address_Invitee is entered as a look-up index, via step1003. Next, the PCCBS is logged in with the Admin credentials (“Reset P2P Password/Edit Attributes”, Admin_name, Admin_password, and Address_Invitee), via step1004. Next, the Admin credentials are sent to the PCCBS_Device Utility1010, via step1040. Next, the Admin Device waits for the PCCBS_Device authentication, via step1005. Next, the entry list based on the Address_Invitee in the PCCBS_Device Client database (Access_Code, Code_Expiration, Address_Invitee, Address_PCCBS_Device_Client, Hash_Password_PCCBS_Device_P2P, Password_PCCBS_Device_P2P_Expiration, Status) is displayed, via step1006. If “Reset P2P Password” command is selected, the Admin Device waits for completion, via step1007. If “Edit Attributes” is selected, the Attributes are edited as desired, via step1008. 
Next, the Attributes include but are not limited to Status of the PCCBS_Device Client (Active, Inactive, Deleted), the Virtual LAN subnet and the PCCBS_Device App link. Next, the Attributes are sent to the PCCBS_Device Utility1017, via step1044. From the PCCBS_Device Utility standpoint, the PCCBS Admin credentials (“Reset P2P Password/Edit Attributes”, Admin_name, Admin_password, and Address_Invitee) are accepted, via step1010. The Admin credentials (Admin_name, Admin_password) are authenticated, via step1011. Next, the PCCBS Admin credentials are sent to the Admin Device1005, via step1041. Next, the Address_Invitee is used as a look-up index, and a reply is generated from the entry list based on Address_Invitee in the PCCBS_Device Client database (Access_Code, Code_Expiration, Address_Invitee, Address_PCCBS_Device_Client, Hash_Password_PCCBS_Device_P2P, Password_PCCBS_Device_P2P_Expiration, and Status), via step1012. Next, the reply is sent to the Admin Device1006, via step1042. If “Reset P2P Password” command is selected, via step1013, a new Password_PCCBS_Device_P2P is generated; the hashed value of Password_PCCBS_Device_P2P in Hash_Password_PCCBS_Device_P2P is saved, via step1014. Next, the new Password_PCCBS_Device_P2P is sent to the Admin Device1007, via step1043. Next, (Access_Code, Password_PCCBS_Device_P2P) is sent to invitee notification address Address_Invitee; Password_PCCBS_Device_P2P is cleared, via step1015. Next, (Access_Code, Password_PCCBS_Device_P2P) is sent to Invitee1020, via step1045. If “Edit Attributes” command is selected, via step1016, the edited Attributes are accepted and stored in the PCCBS_Device, via step1017. From the Invitee Device standpoint, (Access_Code, Password_PCCBS_Device_P2P) are accepted in invitee notification address Address_Invitee, via step1020. FIG.11shows the communication flow of changing peer-to-peer password of a PCCBS Device Client by a PCCBS_Device Client in accordance with the present invention. From the PCCBS Device Client standpoint, the PCCBS_Device_App is opened on the WAN after a secure VPN connection from the WAN, via step1100. Next, an Address_PCCBS_Device is selected from the registered PCCBS_Device database, via step1101. Next, “Change P2P Password” command is selected on the PCCBS_Device_App, via step1102. The PCCBS is logged in with the Client credentials (“Change P2P Password”, Address_PCCBS_Device, Address_PCCBS_Device_Client, and Password_PCCBS_Device_P2P), via step1103. Next, the Client credentials are sent to the PCCBS_Device Utility1110, via step1140. Next, the PCCBS Device Client waits for the PCCBS_Device authentication, via step1104. Next, the new P2P passwords are entered and re-entered until they match, via step1105. Next, the new P2P passwords are sent to the PCCBS_Device Utility1113, via step1142. From PCCBS_Device Utility standpoint, the PCCBS_Device Client credentials (“Change P2P Password”, Address_PCCBS_Device, Address_PCCBS_Device_Client, and Password_PCCBS_Device_P2P) are accepted, via step1110. Next, the Hash_Password_PCCBS_Device_P2P entry is searched based on the Address_PCCBS_Device_Client in the PCCBS_Device Client database, via step1111. Next, existing P2P password is authenticated by checking if the hashed value matches the Hash_Password_PCCBS_Device_P2P entry based on the Address_PCCBS_Device_Client in the PCCBS_Device Client database (Access_Code, Code_Expiration, Address_Invitee, Address_PCCBS_Device_Client, Hash_Password_PCCBS_Device_P2P, Password_PCCBS_Device_P2P_Expiration, Status), via step1112. 
Next, the existing P2P password is sent to the PCCBS Device Client1104, via step1141. Next, the new P2P password Password_PCCBS_Device_P2P is accepted, via step1113. Next, the new P2P password is hashed as Hash_Password_PCCBS_Device_P2P, via step1114. Next, the Hash_Password_PCCBS_Device_P2P entry is updated based on the Address_PCCBS_Device_Client in the PCCBS_Device Client database (Access_Code, Code_Expiration, Address_Invitee, Address_PCCBS_Device_Client, Hash_Password_PCCBS_Device_P2P, Password_PCCBS_Device_P2P_Expiration, and Status), via step1115. Next, the P2P password Password_PCCBS_Device_P2P is cleared, via step1116. FIG.12shows the communication flow of P2P Connection Mechanism between a Device Client1and a Device Client2through Cloud Network (Prior Art). A Device Client1and a Device Client2on Cloud Network can communicate with each other through a Public Routing Server or a Public VPN Routing Server112,114. The Device Client1App1201first registers to the Public VPN Routing Server Utility1200with its IP address and port capability in TCP/UDP protocols. The Device Client1App, IP address and ports are kept alive with the routing server1203. The Device Client1then requests the routing server utility1200for a connection to the Device Client21204. The routing server utility1200then notifies the Device Client2Utility1202with the IP address and port capability in TCP/UDP protocols of the Device Client1and its intention to connect1205. The Device Client2App1202then replies to the routing server utility1200with its own registration that includes its IP address and port capability in TCP/UDP protocols. The IP address and port capability of the Device Client2are kept alive with connection to the routing server utility1200. The routing server utility1200then responds to the Device Client1App1201with the IP address and port capability in TCP/UDP protocols of the Device Client21207. After receiving the IP address and port capability in TCP/UDP protocols of the Device Client2, the Device Client1App1201starts punching holes through the firewall of the Device Client21208. The Device Client2App1202also starts punching holes through the firewall of the Device Client11209. Eventually, both sides of the firewall holes are punched through. The peer-to-peer communication starts between the Device Client1and the Device Client21210. Note that without the Public VPN Routing Server, the connection mechanism between the routing server utility and either Device Client1or Device Client2is not possible. This is the fundamental flaw of a connection mechanism that has to rely on a Public VPN Routing Server. FIG.13is a diagram of a communication flow of P2P Connection Mechanism between PCRS and a PCRS Device Client through a Cloud Network (Prior Art). It shows, in accordance with the present invention, that no Public VPN Routing Server is required for the Device Clients to connect and access either the Server, or another Device Client, or the network services under the server through Cloud Network. As shown inFIG.13, a Device Client1and a Private Cloud Routing Server (PCRS) on Cloud Network can communicate with each other without going through a Public Routing Server or a Public VPN Routing Server112,114. The Device Client1App1301first requests to connect to the PCRS Utility (Server part)1300through Client Message Box215, and PCRS Utility803as shown inFIG.8, with its IP address and port capability in TCP/UDP protocols. The PCRS Device Client1App, IP address and ports are kept alive with the PCRS Utility1303. 
The PCRS Utility (Server part) receives the registration through Call-Back Server Message Box (not shown). The PCRS Device Client1then requests the PCRS Utility (Server part)1300, also through Client Message Box215, for connection to the PCRS Utility (Client part)1304. The PCRS Utility (Server part)1300receives the request through Call-Back Server Message Box (not shown) and notifies the PCRS Utility (Client part)1302with the IP address and port capability in TCP/UDP protocols of the PCRS Device Client1and its intention to connect1305. The PCRS Utility (Client part)1302then replies to the PCRS Utility (Server part)1300with its own registration that includes its IP address and port capability in TCP/UDP protocols. The IP address and port capability of the PCRS Utility (Client part) are kept alive with connection to the PCRS Utility (Server part)1300. The PCRS Utility (Server part)1300then responds to the Device Client1App1301with the IP address and port capability in TCP/UDP protocols of the PCRS Utility (Client part)1307through Call-Back Server Message Box (not shown). After receiving the IP address and port capability in TCP/UDP protocols of the PCRS Utility (Client part) through Client Message Box215, the PCRS Device Client1App1301starts punching holes through the firewall of the PCRS Utility (Client part)1308. The PCRS Utility (Client part)1302also starts punching holes through the firewall of the PCRS Device Client11309. Eventually, both sides of the firewall holes are punched through. The peer-to-peer communication starts between the PCRS Device Client1and the PCRS Utility (Client part)1310. All information exchange between the PCRS Utility and the PCRS Device Client1is through Call-Back Server Message Box (not shown) and Client Message Box215, instead of going through a Public Routing Server212or a Public VPN Routing Server214. PCRS Device Client1can then securely connect to virtual private LAN on PCRS as shown in820. The PCRS Device Client1is able to access any Device Client206or private network service228accessible under the PCRS. Other PCRS Device Clients201,221,209,210,211can connect to the PCRS through the same connection mechanism as shown inFIG.13. Once any pair of PCRS Device Clients and PCCBS Device Clients connect to the virtual private LAN240and the virtual private LAN2400of the PCRS and the PCCBS, they can conduct private and secure communication between themselves for text, audio or video communication. FIG.14is a diagram of a communication flow of P2P Connection Mechanism between PCRS, PCCBS, a PCRS Device Client and a PCCBS Device Client through a Cloud Network. It shows, in accordance with the present invention, that no public cloud Routing Server is required for the Device Clients to connect and access either the Server PCRS, PCCBS, or another Device Client, or the network services under the server through Cloud Network. As shown inFIG.14, a Device Client1and a Private Cloud Routing Server (PCRS) on Cloud Network can communicate with each other without going through a Public Routing Server or a Public VPN Routing Server112,114. The PCRS Admin Device1420first initializes and provisions the PCRS1428through the PCRS Device Utility1421, as described inFIG.5and circle0,1400. The PCRS Utility1421then passes the information internally inside PCRS1428, to PCRS_VPN Utility1422. It then registers to the PCCBS VPN Utility1423with the PCRS registration info that includes the IP address and port capability in TCP/UDP protocols, as inFIG.15and circle1,1401. 
It also establishes the PCCBS tuple and communication socket,1600. The IP address and port capability of the Device Client2are kept alive with connection to the PCCBS Utility1401. After registration, the PCRS_VPN Utility connects to the PCCBS_VPN1602and establishes peer-to-peer communication channel between PCRS_VPN and PCCBS_VPN1619, as inFIG.16. The PCCBS_VPN Utility1423communicates with the PCCBS_Device Utility1424, internally inside the PCCBS1427. The PCCBS_Device Utility stays in a loop waiting on demand for the PCCBS Device Client request, as circle2,1402. The PCCBS Device Client11405first registers to the PCCBS_Device Utility1424as shown inFIG.7, with its IP address and port capability in TCP/UDP protocols. The PCCBS Device Client1, IP address and ports are kept alive with the PCCBS_Device Utility1424, as inFIG.7and circle3-1,1403. The PCCBS_Device Utility1424passes the registration and the connection request internally inside PCCBS1427, to the PCCBS_VPN Utility1423. After registration, the PCCBS Device Client11425connects to the PCCBS_VPN802and establishes peer-to-peer communication channel between PCCBS Device Client11424and PCCBS_VPN817, as inFIG.8. The PCCBS_VPN Utility1423then calls back to PCRS_VPN Utility1422to establish peer-to-peer communication channel between PCCBS_VPN Utility1423and PCRS_VPN Utility1422, as inFIG.14, circle5,1405, circle7,1407and inFIG.8,818. After the call-back action is successful from PCCBS_VPN Utility1423to PCRS_VPN Utility1422, the peer-to-peer communication channel is finally established between PCCBS_Device Client1and PCRS_VPN and in turn connecting to a PCRS Device Client21426or yet another PCCBS Device Client31401, assuming another PCCBS Device Client3has also successfully connected to the PCCBS_VPN Utility1423. The call-back action818from the PCCBS_VPN Utility to the PCRS_VPN Utility1422is explained in details inFIG.17. FIG.15shows the communication flow of Registering to a Private Cloud Call-Back Server by a PCRS in accordance with the present invention. From the PCRS standpoint, the PCCBS tuple and the communication socket are established, via step1500. Next, the PCCBS_Device Client address (Address_PCCBS_Device_Client) is created, if necessary (not shown). Next, “Register a Private Cloud Call-Back Server” command is issued, via step1501. Next, if the PCCBS_Device Client is not yet configured, the Address_PCCBS_Device_Client and the Password_PCCBS_Device_Client are set, via step1502, where the Password_PCCBS_Device_P2P is the message box password associated with message box (not shown) address for client at the Address_PCCBS_Device_Client for peer-to-peer communication. Next, the Address_PCCBS_Device_Client and the Password_PCCBS_Device_Client are registered to Client Message Box, via step1502. The Address_PCCBS_Device and the Access_Code are then retrieved from Invitee, via step1503. The information is originally received by the invitee device620. Next, the Address_PCCBS_Device and the Access_Code are sent to the PCCBS through client message box with the Client credentials (“Register a Private Cloud Call-Back Server”, Address_PCCBS_Device, Address_PCCBS_Device_Client, Access_Code), via step1504. Then the Address_PCCBS_Device and the Access_Code are sent to the PCCBS Device1510, via step1540. Next, the PCRS waits for the PCCBS authentication through client message box, via step1505. Then the PCRS waits for the PCCBS registration complete acknowledgement through client message box, via step1506. 
Next, the Address_PCCBS_Device entry in the PCCBS_Device Server database is registered on the PCCBS_Device_App if it is a new entry, via step1507. From the PCCBS_Device Utility standpoint, the PCCBS_Device Client credentials (“Register a Private Cloud Call-Back Server”, Address_PCCBS_Device, Address_PCCBS_Device_Client, Access_Code) are accepted, via step1510. Verification is made to check if the Address_PCCBS_Device_Client is in the PCCBS_Device Client database, via step1512. If so, Invitee's designated PCCBS_Device Client address (Address_PCCBS_Device_Client) is acknowledged with the PCCBS_Device address (Address_PCCBS_Device), via step1519, then return. Otherwise, the Access_Code is authenticated, via step1512. Next, the Code_Expiration on Access_Code is authenticated in the PCCBS_Device Client database, via step1513. Next, the Code_Expiration on the Access_Code is sent to the PCRS1505via step1541. Next, (Password_PCCBS_Device_P2P, Password_PCCBS_Device_P2P_Expiration, Status) associated with (Access_Code, Code_Expiration, Address_Invitee, Address_PCCBS_Device_Client) are generated, via step1514. Next, the hashed value of the Password_PCCBS_Device_P2P is saved as Hash_Password_PCCBS_Device_P2P1515. Next, (Address_PCCBS_Device_Client, Hash_Password_PCCBS_Device_P2P, Password_PCCBS_Device_P2P_Expiration, Status) are stored into entry (Access_Code, Code_Expiration, Address_Invitee, Address_PCCBS_Device_Client, Hash_Password_PCCBS_Device_P2P, Password_PCCBS_Device_P2P_Expiration, Status) in the PCCBS_Device Client database, via step1516. Next, the Password_PCCBS_Device_P2P is sent to the PCRS message box, via step1517. Next, the Password_PCCBS_Device_P2P is cleared, via step1518. Next, Invitee's designated PCCBS_Device Client address (Address_PCCBS_Device_Client) is acknowledged with PCCBS_Device address (Address_PCCBS_Device), via step1519. Next, Invitee's designated PCCBS_Device Client address is sent to the PCRS1506, via step1544. From the PCRS point of view, the Password_PCCBS_Device_P2P is accepted and saved for future use, via step1520. FIG.16shows the communication flow of Connection from the PCRS to the Private Cloud Call-Back Server by a PCRS in accordance with the present invention. From the PCRS standpoint, the PCCBS tuple and communication socket are established, via step1600. Next, an Address_PCCBS_VPN is selected from the registered PCCBS_VPN database, via step1601. Next, “Connect to PCCBS_VPN” command is selected on the PCCBS_VPN_App, via step1602. Next, peer-to-peer connection request is sent to the Address_PCCBS_VPN, via step1603. Next, the peer-to-peer connection request is sent to the PCCBS_VPN Utility1610, via step1640. Next, peer-to-peer negotiation starts using the Address_PCCBS_VPN_Client to communicate with the PCCBS_VPN at Address_PCCBS_VPN, via step1604. Next, the PCRS_VPN communicates with the PCCBS_VPN Utility1611, via step1641. Next, the PCCBS_VPN_Profile file is accepted to start the Smart VPN connection with the PCCBS_VPN at the Address_PCCBS_VPN, via step1605. Next, peer-to-peer connection is established between the PCCBS_VPN and the Device Client, via step1606. Next, the PCRS_VPN communicates with the PCCBS_VPN Utility1613, via step1643. Next, the PCCBS_VPN is logged in with the Client credentials (“Connect to PCCBS_VPN”, Address_PCCBS_VPN, Address_PCCBS_VPN_Client, Password_PCCBS_VPN_P2P), via step1607. Next, the Client credentials are sent to the PCCBS_VPN Utility1614, via step1644. Next, the PCRS_VPN waits for authentication, via step1608. 
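Steps 1514 through 1518 generate a peer-to-peer password, save only its hashed value with an expiration in the client database entry, and send the cleartext once over the message box before clearing it. A minimal sketch of that bookkeeping follows; the field names mirror the entry described above, but the hash algorithm (SHA-256), expiration window, and in-memory stand-in for the database are assumptions for illustration.

```python
import hashlib
import secrets
import time

# Hypothetical stand-in for the PCCBS_Device Client database.
pccbs_device_client_db = {}

def register_p2p_password(access_code, code_expiration, address_invitee,
                          address_client, ttl_seconds=24 * 3600):
    """Generate a one-time P2P password, store only its hash, return the cleartext once."""
    password_p2p = secrets.token_urlsafe(32)
    hash_password_p2p = hashlib.sha256(password_p2p.encode()).hexdigest()
    pccbs_device_client_db[address_client] = {
        "Access_Code": access_code,
        "Code_Expiration": code_expiration,
        "Address_Invitee": address_invitee,
        "Address_PCCBS_Device_Client": address_client,
        "Hash_Password_PCCBS_Device_P2P": hash_password_p2p,
        "Password_PCCBS_Device_P2P_Expiration": time.time() + ttl_seconds,
        "Status": "registered",
    }
    # The cleartext is delivered once to the PCRS message box and then cleared
    # (steps 1517-1518); only the hash remains in the database.
    return password_p2p
```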
Next, secure peer-to-peer communication starts, via step1609. Next, the PCRS_VPN communicates with the PCCBS_VPN Utility1617, via step1646. Next, the PCRS_VPN securely connects to the virtual private LAN on the PCCBS_VPN, via step1620. From PCCBS_VPN Utility standpoint, a peer-to-peer connection request is accepted from the Address_PCCBS_VPN_Client, via step1610. Next, peer-to-peer negotiation starts using the Address_PCCBS_VPN to communicate with the PCCBS_VPN Client at the Address_PCCBS_VPN_Client, via step1611. Next, the PCCBS_VPN Utility communicates with the PCRS_VPN1604, via step1641. Next, the PCCBS_VPN_Profile file is sent to the Address_PCCBS_VPN_Client to start the Smart VPN connection, via step1612. Next, the PCCBS_VPN_Profile file is sent to the PCRS_VPN1605, via step1642. Next, peer-to-peer connection is established between the PCCBS_VPN and the Device Client, via step1613. Next, the PCCBS_VPN Utility communicates with the PCRS_VPN1606, via step1643. Next, the PCCBS_VPN Client credentials (“Connect to PCCBS_VPN”, Address_PCCBS_VPN, Address_PCCBS_VPN_Client, Password_PCCBS_VPN_P2P) are accepted, via step1614. Next, entry list based on the Address_PCCBS_VPN_Client in the PCCBS_VPN Client database (Access_Code, Code_Expiration, Address_Invitee, Address_PCCBS_VPN_Client, Hash_Password_PCCBS_VPN_P2P, Password_PCCBS_VPN_P2P_Expiration, Status) is searched, via step1615. Next, existing peer-to-peer (P2P) password is authenticated by checking if the hashed value matches the Hash_Password_PCCBS_VPN_P2P entry based on the Address_PCCBS_VPN_Client in the PCCBS_VPN Client database, via step1616. Next, existing peer-to-peer (P2P) password is sent to the PCRS_VPN1608, via step1645. Next, secure peer-to-peer communication starts, via step1617. Next, the PCCBS_VPN Utility communicates with the PCRS_VPN1609, via step1646. Next, the PCCBS_VPN Utility establishes peer-to-peer communication channel between the PCRS_VPN and the PCCBS_VPN1619. Next, the PCRS_VPN starts connecting to the PCCBS_VPN1621, via step1648. FIG.17shows the communication flow of Connection from the PCCBS calling back to the Private Cloud Routing Server by a PCCBS in accordance with the present invention. From the PCCBS standpoint, the PCRS tuple and communication socket are established, via step1700. Next, an Address_PCRS_VPN is selected from the registered PCRS_VPN database, via step1701. Next, “Connect to PCRS_VPN” command is selected on the PCRS_VPN_App, via step1702. Next, peer-to-peer connection request is sent to the Address_PCRS_VPN, via step1703. Next, the peer-to-peer connection request is sent to the PCRS_VPN Utility1710, via step1740. Next, peer-to-peer negotiation starts using the Address_PCRS_VPN_Client to communicate with the PCRS_VPN at Address_PCRS_VPN, via step1704. Next, the PCRS_VPN communicates with the PCRS_VPN Utility1711, via step1741. Next, the PCRS_VPN_Profile file is accepted to start the Smart VPN connection with the PCRS_VPN at the Address_PCRS_VPN, via step1705. Next, peer-to-peer connection is established between the PCRS_VPN and the Device Client, via step1706. Next, the PCRS_VPN communicates with the PCRS_VPN Utility1713, via step1743. Next, the PCRS_VPN is logged in with the Client credentials (“Connect to PCRS_VPN”, Address_PCRS_VPN, Address_PCRS_VPN_Client, Password_PCRS_VPN_P2P), via step1707. Next, the Client credentials are sent to the PCRS_VPN Utility1714, via step1744. Next, the PCRS_VPN waits for authentication, via step1708. 
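Steps 1615 and 1616 look up the client's entry and accept the connection only if the hash of the presented P2P password matches the stored Hash_Password value. A sketch under the same assumptions as the registration sketch above (SHA-256, an in-memory table keyed by client address, and an expiration check that the text implies but does not spell out):

```python
import hashlib
import secrets
import time

def authenticate_p2p_password(vpn_client_db, address_vpn_client, presented_password):
    """Accept the connection only if the presented P2P password hashes to the
    stored Hash_Password_PCCBS_VPN_P2P value and has not expired
    (FIG. 16, steps 1615-1616)."""
    entry = vpn_client_db.get(address_vpn_client)
    if entry is None:
        return False
    if time.time() > entry["Password_PCCBS_VPN_P2P_Expiration"]:
        return False
    presented_hash = hashlib.sha256(presented_password.encode()).hexdigest()
    # Constant-time comparison avoids leaking hash prefixes via timing.
    return secrets.compare_digest(presented_hash,
                                  entry["Hash_Password_PCCBS_VPN_P2P"])
```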
Next, secure peer-to-peer communication starts, via step1709. Next, the PCCBS_VPN communicates with the PCRS_VPN Utility1717, via step1746. Next, the PCCBS_VPN Utility establishes peer-to-peer communication channel between the PCRS_VPN and the PCCBS_VPN1719. Next, the PCCBS establishes P2P communication channel between PCCBS_VPN Device Client and PCRS Device Client or another PCCBS_VPN Device Client1721, via step1721. From PCRS_VPN Utility standpoint, a peer-to-peer connection request is accepted from the Address_PCRS_VPN_Client, via step1710. Next, peer-to-peer negotiation starts using the Address_PCRS_VPN to communicate with the PCRS_VPN Client at the Address_PCRS_VPN_Client, via step1711. Next, the PCRS_VPN Utility communicates with the PCRS_VPN1704, via step1741. Next, the PCRS_VPN_Profile file is sent to the Address_PCRS_VPN_Client to start the Smart VPN connection, via step1712. Next, the PCRS_VPN_Profile file is sent to the PCRS_VPN1705, via step1742. Next, peer-to-peer connection is established between the PCRS_VPN and the Device Client, via step1713. Next, the PCRS_VPN Utility communicates with the PCRS_VPN1706, via step1743. Next, the PCRS_VPN Client credentials (“Connect to PCRS_VPN”, Address_PCRS_VPN, Address_PCRS_VPN_Client, Password_PCRS_VPN_P2P) are accepted, via step1714. Next, entry list based on the Address_PCRS_VPN_Client in the PCRS_VPN Client database (Access_Code, Code_Expiration, Address_Invitee, Address_PCRS_VPN_Client, Hash_Password_PCRS_VPN_P2P, Password_PCRS_VPN_P2P_Expiration, Status) is searched, via step1715. Next, existing peer-to-peer (P2P) password is authenticated by checking if the hashed value matches the Hash_Password_PCRS_VPN_P2P entry based on the Address_PCRS_VPN_Client in the PCRS_VPN Client database, via step1716. Next, existing peer-to-peer (P2P) password is sent to the PCRS_VPN1708, via step1745. Next, secure peer-to-peer communication starts, via step1717. Next, the PCCBS_VPN Utility communicates with the PCRS_VPN1709, via step1746. Next, the PCCBS_VPN Utility establishes peer-to-peer communication channel between the PCRS_VPN and the PCCBS_VPN1719. Next, the PCRS establishes P2P communication channel between PCCBS_VPN Device Client and PCRS Device Client or another PCCBS_VPN Device Client1721, via step1748. FIG.18is a diagram of a communication flow of P2P Connection Mechanism between PCRS, PCCBS, a PCRS Device Client and a PCCBS Device Client through a Cloud Network based on server farm, computer resources aggregation and virtual machine. Further,FIG.18expands uponFIG.14by adding server farm1830, computer resources aggregation1831, and virtual machine1832, to exemplify the implementation of the private cloud routing server connection mechanism in a hyperscale data center. The hyperscale data center may have at least one server farm1830, at least one computer resources aggregation1831and at least one virtual machine1832. The virtual machine is scalable in quantity and size. The hyperscale datacenter or the service provider may construct and deploy a large number of independent PCCBS in its corresponding virtual machines in order to service its corresponding PCRS and the corresponding PCRS Device Clients. In essence, a community pair of peer-to-peer communication relationship between PCCBS Device Client and PCRS Device Client may be constructed and deployed by the platform owner who is responsible for maintaining the virtual machines with or without the topology of computer resources aggregation and server farms. 
A possible business model, for example, is for an Internet platform owner to offer to a large number of subscribers to host their private and secure PCCBS in the virtual machines. In addition, a separate private and secure PCRS is also offered to allow the individual subscriber to install the PCRS in their own local area network (LAN). Through the invention, the platform subscriber may establish from anywhere, a peer-to-peer communication between its PCCBS Device Client, such as a smart phone or a tablet, and a PCRS Device Client, such as a Notebook (NB), Internet of Things (IoT) device, network attached storage (NAS), or media server, residing on the subscriber's private and secure LAN.FIG.18shows in accordance with the present invention that no public cloud Routing Server is required for the Device Clients to connect and access to either the Server PCRS, PCCBS, or another Device Client, or the network services under the server through Cloud Network. As shown inFIG.18, a PCCBS Device Client11825and a Private Cloud Routing Server (PCRS) on Cloud Network may communicate with each other without going through a Public Routing Server or a Public VPN Routing Server112,114(not shown). The PCRS Utility1821passes the registration info internally inside PCRS1828, to PCRS_VPN Utility1822. The PCRS_VPN Utility1822then registers to the PCCBS_VPN Utility1823with the PCRS registration info that includes the IP address and port capability in TCP/UDP protocols, as inFIG.15and circle1,1801. The PCRS_VPN Utility1822also establishes the PCCBS tuple and communication socket,1600. The IP address and port capability of the PCRS Device Client21826are kept alive with connection to the PCCBS Utility1801. After registration, the PCRS_VPN Utility connects to the PCCBS_VPN1602and establishes peer-to-peer communication channel between PCRS_VPN and PCCBS_VPN1619, as inFIG.16. The PCCBS_VPN Utility1823communicates with the PCCBS Device Utility1824, internally inside the PCCBS1827. The PCCBS Device Utility stays in a loop waiting on demand for the PCCBS Device Client request, as circle2,1802. The PCCBS Device Client11805first registers to the PCCBS Device Utility1824as shown inFIG.7, with its IP address and port capability in TCP/UDP protocols. The PCCBS Device Client1, IP address and ports are kept alive with the PCCBS Device Utility1824, as inFIG.7and circle3-1,1803. The PCCBS Device Utility1824passes the registration and the connection request internally inside PCCBS1827, to the PCCBS_VPN Utility1823. After registration, the PCCBS Device Client11825connects to the PCCBS_VPN802and establishes peer-to-peer communication channel between PCCBS Device Client11824and PCCBS_VPN817, as inFIG.8. The PCCBS_VPN Utility1823then calls back to PCRS_VPN Utility1822to establish peer-to-peer communication channel between PCCBS_VPN Utility1823and PCRS_VPN Utility1822, as inFIG.18, circle5,1805, circle7,1807and inFIG.8,818. After the call-back action is successful from PCCBS_VPN Utility1823to PCRS_VPN Utility1822, the peer-to-peer communication channel is established between PCCBS Device Client11825and PCRS_VPN and in turn connecting to a PCRS Device Client2. The call-back action818from the PCCBS_VPN Utility to the PCRS_VPN Utility1822is explained in detail inFIG.17. 
Although the present invention has been described in accordance with the embodiments shown, one of ordinary skill in the art will readily recognize that there could be variations to the embodiments and those variations would be within the spirit and scope of the present invention. Accordingly, many modifications may be made by one of ordinary skill in the art without departing from the spirit and scope of the appended claims.
66,859
11863530
DETAILED DESCRIPTION Embodiments of the disclosure are directed to an authentication procedure performed by a VPN client that includes specific logic to perform operations utilizing a single sign-on scheme, such as SAML. Specifically, as is known in the art, a VPN client may operate on a network device of a user to establish a secure connection (e.g., a VPN tunnel) between the network device and a gateway, e.g., deployed in a cloud computing network. In some instances, the gateway is managed by a controller also deployed within the cloud computing network. In order to establish the secure connection, the VPN client must authenticate the user with the gateway. The disclosure provides an authentication procedure implemented by the VPN client and controller that utilizes a single sign-on scheme. The authentication procedure may be initiated by the VPN client, operating on a network device, through performance of operations that cause an internet browser instance (“browser”) to launch and open a new tab or window. A resource request (e.g., a request for a one-time authentication token) is transmitted through the browser over a public network to a controller. Upon receipt of the resource request and following a determination that an active, valid logon session has not been established for the user, the controller performs an authentication request generation procedure. The authentication request is transmitted to an identity provider, which attempts to validate the user either through an active, valid logon session being established with the identity provider or the user providing valid logon credentials. Following validation of the user by the identity provider, the identity provider generates an assertion representing whether the user provided valid logon credentials and, in some embodiments, may include a user identifier and the user's profile associations as well (e.g., used by the gateway to limit the environments to which the user has access). The assertion is then provided to the controller from the identity provider in an authentication response transmitted via the browser over the public network. The controller then parses the authentication response to determine whether the authentication performed by the identity provider was successful, and if so, extracts at least one or more profile associations of the user included within the authentication response, when applicable. The user identifier may also be extracted. In the event that the authentication procedure of the identity provider was successful, the controller generates a one-time token, which is stored by the controller in a database. The controller also transmits the token and the one or more profile associations of the user, when applicable, to the VPN client through the browser. Upon receiving the token and, optionally, the one or more profile associations of the user, the VPN client initiates a procedure to establish a secure channel between the network device and the VPN gateway via a VPN generation request that includes the received token and the one or more profile associations of the user. Upon receiving the VPN connection request, the gateway queries the controller to validate the token and to retrieve parameters corresponding to the one or more profile associations of the user (e.g., allowed IP CIDR ranges). In some embodiments, when the VPN connection request includes one or more profile associations, the included profile associations may override the profile associations stored by the controller as discussed below. 
However, in other embodiments, the profile associations stored by the controller may override the profile associations received in the VPN connection request. The controller validates the token through a comparison of the received token/user identifier pairing with the token/identifier pairings stored in a database by the controller. Upon validating the received token, the controller responds to the query indicating successful validation. The gateway then establishes the secure channel to the network device through the VPN client. Additionally, the gateway configures the gateway's firewall with one or more rules that restrict the user's access to IP addresses behind the gateway based on the allowed IP CIDR ranges corresponding to the one or more profile associations of the user. The user is then provided access to authorized resources behind the gateway via the VPN client. I. TERMINOLOGY In the following description, certain terminology is used to describe features of the invention. In certain situations, the term “logic” is representative of hardware, firmware, and/or software that is configured to perform one or more functions. As hardware, the logic may include circuitry having data processing or storage functionality. Examples of such circuitry may include, but are not limited or restricted to a microprocessor, one or more processor cores, a programmable gate array, a microcontroller, an application specific integrated circuit, wireless receiver, transmitter and/or transceiver circuitry, semiconductor memory, or combinatorial logic. Alternatively, or in combination with the hardware circuitry described above, the logic may be software in the form of one or more software modules. The software module(s) may include an executable application, an application programming interface (API), a subroutine, a function, a procedure, an applet, a servlet, a routine, source code, a shared library/dynamic load library, or one or more instructions. The software module(s) may be stored in any type of a suitable non-transitory storage medium, or transitory storage medium (e.g., electrical, optical, acoustical or other form of propagated signals such as carrier waves, infrared signals, or digital signals). Examples of non-transitory storage medium may include, but are not limited or restricted to a programmable circuit; a semiconductor memory; non-persistent storage such as volatile memory (e.g., any type of random access memory “RAM”); persistent storage such as non-volatile memory (e.g., read-only memory “ROM”, power-backed RAM, flash memory, phase-change memory, etc.), a solid-state drive, hard disk drive, an optical disc drive, or a portable memory device. As firmware, the executable code may be stored in persistent storage. The term “computerized” generally represents that any corresponding operations are conducted by hardware in combination with software and/or firmware. The term “construct” may be construed as a virtual or physical logic directed to a particular functionality such as a gateway, virtual private cloud network (VPC), sub-network, or the like. For instance, as an illustrative example, the construct may correspond to virtual logic in the form of software (e.g., a virtual machine), which may assign a device-specific address (e.g., a Media Access Control “MAC” address) and/or an IP address within an IP address range supported by a particular IP subnet. 
Alternatively, in some embodiments, the construct may correspond to physical logic, such as an electronic device that is communicatively coupled to the network and assigned the MAC and/or IP address(es). Examples of electronic devices may include, but are not limited or restricted to a personal computer (e.g., desktop, laptop, tablet or netbook), a mobile phone, a standalone appliance, a sensor, a server, or an information routing device (e.g., a router, bridge router (“brouter”), etc.). It is contemplated that each construct may constitute at least logic residing as part of a public network, although certain constructs may be deployed as part of an “on-premises” (or local) network. The term “gateway” may refer to a software instance deployed within a public cloud network or a virtual private cloud network deployed with the public cloud network and controls the flow of data traffic within and from the public cloud network (e.g., to one or more remote sites including computing devices that may process, store and/or continue the routing of data). Herein, each gateway may operate as a “transit gateway” or “spoke gateway,” which are gateways having similar architectures but are identified differently based on their location/configurations within a cloud computing environment. For instance, a “spoke” gateway is configured to interact with targeted instances while a “hub” gateway is configured to further assist in the propagation of data traffic (e.g., one or more messages) directed to a spoke gateway or a computing device within an on-premises network. The term “controller” may refer to a software instance deployed within a cloud computing environment (e.g., resources of a public cloud network) that manages operability of certain aspects of one or more cloud computing environments spanning across different public cloud networks (multi-cloud network). For instance, a controller may be configured to collect information pertaining to each VPC and/or each gateway instance and configures one or more routing tables associated with one or more VPCs and/or gateway instances spanning a multi-cloud network to establish communication links (e.g., logical connections) between different sources and destinations. These sources and/or destinations may include, but are not restricted or limited to on-premises computing devices, gateway instances or other types of cloud resources. The term “message” generally refers to information in a prescribed format and transmitted in accordance with a suitable delivery protocol. Hence, each message may be in the form of one or more packets, frames, or any other series of bits having the prescribed format. The term “network device” may be construed as any electronic computing system with the capability of processing data and connecting to a network. Such a network may be a public network such as the internet or a private network such as a wireless data telecommunication network, wide area network, a type of local area network (LAN), or a combination of networks. Examples of a network device may include, but are not limited or restricted to, an endpoint device (e.g., a laptop, a mobile phone, a tablet, a computer, etc.), a standalone appliance, a server, a router or other intermediary communication device, a firewall, etc. The term “link” may be generally construed as a physical or logical communication path between two or more constructs. 
For instance, as a physical communication path, wired and/or wireless interconnects in the form of electrical wiring, optical fiber, cable, bus trace, or a wireless channel using infrared, radio frequency (RF), may be used. A logical communication path includes any communication scheme that enables information to be exchanged between multiple constructs. Finally, the terms “or” and “and/or” as used herein are to be interpreted as inclusive or meaning any one or any combination. As an example, “A, B or C” or “A, B and/or C” mean “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.” An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive. As this invention is susceptible to embodiments of many different forms, it is intended that the present disclosure is to be considered as an example of the principles of the invention and not intended to limit the invention to the specific embodiments shown and described. II. GENERAL ARCHITECTURE Referring now toFIG.1, a diagram of an exemplary embodiment of a VPN client including logic configured to perform an authentication procedure implementing a single sign-on scheme is shown according to some embodiments. The network environment100is shown to include a controller102and a gateway104managed by the controller102. In some embodiments, the controller102and the gateway104are software instances deployed in a cloud computing environment. The network environment100also includes a VPN client106including an authentication logic108that is configured to communicate with a browser110. In some embodiments, the VPN client106and the browser110are configured for processing on a network device of a user (e.g., a laptop, mobile device, or other computing device configured for network connectivity). In some embodiments, the VPN client106may be configured to communicate with the gateway104to perform and manage operations including authentication with the gateway104to set up the secure communication (e.g., the VPN tunnel), and modification of client environment parameters (e.g., route tables, DNS, network adapters, etc.). Further, the network environment100includes an identity provider112that assists in facilitating an authentication procedure. Examples of the identity provider112include but are not limited or restricted to, OKTA®, ONELOGIN®, FACEBOOK®, GOOGLE®, etc. Referring toFIG.2, an illustration of a plurality of operations performed during an authentication procedure performed by the VPN client ofFIG.1is shown according to some embodiments.FIG.2includes a plurality of numerals, i.e.,1-18, with each numeral representing one or more operations performed by one or more components of the networking environment100or transmission of the data within the networking environment100. As discussed throughout the disclosure, in some embodiments, an authentication procedure may include establishment of a secure channel between a network device utilizing a virtual private network (VPN) client and a network gateway using a single sign-on scheme, such as SAML. Although the operations discussed with respect to at leastFIG.2illustrate the use of the SAML standard for exchanging authentication and authorization data, the disclosure is not limited or restricted to the use of the SAML procedure as the only single sign-on scheme. 
Prior to initiation of the first operation illustrated inFIG.2, it is assumed that the VPN client106has been downloaded and installed on the network device200. The authentication procedure illustrated inFIG.2may be initiated by the receipt of user input received by the network device200indicating a desire by the user to utilize resources accessible through the gateway104that is managed by the controller102via the VPN client106. As part of the initiation of the authentication procedure, the VPN client106is launched on the network device200. As is shown, the VPN client106includes authentication logic108that implements a single sign-on scheme, e.g., SAML. As a first operation in the authentication procedure, the authentication logic108, upon execution by one or more processors of the network device200, performs operations including causing the internet browser instance (“browser”)110to launch and open a new tab or window, or open a new tab or window in the browser110in the event that the browser110has previously been launched (numeral1). The authentication logic108then transmits a resource request to the controller102through the browser110over a public network such as the internet (numeral2). The resource request includes at least a user identifier. In this example, the requested resource is a one-time authentication token for the user. Upon receipt of the resource request, the controller102performs an access check procedure that includes determining whether the user currently has an active, valid logon session established (numeral3). In one embodiment, the controller102may query the session database202to determine whether a valid session identifier corresponding to the user identifier included in the resource request is currently stored therein. In the event that an active, valid logon session has not been established for the user, the controller102performs an authentication request generation procedure (numeral4). In some embodiments, the authentication request is an Extensible Markup Language (XML) document that is URL-encoded. The authentication request is then added as a query parameter to the URL corresponding to the logon webpage of the identity provider112(collectively, numeral4). As is known, a SAML authentication request includes several fields including an ID field. In some embodiments, the controller102may store a table pairing an authentication token (discussed below) and the time an authentication response was received by the controller from the identity provider in the request database210. The time field enables the controller102to expire authentication requests after a predetermined time period (i.e., requiring the user to initiate another resource request). Still referring toFIG.2, upon adding the authentication request as a query parameter, the URL corresponding to the logon webpage of the identity provider112is then transmitted to the identity provider112using HTTP Redirect Binding. The redirect response is then passed by the browser110to the URL corresponding to the logon webpage of the identity provider112, e.g., a Single Sign-On (SSO) service212(collectively, numeral5). The SSO212determines whether the user has an active, valid logon session established with the identity provider112. Upon determining that the user does not have an active, valid logon session established with the identity provider112, the SSO212prompts or “challenges” the user for credential information (numeral6). 
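A rough sketch of how the controller might build the redirect URL carrying the authentication request is shown below. The disclosure only states that the request is an XML document that is URL-encoded and appended as a query parameter; the raw-DEFLATE and base64 steps below follow the conventional SAML HTTP-Redirect binding and are an assumption, as are the entity ID and endpoint URLs.

```python
import base64
import urllib.parse
import uuid
import zlib
from datetime import datetime, timezone

def build_redirect_url(idp_sso_url, sp_entity_id, acs_url):
    """Build an identity-provider login URL carrying a minimal SAML AuthnRequest
    as a query parameter (conventional HTTP-Redirect binding)."""
    request_id = "_" + uuid.uuid4().hex
    issue_instant = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    authn_request = (
        f'<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
        f'xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" '
        f'ID="{request_id}" Version="2.0" IssueInstant="{issue_instant}" '
        f'AssertionConsumerServiceURL="{acs_url}">'
        f'<saml:Issuer>{sp_entity_id}</saml:Issuer>'
        f'</samlp:AuthnRequest>'
    )
    # Raw DEFLATE (no zlib header), then base64, then URL-encode as "SAMLRequest".
    compressor = zlib.compressobj(wbits=-15)
    deflated = compressor.compress(authn_request.encode()) + compressor.flush()
    saml_request = base64.b64encode(deflated).decode()
    return idp_sso_url + "?" + urllib.parse.urlencode({"SAMLRequest": saml_request})

# Hypothetical values for illustration only:
# url = build_redirect_url("https://idp.example.com/sso",
#                          "https://controller.example.com",
#                          "https://controller.example.com/acs")
```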
In particular, the controller102does not store authenticating credentials for the user and, thus, requests such information from an identity provider, e.g., the identity provider112. The identity provider112validates the user either through an active, valid logon session being established with the identity provider112or the user providing valid logon credentials. Following validation of the user by the identity provider112, the identity provider112generates a SAML assertion representing whether the user provided valid logon credentials. The SAML assertion may include, at least: (i) a status tag including a result of the user's authentication with the identity provider (success or failure); (ii) a subject tag identifying the user (e.g., an email address of the user, user's first and last name, employee identifier, etc.); (iii) an attribute statement tag, which may provide any other identifying information such as one or more profile associations of the user (e.g., admin, developer, guest, etc.), which may indicate a permission level once the user is provided access to various resources following authentication. Specifically, a user's profile associations may be used to limit the environments to which the user has access. The term “environment” may refer to any services within or outside a VPC (spoke or transit) that the gateway may directly reach (e.g., communicate with). As one example, an environment may include a range of IP CIDR addresses to which access in a cloud computing network is permitted using only specific protocols (HTTP, SSH, TCP, UDP, ICMP, etc., on only specified ports). In some embodiments, one or more profile associations for the user may be stored on the controller102. In such instances, the user's profile associations received from the identity provider will override the previously stored profile associations. The SAML assertion is then digitally signed and placed within a SAML Response message. The SSO212performs a redirect to the Assertion Consumer Service (ACS) URL204of the controller102via a SAML binding, e.g., an HTTP POST binding (numeral7). Examples of SAML bindings include, but are not limited or restricted to HTTP POST, Simple Object Access Protocol (SOAP), “reverse SOAP” (PAOS), HTTP_REDIRECT, HTTP_ARTIFACT, etc. As is known to those in the art, an ACS URL is an endpoint where the identity provider will transmit an authentication response. In some embodiments, the ACS URL is an HTTPS endpoint used to transfer Personally Identifiable Information (PII). The controller102then parses the SAML response message and verifies the digital signature of the identity provider112(numeral8). For example, the controller102may verify the digital signature with a certificate, e.g., a public key, previously provided by the identity provider112and stored by the controller102, e.g., in the request database210. Further, parsing of the SAML response message extracts the status tag, the subject tag and the attributes statement tag, as discussed above. Assuming that the status tag indicates the authentication was successful, the controller102generates a one-time authentication token (numeral9). The controller102then stores the token in a database (e.g., the token database210). As discussed above, the one-time token may be any uniquely generated random string that is utilized for an authentication procedure. Following generation of the token, the controller102transmits the token and one or more attributes included in the SAML response message to the VPN client106through the browser110(numerals10-11). 
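The parsing at numeral 8 (extracting the status, subject, and attribute statement tags) might look roughly like the sketch below. Signature verification against the identity provider's certificate is deliberately omitted here; a production controller must perform it, typically with a vetted SAML/XML-signature library, before trusting any extracted field.

```python
import base64
import xml.etree.ElementTree as ET

NS = {
    "samlp": "urn:oasis:names:tc:SAML:2.0:protocol",
    "saml": "urn:oasis:names:tc:SAML:2.0:assertion",
}

def parse_saml_response(b64_response):
    """Extract status, subject (NameID) and attributes from a SAML Response.

    Returns (success, name_id, attributes). Signature verification is not
    shown and must be performed separately before the result is trusted.
    """
    root = ET.fromstring(base64.b64decode(b64_response))

    status_el = root.find(".//samlp:StatusCode", NS)
    success = (status_el is not None
               and status_el.attrib.get("Value", "").endswith(":Success"))

    name_id_el = root.find(".//saml:Subject/saml:NameID", NS)
    name_id = name_id_el.text if name_id_el is not None else None

    attributes = {}
    for attr in root.findall(".//saml:AttributeStatement/saml:Attribute", NS):
        values = [v.text for v in attr.findall("saml:AttributeValue", NS)]
        attributes[attr.attrib.get("Name")] = values

    return success, name_id, attributes
```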
For example, in the illustrated embodiment, the token and the attributes may be transmitted to the VPN client106using an HTTP POST message from the browser110(numeral10), which is posted to or retrieved by the local VPN client106, e.g., by the authentication logic108, processing on the network device200(numeral11). In an alternative embodiment, the token and the attributes may be transmitted to the VPN client106by a request to open an application Uniform Resource Indicator (URI), which would cause a download (or other transfer) of the token, identifier and any other attributes to the VPN client106. An application URI may be a custom protocol configured and registered to work with a specified application. As one example of an application URI, directing within a web browser for “company://test” would open an application defined for “company://” in the Operating System (OS) with the request. Upon receiving the token and the one or more attributes from the controller102, the VPN client106initiates a procedure to establish a secure channel between the network device200and the VPN gateway104using the token obtained via the SAML response message (numeral12). In one embodiment, the VPN client106generates a VPN connection request that includes the token and one or more of the attributes received in the SAML response message (e.g., profile association information). Following generation, the VPN connection request is transmitted to the gateway104. Upon receiving the VPN connection request, the gateway104queries the controller102to validate the token and to retrieve parameters corresponding to the one or more profile associations of the user (numeral13). The controller102validates the received token by comparing the received token to the token generated previously (at numeral9) and stored by the controller102. Upon validating the received token, the controller102responds to the query indicating successful validation (numeral14). In some embodiments, the controller102invalidates the token after the authentication is completed, so that a subsequent request would require a new one-time token. Alternatively, or in addition, the controller102may also invalidate the token when a SAML response message is not received within a predetermined time period, e.g., 2 minutes, 5 minutes, etc. Upon receiving an indication that the token provided by the VPN client106is valid, the gateway104establishes the secure channel to the network device200through the VPN client106(numeral15). Following the establishment of the secure channel, the gateway104queries the controller102for firewall rules (e.g., corresponding to the user's profile associations) (numeral16). In one example, the gateway104executes a script such as the learn_address script used with OPENVPN®. Specifically, the learn_address script is an OPENVPN® functionality that may be triggered upon a successful connection between the gateway104and the VPN client106. Execution of the learn-address script is triggered by the successful connection from the gateway104, and causes transmission of a request to the controller102with the request including a username (or other identifier), virtual IP address, and an action (add/delete/update). The controller102generates the rules based on the information provided from the learn-address script, the predefined user profiles (e.g., allowed or denied IP CIDR ranges) and/or whitelisted/blacklisted IP ranges or domains. 
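The one-time token handling described at numerals 9 and 13-14 (generate a uniquely random string, store it with the user identifier, validate by comparison, then invalidate after use or after a timeout) can be sketched as follows. The storage layer and the expiry window are assumptions; the disclosure only requires that the token be random, stored, compared, and invalidated.

```python
import secrets
import time

# Hypothetical in-memory stand-in for the controller's token database.
_token_db = {}

def issue_token(user_id, ttl_seconds=300):
    """Generate a one-time token and remember which user it was issued to."""
    token = secrets.token_urlsafe(32)
    _token_db[token] = {"user_id": user_id, "expires": time.time() + ttl_seconds}
    return token

def validate_token(token, user_id):
    """Validate the token/user pairing exactly once, then invalidate it."""
    record = _token_db.pop(token, None)  # one-time use: removed on first lookup
    if record is None:
        return False
    return record["user_id"] == user_id and time.time() < record["expires"]
```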
The controller102determines allowed/denied IP CIDR ranges associated with the user's profile information (numeral16). The controller queries a database, e.g., the profile database208, to (i) retrieve profile associations of the user via the identifier of the user, and (ii) retrieve allowed/denied IP CIDR ranges associated with each profile association of the user, and/or whitelisted/blacklisted IP ranges or domains. For example, the profile database208may store a listing of all user/profile association pairings. Additionally, the profile database208may store pairings of predefined profile associations (e.g., admin, developer, guest, etc.) with allowed/denied IP CIDR ranges. In response to receiving the request generated on execution of the learn_address script, the controller102generates one or more rules to be implemented by the firewall of the gateway104. The controller102responds to the request with the various allow/deny rules defined for various IP CIDR blocks corresponding to the one or more profile associations of the user (numeral17). The gateway104implements and stores the one or more rules2181-218iprovided by the controller102to restrict the user's access over the secure channel to IP addresses behind the gateway104(numeral18). As a result, the user's access to certain resources is based on the profile information. As one example, the gateway104receives the one or more rules2181-218i, which may be IP packet filter rules, and configures a firewall of the gateway104with the rules2181-218i. As discussed above, the gateway104may configure the IP packet filter rules of a Linux kernel firewall implemented by or in association with the gateway104. In one particular embodiment, the configuration of the IP packet filter rules is performed through the use of the “iptables” user-space module. Following the establishment of the secure channel between the network device200and the gateway104, and the generation of the rules restricting the user's access according to the user's profile information, the user is provided access to authorized resources behind the gateway104via the VPN client106. III. Logical Representation Referring now toFIG.3, an exemplary illustration of a logical representation of the VPN client106ofFIG.1installed and configured for execution on the network device200is shown in accordance with some embodiments. As shown, the network device200includes one or more processors (“processors”)300, a communication interface302and a persistent storage304. In particular, the network device200may include a housing, which may be made entirely or partially of a hardened material (e.g., hardened plastic, metal, glass, composite or any combination thereof) that protects the circuitry within the housing. The communication interface302, in combination with a communication logic (not shown), enables communications with external network devices and/or other network appliances. The communication interface302may be implemented as a physical interface including one or more ports for wired connectors or one or more radio units for supporting wireless communications with other electronic devices. The persistent storage304may be non-transitory computer-readable storage medium and include the logic as software modules stored thereon, such as the VPN client106. The operations of the VPN client106, upon execution by the processors300, are described herein at least with respect toFIGS.2andFIGS.5A-6B. 
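The rule generation described here (profile associations mapped to allowed/denied IP CIDR ranges, then applied as IP packet filter rules restricting the client's access behind the gateway) might look roughly like the sketch below. The example profile-to-CIDR table, the chain name, the use of the client's virtual IP as the source match, and the trailing default-deny rule are all assumptions; the disclosure does not prescribe a specific rule layout.

```python
# Hypothetical table pairing profile associations with CIDR ranges,
# standing in for the predefined pairings held in the profile database.
PROFILE_CIDRS = {
    "admin":     {"allow": ["10.0.0.0/8"],    "deny": []},
    "developer": {"allow": ["10.20.0.0/16"],  "deny": ["10.20.99.0/24"]},
    "guest":     {"allow": ["10.30.10.0/24"], "deny": []},
}

def build_iptables_rules(client_virtual_ip, profiles, chain="FORWARD"):
    """Translate a user's profile associations into iptables commands that
    restrict the client's access to IP addresses behind the gateway."""
    allow, deny = set(), set()
    for profile in profiles:
        entry = PROFILE_CIDRS.get(profile, {"allow": [], "deny": []})
        allow.update(entry["allow"])
        deny.update(entry["deny"])

    rules = []
    for cidr in sorted(deny):    # deny rules first: iptables is first-match
        rules.append(f"iptables -A {chain} -s {client_virtual_ip} -d {cidr} -j DROP")
    for cidr in sorted(allow):
        rules.append(f"iptables -A {chain} -s {client_virtual_ip} -d {cidr} -j ACCEPT")
    rules.append(f"iptables -A {chain} -s {client_virtual_ip} -j DROP")  # default deny
    return rules

# Example with hypothetical values:
# for cmd in build_iptables_rules("10.8.0.6", ["developer"]):
#     print(cmd)
```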
Referring now toFIG.4, an exemplary illustration of a logical representation of the controller102ofFIG.1is shown in accordance with some embodiments. The controller102, as noted above, may be a software instance deployed within a cloud network to assist in managing operability of constructs within one or more public cloud networks. According to this embodiment, the controller102may be configured with certain logic modules including at least an access check logic400, an authentication request generation logic402, a response receiving logic404, a token generation logic406, and a communication interface logic408. The controller102may also include the profile database208, the token database210and a routing table database410. The access check logic400may be configured to perform operations including those described with respect to numeral3ofFIG.2. The authentication request generation logic402may be configured to perform operations including those described with respect to numeral4ofFIG.2. The response authentication and parsing logic404may be configured to perform operations including those described with respect to numeral8ofFIG.2. The token generation logic406may be configured to perform operations including those described with respect to numeral9ofFIG.2. The communication interface logic408may be configured to communicate with the constructs deployed within a computing network and managed, or accessible, by the controller102. Additionally, the communication interface logic408may facilitate transmission of network data from the controller102to the browser110en route to the identity provider112, for example. The routing table database410may store VPC routing table data. For example, the controller102may configure a VPC routing table associated with each VPC to establish communication links (e.g., logical connections) between a transit gateway and cloud instances associated with a particular instance subnet. A VPC routing table is programmed to support communication links between different sources and destinations, such as on-premises computing devices, a cloud instance within a particular instance subnet or the like. Thus, the controller102obtains and stores information that reveals certain properties of resources (e.g., constructs such as gateways, subnets, VPCs, instances within VPCs, etc.) within the purview of the controller102as well as status information pertaining to the connections (communication links) between these resources. Along with a pairing of predetermined profiles and corresponding allowed IP CIDR ranges, the profile database208may also store whitelisted or blacklisted IP CIDR ranges and/or whitelisted or blacklisted domain sets. IV. Operational Flow Referring toFIG.5, a flowchart of a method of an authentication procedure performed by the components ofFIG.1is shown according to some embodiments. Each block illustrated inFIG.5represents an operation performed in the method500of performing an authentication procedure including an establishment of a secure channel between a network device utilizing a virtual private network (VPN) client and a network gateway. Prior to the start of operations comprising the method500, it is assumed that a networking environment, such as the network environment100ofFIG.1, is operationally functioning, e.g., a controller and gateway (managed by the controller) have been deployed and are in communication with one or more of an identity provider and a network device having installed thereon a VPN client. 
The method500begins when a VPN client, such as the VPN client106ofFIG.1, is launched on a network device, such as the network device200(block502). The VPN client launches a web browser and transmits a resource request to the controller via the browser over a public network, e.g., the internet (block504). The controller transmits an authentication request to an identity provider via the browser (e.g., a browser redirect operation), based on the resource request (block506). For example, an identifier of the user from the resource request is provided in the authentication request to the identity provider. Following performance of an authentication procedure by the identity provider, the controller receives an authentication response from the identity provider via the browser (e.g., a browser redirect operation) (block508). The authentication response includes an indication as to whether the authentication procedure was successful, and in some embodiments, one or more profile associations of the user. Responsive to receipt of the authentication response and in the event that the authentication procedure performed by the identity provider was successful, the controller generates and transmits an authentication token to the VPN client via the browser (block510). The transmission to the VPN client may also include the user attributes (e.g., a user identifier) and, optionally, profile associations of the user. The VPN client receives the transmission from the controller, generates a secure connection request including the authentication token and user attributes and, optionally, the one or more profile associations of the user, and transmits the secure connection request to the gateway (block512). In response to receiving the secure connection request, the gateway queries the controller to validate the authentication token (block514). Following the successful validation of the authentication token, the gateway establishes the secure connection with the VPN client (block516). Additionally, the gateway queries the controller for an indication of resources the user is authorized to access based on the profile associations of the user (block518). The gateway receives and stores such an indication from the controller, and configures a firewall of the gateway based on the indication (block520). The user is then provided access to authorized resources behind the gateway via the VPN client (block522). Referring toFIGS.6A-6B, a flowchart of a detailed method of an authentication procedure performed by the components ofFIG.1is shown according to some embodiments. Each block illustrated inFIGS.6A-6Brepresents an operation performed in the method600of performing an authentication procedure including an establishment of a secure channel between a network device utilizing a virtual private network (VPN) client and a network gateway. Prior to the start of operations comprising the method600, it is assumed that a networking environment, such as the network environment100ofFIG.1, is operationally functioning, e.g., a controller and gateway (managed by the controller) have been deployed and are in communication with one or more of an identity provider and a network device having installed thereon a VPN client. The method600begins when a VPN client, such as the VPN client106ofFIG.1, is launched on a network device, such as the network device200(block602). The VPN client launches a web browser and transmits a resource request to the controller via the browser over a public network, e.g., the internet (block604). 
The controller determines that the user does not yet have a valid logon session established with the controller (block606). The controller then generates a SAML authentication request and transmits a redirect response to the identity provider via the browser, where the redirect response includes the SAML authentication request (block608). For example, an identifier of the user from the resource request is included in the SAML authentication request. Following validation of user credentials by the identity provider, the controller receives a SAML assertion generated by the identity provider within a response message that includes the user's profile associations and one or more user attributes to identify the user, via the browser (block610). In the event that the authentication procedure was successful, the controller generates and transmits a one-time authentication token based on the SAML assertion and an identifier of the user to the VPN client (block612). The token and the identifier are stored by the controller102, e.g., in the token database210, for subsequent validation, as will be discussed below. The VPN client receives the token and identifier of the user from the controller, generates a secure connection request including the authentication token and the identifier of the user and transmits the secure connection request to the gateway (block614). In response to receiving the secure connection request, the gateway queries the controller to validate the authentication token and to retrieve allowed/denied IP ranges based on the one or more profile associations of the user (block616). Following the successful validation of the authentication token by the controller102(through a comparison of the received token/identifier pairing with the stored token/identifier pairing referenced above at block612), the gateway establishes the secure connection with the VPN client (block618). Additionally, the gateway queries the controller for firewall rules based on the allowed/denied IP ranges (block620). In one example, the gateway may execute a script such as the learn_address script used with OPENVPN®, the execution of which is triggered upon a successful connection between the gateway and the VPN client. Execution of the learn-address script causes transmission of a request for rules to the controller. The controller generates the firewall rules based on the allowed/denied IP ranges and transmits the rules to the gateway (block622). In one embodiment, the controller generates the rules based on the information provided from execution of the learn-address script, the predefined user profiles (e.g., allowed or denied IP CIDR ranges) and/or any whitelisted/blacklisted IP ranges or domains. The gateway stores the rules and configures its firewall with the firewall rules (block624). The user is then provided access to authorized resources behind the gateway via the VPN client in accordance with the firewall rules (block626). In the foregoing description, the invention is described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims.
38,080
11863531
DETAILED DESCRIPTION For simplicity and illustrative purposes, the principles of the disclosed subject matter are described by referring mainly to examples thereof. In the following description, numerous specific details are set forth in order to provide a thorough understanding of such examples, or embodiments. It will be apparent to one of ordinary skill in the art that these embodiments may be practiced without these specific details. In some instances, well known methods and structures have not been described in detail so as not to unnecessarily obscure the embodiments. According to an embodiment, a separate virtual network may be provided to each entity or individual across a wired and/or wireless MDU network on an MDU property having a managed WiFi infrastructure which is otherwise shared by multiple unrelated entities or individuals across the property. For instance, according to this embodiment, a managed network provided in an otherwise large-scale shared infrastructure will appear to each individual user or tenant on the MDU property as a private or personal network such as provided in a routed single-family home, private premises, etc. According to some embodiments, the WiFi infrastructure may use a single Service Set Identifier (SSID) across the property, or in other embodiments may use multiple Service Set Identifiers (SSIDs). An SSID is the name assigned to the managed WiFi (wireless) network and provides an IP address for the network. All devices in the network must use this case-sensitive name, typically a text string up to 32 bytes long, to communicate over the WiFi infrastructure. Thus, the client devices of each tenant of the MDU property necessarily use the same one or more SSIDs. It is possible that the MDU property may use an additional SSID for guests of the premises, or that a resident on the property has their own private network via their own private infrastructure. According to some embodiments, each tenant on the MDU property may be provided with a “Personal Network” (PN) to which their wired and wireless devices, and only their wired and wireless devices, can intercommunicate throughout the MDU property independent of physical connection or network access point. Thus, for instance, a tenant may have multiple devices that connect to the MDU network and that are able to see and intercommunicate with each other. For instance, a laptop of the tenant connected to the Personal Network (PN) should be able to see his/her printer connected to the Personal Network (PN) and send a file to the printer over the MDU network for printing. However, according to this embodiment, the devices and Personal Network of the tenant are hidden/private relative to all other tenants on the property who may use the shared MDU network. In addition, the tenant can connect to his/her other devices and gain access to his/her Personal Network anywhere on the defined MDU property at any access or connection point or infrastructure. FIG.1illustrates a multiple dwelling unit (MDU) property10, such as an apartment building, and numerous access or connection points12located throughout the MDU property10and forming part of a managed WiFi infrastructure.FIG.2shows two electronic devices,14and16, of a “User A” and two electronic devices,18and20, of a “User B”. “User A” is assigned one of the Personal Networks (PN1) shown schematically inFIG.2and “User B” is assigned a different one of the Personal Networks (PN2) shown schematically inFIG.2. 
Thus, the devices14and16can see and communicate with each other on PN1and are isolated and hidden from the devices18and20. According to an embodiment, the Personal Networks are provided using the IEEE 802.1x dynamic VLAN (Virtual Local Area Network) assignment feature provided by equipment configuring the infrastructure, such as switching routers (ISRs), with MAC (Media Access Control) authorization bypass to dynamically create a Personal Network for a tenant that may be accessed across the MDU property via any access point. MAC Authentication Bypass (MAB) is an access control technique which uses the MAC address of a device to determine the extent of network access to provide to the device. Accordingly, a tenant registers his/her devices, which are assigned to a unique VLAN, thereby providing a Personal Network to the tenant and his/her devices. Thus, a known tenant or user connects to the SSID, and 802.1x MAB authentication permits boarding of the device of the tenant on the assigned VLAN. FIG.3shows a schematic view of system architecture22which provides different VLANs to different tenants and enables the tenants to access the Internet24or another network or source. In this example, Tenant1has two electronic devices,26(a wireless device) and28(a wired device), assigned a Personal Network (PN1), i.e., VLAN200on SSID 192.1680x/24, and Tenant2has two electronic devices,30(a wireless device) and32(a wired device), assigned a Personal Network (PN2), i.e., VLAN199on SSID 192.1680x/24. These devices may connect to a Personal Network on the MDU network via wireless or wired connections. For example, the devices may connect to a Personal Network via the access point34of the managed infrastructure and associated switch36, located on the multiple dwelling unit (MDU) property38. During MDU network creation for an MDU property, a captive portal and property ID are created by the AAA (Authentication, Authorization, and Accounting), and/or PCRF (Policy Control Management) unit40and a Captive Portal48for the MDU property38. In the Subscriber Session Controller (SSC)42and Wireless Access Gateway (WAG) infrastructure44including tunnel appliances54, a relationship is created that builds the property ID. As discussed below in greater detail, an MDU Manager46programmatically provisions the SSC42using a restful API (Application Program Interface) that uses HTTP requests to get, put, post or delete data. The MDU Manager46is also utilized to assign VLANs/Personal Networks to the tenants. The WiFi controller50communicates with the access points34via Control and Provisioning of Wireless Access Points (CAPWAP), and Remote Authentication Dial-In User Service (RADIUS) may be utilized by the AAA unit40. RADIUS is a networking protocol that provides centralized Authentication, Authorization, and Accounting management for users who connect and use a network service. A router52provides a connection from the MDU network to the Internet24or another network or source. During individual tenant account creation according to an embodiment, a VLAN is assigned to a tenant's account via the MDU Manager46. The tenant's account may be keyed to an email address, username, or the like. The tenant's devices are on-boarded to the tenant's account via a tenant portal for use by the tenant or the MDU Manager46. Thus, when a client device has been added to an existing tenant's account, the WiFi infrastructure will automatically provide access to the tenant's Personal Network and other devices registered in the tenant's account. 
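For the dynamic VLAN assignment step, the attribute set that a RADIUS server conventionally returns in an Access-Accept to place an 802.1x/MAB-authenticated client on a specific VLAN (per RFC 3580) is shown in the sketch below. How the AAA/PCRF unit actually emits these attributes is deployment-specific and not detailed in the text; the helper is illustrative only.

```python
def dynamic_vlan_attributes(tenant_vlan_id):
    """RADIUS tunnel attributes conventionally used for dynamic VLAN assignment.

    The access point or switch that performed MAB reads this triple from the
    Access-Accept and places the client on the tenant's Personal Network VLAN.
    """
    return {
        "Tunnel-Type": 13,                        # 13 = VLAN
        "Tunnel-Medium-Type": 6,                  # 6 = IEEE-802
        "Tunnel-Private-Group-Id": str(tenant_vlan_id),
    }

# Example: a tenant whose Personal Network is VLAN 200.
# dynamic_vlan_attributes(200)
# -> {"Tunnel-Type": 13, "Tunnel-Medium-Type": 6, "Tunnel-Private-Group-Id": "200"}
```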
In contrast, when an unknown client device attempts to connect to the MDU network via the infrastructure, the unknown device is assigned to a specific onboarding VLAN by the MDU Manager46. On the onboarding VLAN, the unknown device will be redirected to an appropriate tenant portal for account creation and/or device onboarding. After onboarding, the registered device is moved to the VLAN assigned to the tenant. According to an embodiment, dynamic VLAN assignment is accomplished via the MDU Manager46which provides the function of managing and reserving VLANs and assignments thereof on the MDU property. The AAA & PCRF unit40will request a VLAN for a tenant from the MDU Manager46, and the MDU Manager46will mark the VLAN assigned as used and associated with the tenant master account. The MDU Manager46will also free VLANs when a tenant account is deleted. By way of Example, a call flow diagram is shown inFIG.4relative to provisioning and VLAN management. The tenant account is set up in the MDU Manager46. A VLAN Pool Manager54is notified of a potential new customer in step56via a property setup API from the MDU Manager46, and the VLAN Pool Manager54allocates an available VLAN to the account (step58). The MDU Manager46issues a get command for the allocated VLAN (step60), and the VLAN Pool Manager54responds with the VLAN ID (step62). The AAA unit40acquires the VLAN ID from the VLAN Pool Manager54upon first connection of the tenant (steps64and66). Thereafter, the WAG44is provisioned. Account details, MAP allowed CPEs, MAP allowed VLANs, CPE creation (1per Tunnel device) and VLAN settings application (IP, max clients, etc.) communications are sent (see steps68,70,72,74and76) from the MDU Manager46to the WAG44. As a result, the client is provisioned on the network (step78). FIG.5shows an example of a call flow diagram relative to onboarding a known user via an access point34. The user or tenant connects a device80to the access point (AP)34via SSID association (step82). The AP34sends a MAB request to the AAA unit40(steps84,86,88and90). The client device80gains Internet Access100over the MDU network by sending a Dynamic Host Configuration Protocol (DHCP) discover communication (step92), receiving a DHCP offer (step94) from WAG44, requesting DHCP (step96) from WAG44, and receiving a DHCP acknowledgement (step98) from the WAG44. FIG.6shows an example of a call flow diagram relative to onboarding a known user (client device) wired to switch36via an ethernet connection. Here, the user connects a client device102to the switch36via an ethernet connection (step104), and the switch36sends a MAB request to the AAA unit40(step106). The AAA unit40returns an assigned tenant VLAN to the switch36(step108). The switch36then communicates a master accounting request to the AAA unit40(step110) which returns a master accounting response to the switch36(step112). Thereafter, the client device102gains Internet Access122over the network by sending a Dynamic Host Configuration Protocol (DHCP) discover communication (step114), receiving a DHCP offer (step116) from WAG44, requesting DHCP (step118) from WAG44, and receiving a DHCP acknowledgement (step120) from the WAG44. A user or tenant may register or onboard a device to their account by accessing a portal webpage or the like that automatically appears on the screen of a device when an unauthenticated device attempts to connect to the MDU network. 
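The VLAN management role described above for the MDU Manager46and the VLAN Pool Manager54can be pictured with a short sketch. The class name, the pool range, the onboarding VLAN number, and the method names below are assumptions made only for illustration; this is not an implementation of the actual MDU Manager or VLAN Pool Manager.

# Hypothetical sketch of a VLAN pool: a VLAN is reserved when a tenant account
# is created, looked up on first connection, and freed when the account is
# deleted. Unknown devices are placed on a dedicated onboarding VLAN.

ONBOARDING_VLAN = 100  # assumed VLAN reserved for unregistered devices

class VlanPoolManager:
    def __init__(self, first_vlan: int = 101, last_vlan: int = 4000):
        # Assumed range of VLAN IDs available on the MDU property.
        self._free = set(range(first_vlan, last_vlan + 1))
        self._by_account = {}  # tenant account id -> assigned VLAN

    def allocate(self, account_id: str) -> int:
        """Mark an available VLAN as used and associate it with the account."""
        if account_id not in self._by_account:
            vlan = min(self._free)          # any free VLAN will do
            self._free.remove(vlan)
            self._by_account[account_id] = vlan
        return self._by_account[account_id]

    def get(self, account_id: str) -> int:
        """Answer the 'get VLAN' query issued when the tenant first connects."""
        return self._by_account[account_id]

    def free(self, account_id: str) -> None:
        """Return the VLAN to the pool when the tenant account is deleted."""
        self._free.add(self._by_account.pop(account_id))

# Example: account setup reserves a VLAN; deleting the account releases it.
pool = VlanPoolManager()
vlan = pool.allocate("tenant-1")
assert pool.get("tenant-1") == vlan
pool.free("tenant-1")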
The user must already have an account or must create a new account to onboard a new device, and thereafter will be able to access the Internet or other devices connected to their Personal Network. The portal may request the user's email address or other username in combination with an associated password or the like. The portal is configured to collect and verify new user information and may be configured to send a welcome email or other communication to the tenant. The user may use the portal to add and delete client devices, modify account information, change a password, or the like. When adding a new device, a description of the device and a MAC address of the device are required. These may be entered manually or may be detected automatically via DHCP or the like. The user may also use the portal to track client device usage statistics or the like. A management portal may be provided to a property manager or owner. For instance, the management portal may be for use by an individual who is responsible for assisting tenants to access the MDU network (i.e., add users, delete users, reset user passwords, onboard or remove user devices). The management portal may also enable the manager to send email messages to one or all tenants. Session management may also be provided to enable a property manager to see all active and inactive sessions on the property and to remove any sessions. In addition, the management portal may be used to track, collect, and/or report network and/or infrastructure usage statistics. The foregoing embodiments describe systems and methods that allow an individual who uses an account within an MDU to connect and use multiple devices associated with that account, while simultaneously preventing users of other accounts within the MDU from viewing, accessing, or using any of those devices. However, it may be desirable in some embodiments to allow an account user to make their devices available to chosen ones of other accounts in the MDU, without making those devices available to all accounts in the MDU. Thus, in some embodiments it may be desirable for different account users in an MDU to share accounts. Such sharing may be accomplished in any of a variety of different ways. For example, some implementations may allow for preexisting MDU accounts of different persons to be merged into a single account. Alternatively, some implementations may allow for the creation of a separate joint account, in addition to separately-owned accounts, to which selected devices may be individually registered. Still other implementations may allow account users to identify devices that may be shared with other accounts along with an identification of other accounts that share the device. FIG.7, for example, shows an implementation130that allows the preexisting accounts of different users in an MDU to be merged into a single account via an invitation procedure effectuated by e-mail. At step132a user of a first account (user A) may receive a list of PN accounts in the MDU. The list may be provided via a user interface accessed through the MDU Manager46shown inFIG.3, for example. At step134user A may select chosen other users from that list, at which point the MDU Manager may at step136automatically send an invitation to the selected other users, each invitation including a link by which the invitation may be accepted. At step138, the invitee (e.g., user B) may activate the link, after which at step140user B's PN is merged into user A's PN.
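The invitation-and-merge flow of FIG.7 can be sketched as follows. The account records, the invitation token, and the function names are assumptions made for illustration only, and the sketch simplifies the merge to moving device registrations; the actual MDU Manager behavior, in which each user's account remains active and linked to the merged PN, may differ in detail.

# Hypothetical sketch of merging Personal Networks by invitation (FIG. 7):
# user A selects other accounts, invitations are sent, and each accepted
# invitation moves the invitee's devices into user A's PN.

personal_networks = {
    # PN owner -> set of registered device MAC addresses (illustrative data)
    "userA": {"aa:01", "aa:02"},
    "userB": {"bb:01"},
    "userC": {"cc:01"},
}

pending_invitations = {}  # invitation token -> (inviter, invitee)

def send_invitation(inviter: str, invitee: str, token: str) -> None:
    """Record an invitation; in practice an e-mail containing a link is sent."""
    pending_invitations[token] = (inviter, invitee)

def accept_invitation(token: str) -> None:
    """Invitee activates the link: the invitee's PN is merged into the inviter's."""
    inviter, invitee = pending_invitations.pop(token)
    personal_networks[inviter] |= personal_networks.pop(invitee)

# Example: user A invites users B and C; only user B accepts, so only user B's
# devices join user A's PN while user C's PN remains separate.
send_invitation("userA", "userB", token="t-1")
send_invitation("userA", "userC", token="t-2")
accept_invitation("t-1")
assert personal_networks["userA"] == {"aa:01", "aa:02", "bb:01"}
assert personal_networks["userC"] == {"cc:01"}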
FIGS.8A and8Billustrate an exemplary implementation of the method ofFIG.7. Referring toFIG.8A, an MDU150may include Personal Networks PN1to PN6, each having “n” associated devices, e.g., PN1includes devices11to1n, PN2includes devices21to2n, etc. In this example, the user of PN3sends invitations to the users of PN1, PN2, and PN4. Of those, the users of PN1and PN4accept.FIG.8Bshows the resulting configuration of PNs in the MDU, in which the devices of PN1and PN4are merged into PN3. The remaining networks PN2, PN5, and PN6remain separate from each other and from the merged PN3. In a preferred embodiment, when user PNs are merged, there will be an indication on each user's service portal of the accounts in the merged PN, and the accounts of each user remain active and linked to the merged PN. In this manner, each user may manage their own devices, add and/or remove devices from the shared PN, etc. Other embodiments, instead of merging the PNs of one or more users into that of another user, may create a new joint PN, in addition to those held separately by individuals. Thus, when a user sends an e-mail invitation, for example, and one or more responses are received, this embodiment may instead set up a new joint PN, after which each user may register devices to the new joint PN. A system for carrying out any of the above disclosed methods or arrangements may include software or the like provided on a circuit board or within another electronic device and can include various processors, microprocessors, modules, units, components, controllers, managers, chips, storage drives, and the like. It will be apparent to one of ordinary skill in the art that systems, modules, components, units, managers, processors, servers, and the like may be implemented as electronic components, software, hardware, or a combination of hardware and software for purposes of providing a system, and may be provided by a cloud-based system. Embodiments may also include at least one non-transitory computer readable storage medium having computer program instructions stored thereon that, when executed by at least one processor, can cause the at least one processor to perform any of the steps described above. While the principles of the invention have been described above in connection with specific devices, apparatus, systems, algorithms, and/or methods, it is to be clearly understood that this description is made only by way of example and not as limitation. One of ordinary skill in the art will appreciate that various modifications and changes can be made without departing from the scope of the claims below. The above description illustrates various embodiments, along with examples of how aspects of particular embodiments may be implemented, and is presented to illustrate the flexibility and advantages of particular embodiments as defined by the following claims; it should not be deemed to describe the only embodiments. One of ordinary skill in the art will appreciate that, based on the above disclosure and the following claims, other arrangements, embodiments, implementations, and equivalents may be employed without departing from the scope hereof as defined by the claims. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present invention.
The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all of the claims. The invention is defined solely by the appended claims.
17,678
11863532
DETAILED DESCRIPTION For the purposes of promoting an understanding of the principles of the present disclosure, reference will now be made to the aspects illustrated in the drawings, and specific language may be used to describe the same. It will nevertheless be understood that no limitation of the scope of the disclosure is intended. Any alterations and further modifications to the described devices, instruments, methods, and any further application of the principles of the present disclosure are fully contemplated as would normally occur to one skilled in the art to which the disclosure relates. In particular, it is fully contemplated that the features, components, and/or steps described with respect to one aspect may be combined with the features, components, and/or steps described with respect to other aspects of the present disclosure. For the sake of brevity, however, the numerous iterations of these combinations may not be described separately. For simplicity, in some instances the same reference numbers are used throughout the drawings to refer to the same or like parts. FIG.1is an illustration of an example100associated with enabling efficient communication in a hybrid network, according to various aspects of the present disclosure. Example100shows an architectural depiction of included components. In some aspects, the components may include one or more user devices102capable of communicating with a VPN service provider (VSP) control infrastructure104and with one or more VPN servers120for obtaining VPN services and/or mesh network services. The one or more user devices102may communicate with the VSP control infrastructure104and with the one or more VPN servers120over a network118. The VSP control infrastructure104may be controlled by a VPN service provider and may include an application programming interface (API)106, a user database108, a processing unit110, a server database112, and the one or more VPN servers120. As shown inFIG.1, the API106may be capable of communicating with the user database108and with the processing unit110. Additionally, the processing unit110may be capable of communicating with the server database, which may be capable of communicating with a testing module (not shown). The testing module may be capable of communicating with the one or more VPN servers120over the network118. The processing unit110may be capable of configuring and controlling operation of the one or more VPN servers120and of an authentication server (not shown). In some aspects, the one or more VPN servers120may be configured to communicate with the authentication server to authenticate a user device102prior to providing the VPN services and/or mesh network services. The user device102may be a physical computing device capable of hosting a client application and of connecting to the network118. The user device102may be, for example, a laptop, a mobile phone, a tablet computer, a desktop computer, a smart device, a router, or the like. In some aspects, the user device102may include, for example, Internet-of-Things (IoT) devices such as VSP smart home appliances, smart home security systems, autonomous vehicles, smart health monitors, smart factory equipment, wireless inventory trackers, biometric cyber security scanners, or the like. The network118may be any digital telecommunication network that permits several nodes to share and access resources. 
In some aspects, the network118may include one or more of, for example, a local-area network (LAN), a wide-area network (WAN), a campus-area network (CAN), a metropolitan-area network (MAN), a home-area network (HAN), Internet, Intranet, Extranet, and Internetwork. The VSP control infrastructure104may include a combination of hardware and software components that enable provision of the VPN services and/or mesh network services to the user device102. The VSP control infrastructure104may interface with (the client application114on) the user device102via the API106, which may include one or more endpoints to a defined message system. In some aspects, the API106may be configured to receive, via the network118, a connection request from the user device102to establish a VPN connection with a VPN server120. The connection request may include an authentication request to authenticate the user device102and/or a request for an entry IP address of an optimal VPN server for establishment of the VPN connection therewith. In some aspects, an optimal VPN server may be a single VPN server120or a combination of one or more VPN servers120. The API106may receive the authentication request and the request for the entry IP address of the optimal VPN server in a single connection request. In some aspects, the API106may receive the authentication request and the request for the entry IP address of the optimal VPN server in separate connection requests. The API106may further be configured to handle the connection request by mediating the authentication request. For instance, the API106may receive from the user device102credentials including, for example, a unique combination of a user ID and password (associated with a registered account) for purposes of authenticating the user device102. In another example, the credentials may include a unique validation code known to an authentic user. The API106may provide the received credentials to the user database108for verification. The user database108may include a structured repository of valid credentials associated with registered accounts. In one example, the structured repository may include one or more tables containing valid unique combinations of user IDs and passwords (e.g., password hashes) associated with registered accounts. In another example, the structured repository may include one or more tables containing valid unique validation codes associated with registered accounts. The VPN service provider may add, delete, and/or modify such valid unique combinations of user IDs and passwords from the structured repository. Based at least in part on receiving the credentials from the API106, the user database108and a processor (e.g., the processing unit110or another local or remote processor) may verify the received credentials by matching the received credentials with the valid credentials stored in the structured repository. In some aspects, the user database108and the processor may authenticate the user device102when the received credentials match at least one of the valid credentials. In this case, the VPN service provider may enable the user device102to obtain the VPN services and/or mesh network services. When the received credentials fail to match at least one of the valid credentials, the user database108and the processor may fail to authenticate the user device102. In this case, the VPN service provider may decline to provide the VPN services and/or mesh network services to the user device102. 
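The credential check performed against the user database108can be illustrated with a short sketch. The use of salted PBKDF2 password hashing, the example account, and the function names are assumptions chosen for illustration; the disclosure does not prescribe a particular hashing scheme or repository layout.

# Hypothetical sketch: verify a received user ID / password pair against the
# structured repository of valid credentials, authenticating only on a match.

import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # Assumed scheme: PBKDF2-HMAC-SHA256 with a per-account random salt.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

_salt = os.urandom(16)
VALID_CREDENTIALS = {
    # Illustrative registered account: user ID -> (salt, password hash).
    "alice@example.com": (_salt, hash_password("correct horse battery", _salt)),
}

def authenticate(user_id: str, password: str) -> bool:
    """Return True only when the received credentials match a stored entry."""
    entry = VALID_CREDENTIALS.get(user_id)
    if entry is None:
        return False
    salt, stored_hash = entry
    # Constant-time comparison of the candidate hash against the stored hash.
    return hmac.compare_digest(hash_password(password, salt), stored_hash)

# Example: a matching pair is authenticated; anything else is declined.
assert authenticate("alice@example.com", "correct horse battery")
assert not authenticate("alice@example.com", "wrong password")
assert not authenticate("unknown@example.com", "correct horse battery")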
When the user device102is authenticated, the user device102may initiate a VPN connection and may transmit to the API106a request for the entry IP address of an optimal VPN server. The processing unit110included in the VSP control infrastructure may be configured to determine/identify a single VPN server120as the optimal server or a list of VPN servers. The processing unit110may utilize the API106to transmit the IP address of the optimal server or IP addresses of the VPN servers120included in the list to the user device102. In the case where the list of IP addresses of the VPN servers120is provided, the user device102may have an option to select a single VPN server120from among the listed VPN servers as the optimal server120. In some aspects, the processing unit110may be a logical unit including a scoring engine. The processing unit110may include a logical component configured to perform complex operations to compute numerical weights related to various factors associated with the VPN servers120. The scoring engine may likewise include a logical component configured to perform arithmetical and logical operations to compute a server penalty score for one or more of the VPN servers120. In some aspects, based at least in part on server penalty scores calculated utilizing the complex operations and/or the arithmetical and logical operations, the processing unit110may determine an optimal VPN server. In one example, the processing unit110may determine the VPN server120with the lowest server penalty score as the optimal VPN server. In another example, the processing unit110may determine the list of optimal VPN servers by including, for example, three (or any other number) VPN servers120with the three lowest server penalty scores. The user device102may transmit to the optimal VPN server an initiation request to establish a VPN connection (e.g., an encrypted tunnel) with the optimal VPN server. The optimal VPN server with which the user device establishes the encrypted tunnel may be referred to as a primary VPN server or an entry VPN server. Based at least in part on receiving the initiation request, the optimal VPN server may conduct a VPN authentication with the authentication server to authenticate the user device102as a device that may receive the VPN services from the optimal VPN server. When the VPN authentication is successful, the optimal VPN server may proceed to provide the VPN services and/or mesh network services to the user device102. Alternatively, when the VPN authentication fails, the optimal VPN server may refrain from providing the VPN services and/or mesh network services to the user device102and/or may communicate with the user device102to obtain additional information to authenticate the user device102. In some aspects, a VPN server120may include a piece of physical or virtual computer hardware and/or software capable of securely communicating with (the VPN client application on) the user device102for provision of VPN services. Similarly, the authentication server may include a piece of physical or virtual computer hardware and/or software capable of securely communicating with one or more VPN servers120for provision of authentication services. With respect to mesh network services, the processing unit110included in the VSP control infrastructure104may be configured to determine a mesh network associated with the user device102and/or to identify one or more user devices to be included within the determined mesh network. 
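The server-selection step performed by the processing unit110and its scoring engine might look like the following sketch. The particular factors (load and round-trip time), their weights, and the server names are invented for illustration; the passage above only states that numerical weights are computed for various factors and that the VPN server (or servers) with the lowest penalty score is treated as optimal.

# Hypothetical sketch: compute a server penalty score as a weighted sum of
# factors and pick the VPN server(s) with the lowest score as "optimal".

# Illustrative per-server factors: current load (0..1) and round-trip time (ms).
VPN_SERVERS = {
    "vpn-1.example.net": {"load": 0.80, "rtt_ms": 35.0},
    "vpn-2.example.net": {"load": 0.25, "rtt_ms": 60.0},
    "vpn-3.example.net": {"load": 0.40, "rtt_ms": 20.0},
}

# Assumed numerical weights for the factors.
WEIGHTS = {"load": 100.0, "rtt_ms": 1.0}

def penalty(metrics: dict) -> float:
    """Weighted sum of the factors; a lower score is better."""
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

def optimal_servers(servers: dict, count: int = 1) -> list:
    """Return the `count` servers with the lowest penalty scores."""
    return sorted(servers, key=lambda name: penalty(servers[name]))[:count]

# Example: a single optimal server, or a short list from which the user device
# may select one, as described above.
print(optimal_servers(VPN_SERVERS))             # best single server
print(optimal_servers(VPN_SERVERS, count=3))    # three lowest-penalty servers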
The processing unit110may utilize the API106to transmit information associated with the mesh network and/or the identified one or more user devices to the user device102. The user device102may transmit an initiation request to establish meshnet connections (e.g., encrypted medium) with the one or more user devices. In some aspects, the one or more user devices with which the user device102establishes the meshnet connections may also host respective client applications for communicating with the VSP control infrastructure104and/or with the user device102. One or more components (e.g., API106, user database108, processing unit110, and/or server database112, processing unit116) included in the VSP control infrastructure104and/or included in the user device102may further be associated with a controller/processor, a memory, a communication interface, or a combination thereof (e.g.,FIG.9). For instance, the one or more components of the set of components may include or may be included in a controller/processor, a memory, or a combination thereof. In some aspects, the one or more of the components included in the VSP control infrastructure104and/or the user device102may be separate and distinct from each other. Alternatively, in some aspects, one or more of the components included in the VSP control infrastructure104and/or the user device102may be combined with one or more of other components included in the VSP control infrastructure104. In some aspects, the one or more of the components included in the VSP control infrastructure104and/or the user device102may be local with respect to each other. Alternatively, in some aspects, one or more of the components included in the VSP control infrastructure104and/or the user device102may be located remotely with respect to one or more of other components included in the VSP control infrastructure104and/or the user device102. Additionally, or alternatively, one or more components of the components included in the VSP control infrastructure104and/or the user device102may be implemented at least in part as software stored in a memory. For example, a component (or a portion of a component) may be implemented as instructions or code stored in a non-transitory computer-readable medium and executable by a controller or a processor to perform the functions or operations of the component. Additionally, or alternatively, a set of (one or more) components shown inFIG.1may be configured to perform one or more functions described as being performed by another set of components shown inFIG.1. As indicated above,FIG.1is provided as an example. Other examples may differ from what is described with regard toFIG.1. One or more user devices may rely on a mesh network to communicate (e.g., transmit and/or receive) data. In example200shown inFIG.2, a first user device, a second user device, a third user device, and a fourth user device may rely on a mesh network to communicate data with each other. The data may be communicated using wired communications and/or wireless communications over a network such as, for example, the Internet. The communicated data may include any information including digital information such as, for example, files, documents, text data, voice data, image data, signal data, and/or video data. Further, the mesh network may be a secure mesh network that may enable the user devices to communicate the data in encrypted form. To communicate the data, the one or more user devices may utilize respective mesh client applications. 
Although the mesh network may enable the communicated data to be encrypted, information communicated outside the mesh network may take place over the open Internet (e.g., clearnet) in unencrypted form. In some cases, the information may include private information (e.g., locations of the one or more user devices, private/sensitive information associated with users of the one or more user devices, or the like) associated with the user devices. In this case, the information communicated outside the mesh network may be monitored and/or intercepted by a malicious third party. Such monitoring and/or interception may allow the malicious third party to discover, track, and manipulate the private information. As a result, the private information may become compromised, and the one or more user devices may be unable to, among other things, privately send and receive data across public networks. To mitigate instances of the private information becoming compromised, the one or more user devices may utilize respective VPN client applications to establish a VPN connection and to privately send and receive data across the clearnet. In the example200shown inFIG.2, the first user device may utilize a first VPN client application to establish an encrypted tunnel (e.g., a VPN connection) with a VPN server, as discussed elsewhere herein, to privately send and receive data across the clearnet. Once the first user device has established the encrypted tunnel with the VPN server, all communications transmitted by the first user device may be intercepted by the VPN client application and may be transmitted via the encrypted tunnel. Such communications may include data transmitted to the one or more user devices in the mesh network. In an example, data transmitted by the first user device to the second user device (and/or to the third user device and/or to the fourth user device) utilizing the meshnet client application may be intercepted by the VPN client application and may be transmitted via the encrypted tunnel to the VPN server, which may then relay the data to the second user device (and/or to the third user device and/or to the fourth user device). The relay of data, transmitted by the first device to the second user device, via the VPN server may result in inefficient utilization of VPN resources (e.g., processing resources, management resources, memory resources, network bandwidth, power consumption, etc.), which may otherwise be utilized to perform suitable tasks associated with providing VPN services. Additionally, a plurality of hops may be unnecessarily added between the first user device and the second user device, thereby increasing consumption of network resources (e.g., internet nodes, etc.) and introducing a delay in the data being received by the second user device. The relay of data may also result in underutilization of existing mesh network resources (e.g., connection between the first user device and the second user device) that are dedicated for communication of data between the first user device and the second user device. Further, a user device may have to utilize a mesh client application to securely communicate data in the mesh network and utilize a separate, VPN client application to privately communicate data via the VPN network. Various aspects of systems and techniques discussed in the present disclosure may enable efficient communication in a hybrid network, which may include a VPN network and a secure mesh network enabled by a VSP control infrastructure. 
In some aspects, the VSP control infrastructure may configure and provide a single client application to be installed on a user device. The single client application may enable the user device to securely and efficiently communicate data via the VPN network and via the secure mesh network. In an example, the single client application may enable the user device to establish an encrypted tunnel (e.g., VPN connection) with a VPN server to communicate encrypted data over the clearnet and to establish one or more meshnet connections with one or more endpoints included in the secure mesh network to communicate encrypted data within the secure mesh network. In some aspects, the client application may determine, based at least in part on determining a destination of a transmission packet (e.g., packet transmitted by the user device), whether the transmission packet is to be transmitted via the encrypted tunnel or via the one or more meshnet connections. When the client application determines that the destination of the transmission packet is the one or more endpoints in the mesh network, the client application may transmit the transmission packet via the one or more meshnet connections to the one or more endpoints. Alternatively, when the client application determines that the destination of the transmission packet is a device other than the one or more endpoints, the client application may transmit the transmission packet via the encrypted tunnel to the VPN server. In this way, the VSP control infrastructure and the client application may avoid communications transmitted by the user device to the one or more endpoints being transmitted via the encrypted tunnel and the VPN server. As a result, the VSP control infrastructure and the client application may enable efficient utilization of VPN resources (e.g., processing resources, management resources, memory resources, network bandwidth, power consumption, etc.) to perform suitable tasks associated with providing VPN services and/or mesh network services. Additionally, the VSP control infrastructure and the client application may avoid unnecessary addition of a plurality of hops between the user device and the one or more endpoints, thereby mitigating increase in consumption of network resources and avoiding introducing a delay in the data being received by the one or more endpoints. The VSP control infrastructure and the client application may also enable optimum utilization of existing mesh network resources (e.g., connection between the first user device and the second user device) that are dedicated for communication of data between the user device and the one or more endpoints. Further, a single client application provided by the VSP control infrastructure may enable the user device to securely communicate data in the mesh network and to privately communicate data via the VPN network, thereby enabling conservation and efficient utilization of user device resources (e.g., processing resources, memory resources, power consumption, battery life, etc.). 
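The core routing rule just described, namely that a transmission packet whose destination is a meshnet endpoint bypasses the VPN server while all other traffic enters the encrypted tunnel, reduces to a small decision function. The meshnet addresses and names below are illustrative assumptions only.

# Hypothetical sketch of the client application's routing decision: meshnet
# peers are reached over the meshnet connection; everything else is sent
# through the encrypted tunnel to the VPN server.

MESHNET_PEERS = {"100.64.0.2", "100.64.0.3", "100.64.0.4"}  # assumed peer IPs

def route_for(destination_ip: str) -> str:
    """Return which connection a transmission packet should use."""
    if destination_ip in MESHNET_PEERS:
        return "meshnet"     # direct, encrypted meshnet connection to the peer
    return "vpn-tunnel"      # encrypted tunnel to the VPN server, then clearnet

# Example: a packet addressed to a meshnet peer never traverses the VPN server.
assert route_for("100.64.0.2") == "meshnet"
assert route_for("93.184.216.34") == "vpn-tunnel"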
In some aspects, a processor (e.g., processing unit116) associated with a user device may establish a virtual private network (VPN) connection with a VPN server and a meshnet connection with an endpoint in a mesh network, and may be configured to: determine that a destination associated with a transmission packet to be transmitted by the device is the endpoint; and transmit the transmission packet utilizing the meshnet connection based at least in part on determining that the destination is the endpoint. FIG.3is an illustration of an example flow300associated with enabling a secure mesh network, according to various aspects of the present disclosure. The example flow300may include a first user device (e.g., first endpoint), a VSP control infrastructure104, and a second user device (e.g., second endpoint) in communication with each other. In some aspects, the first user device and the second user device may be similar to a user device102discussed above with respect toFIG.1. The first user device and the second user device may be located locally (e.g., in the same room, in the same building, etc.) or may be located remotely (e.g., in different buildings, in different cities, in different states, in different countries, etc.) with respect to each other. In some aspects, the first user device may install a first client application (e.g., client application114) and the second user device may install a second client application (e.g., client application114), the first client application and the second client application being associated with the VSP control infrastructure104. The first user device and the second user device may use the respective client applications to communicate with an application programming interface (API) and a processor (e.g., processing unit110, processor920) associated with the VSP control infrastructure104. In some aspects, the first user device, the VSP control infrastructure104, and the second user device may communicate with each other over a network (e.g., network118). As discussed elsewhere herein, the VSP control infrastructure may enable the first user device and/or the second user device to obtain VPN services and/or mesh network services. Although only two user devices (e.g., endpoints) are discussed with respect toFIG.3, the present disclosure contemplates the VSP control infrastructure104to provide the VPN services and/or mesh network services to any number of user devices. In some aspects, the client applications may enable the user devices to receive information to be processed by the client applications and/or by the VSP control infrastructure104. Each of the client applications may include respective graphical user interfaces to receive the information via local input interfaces (e.g., touch screen, keyboard, mouse, pointer, etc.) associated with the user devices. The information may be received via text input or via a selection from among a plurality of options (e.g., pull down menu, etc.). In some aspects, the first client application and/or the second client application may activate and/or enable, at a time associated with the registration (e.g., after the registration), the graphical interface for receiving the information. For instance, the first client application (or the second client application) may cause a screen (e.g., local screen) associated with the first user device (or the second user device) to display, for example, a pop-up message to request entry of the information.
Further, the client applications may enable transmission of at least a portion of the information to the VSP control infrastructure104. In some aspects, the first client application may utilize a first processing unit (e.g., processing unit116) included in the first user device to perform processes/operations associated with obtaining the VPN services and/or mesh network services and the second client application may utilize a second processing unit (e.g., processing unit116) included in the second user device to perform processes/operations associated with obtaining the VPN services and/or mesh network services. As shown by reference numeral310, the first user device may register a first account with the VSP control infrastructure104and the second user device may register a second account with the VSP control infrastructure104. In some aspects, during registration, the first user device may provide registration information such as, for example, identity of an owner of the first user device, a phone number associated with the first user device, an email address associated with the first user device, or the like. In some aspects, the first user device may set up an access system including login information (e.g., access information) such as, for example, username, password, or the like to subsequently gain access to the first account. Similarly, during registration, the second user device may provide registration information such as, for example, identity of an owner of the second user device, a phone number associated with the second user device, an email address associated with the second user device, or the like. In some aspects, the second user device may set up an access system including login information (e.g., access information) such as, for example, username, password, or the like to subsequently gain access to the second account. In some aspects, the first user device and the second user device may be associated with a single registered account and may utilize the associated access system including login information to access the single registered account. As shown by reference numeral320, the first client application and the second client application may determine information based at least in part on the registration of the respective accounts with the VSP control infrastructure104. In an example, the first client application may determine a first asymmetric assigned key pair associated with the first user device. The first assigned key pair may be unique to the first user device and may include a first assigned public key and a first assigned private key. In this way, the first assigned public key and the first assigned private key may be device-specific and may be associated with the first account. In some aspects, the first assigned public key and the first assigned private key may be associated with each other via, for example, a mathematical function. As a result, data encrypted using the first assigned public key may be decrypted by utilizing the first assigned private key. Similarly, the second client application may determine a second asymmetric assigned key pair associated with the second user device. The second assigned key pair may be unique to the second user device and may include a second assigned public key and a second assigned private key. In this way, the second assigned public key and the second assigned private key may be device-specific and may be associated with the second account.
In some aspects, the second assigned public key and the second assigned private key may be associated with each other via, for example, a mathematical function. As a result, data encrypted using the second assigned public key may be decrypted by utilizing the second assigned private key. The first user device and the second user device may use the respective login information to access the respective accounts and to communicate with the VSP control infrastructure104. As shown by reference numeral330, the client applications may transmit, and the VSP control infrastructure104may receive, at least a portion of the information determined by the client applications. For instance, the first client application may transmit, for example, the first assigned public key to the VSP control infrastructure104. Additionally, the first client application may determine a first public IP address associated with the first user device and may transmit the first public IP address to the VSP control infrastructure104. In some aspects, the first public IP address may include an IP address assigned by an Internet service provider (ISP) associated with providing network services to the first user device and the second public IP address may include an IP address assigned by an ISP associated with providing network services to the second user device. In some aspects, the first user device may utilize the first public IP address to communicate over the Internet (e.g., clearnet). Similarly, the second client application may transmit, for example, the second assigned public key to the VSP control infrastructure104. Additionally, the second client application may determine a second public IP address associated with the second user device and may transmit the second public IP address to the VSP control infrastructure104. In some aspects, the second user device may similarly utilize the second public IP address to communicate over the Internet. In some aspects, the VSP control infrastructure104may determine the first public IP address associated with the first user device and the second public IP address associated with the second user device. In an example, the VSP control infrastructure104may determine the first public IP address based at least in part on inspecting a first communication (e.g., IP packet) including the first assigned public key received from the first user device. In some aspects, the first communication may include, for example, a header that indicates the first public IP address as a source IP address associated with the first user device. Similarly, the VSP control infrastructure104may determine the second public IP address based at least in part on inspecting a second communication (e.g., IP packet) including the second assigned public key received from the second user device. In some aspects, the second communication may include, for example, a header that indicates the second public IP address as a source IP address associated with the second user device. The VSP control infrastructure104may store and correlate the received information in association with the respective registered accounts and/or with the respective user devices.
For instance, the VSP control infrastructure104may store and correlate the first assigned public key and the first public IP address in association with the first account and/or the first user device, and may store and correlate the second assigned public key and the second public IP address in association with the second account and/or the second user device. In some aspects, as discussed elsewhere herein, the first user device may transmit a request to receive the VPN services and/or the mesh network services from the VSP control infrastructure104. Based at least in part on receiving the request, as shown by reference numeral340, the VSP control infrastructure104may enable the first user device to establish a connection with a VPN server associated with the VSP control infrastructure104(e.g.,FIG.1). Further, as shown by reference numeral350, the VSP control infrastructure104may determine that the first user device and the second user device are to be included in a given (e.g., same) secure mesh network. In some aspects, the VSP control infrastructure104may make such a determination regarding the secure mesh network based at least in part on the first client application (or the second client application) transmitting information indicating that the first user device and the second user device are to be included in the same secure mesh network. Such information may include, for example, identification information (e.g., type of device, etc.) associated with the second user device and/or the second account (or the first user device and/or the first account), the second public IP address (or the first public IP address), information associated with the ISP associated with providing network services to the second user device (or to the first user device), or the like. In some aspects, the VSP control infrastructure104may make such a determination regarding the secure mesh network based at least in part on determining that the first user device and the second user device are communicating with the VSP control infrastructure utilizing the same registered account. In an example, the first user device (or the second user device) may share login information associated with the first account (or the second account) with the second user device (or the first user device) to enable the second user device (or the first user device) to utilize the login information to gain access to the VSP control infrastructure104via the first account (or the second account). In some aspects, the second user device may be associated with the first user device because the second user device may be available to a user/owner of the first user device. Based at least in part on determining that the first user device and the second user device are to be included in the same secure mesh network, the VSP control infrastructure104may determine a first mesh net IP address associated with the first user device and a second mesh net IP address associated with the second user device. In some aspects, the first client application may utilize the first meshnet IP address to communicate data with one or more endpoints included in the secure mesh network and the second client application may utilize the second meshnet IP address to communicate with the one or more endpoints included in the secure mesh network. The VSP control infrastructure104may determine the first meshnet IP address and the second meshnet IP address from, for example, IP addresses included in a subnet associated with an internal network of the ISP. 
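Assigning meshnet IP addresses, whether drawn from a subnet as just described or from a pool of reserved addresses as described next, can be sketched with Python's standard ipaddress module. The subnet chosen below and the allocation policy are assumptions made for illustration.

# Hypothetical sketch: hand out meshnet IP addresses to user devices placed in
# the same secure mesh network, drawing from a reserved subnet.

import ipaddress

class MeshnetAddressPool:
    def __init__(self, subnet: str = "100.64.0.0/24"):
        # Assumed subnet reserved for meshnet addressing only.
        self._available = list(ipaddress.ip_network(subnet).hosts())
        self._assigned = {}  # device identifier -> meshnet IP address

    def assign(self, device_id: str) -> str:
        """Give a device the meshnet IP it will use to reach other endpoints."""
        if device_id not in self._assigned:
            self._assigned[device_id] = str(self._available.pop(0))
        return self._assigned[device_id]

# Example: the first and second user devices receive distinct meshnet IPs that
# their client applications then use inside the secure mesh network.
pool = MeshnetAddressPool()
first_meshnet_ip = pool.assign("first-user-device")
second_meshnet_ip = pool.assign("second-user-device")
assert first_meshnet_ip != second_meshnet_ip
assert pool.assign("first-user-device") == first_meshnet_ip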
In some aspects, the VSP control infrastructure104may determine the first meshnet IP address and the second meshnet IP address from a pool of available reserved IP addresses. Based at least in part on determining that the first user device and the second user device are to be included in the same secure mesh network and/or on the determining the first meshnet IP address and the second meshnet IP address, as shown by reference numeral360, the VSP control infrastructure104may transmit, and the first user device may receive, the second assigned public key, the second public IP address, and the second meshnet IP address associated with the second user device. Similarly, based at least in part on determining that the first user device and the second user device are to be included in the same secure mesh network and/or on the determining the first meshnet IP address and the second meshnet IP address, as shown by reference numeral360, the VSP control infrastructure104may transmit, and the second user device may receive, the first assigned public key, the first public IP address, and the first meshnet IP address associated with the first user device. As discussed below in further detail, the above transmission of assigned public keys, public IP addresses, and meshnet IP addresses may enable the first user device and/or the second user device to communicate securely and privately within the secure mesh network. As shown by reference numeral370, the first user device and the second user device may communicate with each other to set up a meshnet connection (e.g., an encrypted medium) for communicating encrypted data in the secure mesh network. To set up the meshnet connection, the first client application may utilize the second assigned public key and/or the second public IP address to securely (e.g., in encrypted form) communicate with the second user device, and the second client application may utilize the first assigned public key and/or the first public IP address to securely communicate with the first user device. In some aspects, the first user device and the second user device may securely/privately negotiate parameters (e.g., a symmetric encryption/decryption key) associated with the meshnet connection. In some aspects, the parameters may be randomly generated to provide optimized security to the communications. In an example, the first user device and the second user device may privately negotiate a randomly generated symmetric key that is to be utilized by the first user device and the second user device for encrypting and decrypting the data communicated via the meshnet connection. In some aspects, the symmetric key may be determined based at least in part on the first assigned public key associated with the first user device, the second assigned public key associated with the second user device, and/or a random number. Additionally, the first user device and the second user device may utilize a secure protocol (e.g., Wireguard, IP sec, etc.) to communicate the data via the meshnet connection. Further, the first user device and the second user device may start communicating encrypted data via the meshnet connection based at least in part on utilizing the negotiated parameters and the secure protocol. In some aspects, the first user device and the second user device may establish meshnet connections with all other endpoints (e.g., the third user device and/or the fourth user device shown inFIG.2) included in the secure mesh network in a similar and/or analogous manner. 
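The parameter negotiation described above, in which both devices end up holding the same randomly influenced symmetric key derived from their assigned key pairs, could be realized with an X25519 key agreement followed by a key-derivation step, as in the sketch below. This is only one plausible realization (WireGuard, mentioned above as an example protocol, performs a more elaborate handshake); the sketch relies on the third-party Python 'cryptography' package, and the salt and info values are illustrative assumptions.

# Hypothetical sketch: each device holds an assigned X25519 key pair; both
# devices derive the same symmetric key from their own private key, the peer's
# assigned public key, and a shared random value exchanged during negotiation.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_meshnet_key(own_private, peer_public, random_salt: bytes) -> bytes:
    """Derive a 32-byte symmetric key for the meshnet connection."""
    shared_secret = own_private.exchange(peer_public)      # ECDH key agreement
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=random_salt,                    # negotiated random value
        info=b"meshnet connection key",      # assumed context label
    ).derive(shared_secret)

# Assigned key pairs of the first and second user devices (illustrative).
first_private = X25519PrivateKey.generate()
second_private = X25519PrivateKey.generate()
random_salt = os.urandom(16)   # random number negotiated by the two devices

key_at_first = derive_meshnet_key(first_private, second_private.public_key(), random_salt)
key_at_second = derive_meshnet_key(second_private, first_private.public_key(), random_salt)
assert key_at_first == key_at_second   # both endpoints hold the same symmetric key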
Also, the other endpoints (e.g., the third user device and/or the fourth user device) may establish meshnet connections with one another in a similar and/or analogous manner. In some aspects, the first user device and the second user device may push (e.g., transmit) data to each other. For instance, when the first user device has data available for transmission to the second user device, the first user device may push a notification to the second user device indicating that the first user device wishes to transmit data to the second user device. In some aspects, the push notification may identify the data to be transmitted. Further, based at least in part on transmitting the push notification, the first user device may transmit the data to the second user device via the meshnet connection. In some aspects, prior to transmitting the data, the first user device may wait to receive a confirmation message from the second user device indicating that the second user device is ready to receive the data. In some aspects, the first user device may attempt to transmit the data even when the second user device is not included (e.g., temporarily disconnected) in the secure mesh network. In this case, the first client application may suspend transmission of the data and may automatically resume transmission of the data based at least in part on determining that the second user device is included (e.g., reconnected) in the secure mesh network. The second user device may push data to the first user device in a similar and/or analogous manner. In some aspects, the first user device and the second user device may pull (e.g., request) data from each other. For instance, when the first user device wishes to receive data from the second user device, the first user device may transmit a request to the second user device indicating that the first user device wishes to receive data from the second user device. In some aspects, the request may identify the data to be received. Further, based at least in part on receiving the request, the second user device may transmit the data to the first user device via the meshnet connection. In some aspects, the first user device may attempt to transmit the request even when the second user device is not included (e.g., temporarily disconnected) in the secure mesh network. In this case, the first client application may suspend transmission of the request and may automatically resume transmission of the request based at least in part on determining that the second user device is included (e.g., reconnected) in the secure mesh network. The second user device may pull data from the first user device in a similar and/or analogous manner. As shown by reference numeral380, the first client application may monitor and route incoming traffic (e.g., received communication) and outgoing traffic (e.g., transmission communication) associated with the first user device. With respect to outgoing traffic, the first client application may intercept and route the outgoing traffic based at least in part on determining a destination associated with the outgoing traffic.
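The push behavior described above, including suspending a transfer while the peer is temporarily out of the secure mesh network and resuming it automatically on reconnection, can be sketched as a small queue. The class and method names, and the print placeholder standing in for the encrypted meshnet transmission, are illustrative assumptions only.

# Hypothetical sketch: push data to a meshnet peer, suspend while the peer is
# disconnected from the mesh network, and resume automatically when it returns.

from collections import deque

class MeshnetPeerSender:
    def __init__(self):
        self.peer_connected = False
        self._pending = deque()   # data whose transmission is suspended

    def push(self, data: bytes) -> None:
        """Attempt to transmit data; queue it if the peer is disconnected."""
        if self.peer_connected:
            self._transmit(data)
        else:
            self._pending.append(data)   # suspend transmission for now

    def on_peer_reconnected(self) -> None:
        """Automatically resume transmission of any suspended data."""
        self.peer_connected = True
        while self._pending:
            self._transmit(self._pending.popleft())

    def _transmit(self, data: bytes) -> None:
        # Placeholder for sending over the encrypted meshnet connection.
        print(f"sent {len(data)} bytes over the meshnet connection")

# Example: data pushed while the peer is away is delivered after it rejoins.
sender = MeshnetPeerSender()
sender.push(b"file-chunk-1")
sender.on_peer_reconnected()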
In some aspects, although the first user device is associated with the VPN connection and with the meshnet connections, instead of routing all outgoing traffic via the VPN connection, the first client application may determine the destination associated with the outgoing traffic and may route the outgoing traffic based at least in part on the determined destination. The first client application may determine the destination based at least in part on information indicated in the outgoing traffic. In some aspects, the outgoing traffic may include a transmission communication (e.g., transmission packet to be transmitted). In an example, the first client application may analyze metadata associated with the transmission communication to determine the destination of the transmission communication. In the case that the transmission communication includes an IP packet, the first client application may analyze header information (e.g., metadata) associated with the IP packet and may determine the destination based at least in part on analyzing a destination IP address included in the header information. When the first client application determines that the destination of the IP packet is an endpoint within the mesh network, the first client application may route the IP packet to be transmitted to the endpoint via the meshnet connection established between the first user device and the endpoint. In some aspects, the first client application may determine that the destination is an endpoint within the mesh network based at least in part on analyzing and determining the destination IP address (e.g., destination information) to include a meshnet IP address associated with the endpoint. In an example, the first client application may determine that the destination is the second user device based at least in part on analyzing and determining the destination IP address to include the second meshnet IP address associated with the second user device. In another example, the first client application may determine that the destination is the second user device based at least in part on comparing the destination IP address with one or more known meshnet IP addresses, and determining that the destination IP address matches the second meshnet IP address. In such cases, the first client application may route the IP packet to be transmitted to the second user device via the meshnet connection established between the first user device and the second user device. In some aspects, the first client application may transmit the IP packet to the second user device based at least in part on utilizing the negotiated and exchanged parameters and the secure protocol. Alternatively, when the first client application determines that the destination of the IP packet is a device other than an endpoint within the mesh network, the first client application may route the IP packet to be transmitted to the device via the VPN connection established between the first user device and the VPN server. In some aspects, the first client application may determine that the destination is a device other than the endpoint within the mesh network based at least in part on analyzing and determining the destination IP address (e.g., destination information) to not include the meshnet IP address associated with an endpoint. 
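The packet classification just described, reading the destination IP address out of the transmission packet's header information and matching it against known meshnet IP addresses, is sketched below for a plain IPv4 packet; the examples that follow in the text illustrate the same two outcomes. The byte-level parsing, the example addresses, and the assumption that the client application sees raw IPv4 packets (for instance from a TUN interface) are illustrative only.

# Hypothetical sketch: read the destination address from an IPv4 header and
# decide whether the packet goes to a meshnet peer or into the VPN tunnel.

import ipaddress
import struct

KNOWN_MESHNET_IPS = {"100.64.0.2", "100.64.0.3", "100.64.0.4"}  # assumed peers

def destination_of(ipv4_packet: bytes) -> str:
    """Destination IP address taken from bytes 16..19 of the IPv4 header."""
    (dst,) = struct.unpack_from("!4s", ipv4_packet, 16)
    return str(ipaddress.IPv4Address(dst))

def classify(ipv4_packet: bytes) -> str:
    """'meshnet' when the destination matches a known meshnet IP, else 'vpn'."""
    return "meshnet" if destination_of(ipv4_packet) in KNOWN_MESHNET_IPS else "vpn"

# Example: a minimal 20-byte IPv4 header addressed to a meshnet peer.
header = struct.pack(
    "!BBHHHBBH4s4s",
    0x45, 0, 20, 0, 0, 64, 6, 0,                    # version/IHL .. checksum (0)
    ipaddress.IPv4Address("100.64.0.1").packed,     # source: this device
    ipaddress.IPv4Address("100.64.0.2").packed,     # destination: meshnet peer
)
assert classify(header) == "meshnet"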
In an example, the first client application may determine that the destination is a device other than the second user device (or the third user device or the fourth user device) based at least in part on analyzing and determining the destination IP address to not include the second meshnet IP address associated with the second user device (or a third meshnet IP address associated with the third user device or a fourth meshnet IP address associated with the fourth user device). In another example, the first client application may determine that the destination is a device other than the second user device (or the third user device or the fourth user device) based at least in part on comparing the destination IP address with one or more known meshnet IP addresses, and determining that the destination IP address fails to match any known meshnet IP address (e.g., the second meshnet IP address, the third meshnet IP address, or the fourth meshnet IP address). In such cases, the first client application may route the IP packet to be transmitted to the device other than the second user device (or the third user device or the fourth user device) via the VPN connection established between the first user device and the VPN server. By utilizing the above systems and techniques associated with enabling efficient communication in a hybrid network, the VSP control infrastructure and the client application may avoid all communications transmitted by the user device being transmitted via the encrypted tunnel. As a result, the VSP control infrastructure and the client application may enable efficient utilization of VPN resources (e.g., processing resources, management resources, memory resources, network bandwidth, power consumption, etc.) to perform suitable tasks associated with providing VPN services and/or mesh network services. Additionally, the VSP control infrastructure and the client application may avoid unnecessary addition of a plurality of hops between the user device and the one or more endpoints, thereby mitigating increase in consumption of network resources and avoiding introducing a delay in the data being received by the one or more endpoints. The VSP control infrastructure and the client application may also enable optimum utilization of existing mesh network resources (e.g., connection between the first user device and the second user device) that are dedicated for communication of data between the user device and the one or more endpoints. Further, a single client application provided by the VSP control infrastructure may enable the user device to securely communicate data in the mesh network and to privately communicate data via the VPN network, thereby enabling conservation of user device resources (e.g., processing resources, memory resources, power consumption, battery life, etc.). As indicated above,FIG.3is provided as an example. Other examples may differ from what is described with regard toFIG.3. FIG.4is an illustration of an example process400associated with enabling efficient communication in a hybrid network, according to various aspects of the present disclosure. In some aspects, the process400may be performed by a memory and/or a processor/controller (e.g., processing unit116, processor920) associated with a user device (e.g., user device102) executing a client application.
As shown by reference numeral410, process400may include determining, by a first device having an established virtual private network (VPN) connection with a VPN server and an established meshnet connection with a second device in a mesh network, that a destination associated with a transmission packet to be transmitted by the first device is the second device in the mesh network. For instance, the user device may utilize the associated memory and/or processor to determine, by a first device having an established virtual private network (VPN) connection with a VPN server and an established meshnet connection with a second device in a mesh network, that a destination associated with a transmission packet to be transmitted by the first device is the second device in the mesh network, as discussed elsewhere herein. As shown by reference numeral420, process400may include transmitting, by the first device, the transmission packet utilizing the meshnet connection based at least in part on determining that the destination is the second device in the mesh network. For instance, the user device may utilize a communication interface (e.g., communication interface970) and the associated memory and/or processor to transmit the transmission packet utilizing the meshnet connection based at least in part on determining that the destination is the second device in the mesh network, as discussed elsewhere herein. Process400may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, in process400, determining that the destination is the second device includes determining that destination information in the transmission packet includes a meshnet address associated with the second device. In a second aspect, alone or in combination with the first aspect, in process400, determining that the destination is the second device includes comparing destination information in the transmission packet with a meshnet address associated with the second device. In a third aspect, alone or in combination with the first through second aspects, in process400, transmitting the transmission packet includes encrypting data included in the transmission packet utilizing a symmetric encryption key. In a fourth aspect, alone or in combination with the first through third aspects, process400may include communicating with the second device to determine a symmetric key to be utilized for encrypting or decrypting data communicated over the meshnet connection. In a fifth aspect, alone or in combination with the first through fourth aspects, process400may include utilizing a single client application to establish the VPN connection and the meshnet connection. In a sixth aspect, alone or in combination with the first through fifth aspects, process400may include establishing the VPN connection with the VPN server; and establishing the meshnet connection with the second device. AlthoughFIG.4shows example blocks of the process, in some aspects, the process may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.4. Additionally, or alternatively, two or more of the blocks of the process may be performed in parallel. As indicated above,FIG.4is provided as an example. Other examples may differ from what is described with regard toFIG.4. 
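The third and fourth aspects above refer to encrypting the transmission packet with a symmetric key negotiated between the two devices. The disclosure does not prescribe a particular cipher, so the following sketch simply assumes an authenticated cipher (ChaCha20-Poly1305 from the third-party cryptography package) and a key that both devices are presumed to have already agreed on; it is an illustration, not the claimed mechanism.

```python
# Illustrative sketch only: seal and open a transmission packet with a symmetric key
# assumed to have been negotiated between the first and second devices.
# Requires the third-party "cryptography" package; the cipher choice is an example.
import os
from typing import Tuple
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

def encrypt_for_peer(shared_key: bytes, packet: bytes) -> Tuple[bytes, bytes]:
    """Encrypt a packet for the meshnet peer; returns (nonce, ciphertext)."""
    aead = ChaCha20Poly1305(shared_key)
    nonce = os.urandom(12)                      # must be unique per packet
    return nonce, aead.encrypt(nonce, packet, None)

def decrypt_from_peer(shared_key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    """Reverse operation on the receiving device; raises if the data was tampered with."""
    return ChaCha20Poly1305(shared_key).decrypt(nonce, ciphertext, None)

# Example: both devices hold the same negotiated 32-byte key.
key = ChaCha20Poly1305.generate_key()
nonce, sealed = encrypt_for_peer(key, b"payload destined for the second device")
assert decrypt_from_peer(key, nonce, sealed) == b"payload destined for the second device"
```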
FIG.5is an illustration of an example process500associated with enabling efficient communication in a hybrid network, according to various aspects of the present disclosure. In some aspects, the process500may be performed by a memory and/or a processor/controller (e.g., processing unit116, processor920) associated with a user device (e.g., user device102) executing a client application. As shown by reference numeral510, process500may include determining, by a first device having an established virtual private network (VPN) connection with a VPN server and an established meshnet connection with a second device in a mesh network, a transmission packet to be transmitted by the first device. For instance, the user device may utilize an associated communication interface (e.g., communication interface970) along with the memory and/or processor to determine, while having an established virtual private network (VPN) connection with a VPN server and an established meshnet connection with a second device in a mesh network, a transmission packet to be transmitted by the first device, as discussed elsewhere herein. As shown by reference numeral520, process500may include transmitting, by the first device, the transmission packet to the second device utilizing the meshnet connection based at least in part on determining that a destination associated with the transmission packet is the second device or to the VPN server utilizing the VPN connection based at least in part on determining that the destination associated with the transmission packet is a device other than the second device. For instance, the user device may utilize an associated communication interface (e.g., communication interface970), memory, and/or processor to transmit the transmission packet to the second device utilizing the meshnet connection based at least in part on determining that a destination associated with the transmission packet is the second device or to the VPN server utilizing the VPN connection based at least in part on determining that the destination associated with the transmission packet is a device other than the second device, as discussed elsewhere herein. Process500may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, process500may include determining, by the first device, whether the transmission packet is to be transmitted to the second device by utilizing the meshnet connection or to the VPN server by utilizing the VPN connection based at least in part on metadata included in the transmission packet. In a second aspect, alone or in combination with the first aspect, process500may include determining, by the first device, whether the transmission packet is to be transmitted to the second device by utilizing the meshnet connection or to the VPN server by utilizing the VPN connection based at least in part on a result of comparing a destination address associated with the transmission packet and a meshnet address associated with the second device. 
In a third aspect, alone or in combination with the first through second aspects, process500may include determining, by the first device, whether the transmission packet is to be transmitted to the second device by utilizing the meshnet connection or to the VPN server by utilizing the VPN connection based at least in part on determining whether a destination address associated with the transmission packet includes a meshnet address associated with the second device. In a fourth aspect, alone or in combination with the first through third aspects, process500may include comparing destination information associated with the transmission packet with a meshnet address associated with the second device to determine whether the transmission packet is to be transmitted to the second device by utilizing the meshnet connection or to the VPN server by utilizing the VPN connection. In a fifth aspect, alone or in combination with the first through fourth aspects, in process500, transmitting the transmission packet includes transmitting the transmission packet to the second device or to the VPN server by utilizing a single client application. In a sixth aspect, alone or in combination with the first through fifth aspects, in process500, transmitting the transmission packet to the second device includes encrypting the transmission packet by utilizing a symmetric key negotiated between the first device and the second device. AlthoughFIG.5shows example blocks of the process, in some aspects, the process may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.5. Additionally, or alternatively, two or more of the blocks of the process may be performed in parallel. As indicated above,FIG.5is provided as an example. Other examples may differ from what is described with regard toFIG.5. FIG.6is an illustration of an example process600associated with enabling efficient communication in a hybrid network, according to various aspects of the present disclosure. In some aspects, the process600may be performed by a memory and/or a processor/controller (e.g., processing unit116, processor920) associated with a user device (e.g., user device102) executing a client application. As shown by reference numeral610, process600may include monitoring, by a processor associated with a first device having an established VPN connection with a VPN server and an established meshnet connection with a second device, transmission of transmission packets to be transmitted by the first device. For instance, the user device may utilize the associated memory and/or a processor to monitor, while having an established VPN connection with a VPN server and an established meshnet connection with a second device, communication of transmission packets to be transmitted by the first device, as discussed elsewhere herein. As shown by reference numeral620, process600may include receiving, by the processor, a transmission packet to be transmitted by the first device. For instance, the user device may utilize an associated communication interface (e.g., communication interface970), memory, and/or processor to receive a transmission packet to be transmitted by the first device, as discussed elsewhere herein. As shown by reference numeral630, process600may include determining, by the processor, a destination associated with the transmission packet based at least in part on metadata included in the transmission packet. 
For instance, the user device may utilize the associated memory and/or processor to determine a destination associated with the transmission packet based at least in part on metadata included in the transmission packet, as discussed elsewhere herein. As shown by reference numeral640, process600may include routing, by the processor, the transmission packet for transmission via the VPN connection or for transmission via the meshnet connection based at least in part on determining whether the second device is the destination associated with the transmission packet. For instance, the user device may utilize the associated memory and/or processor to route the transmission packet for transmission via the VPN connection or for transmission via the meshnet connection based at least in part on determining whether the second device is the destination associated with the transmission packet, as discussed elsewhere herein. Process600may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, in process600, routing the transmission packet includes routing the transmission packet for transmission via the VPN connection based at least in part on determining that the second device is not the destination associated with the transmission packet. In a second aspect, alone or in combination with the first aspect, in process600, routing the transmission packet includes routing the transmission packet for transmission via the meshnet connection based at least in part on determining that the second device is the destination associated with the transmission packet. In a third aspect, alone or in combination with the first through second aspects, in process600, routing the transmission packet includes utilizing a single client application to route the transmission packet for transmission via the VPN connection or for transmission via the meshnet connection. In a fourth aspect, alone or in combination with the first through third aspects, in process600, routing the transmission packet for transmission via the meshnet connection includes encrypting the transmission packet by utilizing a symmetric key negotiated between the first device and the second device. In a fifth aspect, alone or in combination with the first through fourth aspects, in process600, determining that the second device is the destination associated with the transmission packet includes determining that a destination address associated with the transmission packet matches a meshnet address associated with the second device. In a sixth aspect, alone or in combination with the first through fifth aspects, in process600, determining that the second device is the destination associated with the transmission packet includes determining that a destination address associated with the transmission packet includes a meshnet address associated with the second device. AlthoughFIG.6shows example blocks of the process, in some aspects, the process may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.6. Additionally, or alternatively, two or more of the blocks of the process may be performed in parallel. As indicated above,FIG.6is provided as an example. Other examples may differ from what is described with regard toFIG.6. 
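For illustration only, the monitoring, destination-determination, and routing blocks of process600can be tied together in a single loop, as in the sketch below; the packet source, the second device's meshnet address, and the two send functions are hypothetical stand-ins rather than elements of the disclosure.

```python
# Illustrative sketch only: a loop mirroring blocks 610-640 of process 600.
# The meshnet address and the send functions are invented placeholders.
import ipaddress
from typing import Iterable

SECOND_DEVICE_MESHNET_IP = ipaddress.IPv4Address("100.64.0.2")  # assumed address

def send_via_meshnet(packet: bytes) -> None:
    print(f"meshnet <- {len(packet)} bytes")         # placeholder for the meshnet connection

def send_via_vpn(packet: bytes) -> None:
    print(f"vpn     <- {len(packet)} bytes")         # placeholder for the VPN connection

def forward(outgoing_packets: Iterable[bytes]) -> None:
    for packet in outgoing_packets:                  # blocks 610/620: monitor and receive
        dst = ipaddress.IPv4Address(packet[16:20])   # block 630: destination from header metadata
        if dst == SECOND_DEVICE_MESHNET_IP:          # block 640: route by destination
            send_via_meshnet(packet)
        else:
            send_via_vpn(packet)
```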
FIG.7is an illustration of an example process700associated with enabling efficient communication in a hybrid network, according to various aspects of the present disclosure. In some aspects, the process700may be performed by a memory and/or a processor/controller (e.g., processing unit116, processor920) associated with a user device (e.g., user device102) executing a client application. As shown by reference numeral710, process700may include establishing, by a first device, a virtual private network (VPN) connection with a VPN server. For instance, the user device may utilize an associated communication interface (e.g., communication interface970) with the associated memory and/or a processor to establish a virtual private network (VPN) connection with a VPN server, as discussed elsewhere herein. As shown by reference numeral720, process700may include establishing, by the first device during the established VPN connection, a meshnet connection with a second device in a mesh network. For instance, the user device may utilize the associated communication interface, memory, and/or processor to establish, during the established VPN connection, a meshnet connection with a second device in a mesh network, as discussed elsewhere herein. As shown by reference numeral730, process700may include determining, by the first device, whether the second device is a destination associated with a transmission packet to be transmitted by the first device. For instance, the user device may utilize the associated memory and/or processor to determine whether the second device is a destination associated with a transmission packet to be transmitted by the first device, as discussed elsewhere herein. As shown by reference numeral740, process700may include transmitting, by the processor, the transmission packet by utilizing the VPN connection or by utilizing the meshnet connection based at least in part on determining whether the second device is the destination associated with the transmission packet. For instance, the user device may utilize the associated communication interface, memory, and/or processor to transmit the transmission packet by utilizing the VPN connection or by utilizing the meshnet connection based at least in part on determining whether the second device is the destination associated with the transmission packet, as discussed elsewhere herein. Process700may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, in process700, transmitting the transmission packet includes transmitting the transmission packet by utilizing the VPN connection based at least in part on determining that the second device is not the destination associated with the transmission packet. In a second aspect, alone or in combination with the first aspect, in process700, transmitting the transmission packet includes transmitting the transmission packet by utilizing the meshnet connection based at least in part on determining that the second device is the destination associated with the transmission packet. In a third aspect, alone or in combination with the first through second aspects, in process700, transmitting the transmission packet includes transmitting the transmission packet by utilizing the VPN connection or by utilizing the meshnet connection via a single client application. 
In a fourth aspect, alone or in combination with the first through third aspects, in process700, transmitting the transmission packet by utilizing the meshnet connection includes encrypting the transmission packet by utilizing a symmetric key negotiated between the first device and the second device. In a fifth aspect, alone or in combination with the first through fourth aspects, in process700, determining that the second device is the destination associated with the transmission packet includes comparing destination information associated with the transmission packet with a meshnet address associated with the second device. In a sixth aspect, alone or in combination with the first through fifth aspects, in process700, the transmission packet includes metadata indicating the destination associated with the transmission packet. AlthoughFIG.7shows example blocks of the process, in some aspects, the process may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.7. Additionally, or alternatively, two or more of the blocks of the process may be performed in parallel. As indicated above,FIG.7is provided as an example. Other examples may differ from what is described with regard toFIG.7. FIG.8is an illustration of an example process800associated with enabling efficient communication in a hybrid network, according to various aspects of the present disclosure. In some aspects, the process800may be performed by a memory and/or a processor/controller (e.g., processing unit116, processor920) associated with a VSP control infrastructure (e.g., VSP control infrastructure104). As shown by reference numeral810, process800may include determining, by a first device having an established virtual private network (VPN) connection with a VPN server and an established meshnet connection with a second device in a mesh network, a transmission packet to be transmitted by the first device. For instance, the VSP control infrastructure may utilize the associated memory and/or a processor to enable a device to determine, while having an established virtual private network (VPN) connection with a VPN server and an established meshnet connection with a second device in a mesh network, a transmission packet to be transmitted by the first device, as discussed elsewhere herein. As shown by reference numeral820, process800may include determining, by the first device, whether the transmission packet is to be transmitted by utilizing the VPN connection or by utilizing the meshnet connection based at least in part on determining a destination associated with the transmission packet. For instance, the VSP control infrastructure may utilize the associated memory and/or processor to determine whether the transmission packet is to be transmitted by utilizing the VPN connection or by utilizing the meshnet connection based at least in part on determining a destination associated with the transmission packet, as discussed elsewhere herein. Process800may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. 
In a first aspect, in process800, determining whether the transmission packet is to be transmitted by utilizing the VPN connection or by utilizing the meshnet connection includes determining that the transmission packet is to be transmitted by utilizing the VPN connection based at least in part on determining that the destination associated with the transmission packet is a device other than the second device. In a second aspect, alone or in combination with the first aspect, in process800, determining whether the transmission packet is to be transmitted by utilizing the VPN connection or by utilizing the meshnet connection includes determining that the transmission packet is to be transmitted by utilizing the meshnet connection based at least in part on determining that the destination associated with the transmission packet is the second device. In a third aspect, alone or in combination with the first through second aspects, in process800, determining the destination associated with the transmission packet includes comparing destination information associated with the transmission packet with a meshnet address associated with the second device. In a fourth aspect, alone or in combination with the first through third aspects, process800may include transmitting, via a single client application, the transmission packet by utilizing the VPN connection or by utilizing the meshnet connection. In a fifth aspect, alone or in combination with the first through fourth aspects, process800may include encrypting, by the first device, the transmission packet by utilizing a symmetric key negotiated between the first device and the second device based at least in part on determining that the destination associated with the transmission packet is the second device. In a sixth aspect, alone or in combination with the first through fifth aspects, in process800, the transmission packet includes metadata indicating the destination associated with the transmission packet. AlthoughFIG.8shows example blocks of the process, in some aspects, the process may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.8. Additionally, or alternatively, two or more of the blocks of the process may be performed in parallel. As indicated above,FIG.8is provided as an example. Other examples may differ from what is described with regard toFIG.8. FIG.9is an illustration of example devices900associated with enabling efficient communication in a hybrid network, according to various aspects of the present disclosure. In some aspects, the example devices900may form part of or implement the systems, servers, environments, infrastructures, components, devices, or the like described elsewhere herein (e.g., VSP control infrastructure, VPN server, user device, etc.) and may be used to perform example processes described elsewhere herein. The example devices900may include a universal bus910communicatively coupling a processor920, a memory930, a storage component940, an input component950, an output component960, and a communication interface970. Bus910may include a component that permits communication among multiple components of a device900. Processor920may be implemented in hardware, firmware, and/or a combination of hardware and software. 
Processor920may take the form of a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a microprocessor, a microcontroller, a digital signal processor (DSP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or another type of processing component. In some aspects, processor920may include one or more processors capable of being programmed to perform a function. Memory930may include a random access memory (RAM), a read only memory (ROM), and/or another type of dynamic or static storage device (e.g., a flash memory, a magnetic memory, and/or an optical memory) that stores information and/or instructions for use by processor920. Storage component940may store information and/or software related to the operation and use of a device900. For example, storage component940may include a hard disk (e.g., a magnetic disk, an optical disk, and/or a magneto-optic disk), a solid state drive (SSD), a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a cartridge, a magnetic tape, and/or another type of non-transitory computer-readable medium, along with a corresponding drive. Input component950may include a component that permits a device900to receive information, such as via user input (e.g., a touch screen display, a keyboard, a keypad, a mouse, a button, a switch, and/or a microphone). Additionally, or alternatively, input component950may include a component for determining location (e.g., a global positioning system (GPS) component) and/or a sensor (e.g., an accelerometer, a gyroscope, an actuator, another type of positional or environmental sensor, and/or the like). Output component960may include a component that provides output information from device900(via, for example, a display, a speaker, a haptic feedback component, an audio or visual indicator, and/or the like). Communication interface970may include a transceiver-like component (e.g., a transceiver, a separate receiver, a separate transmitter, and/or the like) that enables a device900to communicate with other devices, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections. Communication interface970may permit device900to receive information from another device and/or provide information to another device. For example, communication interface970may include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, a Wi-Fi interface, a cellular network interface, and/or the like. A device900may perform one or more processes described elsewhere herein. A device900may perform these processes based on processor920executing software instructions stored by a non-transitory computer-readable medium, such as memory930and/or storage component940. As used herein, the term “computer-readable medium” may refer to a non-transitory memory device. A memory device may include memory space within a single physical storage device or memory space spread across multiple physical storage devices. Software instructions may be read into memory930and/or storage component940from another computer-readable medium or from another device via communication interface970. When executed, software instructions stored in memory930and/or storage component940may cause processor920to perform one or more processes described elsewhere herein. 
Additionally, or alternatively, hardware circuitry may be used in place of or in combination with software instructions to perform one or more processes described elsewhere herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software. The quantity and arrangement of components shown inFIG.9are provided as an example. In practice, a device900may include additional components, fewer components, different components, or differently arranged components than those shown inFIG.9. Additionally, or alternatively, a set of components (e.g., one or more components) of a device900may perform one or more functions described as being performed by another set of components of a device900. As indicated above,FIG.9is provided as an example. Other examples may differ from what is described with regard toFIG.9. Persons of ordinary skill in the art will appreciate that the aspects encompassed by the present disclosure are not limited to the particular exemplary aspects described herein. In that regard, although illustrative aspects have been shown and described, a wide range of modification, change, and substitution is contemplated in the foregoing disclosure. It is understood that such variations may be made to the aspects without departing from the scope of the present disclosure. Accordingly, it is appropriate that the appended claims be construed broadly and in a manner consistent with the present disclosure. The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the aspects to the precise form disclosed. Modifications and variations may be made in light of the above disclosure or may be acquired from practice of the aspects. As used herein, the term “component” or “device” is intended to be broadly construed as hardware, firmware, or a combination of hardware and software. As used herein, a processor is implemented in hardware, firmware, or a combination of hardware and software. As used herein, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, or not equal to the threshold, among other examples, or combinations thereof. It will be apparent that systems or methods described herein may be implemented in different forms of hardware, firmware, or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems or methods is not limiting of the aspects. Thus, the operation and behavior of the systems or methods were described herein without reference to specific software code—it being understood that software and hardware can be designed to implement the systems or methods based, at least in part, on the description herein. Even though particular combinations of features are recited in the claims or disclosed in the specification, these combinations are not intended to limit the disclosure of various aspects. In fact, many of these features may be combined in ways not specifically recited in the claims or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various aspects includes each dependent claim in combination with every other claim in the claim set. 
A phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (for example, a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c). No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, etc.), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”).
80,145
11863533
DETAILED DESCRIPTION OF THE EXEMPLARY EMBODIMENTS FIG.1shows a first machine M1, a second machine M2and a third machine M3, which can set up functionally safe connections to a first autonomous mobile robot unit AMR1, a second autonomous mobile robot unit AMR2and a third autonomous mobile robot unit AMR3. Here, functionally safe means in particular that the recipient can check whether the received data come from the correct transmitter, and have not been received from a different transmitter, e.g., on account of a network error or mobile radio interference. In the general case, the data transmission is performed in both directions, i.e., a bidirectional data transmission occurs here. The autonomous mobile robot units AMR1, AMR2, AMR3are in a factory building with an extent that can be described by way of a coordinate system XYZ. According toFIG.1, the communication system KS used is, for example, a radio standard with a superimposed PROFIsafe protocol. It would also be possible to use a PROFINet standard, in which communication occurs via Ethernet, but this would require the robots to mechanically dock on the machines. The third autonomous mobile robot AMR3has the first communication subscriber A and the third machine M3has the second communication subscriber B. The two communication subscribers A, B form an arrangement CCC, which is described in more detail with reference toFIG.3. When the third autonomous mobile robot AMR3approaches the machine M3, it sets up a connection to the second communication subscriber B via the first communication subscriber A, and it thereby establishes a first unidirectional connection UV1and a second unidirectional connection UV2. The third autonomous mobile robot AMR3additionally has a position ascertaining means PE that it can use to ascertain its position data X, Y, Z. Additionally, the third autonomous mobile robot AMR3has an ascertaining means EM configured to access a configuration database KD, the ascertaining means additionally being designed to take the position data X, Y, Z as a basis for using the configuration database KD to ascertain the third address identifier ID_PGB. FIG.2shows another exemplary embodiment of functionally safe connection setup between mobile devices and machines. The first machine M1, the second machine M2and the third machine M3can be operated using different control panels BP1, BP2. Here, the second control panel BP2has registered with the second machine M2and uses a side channel SK to set up a first safe unidirectional connection UV1and a second safe unidirectional connection UV2. The communication system KS used is a time-sensitive network TSN. FIG.3shows an arrangement CCC for controller-controller communication. A first communication subscriber A sets up two unidirectional functionally safe connections UV1, UV2to a second communication subscriber B. The first communication subscriber A has a first data consumer CIAhaving a first address identifier ID_CIAand a first data provider PIAhaving a second address identifier ID_PIA. The second communication subscriber B has a second data provider PGBhaving a third address identifier ID_PGBand a second data consumer CGB. Additionally, means are present for setting up a first unidirectional functionally safe connection UV1between the first data consumer CIAand the second data provider PGBand a second unidirectional functionally safe connection UV2between the first data provider PIAand the second data consumer CGB.
The first communication subscriber A is configured to ascertain the third address identifier ID_PGBof the second data provider PGB. The first communication subscriber A has a mapping unit AE configured to use a computation rule f, which is applied to the second address identifier ID_PIA, to produce the first address identifier ID_CIA. Additionally, the mapping unit AE is configured to forward the second address identifier ID_PIAto the first data consumer CIA. The first data consumer CIAis configured to transmit the second address identifier ID_PIAto the second data provider PGBin a first request message RQ1. The second data provider PGBis configured to respond to the first request message RQ1with a first response message Res1. The first response message Res1contains first safety-oriented data FB-Data and the third address identifier ID_PGB. The first data consumer CIAhas checking means PMA. These checking means PMAare configured to check whether the first response message Res1contains the third address identifier ID_PGB. This check can be performed by the checking means PMAbecause the first communication subscriber A has retrieved the third address identifier ID_PGBvia a side channel SK in an earlier step. Additionally, the checking means PMAis configured so as, if the result of this check is positive, to declare the first safety-oriented data FB-Data to be valid, and otherwise to reject them, as a result of which the first unidirectional connection UV1is functionally protected. The second communication subscriber B has a reverse mapping unit RAE configured to use the computation rule f to recover the second address identifier ID_PIAfrom the first address identifier ID_CIAand to transfer the second address identifier to the second data consumer CGBfor a later request. The first data provider PIAand the second data consumer CGBare now configured to use the second address identifier ID_PIAthat is now known to them on both sides to functionally protect the second unidirectional connection UV2between the first data provider PIAand the second data consumer CGB. To this end, the second data consumer CGBessentially has a cross-checking means PMBconfigured to check whether the second response message Res2contains the second address identifier ID_PIAand, if the result of this check is positive, then the second safety-oriented data FA-Data are accepted, and otherwise rejected. As safety-oriented data, it would be possible, for example, for the data signal of an emergency off switch100to be passed on. An emergency stop command101is forwarded to the first communication subscriber A via the first unidirectional connection UV1as a functionally safe datum FB-Data. Safety-oriented data could also be a ready signal102from a robot. These would then be forwarded from the first communication subscriber A to the second communication subscriber B via the second unidirectional connection UV2. FIG.4shows a timing sequence for request messages RQ1and response messages Res1. The right-hand side depicts the first communication subscriber A in the form of an autonomous mobile robot unit AMR3, in principle. The first communication subscriber A has the first data consumer CIAand the first data provider PIA. The method for functionally safe connection identification involves the first communication subscriber A ascertaining the third address identifier ID_PGB, for example, via a side channel SK, in a first step1. The third address identifier ID_PGBis now known to the first communication subscriber A.
In a second step2, the computation rule f is used in the first communication subscriber A to calculate the first address identifier ID_CIA, and this first address identifier ID_CIAis transmitted to the second data provider PGBin a first request message RQ1. The second data provider PGBresponds with a first response message Res1containing first safety-oriented data FB-Data and the third address identifier ID_PGB. In a third step3, a check is performed in the first communication subscriber A or in the first data consumer CIAto determine whether the first response message Res1contains the third address identifier ID_PGB, and, if the result of this check is positive, then the first safety-oriented data FB-Data are accepted, and otherwise rejected. In a fourth step4, the computation rule f is used to likewise produce the second address identifier ID_PIAin the second communication subscriber B and to forward the second address identifier to the second data consumer CGB. The second data consumer now sends a second request message RQ2to the first data provider PIAin a fifth step5. The first data provider PIAresponds with a second response message Res2containing second safety-oriented data FA-Data and the second address identifier ID_PIA. In a sixth step6, a check is then performed to determine whether the second response message Res2contains the second address identifier ID_PIA, and, if the result of this check is positive, then the second safety-oriented data FA-Data are accepted, and otherwise rejected. FIG.5depicts an alternative timing sequence for request and response messages. This method would be employed if a maximum value of the second address identifier ID_PIAexceeds a word length available in a protocol that is used. The first data consumer CIAwould then split the second address identifier ID_PIAinto parts part1, part2, part3and transmit the parts part1, part2, part3to the second data provider PGBusing partial request messages RQ11, RQ12, RQ13. The parts part1, part2, part3are reassembled in the second data provider PGBand the second address identifier ID_PIAis ascertained. Accordingly, the address identifier ID_PGBis again ascertained via a side channel SK in a step1. In an alternative first step11, a first partial request message RQ11is used to transmit the first part part1. In an alternative second step12, this transmission is answered with a response message Res. In an alternative third step13, a second partial request message RQ12is used to transmit the second part part2. In an alternative fourth step14, a third partial request message RQ13is used to transmit the third part part3. In an alternative fourth intermediate step14a, the address identifier is now assembled from the three parts part1, part2, part3, and the computation rule f is applied to the assembled parts part1, part2, part3. A final request message FRQ1is now transmitted, which is answered with a final response message FRES. This final response message FRES contains the safety-oriented data FA-Data and the second address identifier ID_PIA. Although the second address identifier ID_PIAis now an address with a long word length, the special feature of the protocol employed is that there is provision for more space in the response messages than in the request messages. 
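The exchange of steps 1 through 3 can be modeled, for illustration only, with a few lines of code. The computation rule f is not specified by the disclosure, so an invertible XOR with a fixed constant stands in for it here, and every identifier and payload value is invented for the example.

```python
# Illustrative sketch only: model of the identifier exchange on UV1 (FIG. 4, steps 1-3).
# The computation rule f, the identifiers, and the payload are made-up placeholders.
from dataclasses import dataclass

F_CONSTANT = 0x5A5A                       # stand-in for the planned computation rule f

def f(identifier: int) -> int:            # subscriber A: maps ID_PIA to ID_CIA
    return identifier ^ F_CONSTANT

def f_inverse(identifier: int) -> int:    # subscriber B: recovers ID_PIA (XOR is self-inverse)
    return identifier ^ F_CONSTANT

@dataclass
class Response:
    data: str                             # safety-oriented data (FB-Data)
    provider_id: int                      # identifier appended by the responding provider

ID_PIA = 0x0042                           # second address identifier (first data provider PIA)
ID_PGB = 0x1234                           # third address identifier (second data provider PGB)

expected_provider_id = ID_PGB             # step 1: learned by subscriber A via side channel SK
id_cia = f(ID_PIA)                        # step 2: A derives ID_CIA and sends it in RQ1

recovered_id_pia = f_inverse(id_cia)      # B: reverse mapping unit RAE recovers ID_PIA for UV2
res1 = Response(data="emergency-stop", provider_id=ID_PGB)   # B answers with FB-Data and ID_PGB

# Step 3: A's checking means PMA accepts FB-Data only if the response carries ID_PGB.
if res1.provider_id == expected_provider_id:
    fb_data = res1.data                   # declared valid; UV1 is functionally protected
else:
    raise ValueError("Res1 rejected: provider identifier mismatch")

assert recovered_id_pia == ID_PIA         # B can now protect UV2 with ID_PIA (steps 4-6)
```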
FIG.6is a flowchart of the method for functionally safe connection identification for a bilateral data interchange of safety-oriented data FB-Data, FA-Data between two communication subscribers A, B in a communication system KS, where safety-oriented data is interchanged via safety-oriented communication, address relationships comprising destination addresses and source addresses are planned for the safety-oriented communication, a first data consumer CIAhaving a first address identifier ID_CIAand a first data provider PIA are operated in a first communication subscriber A, a second data provider PGB having a third address identifier ID_PGB and a second data consumer CGBare additionally operated in a second communication subscriber B, a first unidirectional connection UV1is set up between the first data consumer CIAand the second data provider PGB, and a second unidirectional connection UV2is set up between the first data provider PIA and the second data consumer CGB. The method comprises ascertaining, by the first communication subscriber A, the third address identifier ID_PGB, as indicated in step610. Next, an identifier is produced in the first communication subscriber A utilizing a computation rule f that is applied to a unique value, as indicated in step620. Here, the identifier is communicated to the first data consumer CIA. Next, the first data consumer CIAtransmits the unique value to the second data provider PGB in a first request message RQ1, as indicated in step630. Next, the second data provider PGB responds with a first response message Res1containing first safety-oriented data FB-Data and the third address identifier ID_PGB, as indicated in step640. Next, a check is performed in the first data consumer CIAto determine whether the first response message Res1contains the third address identifier ID_PGB, and the first safety-oriented data FB-Data is accepted if a result of the check is positive and is otherwise rejected if the result of the check is negative, as indicated in step650. Next, the identifier is produced in the second communication subscriber B utilizing the computation rule f, as indicated in step660. Next, the identifier is utilized to functionally protect the second unidirectional connection UV2between the first data provider PIA and the second data consumer CGB, as indicated in step670. Thus, while there have been shown, described and pointed out fundamental novel features of the invention as applied to a preferred embodiment thereof, it will be understood that various omissions and substitutions and changes in the form and details of the methods described and the devices illustrated, and in their operation, may be made by those skilled in the art without departing from the spirit of the invention. For example, it is expressly intended that all combinations of those elements and/or method steps which perform substantially the same function in substantially the same way to achieve the same results are within the scope of the invention. Moreover, it should be recognized that structures and/or elements and/or method steps shown and/or described in connection with any disclosed form or embodiment of the invention may be incorporated in any other disclosed or described or suggested form or embodiment as a general matter of design choice. It is the intention, therefore, to be limited only as indicated by the scope of the claims appended hereto.
13,804
11863534
DETAILED DESCRIPTION Embodiments of the present disclosure are described herein. It is to be understood, however, that the disclosed embodiments are merely examples and other embodiments can take various and alternative forms. The figures are not necessarily to scale; some features could be exaggerated or minimized to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the embodiments. As those of ordinary skill in the art will understand, various features illustrated and described with reference to any one of the figures can be combined with features illustrated in one or more other figures to produce embodiments that are not explicitly illustrated or described. The combinations of features illustrated provide representative embodiments for typical applications. Various combinations and modifications of the features consistent with the teachings of this disclosure, however, could be desired for particular applications or implementations. A virtual private network (VPN) extends a private network across a public network and enables users to transmit and receive data across shared or private networks as if the user's computer is directly connected to the private network. Network engineers commonly set up VPN architecture across networks because specialized skills and experience help in resolving issues that may be encountered when setting up the VPN architecture. Another route for granting access is internet protocol (IP) whitelisting. IP whitelisting grants access only to specific IP addresses. For example, an authorized user can share a home IP address with a network engineer (e.g., network administrator), who enters the IP address on a whitelist granting network access. While IP whitelisting provides an easy and secure way to access private network resources, whitelisting an IP address may compromise security of a user as well as server reliability for other users. IP masking may be used to hide a user's IP address from others by replacing it with a different IP address. IP masking may be accomplished using a VPN but doing so has drawbacks. For example, IP masking using a VPN may slow down a user's internet connection. Moreover, IP masking may also make it more difficult to install and set up products based on proxies. Another networking technique to provide access to network devices and resources is a virtual local area network (“VLAN”). A VLAN may be a broadcast domain partitioned and isolated in a computer network at the data link layer (i.e., the second layer of a seven-layer open system interconnection (OSI) model of computer networking). “Virtual” in VLAN may refer to a physical object recreated and/or altered by additional logic within a local area network. While a VLAN has benefits such as allowing network administrators to automatically limit access to specified user groups by dividing workstations into isolated LAN segments, VLANs may also have one or more drawbacks. For instance, a data packet on a VLAN may leak from a first VLAN to a second VLAN. Data packets may be injected into a VLAN. These injected packets may lead to a cyber-attack. The network may require additional routers and may cause interoperability issues. As described above, achieving accessibility across a network using a VPN involves considerations of openness and isolation (e.g., privacy and security) across the network.
For instance, some proposed solutions may provide access to other users of the network, so they have access to each other's traffic and data packets, but this does not achieve isolation. Also, achieving isolation at scale may be difficult due to the manual nature of setting up these VPN infrastructures using network engineers and/or administrators. Masking a considerable number of VPNs together at scale is not feasible using current solutions. In light of the foregoing, what is needed is a scalable network interface system that addresses one or more of the drawbacks identified above. The scalable network interface system of one or more embodiments is configured to dynamically create one to one port networks for scalability while achieving requisite isolation, thereby creating isolation at scale. The system component configured to dynamically create one to one port networks for scalability may be a VPN credentialing module or a portion thereof. The dynamic creation of the networks may use network address translation (NAT) (e.g., NATing directly between the VPN service and a network service). In one or more embodiments, NAT may refer to a mapping of an IP address space into another by modifying network address information in the IP header of packets while the packets are in transit across a traffic routing device. One or more embodiments of the scalable network interface system may be used to provide network access to network devices behind a firewall of a remote network. The network interface may route between first and second VPNs. The network interface may be configured to control aspects of a domain name system (DNS). Non-limiting examples of network devices include Internet of Things (IoT) devices, alarm panels, digital video recorders (DVRs), network video recorders (NVRs), network cameras, intercoms, and video door stations. One or more embodiments of the scalable network interface system may be used with commercial and/or residential digital alarm systems. The scalable network interface system may enable access to one or more network devices remotely through cloud-based applications instead of servers installed on premises within a remote network. This enabling technology of one or more embodiments allows programming of each of the scalable network interface connected devices through a web browser. In one or more embodiments, the network interface system may be implemented over a broadband network and/or internet service provider (ISP) IP address network. The scalable network interface system of one or more embodiments may create an encrypted, encapsulated communication path over a network, thereby allowing network management of devices and cybersecurity protections. The network interface data may be automatically embedded into network devices such as cameras. The logic and/or algorithms of the scalable network interface system of one or more embodiments are built on (and in some embodiments, solely in) the application layer (layer 7) of the OSI model. The logic and/or algorithms in layer 7 may instruct the tunneling to take place at a lower layer (e.g., layer 1, 2 or 3). The application layer may be used by end-user software such as web browsers and email clients. The scalable network interface system may enable proxies used for communication with the network device. Non-limiting examples of proxies include session initiation protocol (SIP) proxy and hypertext transfer protocol secure (HTTPS) proxy.
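The one-to-one, NAT-based isolation described above can be pictured with a small, purely illustrative sketch: each protected device receives its own dedicated external mapping, so traffic for different devices never shares a translation entry. The addresses, port range, and helper names below are invented for the example and do not come from the disclosure.

```python
# Illustrative sketch only: a toy NAT table that allocates a dedicated external port
# per protected device, in the spirit of one-to-one isolation at scale.
import itertools
from typing import Dict, Tuple

_external_ports = itertools.count(40000)          # hypothetical pool of external ports
_nat_table: Dict[Tuple[str, int], int] = {}       # (device IP, device port) -> external port

def map_device(device_ip: str, device_port: int) -> int:
    """Allocate (or reuse) a dedicated external port for one protected device."""
    key = (device_ip, device_port)
    if key not in _nat_table:
        _nat_table[key] = next(_external_ports)
    return _nat_table[key]

def translate_outbound(device_ip: str, device_port: int, payload: bytes) -> Tuple[int, bytes]:
    """Rewrite the source of an outbound packet to the device's dedicated external port."""
    return map_device(device_ip, device_port), payload

# Example: two cameras behind the firewall each receive their own isolated mapping.
print(map_device("192.168.1.21", 554))   # e.g., 40000
print(map_device("192.168.1.22", 554))   # e.g., 40001
```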
SIP refers to a signaling protocol that enables voice over internet protocol (VoIP) by defining messages sent between endpoints to manage the elements of the call. SIP may be used to support voice calls, video conferencing, instant messaging, and/or media distribution. The enabled proxies may be used to provide access to devices that use the proxies. The logic and/or algorithms may be built in an operating system, such as Linux or Unix. The operating system may create conflicts by sending multiple routes to the same gateway (e.g., a bridge between first and second networks permitting communication and data transfer therebetween). The layer 7 logic and/or algorithms may be built into a gateway. The layer 7 logic and/or algorithms may be implemented as routing protocols and functions using kernel-based routing within a kernel of the operating system. In one or more embodiments, kernel-based routing is used instead of IP routing performed on packets. In one or more embodiments, a layer 7 application (e.g., built in Linux) examines the header of a first packet in a transport layer protocol (e.g., transmission control protocol (TCP)) stream without disrupting the rest of the stream. FIG.1depicts computer system10configured to initiate and use a scalable network interface system according to one or more embodiments. Computer system10includes remote network12, cloud network14, and local network16. Remote network12may be configured to obtain outputs from network devices. These outputs may be used for alarm monitoring and dispatch. As another non-limiting example, the network devices may be IoT devices such as smart refrigerators, lighting systems, thermostats, etc. Cloud network14is configured to include one or more remote servers in a cloud computing architecture (e.g., Amazon Web Services (AWS)). Local network16may be configured with local computers executing client applications using the outputs obtained from network devices from remote network12. Cloud network14may be part of the world-wide web or the internet. Cloud network14may establish a standard communication protocol between computing devices in remote network12and local network16. Remote network12and local network16are configured to host server and client computers configured to host a website or webpage from which outputs obtained from network devices of remote network12may be obtained. Remote network12includes remote router18and local network16includes local router20. Remote router18may include a remote network interface and a wired or wireless Ethernet router. Remote router18is configured to establish a remote network with one or more servers and/or client computers. Remote router18may be further configured to provide a communication interface to cloud network14. Local router20may include a local network interface and a wired or wireless Ethernet router. Local router20is configured to establish a local network with one or more servers and/or client computers. Local router20may be further configured to provide a communication interface to cloud network14. Remote network12also includes firewall22connected to remote router18. Firewall22is configured to monitor and control network traffic incoming and outgoing from remote router18. Firewall22is configured to create a barrier between a trusted network (e.g., remote network12) and an untrusted network (e.g., cloud network14). In one or more embodiments, firewall22may be replaced with another network device capable of enabling a network interface (e.g., an access point).
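Returning briefly to the layer 7 behavior described above, one way an application-layer component could examine the first packet of a TCP stream without disrupting the rest of the stream is to peek at the socket buffer. The sketch below is only an illustration of that idea; the listening address, the peek length, and the subsequent proxy decision are placeholders, not details taken from the disclosure.

```python
# Illustrative sketch only: peek at the first bytes of an accepted TCP stream without
# consuming them, leaving the remainder of the stream undisturbed.
import socket

def peek_first_bytes(conn: socket.socket, length: int = 64) -> bytes:
    """Return up to `length` bytes of the first segment without removing them."""
    return conn.recv(length, socket.MSG_PEEK)

def serve(listen_addr=("0.0.0.0", 8443)) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(listen_addr)
        srv.listen()
        conn, peer = srv.accept()
        with conn:
            first = peek_first_bytes(conn)
            # A gateway could now choose, e.g., an HTTPS or SIP proxy path based on `first`;
            # the stream itself has not been read and can still be forwarded in full.
            print(peer, first[:16])
```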
Remote network12further includes switch24connected to firewall22. Switch24is configured to connect network devices of remote network12by using packet switching to receive and forward data to a destination (e.g., through firewall22and remote router18and cloud network14to local network16). Switch24may be configured with a multiport network bridge using media access control (MAC) addresses to forward data. The network devices in communication with switch24may have unique MAC addresses. Switch24may be a SonicWall switch available from SonicGuard.com of Cary, North Carolina. Switch24may be directly connected to cloud network14(as opposed to indirectly connected to cloud network14through firewall22and remote router18) to provide direct cloud access between switch24and cloud network14. As discussed above, remote network12includes network devices. In the embodiment shown inFIG.1, the network devices include DVR26, camera28, alarm panel30, intercom32, and video door station34. WhileFIG.1depicts certain network devices, other network devices may be included within remote network12. Non-limiting examples of other network devices that may be used in one or more embodiments include artificial intelligence (AI) voice assistants, intelligent lighting systems, learning thermostats, air quality monitors, home voice controllers, and/or mesh Wi-Fi systems. DVR26is configured to receive digital video clips and/or digital video frames from one or more network cameras36A,36B, and36C and transmit these forms of output to switch24. While DVR26is shown as part of remote network12onFIG.1, remote network12may also include one or more network video recorders (NVRs). Remote network12also includes camera28(e.g., a digital camera) configured to transmit digital video clips and/or digital video frames directly to switch24. DVR26and/or one or more NVRs may communicate with switch24using a proxy (e.g., an HTTPS proxy). Alarm panel30is configured to receive sensor output from one or more sensors38A,38B, and38C. Alarm panel30includes an alarm controller having different channels configured for each specific sensor. The alarm controller is configured to transmit the sensor output to switch24. The alarm controller may also be configured to transmit alarm alerts in response to the sensor output. Non-limiting examples of one or more sensors38A,38B, and38C include, without limitation, motion detectors (e.g., passive infrared motion detectors), smoke detectors, breakage detectors (e.g., glass break detectors), temperature detectors, ultrasonic detectors, microwave detectors, magnetic switches, photoelectric beams, and gas sensors. Alarm panel30may communicate with switch24using a proxy (e.g., an HTTPS proxy). Intercom32is configured to transmit data to and/or receive data from relay40and microphone42. Although only a single relay40and a single microphone42are shown inFIG.1, multiples of each or both may be included with remote network12. Intercom32is configured to enable two-way communication between people. Intercom32may be utilized to grant remote access through an access point of a building or residence (e.g., entry door, garage door, and/or gate). Intercom32may be an IP7 intercom and paging amplifier. Relay40may be configured to be activated upon entry of a valid code to provide access to a building or residence. Microphone42may be configured to translate sound vibrations (e.g., a human voice) into electronic signals that can be broadcast through a speaker and/or recorded to a recording medium. 
Video door station34is configured to transmit data to and/or receive data from switch24of remote network12. Video door station34may include one or more input/output devices such as a button, a microphone, and/or a video camera. Video door station34may be configured to provide a digital door bell feature. Video door station34and switch24may be configured to communicate with each other using a protocol (e.g., SIP protocol). Computer system10also includes local network16. Local network16includes local router20, VPN application programming interface (API)44, proxy consumer46, proxy API48, and user computer50. WhileFIG.1depicts these devices/components located on a single local network, these devices/components may be spread across multiple local networks. For example, proxy consumer46, proxy API48, and user computer50may be on a first local network with a first local router, and VPN API44may be on a second local network with a second local router. User computer50may include an alarm monitoring module and an alarm monitoring database. The alarm monitoring module may be configured to display graphical user interfaces (GUIs) on user computer50. As described below, a user computer may receive data from and transmit data to protected devices using scalable network interfaces in accordance with one or more embodiments. The user of user computer50may be a subscriber of alarm services associated with remote network12. The user may be an operator at a central station or a client site. The alarm monitoring module may be configured to receive digital video clips and/or digital video frames through cloud network14. The alarm monitoring database may be configured to selectively store digital video clips and/or digital video frames received through cloud network14. In one or more embodiments, user computer50may include a video client computer application configured for live viewing, control, search and/or playback features for any camera connected to a network. Non-limiting examples of cameras include cameras28,36A,36B, and36C. Non-limiting examples of a network include the internet. The video client computer application may be physically installed on user computer50. Alternatively, the video client computer application may be virtually served to user computer50using cloud network14. FIG.2depicts sequence diagram50of the steps to initiate and use scalable network interfaces according to one embodiment. In one or more embodiments, the steps to initiate and use scalable network interfaces may be executed using central processing unit (CPU) clock cycles using a low-level programming language (e.g., assembly language). The low-level programming language may be used to directly control the hardware identified inFIG.2. The CPU clock cycles of user computer50may be used to initiate and use scalable network interfaces. In one or more embodiments, the steps of sequence diagram50can be used within the framework of computer system10to dynamically scale network interfaces between the resource of local network16and one or more protected devices (e.g., DVR26, camera28, alarm panel30, intercom32, and/or video door station34). While five (5) potentially protected devices are shown inFIG.2, the methods and systems of one or more embodiments are capable of scaling thousands of network interfaces dynamically while maintaining isolation and not causing significant degradation of network performance. 
Network interface API44may be executed on a local computer in local network16via web browser software installed physically or virtually on the local computer. Network interface API44may be built into the web browser software. The features of network interface API44may be provided through the web browser software and/or web apps. Network interface API44may be configured to receive and to transmit data and instructions from and to local router20and/or proxy consumer46. Network interface API44may utilize features from JavaScript, extensible markup language (XML), dynamic hypertext markup language (DHTML), and/or document object model (DOM). Proxy consumer46may be executed on a local computer in local network16. Proxy consumer46may be configured to create a connection to a server of a web service (e.g., a web service executed on user computer50). The features of proxy consumer46may be provided through web browser software or web apps. Proxy consumer46may be configured to receive and to transmit data and instructions from and to network interface API44, local router20, and/or proxy API48. Proxy API48may be executed on a local computer in local network16via web browser software installed physically or virtually on the local computer. Proxy API48may be built into the web browser software. The features of proxy API48may be provided through the web browser software and/or web apps. Proxy API48may be configured to receive and to transmit data and instructions from and to user computer50and proxy consumer46. Proxy API48may utilize features from JavaScript, extensible markup language (XML), dynamic hypertext markup language (DHTML), and/or document object model (DOM). In one embodiment, network interface API44, local router20, and remote router18may be used in combination to provide a scalable number of network interfaces (e.g., VPNs) to network devices on remote servers. In one or more embodiments, the network interfaces provide one-to-one isolated communication paths to network devices at scale without sacrificing security and/or connectivity speed. These individual network interfaces may be used to access data and information output by the network devices. For instance, a first network interface may be established between a first remote network device and a cloud network and/or local network configured to access the first remote network device and data and information output therefrom, and a second network interface may be established between a second remote network device and the cloud network and/or local network. As depicted in operation52of scalable network interface creation/access process54as shown inFIG.2, network interface API44receives a network interface request. The network interface request may be received from a device or resource on cloud network14and/or local network16. In one or more embodiments, the network interface request is the first step for establishing a network interface between a network device on a remote server and a cloud network and/or local server/computer. The network interface request includes one or more identifiers (e.g., identification of a remote router, one or more protected devices, etc.). As depicted in operation56of scalable network interface creation/access process54as shown inFIG.2, network interface API44transmits a create network interface command in response to receiving the network interface request.
In one or more embodiments, the network interface command is configured to initiate one or more network interface services (e.g., creation of a network interface between a network device on a remote server and a cloud network and/or local server/computer). The network interface command may also be used to generate status information in connection with one or more network interface services. In the embodiment shown inFIG.2, the network interface command is transmitted to local router20residing on local network16. As depicted in operation58of scalable network interface creation/access process54as shown inFIG.2, local router20transmits an establish network interface instruction in response to receiving the create network interface command. In the embodiment shown inFIG.2, the establish network interface command is transmitted to remote router18through cloud network14. As shown inFIG.1, network interface communication path60is established between switch24of remote network12and local router20of local network16as part of establishing the network interface. As shown inFIG.1, network interface communication path60extends through cloud network14, remote router18, and firewall22, between switch24and local router20. In one or more embodiments, network interface communication path60is established behind firewall22of remote network12. In one or more embodiments, the network interface communication path may extend between a remote router and a virtual router of a cloud network. The virtual router may be a software application hosted in the cloud network and configured with features of hardware routers (e.g., connectivity hot spot, enabling online access, etc.). Network interface communication path60is configured to support a scalable number of network interfaces between individual network devices on remote network12and cloud network14and/or local network16. Network interface communication path60enables one (1) to one (1) communication with individual network devices at scale while maintaining isolation and network connectivity. The individually created network interfaces provide access to network devices by user applications hosted on local network16and/or cloud network14. The individually created network interfaces are configured to simultaneously tunnel through network interface communication path60. The individually created network interfaces are configured to extend from network interface communication path60to an individual protected device (e.g., cameras36A,36B, and/or36C of DVR26, camera28, sensors38A,38B, and/or38C of alarm panel30, relay40and/or microphone42of intercom32, and/or video door station34). Network interface communication path60enables direct access between a protected device and user applications hosted by local network16and/or cloud network14, instead of an architecture where such user applications are installed and executed on remote network12behind firewall22. As depicted by operation61of scalable network interface creation/access process54as shown inFIG.2, user computer50transmits a proxy request to proxy API48. The proxy used in the proxy request may be, but is not limited to, a real time streaming protocol (RTSP) proxy, a session initiation protocol (SIP) proxy, or a HyperText Transfer Protocol (HTTP) proxy. The RTSP proxy may be a software application configured to receive RTSP streams (e.g., video clips and video streams) and to make those RTSP streams available to other users.
The SIP proxy may be a server configured to manage SIP calls within a network (e.g., process requests from user agents to place and to terminate calls). The HTTP proxy may be a software application configured to filter Web traffic content (e.g., identify suspicious content, viruses, or other intrusions, and protect HTTP servers from attacks). As depicted by operation62of scalable network interface creation/access process54as shown inFIG.2, proxy API48starts a proxy consumer in response to receiving a proxy request. The proxy request may be received from user computer50. The proxy consumer may be used in an application to call or to consume an application (e.g., a web service). Once the proxy consumer is generated, it can be used by applications available on local network16and cloud network14. As depicted by operation64of scalable network interface creation/access process54as shown inFIG.2, proxy consumer46transmits a network interface owner request in response to proxy API48starting a proxy consumer. As shown inFIG.2, the network interface owner request is transmitted to network interface API44. The network interface owner may have rights to administer and to configure aspects (e.g., all aspects) of the network interface (e.g., a VPN). Network interface API44may be configured to transmit data related to the network interface owner to proxy consumer46in response to receiving the network interface owner request. The network interface owner data may include owner identification data, network interface administration data, and network interface configuration data. As depicted by operation66of scalable network interface creation/access process54as shown inFIG.2, proxy consumer46transmits a proxy request to local router20in response to proxy API48starting a proxy consumer. In one or more embodiments, the proxy request may include network interface owner data. In one or more embodiments, the proxy request may be transmitted simultaneously with the network interface owner request. In other embodiments, the proxy request may be transmitted after receiving network interface owner data at proxy consumer46. As depicted by operation68of scalable network interface creation/access process54as shown inFIG.2, local router20transmits the proxy request to remote router18in response to receiving the proxy request from proxy consumer46. The proxy request may be transmitted through cloud network14. As depicted by operation70of scalable network interface creation/access process54as shown inFIG.2, remote router18transmits the proxy request to a protected device in response to receiving the proxy request from local router20. Operations61,62,64,66,68, and70may be executed in combination to create a network interface between user computer50and a protected device. The created network interface passes through network interface communication path60. A scalable number of network interfaces, each for an individual, different protected device may tunnel through network interface communication path60. Once the scalable network interface has been created, the protected device and user computer50are configured to communicate through the scalable network interface. For instance, user computer50may transmit commands through one or more user software applications through the network interface. User computer50may also receive data (e.g., target content) from the protected device through the VPN.
As depicted by operation72of scalable network interface creation/access process54as shown inFIG.2, target content or other data is transmitted from a protected device to remote router18. As depicted by operation74of scalable network interface creation/access process54as shown inFIG.2, target content or other data is transmitted from remote router18to local router20. As depicted by operation76of scalable network interface creation/access process54as shown inFIG.2, target content or other data is transmitted from local router20to proxy consumer46. As depicted by operation78of scalable network interface creation/access process54as shown inFIG.2, target content or other data is transmitted from proxy consumer46to proxy API48. As depicted by operation80of scalable network interface creation/access process54as shown inFIG.2, target content or other data is transmitted from proxy API48to user computer50. In one or more embodiments, the protected device may be user computer50and the network interface may be used to secure connections at scale to other devices or applications on a network (e.g., on the cloud or remote server remote from user computer50). FIG.3depicts a sequence diagram of the steps to prepare a subdomain in connection with a scalable VPN according to one embodiment. As shown inFIG.3, user100, via user computer50or other computing device, initiates operation102to view an IP address on a VPN (e.g., a scalable VPN according to one or more embodiments). For instance, operation102may be referred to as XMVPROXY and the viewing command may be viewing 10.1.1.2:80 on VPN having an identification (ID) abcdef. The ID may identify a client or a customer. Operation102checks the VPN ID against database104of a proxy (e.g., a layer 7 proxy) to determine authorization. Decision block106determines whether user100has permission to the VPN based on the VPN ID. If user100does not have permission to the VPN having the VPN ID abcdef, then user100receives a forbidden message as represented by arrow108. If user100has permission to the VPN having the VPN ID abcdef, a POST URL, TOKEN, IP, and PORT are generated in response to the VPN ID abcdef (e.g., NETID) as depicted in operation110. In one non-limiting example, the POST URL is https://customer-name.securemcloud.com/proxy, the TOKEN is WXYZ, the IP is 10.1.1.2, and the PORT is 80. As shown by arrow112, the TOKEN is passed to decision block114. Decision block114determines if the TOKEN is a good token. If the TOKEN is not a good token, then the bad TOKEN is sent to operation116. Operation116parses the bad TOKEN and sends it to user100operating on computer50or other computing device. If TOKEN is a good token, then operation118is performed. In one embodiment, operation118generates two (2) random strings where the random strings consist of lower case letters with no special characters. In other embodiments, the random strings may include special characters. In the example shown inFIG.3, the random strings are assigned variables rand1 and rand2. The HSET command may be used to create a hash from rand1 and an endpoint. Along with the HSET endpoint:rand1, TOKEN rand2, NETID abcdef, IP 10.1.1.2, and/or PORT 80 may be transmitted to device120as shown by arrow122. The HSET command is a Redis (Remote Dictionary Server) command used to set the value of a field in a hash stored at a key. As shown in operation124, rand1 and rand2 are used to construct a URL.
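The subdomain preparation of operations 110 through 124 could be sketched roughly as follows in Python, assuming a Redis-backed store reachable through the redis-py client and assuming the good-token check of decision block 114 has already passed; the connection details, string length, and helper names are illustrative assumptions, not details taken from the patent.

import secrets
import string

import redis  # assumes the redis-py client is installed

r = redis.Redis()  # hypothetical connection to device 120's key-value store

def random_string(length: int = 12) -> str:
    # Random string of lower case letters with no special characters.
    return "".join(secrets.choice(string.ascii_lowercase) for _ in range(length))

def prepare_subdomain(netid: str, ip: str, port: int) -> str:
    rand1, rand2 = random_string(), random_string()
    # HSET endpoint:rand1 with the fields carried through the sequence diagram.
    r.hset(f"endpoint:{rand1}", mapping={
        "TOKEN": rand2, "NETID": netid, "IP": ip, "PORT": port,
    })
    # Construct the per-session URL handed back to the user.
    return f"https://{rand1}.mivapps.customer-name.securemcloud.com/proxyauth/{rand2}"

# With the example values from the text: prepare_subdomain("abcdef", "10.1.1.2", 80)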
For example, the URL may be https://rand1.mivapps.customer-name.securemcloud.com/proxyauth/rand2. The URL may be sent to operation126, which parses the URL and sends it to user100. FIG.4depicts a sequence diagram of the steps to obtain a cookie in connection with a scalable VPN according to one embodiment. As shown inFIG.4, the URL constructed by operation124(e.g., https://rand1.mivapps.customer-name.securemcloud.com/proxyauth/rand2) is transmitted to operation150by user100. At operation150, an HAProxy command (or other command to configure or manage the behavior of the proxy server) is configured to do a TLS (transport layer security) termination for *.mivapps.customer-name.securemcloud.com. TLS is a protocol used by applications to communicate securely across a network, resisting tampering with messaging (e.g., email), web browsing, and other protocols. The termination may also be performed on a secure sockets layer (SSL). As shown by arrow152, the URL is sent to a router/firewall platform154. The router/firewall platform154may be executed on local router20. The router/firewall platform154may also be executed on remote router18and firewall22. The router/firewall platform154may execute an open-source network operating system. Along with the URL, other information may be transmitted to router/firewall platform154(e.g., VPN ID abcdef, a POST URL, TOKEN, IP, PORT, rand1, and rand2). Router/firewall platform154may be configured to determine horizontal scaling needs for provisioning additional cloud servers. The horizontal scaling may split workloads between servers to limit the number of requests any individual server is receiving. Horizontal scaling may add additional instances to support additional VPNs, thereby making one or more embodiments configured to provide scalable cloud-based VPNs. As shown in decision block156, the information transmitted to router/firewall platform154is searched to find rand1 and rand2. If rand1 and rand2 are not found, then access to the VPN is forbidden and a message to this effect is transmitted to user100. If rand1 and rand2 are found, then control is passed to operation158. Operation158is configured to look up the endpoint associated with rand1. As shown by arrow160, an HMGET endpoint:rand1 command is executed to obtain the endpoint from device120. The result of the endpoint look up in operation158is transmitted to decision block162along with other information passed through the sequence loop (e.g., VPN ID abcdef, a POST URL, TOKEN, IP, and PORT). Decision block162determines if the endpoint was found and whether the TOKEN matches and a VPN ID exists. If any of these conditions is not true, then access to the VPN is forbidden and a message to this effect is transmitted to user100. If all of these conditions are true, then control is passed to operation164. Operation164is configured to cache the results of previous decisions and/or operations (e.g., operation154, decision block156, operation158, and decision block162). The results may be cached using Redis software or other in-memory data structure store, used as a distributed, in-memory key-value database, cache, and message broker. Operation164may also be configured to generate a session ID. The session ID may be a third random string consisting of lower case letters with no special characters. The third random string may be referred to as rand3. Operation164may be further configured to delete the endpoint:rand1 key.
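A rough sketch of the validation performed at the router/firewall platform (decision blocks 156 and 162 followed by operation 164) is given below; the session key layout, string length, and function name are assumptions for illustration only and not taken from the text.

import secrets
import string

import redis

r = redis.Redis(decode_responses=True)

def authorize(rand1: str, rand2: str):
    # Look up the endpoint for rand1, mirroring HMGET endpoint:rand1.
    token, netid, ip, port = r.hmget(f"endpoint:{rand1}", "TOKEN", "NETID", "IP", "PORT")
    if token is None or token != rand2 or not netid:
        return None  # forbidden: endpoint missing, token mismatch, or no VPN ID
    # Operation 164: cache the result under a new session ID (rand3).
    sid = "".join(secrets.choice(string.ascii_lowercase) for _ in range(12))
    r.hset(f"session:{sid}", mapping={"NETID": netid, "IP": ip, "PORT": port})
    r.delete(f"endpoint:{rand1}")  # retire the one-time endpoint:rand1 key
    return sid, ip, int(port)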
As depicted by arrow166, the DEL endpoint:rand1 operation is transmitted to device120to delete the key. Operation168is configured to transmit a cookie (e.g., to user computer50) and to perform a redir command to "/" (e.g., user computer50). In one or more embodiments, the redir command in Linux is configured to redirect input or output from a command to a file or another device. The redir command may redirect transmission control protocol (TCP) connections coming into a local port to a specified address and port combination. The URL associated with operation168may be https://rand1.mivapps.customer-name.securemcloud.com. The set-cookie command may be performed by the command sid=rand3. FIG.5depicts a sequence loop for consuming a proxy resource (e.g., an HTTP proxy resource) in connection with a scalable VPN according to one embodiment. As shown inFIG.5, user100transmits the redir command URL (e.g., https://rand1.mivapps.customer-name.securemcloud.com) and the cookie SID command (e.g., cookie sid=rand3) to operation200. At operation200, an HAProxy command is configured to do a TLS termination for *.mivapps.customer-name.securemcloud.com. As shown by arrow202, the redir command website is sent to a router/firewall platform204. The router/firewall platform204may be executed on local router20. The router/firewall platform204may also be executed on remote router18and firewall22. The router/firewall platform204may execute an open-source network operating system. Along with the redir command URL, other information may be transmitted to router/firewall platform204(e.g., VPN ID abcdef, a POST URL, TOKEN, IP, PORT, rand1, and rand2). Router/firewall platform204may be configured to determine horizontal scaling needs for provisioning additional cloud servers. The horizontal scaling may split workloads between servers to limit the number of requests any individual server is receiving. Horizontal scaling may add additional instances to support additional VPNs, thereby making one or more embodiments configured to provide scalable cloud-based VPNs. Decision block206determines whether rand1 is in local cache and whether cookie rand3 matches. If either of these conditions is false, then access to the VPN is forbidden and a message to this effect is transmitted to user100. If both these conditions are true, then control is passed to operation208. Operation208is configured to set up a proxy socket to a resource (e.g., video camera resource210or other protected device) in response to determining rand1 is in local cache and cookie rand3 matches. Operation208may also be configured to transmit proxied data to user100. FIG.6depicts data functions of a user application for maintaining scalable VPNs according to one embodiment. According to the data functions shown inFIG.6, networks may be added to one or more groups. The users may be assigned permissions to one or more network groups. In one or more embodiments, users with unlimited permissions may access any network. The data structure includes XMNETMNT (Program) data functions table250, XMNETDET data functions table252, XMNETGRP data functions table254, XMNETLST data functions table256, XMNETUSR data functions table258, and MWUSERS data functions table260.
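Operation 208 (setting up the proxy socket only when rand1 is in the local cache and the sid cookie matches, as checked in decision block 206) might look roughly like the following sketch; the structure of the local cache is an assumption made for illustration.

import socket

def open_proxy_socket(local_cache: dict, rand1: str, cookie_sid: str) -> socket.socket:
    # Refuse to proxy when rand1 is unknown or the session cookie does not match.
    session = local_cache.get(rand1)
    if session is None or session.get("sid") != cookie_sid:
        raise PermissionError("forbidden")  # corresponds to the forbidden message
    # Data read from this socket is relayed back to the user as proxied data.
    return socket.create_connection((session["ip"], session["port"]), timeout=5)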
As shown inFIG.6, XMNETMNT (Program) data functions table250includes the following data functions: ADD_USER, REMOVE_USER, UPDATE_USER, LIST_USERS, ADD_GROUP, REMOVE_GROUP, UPDATE_GROUP, LIST_GROUPS, ADD_NETWORK_TO_GROUP, REMOVE_NETWORK_FROM GROUP, LIST_GROUP_NETWORKS, REMOVE_NETWORK, UPDATE_NETWORK, LIST_NETWORKS, ADD_NETWORK, and ADD_HTTP_PROXY. In one or more embodiments, the functions in data functions table250may be modified, deleted, and/or supplemented depending on the implementation of cloud scalable VPNs. The data from one or more of the functions in data functions table250may be transmitted to device120. Device120is configured to perform the functions shown in functional block262relating to {dvpn}endpoint. As shown inFIG.6, the functions include {dvpn}endpoint:RANDOM_STR including netid, token: RANDOM_STRING, ip, and port. These functions may be used to generate the VPN ID, TOKEN, IP and PORT as referred to inFIGS.3to5. The data from one or more of the functions in data functions table250may be transferred to router/firewall platform264for horizontal scaling purposes. Router/firewall platform264may be configured to utilize the data and transmit resulting data to functional block262. As shown inFIG.6, XMNETDET data functions table252includes the following data functions: NETWORK DESCRIPTION. As shown inFIG.6, XMNETGRP data functions table254includes the following data functions: NETWORK_GROUP DESCRIPTION. As shown inFIG.6, XMNETLST data functions table256includes data associated with the NETWORK DESCRIPTION and NETWORK_GROUP DESCRIPTION. In one or more embodiments, the functions in data functions table256may be modified, deleted, and/or supplemented depending on the implementation of cloud scalable VPNs. As shown inFIG.6, XMNETUSR data functions table258includes the NETWORK_GROUP DESCRIPTION and USER LOGIN functions (e.g., CREATE, UPDATE, DELETE, HTTP, and RTSP). MWUSERS data functions table260includes USER LOGIN functions such as USER_PROFILE and OPER_CODE. Data from XMNETUSR data functions table258is used by MWUSERS data functions table260. In one or more embodiments, the functions in data functions table258may be modified, deleted, and/or supplemented depending on the implementation of cloud scalable VPNs. FIGS.7A and7Bdepict schematic views of implementations of a digital video alarm system.FIG.7Adepicts a prior art digital video alarm system where digital video alarm system monitoring software is hosted on premises by a company that performs monitoring services. The digital video alarm system monitoring software is hosted on hardware located on site. As shown inFIG.7A, hardware300is located on site at the monitoring company. Hardware300hosts alarm monitoring software. The alarm monitoring software hosted by local hardware300is configured to communicate with sites through communication paths302. Hardware300resides behind a firewall. Because the alarm monitoring software is located behind a firewall, setting up VPN architecture across the networks represented by communication paths302is commonly performed by network engineers given the complexity of solving issues that may be encountered when setting up the VPN architecture. As opposed to the architecture shown inFIG.7A,FIG.7Bdepicts an architecture utilizing cloud scalable VPNs of one or more embodiments disclosed herein. As shown inFIG.7B, cloud based alarm monitoring software is hosted on cloud servers304A,304B, and304C. 
Each of the communication paths extending from cloud servers304A,304B, and304C may be VPNs initiated using one or more embodiments disclosed herein. The use of cloud scalable VPNs enables load balancing between cloud resources and protected devices and provides a failover mechanism that can be implemented within the cloud without the need to fix hardware on site. The protected devices of one or more embodiments may have a backdoor granting access to unauthorized systems and/or individuals. These backdoors may be disabled by the creation of the scalable VPNs of one or more embodiments disclosed herein. If an unauthorized system or individual attempts to attack one of the protected devices, the potential hacker is presented with a mirror and wall with full encryption. Therefore, the potential hacker is given no access to the device itself. These safeguards are enabled by security keys that can be changed frequently (e.g., on the order of seconds). FIG.8depicts graphical user interface (GUI)350configured to perform VPN maintenance functions and to display VPN maintenance information using one or more embodiments disclosed herein. The VPNs displayed on GUI350may be VPNs initiated by the cloud scalable VPN processes and systems of one or more embodiments. GUI350includes add VPN button352, view permissions button354, search field356, and VPN information display358. VPN information display358includes VPN name, dealer, status (e.g., connected or disconnected), and number of devices columns. VPN information display358includes rows displaying a VPN name, a dealer name associated with the VPN name, a status of the VPN, and a number of devices connected to the VPN. The rows also include an edit button360configured to edit the information displayed in the respective row upon selection and a delete button362configured to delete the VPN in the respective row upon selection. Upon selecting the delete button362, a window is displayed to confirm the deletion of the VPN. The deletion confirmation window may include the phrase "Are you sure you want to permanently delete VPN name? This action cannot be undone." The dealer and status columns include drop down box selection arrows364and366, respectively. In response to selecting drop down box selection arrow364, a window is displayed with a search field configured to search for dealer names entered into the system. A dealer name from the dealer names returned by the search may be selected using a radio button. In response to selecting drop down box selection arrow366, a window is displayed with a toggle button to select between connected and disconnected. A user may toggle between an up arrow and down arrow associated with the number of devices column to sort the VPN names based on the lowest and highest number of devices, respectively, associated with the VPNs.FIG.8shows an up arrow368associated with the number of devices column. FIGS.9A,9B, and9Cdepict GUI400configured to add a VPN using one or more embodiments disclosed herein. GUI400may be displayed upon selecting add VPN button352from GUI350. GUI400includes a name field402configured to receive input of a VPN name, a dealer drop down menu404configured to receive input from a user of a dealer name, a cancel button406to cancel the process of creating a VPN, and a next button408configured to display the next GUI in the add VPN sequence. As shown inFIG.9B, key entry field410is displayed for entering a security key associated with the VPN to be created.
The key may be generated by VPN creation/access process54using one or more identifiers (e.g., identification of a remote router, one or more protected devices, etc.). The key may be transmitted to the user so that the user may enter the key into the key entry field410. GUI400, as depicted inFIG.9B, also includes back button412configured to switch to GUI400as shown inFIG.9Aupon selection. After entry of the key, next button414may be selected to advance to the next step in the add VPN process carried out using GUI400. As shown inFIG.9C, after the entered key is accepted by VPN creation/access process54, VPN creation/access process54generates a configuration file configured to be downloaded using download button416. GUI400, as depicted inFIG.9C, also includes back button418configured to switch to GUI400as shown inFIG.9Bupon selection. After downloading the configuration file using download button416, the user can select the finish button420to finish the add VPN process. FIG.10depicts GUI450configured to edit a VPN using one or more embodiments disclosed herein. GUI450includes VPN name entry field452configured to accept the name of a VPN within the database. GUI450includes dealer drop down box454configured for selecting a dealer name associated with the selected VPN name. GUI450includes a delete VPN button456, which upon selection, deletes the selected VPN. The regenerate key button458is configured to regenerate a key for the selected VPN name and dealer name combination. The cancel button460may be selected to cancel out of the edit VPN GUI450. The save button462may be selected to save the entered VPN name and the selected dealer name. FIG.11depicts GUI500configured to perform user permission functions and to display permission set information using one or more embodiments disclosed herein. GUI500may be displayed upon selecting the view permissions button354of GUI350. GUI500may be used to add and edit permissions that users have to particular VPNs. GUI500includes add permissions button502, view VPNs button504, search field506, and permissions information display508. Permissions information display508includes permission set, dealer name, permission type, and user list columns. Permissions information display508includes rows displaying a permission name, a dealer name associated with the permission name, a permission type associated with the permission name, and a user list associated with the permission name. The rows also include an edit button510configured to edit the information displayed in the respective row upon selection and a delete button512configured to delete the permission set in the respective row upon selection. Upon selecting the delete button512, a window is displayed to confirm deletion of the permission set. The deletion confirmation window may include the phrase "Are you sure you want to permanently delete Permission Set? This action cannot be undone." The dealer, type, and user columns include drop down box selection arrows514,516, and518, respectively. In response to selecting drop down box selection arrow514, a window is displayed with a search field configured to search for dealer names entered into the system. A dealer name from the dealer names returned by the search may be selected using a radio button. In response to selecting drop down box selection arrow516, a window is displayed with a toggle button to select between dealer, VPN, and VPN/user.
In response to selecting drop down box selection arrow518, a window is displayed with a search field configured to search for user names entered into the system. A user name from the user names returned by the search may be selected using a radio button. FIG.12depicts GUI550configured to edit a permission set using one or more embodiments disclosed herein. GUI550includes name input field552configured to receive an input of a permission set name. GUI550also includes a permission type drop down box554, a dealer drop down box556, a VPN drop down box558, and a user name drop down box560. Upon selecting permission type drop down box554, a drop down box is displayed with a toggle button to select between dealer, VPN, and VPN/user. As shown inFIG.12, the permission type drop down box554defaults to the current permission type associated with the permission set name. Upon selecting dealer drop down box556, a window is displayed with the possible choices for the dealer name for selection by a user. As shown inFIG.12, the dealer drop down box556defaults to the current dealer associated with the permission set name. Upon selecting VPN drop down box558, a window is displayed with the possible choices for VPN name for selection by a user. As shown inFIG.12, the VPN drop down box558defaults to the current VPN associated with the permission set name. Upon selecting user name drop down box560, a window is displayed with the possible choices for user name for selection by a user. As shown inFIG.12, the user name drop down box560defaults to the current user name associated with the permission set name. Cancel button562may be selected to cancel the current changes made to a permission set through the permission type drop down box554, the dealer drop down box556, the VPN drop down box558, and the user name drop down box560. Save button564may be selected to save the current changes made to a permission set through the permission type drop down box554, the dealer drop down box556, the VPN drop down box558, and the user name drop down box560. Delete permissions button566may be selected to delete the existing permissions associated with the permission set. GUI550includes checkboxes and associated toggle buttons in region568. A checkbox may be associated with a tag or characteristic associated with a permission set and the associated toggle button may be used to associate a value with the tag or characteristic. GUI550also includes a notes area570for entering notes associated with the entered permission set. FIGS.13A,13B,13C,13D,13E, and13Fdepict GUI600configured to add a permission set using one or more embodiments disclosed herein. GUI600may be displayed upon selecting add permissions button502from GUI500. GUI600includes a name field602configured to receive input of a permission set name and a permission type drop down menu604configured to obtain a permission type for the added permission set. GUI600also includes a cancel button configured to cancel the name entered into the name field602and the permission type entered into drop down menu604. GUI600also includes finish button608configured to save the added permission set with the permission type information entered throughFIGS.13B,13C, and13D.FIG.13Bis displayed when the permission type dealer is selected. Dealer drop down menu610is then displayed so that a window is displayed with a search field configured to search for dealer names entered into the system.
A dealer name from the dealer names returned by the search may be selected using a radio button.FIG.13Cis displayed when the permission type VPN is selected. VPN drop down menu612is then displayed so a window is displayed with the possible choices for VPN name for selection by a user.FIG.13Dis displayed when permission type VPN/user is selected. User drop down menu614is then displayed so a window is displayed with the possible choices for user name for selection by a user.FIG.13Edepicts checkboxes and associated toggle buttons. A checkbox may be associated with a tag or characteristic associated with a permission set and the associated toggle button may be used to associate a value with the tag or characteristic.FIG.13Fdepicts the possible choices when type drop down box604is selected through GUI600. The following application is related to the present application: U.S. patent application Ser. No. 18/105,585 filed on Feb. 3, 2023. The processes, methods, or algorithms disclosed herein can be deliverable to/implemented by a processing device, controller, or computer, which can include any existing programmable electronic control unit or dedicated electronic control unit. Similarly, the processes, methods, or algorithms can be stored as data and instructions executable by a controller or computer in many forms including, but not limited to, information permanently stored on non-writable storage media such as ROM devices and information alterably stored on writeable storage media such as floppy disks, magnetic tapes, CDs, RAM devices, and other magnetic and optical media. The processes, methods, or algorithms can also be implemented in a software executable object. Alternatively, the processes, methods, or algorithms can be embodied in whole or in part using suitable hardware components, such as Application Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), state machines, controllers or other hardware components or devices, or a combination of hardware, software and firmware components. Any combination of computer-readable media may be utilized to implement the systems and processes of any embodiment disclosed herein. Computer-readable media may be a computer-readable signal medium and/or a computer-readable storage medium. A computer-readable storage medium may include any suitable tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, and/or any suitable combination thereof. A computer-readable signal medium may include any computer-readable medium that is not a computer-readable storage medium and that is capable of communicating, propagating, or transporting a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, optical fiber cable, RF, and/or the like, and/or any suitable combinations thereof. 
Computer program code for carrying out operations for aspects of the systems described herein may be written in one or any combination of programming language such as Linux, Java, Smalltalk, C++, and conventional procedural programming languages, such as C. Mobile apps may be developed using any suitable language, including those previously mentioned, as well as Objective-C, Swift, c #, and HTML5. While exemplary embodiments are described above, it is not intended that these embodiments describe all possible forms encompassed by the claims. The words used in the specification are words of description rather than limitation, and it is understood that various changes can be made without departing from the spirit and scope of the disclosure. As previously described, the features of various embodiments can be combined to form further embodiments of the invention that may not be explicitly described or illustrated. While various embodiments could have been described as providing advantages or being preferred over other embodiments or prior art implementations with respect to one or more desired characteristics, those of ordinary skill in the art recognize that one or more features or characteristics can be compromised to achieve desired overall system attributes, which depend on the specific application and implementation. These attributes can include, but are not limited to cost, strength, durability, life cycle cost, marketability, appearance, packaging, size, serviceability, weight, manufacturability, ease of assembly, etc. As such, to the extent any embodiments are described as less desirable than other embodiments or prior art implementations with respect to one or more characteristics, these embodiments are not outside the scope of the disclosure and can be desirable for particular applications.
56,980
11863535
DETAILED DESCRIPTION A combination of OSCORE and (D)TLS can be used to provide network authentication and validation without compromising on E2E security, but this may lead to decreased performance at the gateway/proxy. Therefore, there is a need for providing secure and encrypted communications over a network, particularly a network comprising constrained nodes, without significantly decreasing the performance at the gateway/proxy or at the end nodes. The present disclosure provides secure and encrypted communications over a network, particularly a network comprising constrained nodes, without significantly increasing the bandwidth and power requirements and without decreasing the performance. Advantageously, the solution provided herein can be implemented using network hardware, e.g. using a gateway, without the need to employ powerful and costly systems; relatively small and inexpensive hardware may protect the entire network. According to a first aspect of the present disclosure, there is provided a method for secure communications over a network. The method comprises receiving a data packet from a first device, the data packet comprising an encrypted data part and a metadata part, the metadata part comprising a cleartext part and removable metadata, the removable metadata comprising a network access code that is authenticatable by means of a network access key; validating the data packet, wherein validating the data packet comprises authenticating the network access code using the network access key; removing the removable metadata from the data packet after validating the data packet, thereby altering the data packet; and transmitting the altered data packet to a second device. In some embodiments, the network may comprise parts and channels wherein the communications are done wirelessly and/or via wirelines. The communications may comprise the transmitting and receiving steps of the method. In some embodiments, the removable metadata may be in cleartext or comprise a cleartext portion (a portion in cleartext). The term "cleartext" may be interpreted as meaning data (i.e. information) that is transmitted or stored unencrypted ('in clear'). The network access code and/or the entire removable metadata may also be encrypted (ciphered), and accordingly, the network access key may be required for decrypting (deciphering) the network access code and/or for decrypting (deciphering) the entire removable metadata when the latter is encrypted (ciphered). As described herein, the third device, i.e., the device at which the data packet is received and validated and from which the altered data packet is transmitted to the second device, can be provided as a gateway, but alternatively, it may be a proxy device. Likewise, the third device can serve as a proxy to the second device. Further, the third device can serve as a gateway and a proxy to the second device. In the latter case, the gateway, i.e., the gateway device, may be configured to perform both proxying and protocol translation. It can be understood that, in the context of the present disclosure, the encrypted data part of the data packet and the cleartext part may be destined to be received by the second device, wherein the encrypted data part is to be decrypted. Therefore, the method of the first aspect of the disclosure may also comprise receiving at the second device the altered data packet transmitted from the third device. Likewise, the method may also comprise decrypting at the second device the encrypted data part. The altered data packet may comprise the encrypted data part, and may also comprise the cleartext part that may be destined to be received by the second device.
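As a rough, non-authoritative illustration of the gateway-side processing of the first aspect, the following Python sketch treats the network access code as a keyed MAC computed over the removable metadata (one of the options discussed later in this description), authenticates it with the network access key, and strips the removable metadata so that only the encrypted data part and the cleartext part are forwarded; the packet fields and function names are illustrative assumptions, not the patent's implementation.

import hashlib
import hmac
from dataclasses import dataclass, replace
from typing import Optional

@dataclass
class DataPacket:
    encrypted_data: bytes        # end-to-end protected part destined for the second device
    cleartext_part: bytes        # metadata that stays with the packet
    removable_metadata: bytes    # e.g. expiration time, timestamp or sequence number
    network_access_code: bytes   # authenticatable by means of the network access key

def validate_and_forward(packet: DataPacket, network_access_key: bytes) -> Optional[DataPacket]:
    # Authenticate the network access code at the third device (gateway/proxy).
    expected = hmac.new(network_access_key, packet.removable_metadata, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, packet.network_access_code):
        return None  # validation failed: do not forward the packet
    # Remove the removable metadata, producing the altered data packet.
    return replace(packet, removable_metadata=b"", network_access_code=b"")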
The altered data packet does not comprise the removable metadata which, according to the method, are removed from the data packet, thereby altering the latter upon (after) authenticating the network access code at the third device. Likewise, it is contemplated optionally providing to the second device an encryption key that is required for decrypting the encrypted data part of the data packet. Providing the encryption key may comprise deriving the encryption key from a particular trusted source, preferably a particular hardware root of trust, or may comprise pre-sharing or negotiating between the first device and the second device the encryption key that allows for, i.e. is required for, decrypting the encrypted data part of the data packet. The pre-sharing or negotiating the encryption key may comprise the first and the second device communicating between them via the same network over which the data packet and the altered data packet are transmitted, or via a different network or different communication channel/link. The features of authenticating at the third device, e.g., at the gateway, the network access code for the purpose of validating the data packet, and also removing at the third device the metadata part comprising the code, directly contribute to increasing the resource efficiency in the network and in the second device, especially when the latter is a constrained node, i.e., a constrained communication node. The aspects of firstly having an original data packet that is structured so that it can be validated at the gateway, secondly validating at the proxy/gateway the data packet, and thirdly altering the original packet so that its altered form may be light and compatible with any bandwidth and energy restrictions of the second device, allows using the third device for protecting the second device and the overall communication network against malicious attacks which might try to exploit the restrictions. This is a unique aspect of the present disclosure, given that, in existing networks, a gateway is commonly a blind forwarder of information. Moreover, since in the present method the data packet is authenticated/validated at the third device, e.g., at the gateway, it allows for the possibility to safely accumulate or queue data packets at the third device before forwarding them to the second device. This may be useful in cases in which the second device due to power limitations or its specific functionality temporarily stops communicating with the third device and/or the rest of the network, and/or when the second device temporarily is turned off or enters a sleep or low power consumption mode. The latter is common in IoT configurations. For example, when the second device is a relatively small module comprising a sensor such as a temperature sensor, it is common that the module enters temporarily in a sleep mode for saving power. Therefore, the method of the first aspect of the disclosure may comprise queuing at the third device a plurality of data packets. Moreover, the method may comprise altering at the third device the plurality of data packets in the aforementioned way, thereby producing a plurality of altered data packets which may be sent/transmitted from the third to the second device when the latter communicates with the third device e.g. when the second device exits a sleep mode. From the above, it will now be understood that the present disclosure contemplates a case wherein the second device may exit and/or enter sleep mode. 
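The queuing behaviour just described, including the case elaborated in the following paragraphs in which the second device exits sleep mode and contacts the third device, could be sketched as follows; the class and method names, and the optional expiration handling, are assumptions made for illustration.

import time
from collections import deque
from typing import Callable, Optional

class GatewayQueue:
    # Hold validated, altered packets for a sleeping second device and flush
    # them when the device contacts the third device again.
    def __init__(self) -> None:
        self._queue: deque = deque()

    def enqueue(self, altered_packet, expires_at: Optional[float] = None) -> None:
        self._queue.append((altered_packet, expires_at))

    def flush(self, send: Callable, now: Optional[float] = None) -> None:
        # Called when the second device exits sleep mode and contacts the gateway.
        current = time.time() if now is None else now
        while self._queue:
            packet, expires_at = self._queue.popleft()
            if expires_at is not None and current > expires_at:
                continue  # expiration time has passed: discard instead of sending
            send(packet)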
Likewise, the present disclosure contemplates a case wherein the second device may exit sleep mode and may contact the third device. Moreover, the third device may send the queued packets to the second device, preferably after or upon the third device being contacted by the second device as described above. As is explained further below, a data packet may comprise an expiration time. If and when the data packets are queued at the third device, and the data packets comprise an expiration time (or more than one expiration time), it is contemplated that the third device may check if the expiration time (or the more than one expiration time) has (or have) passed or not. The third device may send the queued data packets to the second device after checking that the expiration time (or the more than one expiration time) has (or have) not passed. Preferably, the third device may obtain a current time via the first device. In a non-binding example, the first device is a cloud server and the third device obtains from the cloud server a current time. Likewise, the third device may have its own clock. In a non-binding example, the third device has a GPS clock that provides a time to the third device. For decreasing the possibility that an unauthorized or malicious third party intercepts and validates the network access code of the data packet, the network access key required for the validation may be provided to the third device, e.g., to the gateway, via a trusted and secure way. Therefore, in some embodiments, the method of the first aspect of the disclosure may further comprise providing to the third device the network access key, and providing the network access key may comprise at least one of:
deriving the network access key from a trusted source, preferably a hardware root of trust; or
pre-sharing the network access key between the first device and the third device; or
negotiating the network access key between the first device and the third device.
The aforementioned hardware root of trust may be connected to, or comprised by, the third device. Likewise, the hardware root of trust may have a secure environment and may be configured to generate the key, protect the key, as well as to perform cryptographic functions within the secure environment. Likewise, the trusted source may be a root of trust that comprises hardware, and/or software and/or firmware components. The aforementioned pre-sharing or negotiating the network access key between the first device and third device may be performed with the devices communicating and exchanging information between them via the same network or network channel/link used for transmitting the data packet, or via a different network or different channel/link. Further, there is contemplated a case wherein the first device and the third device may be connected via a secure communication channel (link) used for pre-sharing and/or negotiating the network access key, before or during the transmittance via the network of the data packet or of a set of data packets. The secure communication channel may be part of a separate network within which communication signals are exchanged wirelessly and/or via wire connections. Likewise, the network access key may be provided or inputted to the third device manually, or by means of connecting to the third device an information storage medium that contains the network access key. For example, the information storage medium may be a magnetic storage medium or an optical storage medium (e.g. an optical disc).
The data packet may have time sensitive information which renders the data packet utile only if the latter is received at the third device within a certain period of time after the data packet was transmitted by the first device. The time sensitive information may be an expiration time. The time sensitive information may be part of the metadata part, or preferably may be part of the removable metadata. Accordingly, the metadata part, preferably the removable metadata, comprise the expiration time. In a non-limiting example, the expiration time is used because the encrypted data part of the data packet that is to be decrypted at the second device contains a set of information or instructions that are to be received and/or executed by the second device within a specific period of time. In another non limiting example, the expiration time exists to discard data packets that contain information that, for the purpose related to the secure communication, becomes outdated beyond the expiration time. In another non-limiting example, the expiration time exists as a means to check at the third device whether the communication of the data packet from the first device to the third device was not delayed due to an undesired interception of the data packet by a malicious or unauthorized third party. Therefore, in the method according to the first aspect of the disclosure, the metadata part, preferably the removable metadata, may comprise an indication of an expiration time, and validating the data packet may comprise validating at the third device whether the expiration time has passed, and discarding the data packet if the expiration time has passed. The discarding of the data packet may comprise deleting the data packet, or storing the data packet for further processing and/or for record keeping. Likewise, validating whether the expiration time has passed may comprise comparing the expiration time with a current first device time. When the first device is a server, such as for example a cloud server, the current first device time may be the server time. Accordingly, the method may comprise providing to the third device the current first device time. The providing may be achieved via communicating the first device time from the first device to the third device via the same network over which the data packet is transmitted, or via another network such as the aforementioned different network/channel via which the first device and the third device may pre-share or negotiate the network access key. It is noted that, the aforementioned expiration time may be in the form of a timestamp along with the data packet's authentication code. The network access code (NAC) may be a keyed message authentication code (MAC) based on the network access key. The NAC may be created/verified by the first device and the third device because the devices may have the network access key (NAK). Likewise, the authentication code may be a digital signature. The digital signature may be calculated out of the network access key and the metadata information, e.g., expiration time. From the above, it is understood that the data packet may contain verifiable timing information which can be used for checking with the third device whether the data packet was received within a certain time. 
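A possible form of the timing checks just described, assuming the third device compares an expiration time and a timestamp against the current first-device time it has obtained, is sketched below; the maximum accepted age is an illustrative parameter and not a value taken from the text.

from typing import Optional

def timing_is_valid(expiration_time: Optional[float],
                    timestamp: Optional[float],
                    current_first_device_time: float,
                    max_age_seconds: float = 30.0) -> bool:
    # Returns False when the packet should be discarded at the third device.
    if expiration_time is not None and current_first_device_time > expiration_time:
        return False  # the expiration time has passed
    if timestamp is not None:
        age = current_first_device_time - timestamp
        if age < 0 or age > max_age_seconds:
            return False  # too old, or apparently from the future: treat as invalid
    return True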
Likewise, the inclusion of timing information in data packets that constitute a set of data packets, may allow for checking that each or any of the set's data packets is sent and/or received in a correct chronological order, or with a desired chronological proximity with respect to when the other data packets of the set have been sent by the first device, or have been received by the third device. The aforementioned timing information may be a timestamp. The timestamp may indicate the moment in which the data packet is either created, or is present in the first device or is transmitted by the first device. Therefore, the moment may be indicated by the timestamp according to the first device time. Related to the latter, it is contemplated that the first device may comprise or may be connected with a clock that indicates the first device time, and the timestamp may indicate the first device time given by the clock the moment the timestamp is created. In some embodiments, the time given by the clock when validating the time packet at the third device is the aforementioned current device time. According to all the above, the verifiable timing information may alternatively comprise a sequence number. The sequence number may serve as an indication about the moment that the data packet was transmitted or received or generated with respect to the other data packets. Some non-binding examples of what the sequence number may be, are: i) a number in a sequence of numbers each of which is assigned to a data packet of a set of data packets successively generated or sent/transmitted by the first device, or ii) a number of a sequence of numbers wherein in the sequence the numerical difference (ΔN=N2−N1) of any two numbers (e.g. N1and N2) of the sequence is linearly correlated (e.g. via the relationship ΔN=a*Δt, wherein a is a constant) with a specific period of time (e.g. Δt). Overall, since a sequence number may serve the purpose of correlating the data packet with a set of data packets, or the purpose of identifying the (chronological) position of the data packet with respect to the other members of the set, in the latter case the sequence number being valuable within a specific context related to the set of data packets, the sequence number, or the entire sequence containing the number, may be pre-shared or negotiated between the first device and the third device so that it may be known at the third device the context within which the sequence number is to be evaluated. An optional case is contemplated in which the first and the third device may negotiate or share or exchange between them, by any mechanism, a starting sequence number. The starting sequence number may be the sequence number of a first data packet of a sequence or set of data packets to be communicated between the first and the third device. The sequence number may be incremented with every communicated data packet of the set. The first data packet may be the set's first data packet being communicated between the first and the third device. Accordingly, it is contemplated preferably incrementing, for example with respect to the first sequence number, the corresponding sequence numbers of the set's data packets that may be communicated between the first and the second device after the first data packet has been communicated between the same devices. Likewise, the pre-sharing or negotiating the sequence number, may serve for anticipating receiving at the third device the data packet or the set of data packets. 
Considering the above, in the method of the first aspect of the disclosure, the removable metadata may comprise verifiable timing information, the verifiable timing information comprising a timestamp or a sequence number. In the latter case, the method may further comprise pre-sharing or negotiating between the first device and the third device the sequence number. The pre-sharing or negotiating between the first device and the third device the sequence number, or preferably the starting sequence number, is primarily contemplated for when (the case that) the metadata part comprises a sequence number that may denote chronological arrival of data packets at the third device, or chronological transmittance of data packets from the first device, or chronological creation of data packets. The creation may occur at the first device or at a data packet generation device connected to and communicating with the first device. Likewise, when the removable metadata comprises verifiable timing information that comprises a sequence number, it is contemplated that the method may comprise pre-sharing or negotiating between the first device and the third device at least one of: a rule or a recurrence relation that defines a sequence that includes the sequence number; a first sequence number of the sequence; or an initial term of the recurrence relation. A rule, that is also commonly called a recurrence relation, may define the sequence that is a number sequence. Some non-binding examples of a rule are: that all the numbers of the sequence are prime numbers; any number of the sequence differs compared to the previous number of the sequence by a fixed value e.g. −1 or +1; that each number an+1of the sequence is related to the previous number an within the sequence by the relation an+1=b*an+c, wherein n is an integer that is ≥1, b and c are integral numbers, and a1that corresponds to n=1 is the first sequence number of the sequence. In the latter example, some non-binding examples of an initial term of the recurrence relation are: b=1 or 2; c=1 or 2; a1=931. If the two devices negotiate the rule or the recurrence relation that defines the sequence, then this negotiation may include negotiating a description of the sequence. Two non-binding examples of the description of the sequence are: prime numbers; and prime numbers starting from 11. Advantageously, when the removable metadata comprises verifiable timing information that comprises a sequence number, the method may comprise pre-sharing or negotiating between the first device and the third device a description of the sequence that includes the data packet's sequence number, and the first number of the sequence. Accordingly, in a non-binding example the method may comprise negotiating between the first and the third device that the sequence is of prime numbers, and that the first number of the sequence is 11. In the latter case the sequence number of the first data packet would be 11, the sequence number of the second data packet would be 13, the sequence number of the third data packet would be 17, etc. The aforementioned optional verifiable timing information may be checked or validated, with the purpose of determining at the third device whether the data packet is valid or allowable, and thus, whether the data packet should be altered and subsequently passed to the second device (or not). 
Checking or validating the verifiable timing information allows for blocking at the third device any data packet that is either not received on time, or that is a non-secure data packet that was not sent by the first device. Such a non-secure data packet may for example have been sent by a malicious third party as part of an attack. If the data packet is determined to be invalid or non-allowable, then the data packet may be discarded. Preferably discarding the data packet comprises any of deleting, archiving (putting/registering in a record or list) or reporting/sending to a security device/server or authority the data packet or parts (e.g. the timestamp or sequence number) of it. Therefore, in the method of the first aspect of the disclosure, in the case that the data packet's removable metadata comprises verifiable timing information, the method, particularly the step of validating the data packet, may further comprise validating (verifying) the verifiable timing information for determining whether the data packet is valid, and discarding at the third device the data packet when determining that the data packet is not valid. In the latter case there are contemplated the following two options: when the verifiable timing information comprises the timestamp, then validating the verifiable timing information may comprise comparing the timestamp with a first device time; when the verifiable timing information comprises the sequence number then validating the verifiable timing information may comprise comparing the sequence number with an expected sequence number. The expected sequence number may be deduced on the basis of the negotiation that may take place between the first and third device regarding the sequence that includes the sequence number as described further above. Accordingly, in the context of determining whether the data packet is valid or not, verifying the network access code may advantageously be complemented by validating the verifiable timing information. Likewise, validating or verifying the verifiable timing information may include: verifying the network access code using the network access key; checking whether an expiration time has passed and/or checking whether or not a time stamp or sequence number are valid or expired. Determining whether the data packet is valid may be interpreted as meaning determining whether the data packet is allowable or relevant to the second device. Likewise, determining that the data packet is not valid may be interpreted as meaning that the data packet is not allowable or not relevant to the second device. The aforementioned timing information and sequence number may serve as a basis for deciding whether the data packet is allowable (or not). Deciding on the allowability of the data packet is most relevant to the case that a plurality of data packets is transmitted at (by) the first device and is received at (by) the second device, the data packets having sequence numbers that belong to the same or different sequences. 
Therefore, in the method of the first aspect of the disclosure, the data packet's verifiable timing information may comprise a sequence number, and the method may further comprise also receiving at the third device another data packet that has a corresponding (removable) metadata part that has another sequence number that belongs to a specific sequence of numbers, and the data packet may be allowable when any of the following a)-d) is true:a) The sequence number of the data packet is different from the another sequence number of the another data packet.If a) above is true, then that may signify that the data packet received at the third device is allowable because it is not part of an attack comprising transmitting to the third device several data packets that have the same sequence number.b) The sequence number of the data packet belongs to the specific sequence; and/or,c) the sequence number of the data packet belongs to the specific sequence, and the sequence number and the another sequence number are arranged within the sequence in an order that is the chronological order in which the data packet and the another data packet are received at the third device.If c) and/or b) above are true, that may signify that the data packet is allowable because it is anticipated at the third device because it is part of a set of data packets which according to their numbers and the sequence the numbers belong to, are anticipated at the third device because they are consecutively and/or orderly received at the third device, and/or because the sequence and/or the latter's numbers (members) have been additionally pre-shared or negotiated between the first and the third device.(d) The sequence number of the data packet belongs to the specific sequence, and in between receiving at the third device the data packet and the another data packet, no third data packet having a corresponding third sequence number that does not belong to the specific sequence is received at the third device.If d) above is true, that may signify that the data packet is allowable because its sequence number or the sequence within which the number belongs, has not been red-flagged or correlated or marked in relation to a suspicious or malicious attack against the network or any of the first, second or third device. The altered data packet, that is the data packet resulting from altering at the third device the (original) data packet transmitted by the first device, may be (i.e. comprises/has a structure that is) according to the OSCORE protocol. Therefore, it is also contemplated that the method of the first aspect of the disclosure may comprise, before transmitting the data packet from the first device, providing an initial data packet that is according to the OSCORE protocol and comprises an encrypted data part and a metadata part, the metadata part comprising removable metadata, and the method further comprises modifying the initial data packet by adding removable metadata to the metadata part, the removable metadata comprising a network access code that is authenticatable by means of a network access key. According to a second aspect of the present disclosure, there is provided a third device for secure communications over a network, and the third device may be the gateway mentioned above in relation to the first aspect of the disclosure. Therefore, the disclosure in its second aspect is a device that is or comprises a third device, such as a gateway, for secure communications over a network. 
The third device comprises a receiver configured to receive a data packet from a first device, the data packet comprising an encrypted data part and a metadata part, the metadata part comprising a cleartext part and removable metadata, the removable metadata comprising a network access code that is authenticatable by means of a network access key; a validator coupled to the receiver and comprising: a memory storing the network access key and instructions, and a processor configured to execute the instructions to cause the validator to validate the received data packet by: authenticating the network access code using the network access key; and removing the removable metadata from the data packet to alter the data packet; and a transmitter coupled to the validator and configured to receive the altered data packet and transmit the altered data packet to a second device. In some embodiments, the validator may be circuits for computation. Alternatively, the validator can be a system or a combination of elements arranged in a computer or electronic device so as to perform tasks described herein. In some embodiments, the validator may comprise one or more processors and one or more memories storing instructions for performing the tasks specified herein. The third device of the second aspect of the disclosure may have a transceiver that may be configured to perform the functions of each of the aforementioned receiver and transmitter. When the third device comprises the transceiver, then it may not comprise the aforementioned separate receiver and transmitter. Each of the aforementioned transceiver, receiver and transmitter may be configured for connecting to the network and receiving and/or transmitting data packets wirelessly and/or via wire. The third device may further comprise a second receiver and/or a second transmitter and/or a second transceiver for pre-sharing and/or negotiating the network access key and/or the aforementioned data packet's optional timing information and/or first device time, as described further above in relation to the method of the first aspect of the disclosure. The third device, may comprise the firmware and/or software that is required for the functions that the third device is configured for executing when operated. According to a third aspect of the present disclosure, there is provided a system for secure communications over a network. The system comprises a second device configured to receive an altered data packet from a third device; and the third device comprising: a receiver configured to receive a data packet from a first device, the data packet comprising an encrypted data part and a metadata part, the metadata part comprising a cleartext part and removable metadata, the removable metadata comprising a network access code that is authenticatable by means of a network access key; a validator coupled to the receiver and comprising: a memory storing the network access key and instructions, and a processor configured to execute the instructions to cause the validator to validate the received data packet by: authenticating the network access code using the network access key; and removing the removable metadata from the data packet to alter the data packet; and a transmitter coupled to the validator and configured to receive the altered data packet and transmit the altered data packet to the second device. 
In some embodiments, in the system of the third aspect of the disclosure, the first device may be a cloud server and the second device may be a constrained node. Likewise, in the system the third device may be a proxy to the second device. Overall, the system of the third aspect of the disclosure may be configured for executing the method of the first aspect of the disclosure. FIG.1shows a schematic diagram of a system according to an aspect of the disclosure. The system shown inFIG.1comprises a first device1which is a server, specifically a cloud server, a third device3, and a second device2a. This particular system also comprises additional devices2band2cwhich are similar in terms of functionality to the second device, and similarly to the latter they are constrained communications nodes, and are communicatively connected to the third device3. The third device3is a gateway and a proxy to the second device2and the aforementioned additional second devices2band2c. Therefore, communications between the first device1and the second device2are mediated by the third device3. The devices are interconnected via several links13,32,32b,32c,22b,22cof the network, as shown inFIG.1. Any of the links can be wired or wireless. FIG.2shows a flow diagram of a method according to an aspect of the disclosure. The method shown inFIG.2comprises the following steps:In step1001a data packet is transmitted from the first device to the third device. The data packet comprises an encrypted data part and a metadata part, the metadata part comprises removable metadata, the removable metadata comprises a network access code that is authenticatable by means of a network access key;in step1002the data packet is received at the third device;in step1003the network access code in the removable metadata is authenticated and the data packet is validated at (by) the third device;in step1004the data packet is altered by removing the removable metadata, thereby producing an altered data packet;in step1005the altered data packet is transmitted from the third device to the second device;in step1006the altered data packet is received at the second device;in step1007(decipher) the encrypted data part of the data packet is decrypted at the second device. In the example ofFIG.2, validating, in step1003, at the third device the data packet and authenticating the network access code (NAC) comprises checking whether the NAC and optionally another part of the removable metadata is valid or not, and proceeding to altering, in step1004, the data packet only if the NAC or the another part is determined to be valid; if the NAC or another part of the removable metadata are determined as being not valid, then proceed to discarding, in step1009, the data packet. Moreover, the example shown inFIG.2comprises providing, in step1008, to the third device the network access key before authenticating, in step1003, at the third device the network access code. Likewise, in the example ofFIG.2advantageously the removable metadata comprises timing information. Likewise, the example shown inFIG.2comprises providing, in step1010, to the second device the encryption key before decrypting, in step1007, at the second device the data packet's encrypted data part using the encryption key. Likewise, in the example ofFIG.2transmitting, in step1001, the data packet from the first to the third device is optionally preceded by providing, in step1000, the data packet to the first device. 
The providing, in step1000, the data packet(s) can be either sending/inputting/transmitting the data packet to the first device, and/or preferably generating the data packet at the first device or in a data packet generating device that is communicatively connected to the first device. Therefore, preferably the first device, or alternatively the data packet generating device, is configured for generating the data packet(s). FIG.3is a flow diagram of a part of a method according to an aspect of the disclosure. In the example ofFIG.3the data packet's removable metadata of the metadata part comprises an indication of an expiration time, and the method, and more specifically validating, in step1003, the data packet, comprises validating, in step1003a, at the third device whether the expiration time has passed. Moreover, for validating, in step1003a, the expiration time, the method comprises providing, in step1011, a current first device time to the third device, because in the specific embodiment shown, validating, in step1003, whether the expiration time has passed comprises comparing the expiration time with a current first device time. Moreover, the method ofFIG.3comprises discarding, in step1009, the data packet if the expiration time has passed, or altering, in step1004, the data packet if the expiration time has not passed. By discarding, in step1009, expired or invalid or non-authentic data packets, the unnecessary traffic from the third to the second device is eliminated or minimized, thereby optimizing the use of the bandwidth and power resources of the second device. FIG.4is a flow diagram of a part of an alternative method according to an aspect of the disclosure. In the example ofFIG.4the data packet's removable metadata comprises a network access code and verifiable timing information, the verifiable timing information comprising a timestamp or a sequence number. Likewise, in the example ofFIG.4validating, in step1003, the data packet comprises authenticating the network access code and validating, in step1003b, the verifiable timing information. Moreover, the method ofFIG.4comprises providing, in step1012, to the third device a first device time, such as the aforementioned current first device time, and also comprises pre-sharing or negotiating, in step1013, between the third and the first device the aforementioned data packet's sequence number. Moreover, the method ofFIG.4comprises discarding, in step1009, the data packet if the latter is deemed to be not valid, and altering, in step1004, the data packet if the latter is deemed to be valid. Different examples of data packets used in the method of an aspect of the disclosure, are schematically illustrated inFIGS.5A-5D. In the example inFIG.5A, the data packet20comprises a header201, an encrypted data part203, and removable metadata202that comprises an expiration time and a network access code. The header201is in cleartext form. The network access code may be a message authenticate code (MAC). The MAC may be in cleartext. In some embodiments, the data packet comprises at least one of optional timing information, a timestamp, a sequence number, or an indication of an expiration time. At least one of the network access code, the optional timing information, the timestamp, the sequence number, or the indication of the expiration time, is in cleartext. 
In the example inFIG.5Bthe data packet30comprises the header301awhich can be considered as being metadata, some other metadata301b, an encrypted data part303, and a network access code302. In the example inFIG.5B, parts301a,301bof the data packet are in cleartext. Moreover, the network access code302is removable, and the data packet portion comprising the header301a, the other metadata301band the encrypted data part is compatible with the OSCORE protocol. In the example inFIG.5C, the data packet40acomprises the header401a, the other metadata401b, a time stamp403a, a network access code403band an encrypted data part402. After altering the data packet40aofFIG.5Cby removing the time stamp403aand the network access code403b, there is left the data packet40bthat is shown inFIG.5Dand comprises the header401a, the other metadata401band the encrypted data part402. The altered data packet40binFIG.5Dis compatible with the OSCORE protocol. A gateway3athat is an example of a third device according to an aspect of the disclosure is shown in the schematic diagram ofFIG.6Awhich further shows that the gateway3acomprises:a receiver4configured for receiving a data packet from a first device, the data packet comprising an encrypted data part and a metadata part, the metadata part comprising removable metadata, the removable metadata comprising a network access code that is authenticatable by means of a network access key;a validator6coupled to the receiver4, validator6comprising a memory7to store the network access key and instructions, and a processor9configured to execute the instructions to cause the validator to validate the received data packet by authenticating the network access code using the network access key, and upon validation, removing from the data packet the removable metadata, thusly altering the data packet;a transmitter5coupled to the validator6to receive from the latter the altered data packet and configured for transmitting the altered data packet to a second device. Another gateway3athat is another example of a third device according to an aspect of the disclosure is shown in the schematic diagram ofFIG.6Bwhich further shows that the another gateway3ais similar to the gateway3ashown inFIG.3A, with the main difference being that in the another gateway3areceiver4and transmitter5have been replaced by a single transceiver8which functions as both a transmitter and a receiver. Although specific terms are used in the previous description for the sake of clarity, these terms have been presented for the purposes of illustration and description of the disclosure. It is not intended to be exhaustive or limit the disclosure to the precise form described, and many modifications and variations are possible in light of the teaching above. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications. This description will enable others skilled in the art to best utilize and practice the invention in various embodiments and with various modifications as are suited to a particular use.
40,726
11863536
DETAILED DESCRIPTION OF EMBODIMENTS Some of the various embodiments of the present invention relate to remotely accessing data on a secured server. According to some of the various embodiments, SaaS (Software as A Service) tools may securely query and process data that is sitting inside a secure network by using a user workstation as a bridge to pass instructions to a secured data server and receiving the results for further processing and reporting. In this way, embodiments may provide security and scalability of enterprise tools to the SaaS based tools. A volume data comprising the data may remain in the host environment and processing of data done on the host environment. This may enable higher security of data. Since the processing occurs on the host environment, the latency of data may be low. Scalability may be increased since the SaaS data tools do not need the same storage and computing capacity as the host environment. This may allow some of the various embodiments to provide high service levels to the end user. Some of the various embodiments may enable businesses to leverage new technologies and solutions that are owned and operated by third parties to work directly with their enterprise data in a secure fashion and the without the need to install these applications in their own network or by providing direct access to data to these applications. FIG.1is an example block diagram showing a system100for remotely accessing data145on a remote secured server140according to some of the various embodiments of the present invention. As illustrated in this example system, an assistant computing device100assists a requesting computing device120to access a data set145via a remote computing device140over a network160. In this alternative embodiment, requesting computing device120and assistant computing device110may reside outside of physically secured data center150. Remote computing device140may reside inside physically secured data center150. Assistant computing device110may communicate to requesting computing device120through network160via communication links122and112. Assistant computing device110may communicate to remote computing device140through network160and firewall130via communication links112,132and152. Requesting computing device may communicate to remote computing device140through network160and firewall130via communication links122,132and152. The remote computing device140may comprise a computing device such as, but not limited to: a personal computing device (PC, tablet or phone), a distributed computing device (e.g. a server) that comprises the data which the requester is trying to query and analyze, a combination thereof, and/or the like. According to some of the various embodiments, the remote computing device140could be the same as the requesting computing device120(when the dataset145is located on the same device) but more often than not, the remote computing device140and requesting computing device120are separate devices. The remote computing device may serve data remotely by receiving and processing queries received. (An example of a typical query format is Sequential Query Language (SQL)). According to some of the various embodiments, the remote computing device140may reside in a physically secured data center150. The term “data center,” as applied herein may to specially designed computer rooms. A data center may comprise a facility used to house computer systems and associated components, such as telecommunications and storage systems. 
Data center(s) generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and various security devices. Communications in data centers may be based on networks running, for example, an IP protocol suite. Data centers may comprise a set of routers and switches that transport traffic between the servers and to the outside world. Redundancy of the Internet connection may be provided by using two or more upstream service providers. Some of the servers at the data center may be employed for running the basic Internet and intranet services needed by internal users in the organization, e.g., e-mail servers, proxy servers, and DNS servers. Network security elements may also be deployed. Examples of network security elements may comprise, but are not limited to: firewalls, VPN gateways, intrusion detection systems, combinations thereof, and/or the like. Also common are monitoring systems for the network and some of the applications. Additional off site monitoring systems are also typical, in case of a failure of communications inside the data center. The data center150may be secured. For example, physical access to the site may be restricted to selected personnel, with controls such as, for example, a layered security system. A layered security system may comprise elements such as fencing, bollards and mantraps. Video camera surveillance and permanent security guards may be present. A mantrap, may comprise a physical security access control system comprising a small space with two sets of interlocking doors, such that the first set of doors must close before the second set opens. In a manual mantrap, a guard may lock and unlock each door in sequence. An intercom and/or video camera may be employed to allow the guard to control the trap from a remote location. In an automatic mantrap, identification may be required for each door, sometimes even possibly different measures for each door. For example, a key may open the first door, but a personal identification number entered on a number pad opens the second. Other methods of opening doors include proximity cards or biometric devices such as fingerprint readers, facial recognition systems, iris recognition scans, combinations thereof, and/or the like. Metal detectors may be built in, in order to prevent entrance of people carrying weapons. According to some of the various embodiments, the physically secure data center150may comprise a physical facility that is owned or leased. The physically secure data center150may house the remote computing device140and/or the dataset145being accessed. The physical facility could be the same location where the requesting computing device120is located or could be located in a different place. The dataset145may be located in remote computing device140and be accessible by the requesting computing device120using access credentials. The access credentials may according to some embodiments, be optional. According to some of the various embodiments, the remote computing device140may be in communication with a data set145. A data set145(or data set145) may comprise a collection of data. The collection of data may correspond to contents of database(s). Examples of databases comprise, but are not limited to: a relational database data set; (e.g. Oracle, DB2, Access); a non-relational database data set; (e.g. NOSQL); a web service query responsive data set; (e.g. 
SalesForce.com); an application specific query responsive data set; (e.g. SAP); a comma-separated-values (CSV) data set; a spreadsheet data set; (e.g. Microsoft Excel); a plain text data set; hierarchical format database(s); propriety format(s) (example Microsoft Excel); combinations thereof, and/or the like. According to some embodiments, a data set145may correspond to the contents of a statistical data matrix. The data set145may comprise value(s) for variable(s). Each value may be referred to as a datum. The data set145may comprise data for one or more members. According to some of the various embodiments, the term data set145may refer to the data in a collection of closely related tables, corresponding to a particular experiment or event. The data set145may be located, for example, on a network accessible drive, within the remote computing device140, or other location with communication of the remote computing device140. The remote computing device by its definition can serve the data remotely by receiving and processing queries received. (example of typical query format is Sequential Query Language (SQL)) The data set145may be stored on a data storage device. A data storage device may comprise a device for recording and/or storing information (data). Examples of data storage devices comprise, but are not limited to: tangible storage mediums, Read-only memory, Random Access memories, flash drives, disk drives, network accessible drives, magnetic tape, optical drives, combinations thereof, and/or the like. According to some of the various embodiments, the remote computing device140may be configured to communicate with an external network through a firewall. A firewall may comprise a network security system that controls incoming and outgoing network traffic based on an applied rule set. A firewall may establish a barrier between a trusted, secure internal network and another network (e.g., the Internet) that is assumed not to be secure and trusted. Firewalls may exist both as software to run on general purpose hardware and as a hardware appliance. Many firewalls may also offer other functionality to the internal network they protect, such as acting as a DHCP server for that network. Some firewalls may be implemented as software in combination with hardware and/or virtual. According to some of the various embodiments, the firewall may comprise a routing function abilities that pass data between networks and components. A security Appliance may comprise a network security system that controls the incoming and outgoing network traffic based on an applied rule set. The appliance may establish a barrier between a trusted, secure internal network and another network (e.g., the Internet) that is assumed not to be secure and trusted. The security appliance may exist as a hardware appliance, software appliance or software program. An example of security appliance is a firewall. Requesting computing device120may comprise a computing device configured to initiate a request such as, but not limited to: a personal computing device (e.g. PC, Tablet, and Phone), a distributed computing device (server), combinations thereof, and/or the like. The first step in the flow of information may be initiated by the requesting computing device120. During the time of this request, the requesting computing device120may be located within a company's internal network or have access to a company's network (For example, over a virtual private network (VPN)). 
According to some of the various embodiments, a request may be initiated on a requesting computing device120by a requester. A requester may, for example without limitation, comprise a human user or a machine program that initiates the request for information. A human user may initiate a request when he or she needs the information. The machine program may comprise, for example, a monitoring program that may be configures to initiate a request for information based on the occurrence of an event. The event could, for example, be the passage of time or could be a trigger event that occurs. For example, a trigger event could be the arrival of a new data file or completion of a batch schedule. According to some of the various embodiments, a request may employ, for example without limitation, hypertext transfer protocol. (HTTP or HTTPS). The request may, for example, be specifically related to data that exists on the remote computing device140. The request may be configured, for example, to query data (read-only), manipulate data. (write), combinations thereof, and/or the like. Examples of requests may comprise without limitation: 1) run an audit on a date of birth field of a people dataset; 2) profile columns of a people dataset; 3) if the date of birth format is mm-dd-yyyy, then convert to mm/dd/yyyy; 4) retrieve sales by quarter; combinations thereof, and/or the like. According to some of the various embodiments, a requesting computing device120may be configured to employ credentials to communicate remote instructions to the remote computing device140over an external network160and through firewall130. Credentials may comprise, for example access credentials. Access credentials may comprise a set of information required to connect and query the remote computing device140. The information may comprise, for example, one or more of remote server address(es), port number(s), database or application instance name(s), database schema name(s), login(s), password(s), file path name(s), combinations thereof, and/or the like. According to some of the various embodiments, a requesting computing device120may be configured to receive query results from the remote computing device140. The query results may be generated by the remote computing device140executing remote instructions. Query Results may comprise data received back from the remote computing device140as a result of processing query instruction(s). Data received back may comprise, for example, a single value, a result set which consists of a set of rows from a database, metadata comprising the name of the column of data, combinations thereof, and/or the like. For data manipulation queries, the result returned may comprise, for example, metadata representing the success or failure of an operation. For example, the result returned may comprise a number of rows updated. The requesting computing device120may be configured to convert the query results into a Flexible Data Representation (FDR) format. A Flexible Data Representation (FDR) may comprise a language independent format that employs human-readable text to transmit data objects as attribute-value pairs. An FDR may be employed to transmit query results between requesting computing device(s)120and assistant computing device(s)110. The format may enables transmitting data in a byte-optimized format configured to support attributes or columns of data and various number of records. 
According to some of the various embodiments, an assistant computing device110may be a distributed computing device configured to handle requests from requesting computing device(s), process and analyze the request(s), co-ordinate the flow of information; provide answers to a requester, combinations thereof, and/or the like. Assistant computing device110may comprise a server, a personal computer, an embedded system, combinations thereof, and/or the like. According to some of the various embodiments, an assistant computing device110may be configured to receive a request from the requesting computing device120to query the data set145. The request may be configured to identify the remote computing device140. The assistant computing device110may be configured to communicate with the requesting computing device120via various mechanisms such as, but not limited to: an external network (e.g. Internet), an internal network, a wide area network WAN, a Local Area Network LAN, a virtual private network (VPN), a combination thereof, and/or the like. According to some of the various embodiments, the assistant computing device110may be configured to identify the access credential requirements to allow the requesting computing device120to access the remote computing device140. According to some of the various embodiments, the assistant computing device110may be configured to generate access credentials, employing at least in part, the access credential requirements. According to some of the various embodiments, the assistant computing device110may be configured to identify remote processing requirements for the remote computing device150to access the data set145identified in the request. The assistant computing device110may be further configured to generate remote processing instructions, employing at least in part, the remote processing requirements, the remote processing instructions may be configured to be executable by the remote computing device to satisfy the request; (few flow diagrams may be useful). Remote processing instructions may comprise data processing instruction set(s) specific to the data source in the remote computing device140that may be employed to process and retrieve data145. Example of instructions in the data processing instruction may comprise, for example, retrieving data (e.g. querying and selecting), manipulating data (e.g. writing data like Add, Delete, and Update), combinations thereof, and/or the like. Example of a remote instruction may comprise: 1) for a direct database query: SELECT COUNT(*) FROM EMPLOYEES; 2) for an SAP application: Call Function Module Z_ABC and send parameters; 3) for a web application: Call Web Service Method getEmployees passing the filter criteria; and/or the like. According to some of the various embodiments, the assistant computing device110may be configured to encrypt the access credentials to generate encrypted access credentials. Similarly, the assistant computing device110may be configured to encrypt the remote processing instructions to generate encrypted remote processing instructions. Encryption may convert the access credentials and/or remote processing instructions into non-readable text by applying a cryptographic algorithm. Examples of crypto algorithm are RSA, SHA-1, SHA-2 with 64, 128 or 256 bits of encryption. A cipher may be employed to perform encryption and decryption. A cipher may comprise a pair of algorithms that create the encryption and the reversing decryption. 
Ciphers may be categorized as symmetric key algorithms and asymmetric key algorithms. Examples of ciphers comprise, but are not limited to: AES_128 (a private key algorithm) and ECDHE_RSA (a public key algorithm). According to some of the various embodiments, the assistant computing device110may be configured to employ the encrypted access credentials to electronically communicate the encrypted remote processing instructions to the requesting computing device. The encrypted access credentials may be configured to include at least one of the following: remote login instructions; remote computing device information name; remote computing device login password; remote computing device port number; remote computing device data store name; remote computing device login name; physically secured data center information name; physically secured data center access password; physically secured data center port number; physically secured data center login name; a cryptographic key, a combination thereof, and/or the like. According to some of the various embodiments, the assistant computing device110may be configured to receive at least one set of encrypted results from the requesting computing device. According to some of the various embodiments, the assistant computing device110may be configured to decrypt the encrypted results to obtain results. According to some of the various embodiments, the assistant computing device110may be configured to generate a report of results. The report may comprise a presentation of quantitative and qualitative information to a user based on factual data, interpreted data, user input, combinations thereof, and/or the like. For example, a report on the result of a data quality audit to validate date of birth in mm/dd/yyyy format could comprise: quantitative information like total number of records processed, total number of records failing the audit and the detailed records themselves; qualitative information like whether the audit passed the audit threshold, trend information based on reconciling data with history, any system generated or user input comments; combinations thereof, and/or the like. The report may further comprise additional information, the additional information comprising at least one of the following: logic; trending information; template information; intelligence information; benchmark information; data quality information; decision support information; data analysis information; combinations thereof, and/or the like. Additionally, the report may be configured to be accessed via a browser. According to some of the various embodiments, the assistant computing device110may be configured to communicate the report to the requesting computing device through network160via communication links112and122. FIG.2is another example block diagram showing a system200for remotely accessing data from a dataset245on a remote secured server240according to some of the various embodiments of the present invention. This alternative embodiment illustrates assistant computing device210communicating with requesting computing device220via link212outside of network260. Example embodiments of the invention as illustrated inFIG.2are described with reference to the accompanying drawings, wherein like parts are designated by like reference numerals toFIG.1throughout. So for example, the decryption with respect to remote computing device140may also be applicable to remote computing device240. 
The remote computing device240may comprise a computing device such as, but not limited to: a personal computing device (PC, tablet or phone), a distributed computing device (e.g. a server) that comprises the data which the requester is trying to query and analyze, a combination thereof, and/or the like. According to some of the various embodiments, the remote computing device may serve data remotely by receiving and processing received queries. (An example of a typical query format is Sequential Query Language (SQL)). According to some of the various embodiments, the remote computing device240may reside in a physically secured data center250. According to some of the various embodiments, the physically secure data center250may comprise a physical facility that is owned or leased. The physically secure data center250may house the remote computing device240and/or the dataset245being accessed. The physical facility could be the same location where the requesting computing device220is located or could be located in a different place. The dataset245may be located in remote computing device240and be accessible by the requesting computing device220using access credentials. The access credentials may according to some embodiments, be optional. According to some of the various embodiments, the remote computing device240may be in communication with a data set245. A data set245(or data set245) may comprise a collection of data. The collection of data may correspond to contents of database(s). The remote computing device may be configured to serve data remotely by receiving and processing received queries. The data set245may be stored on a data storage device. According to some of the various embodiments, the remote computing device240may be configured to communicate with an external network260through a firewall230via communication links252and232. Requesting computing device220may comprise a computing device configured to initiate a request such as, but not limited to: a personal computing device (e.g. PC, Tablet, and Phone), a distributed computing device (server), combinations thereof, and/or the like. The flow of information may be initiated by the requesting computing device220. As illustrated, requesting computing device is outside the physically secured data center250and may communicate to the physically secured data center250via links222,232and252through network260and firewall230. According to some of the various embodiments, a request may be initiated on a requesting computing device220by a requester. According to some of the various embodiments, a requesting computing device220may be configured to employ credentials to communicate remote instructions to the remote computing device240over an external network260and through firewall230via communication links222,232, and252. Credentials may comprise, for example access credentials. Access credentials may comprise a set of information required to connect and query the remote computing device240. According to some of the various embodiments, a requesting computing device220may be configured to receive query results from the remote computing device240. The query results may be generated by the remote computing device240executing remote instructions. Query Results may comprise data received back from the remote computing device240as a result of processing query instruction(s). The requesting computing device220may be configured to convert the query results into a Flexible Data Representation (FDR) format. 
An FDR may be employed to transmit query results between requesting computing device(s)220and assistant computing device(s)210via communications link212. According to some of the various embodiments, assistant computing device210may be a distributed computing device configured to handle requests from requesting computing device(s), process and analyze the request(s), co-ordinate the flow of information; provide answers to a requester, combinations thereof, and/or the like. Assistant computing device210may comprise a server, a personal computer, an embedded system, combinations thereof, and/or the like. According to some of the various embodiments, an assistant computing device210may be configured to receive a request from the requesting computing device220via communications link212to query the data set245. The request may be configured to identify the remote computing device240. The assistant computing device210may be configured to communicate with the requesting computing device220via various mechanisms such as, but not limited to: an external network (e.g. Internet), an internal network, a wide area network WAN, a Local Area Network LAN, a virtual private network (VPN), a combination thereof, and/or the like. According to some of the various embodiments, the assistant computing device210may be configured to identify the access credential requirements to allow the requesting computing device220to access the remote computing device240. According to some of the various embodiments, the assistant computing device210may be configured to generate access credentials, employing at least in part, the access credential requirements. According to some of the various embodiments, the assistant computing device210may be configured to identify remote processing requirements for the remote computing device250to access the data set245identified in the request. The assistant computing device210may be further configured to generate remote processing instructions, employing at least in part, the remote processing requirements, the remote processing instructions may be configured to be executable by the remote computing device to satisfy the request; (few flow diagrams may be useful). Remote processing instructions may comprise data processing instruction set(s) specific to the data source in the remote computing device240that are employed to process and retrieve data245. According to some of the various embodiments, the assistant computing device210may be configured to encrypt the access credentials to generate encrypted access credentials. Similarly, the assistant computing device210may be configured to encrypt the remote processing instructions to generate encrypted remote processing instructions. According to some of the various embodiments, the assistant computing device210may be configured to employ the encrypted access credentials to electronically communicate the encrypted remote processing instructions to the requesting computing device. According to some of the various embodiments, the assistant computing device210may be configured to receive at least one set of encrypted results from the requesting computing device. According to some of the various embodiments, the assistant computing device210may be configured to decrypt the encrypted results to obtain results. According to some of the various embodiments, the assistant computing device210may be configured to generate a report of results. 
According to some of the various embodiments, the assistant computing device210may be configured to communicate the report to the requesting computing device220via link212. FIG.3is another example block diagram showing a system300for remotely accessing data from a dataset345on a remote secured server340according to some of the various embodiments of the present invention. In this alternative embodiment, requesting computing device320and remote computing device340may reside inside physically secured data center350. Assistant computing device310may reside outside of physically secured data center350. Assistant computing device310may communicate to requesting computing device320through network360via communication links322and312. Assistant computing device310may communicate to remote computing device340through network360and firewall330via communication links312,332and352. Requesting computing device may communicate to remote computing device340through network360and firewall330via communication links322,332and352. Example embodiments of the invention as illustrated inFIG.3are described with reference to the accompanying drawings, wherein like parts are designated by like reference numerals toFIG.1andFIG.2throughout. So for example, the decryption with respect to remote computing device140may also be applicable to remote computing device340. The remote computing device340may comprise a computing device such as, but not limited to: a personal computing device (PC, tablet or phone), a distributed computing device (e.g. a server) that comprises the data which the requester is trying to query and analyze, a combination thereof, and/or the like. According to some of the various embodiments, the remote computing device340could be the same as the requesting computing device320(when the dataset345is located on the same device) but more often than not, the remote computing device340and requesting computing device320may be separate devices. The remote computing device may serve data remotely by receiving and processing queries received. (An example of a typical query format is Sequential Query Language (SQL)). According to some of the various embodiments, the remote computing device340may reside in a physically secured data center350. According to some of the various embodiments, the physically secure data center350may comprise a physical facility that is owned or leased. The physically secure data center350may house the remote computing device340and/or the dataset345being accessed. The physical facility could be the same location where the requesting computing device320is located or could be located in a different place. The dataset345may be located in remote computing device340and be accessible by the requesting computing device320using access credentials. The access credentials may according to some embodiments, be optional. According to some of the various embodiments, the remote computing device340may be in communication with a data set345. A data set345(or data set345) may comprise a collection of data. The collection of data may correspond to contents of database(s). The remote computing device may be configured to serve data remotely by receiving and processing received queries. The data set345may be stored on a data storage device. According to some of the various embodiments, the remote computing device340may be configured to communicate with an external network360through a firewall330via communication links352and332. 
Requesting computing device320may comprise a computing device configured to initiate a request such as, but not limited to: a personal computing device (e.g. PC, Tablet, and Phone), a distributed computing device (server), combinations thereof, and/or the like. The flow of information may be initiated by the requesting computing device320. As illustrated, requesting computing device is physically located inside the physically secured data center350and may communicate to the remote computing device340via network360over communications link322, and through firewall330via communications links332and352. According to some of the various embodiments, a request may be initiated on a requesting computing device320by a requester. According to some of the various embodiments, a requesting computing device320may be configured to employ credentials to communicate remote instructions to the remote computing device340over an external network360and through firewall330via communication links322,332, and352. Credentials may comprise, for example access credentials. Access credentials may comprise a set of information required to connect and query the remote computing device340. According to some of the various embodiments, a requesting computing device320may be configured to receive query results from the remote computing device340. The query results may be generated by the remote computing device340executing remote instructions. Query Results may comprise data received back from the remote computing device340as a result of processing query instruction(s). The requesting computing device320may be configured to convert the query results into a Flexible Data Representation (FDR) format. An FDR may be employed to transmit query results between requesting computing device(s)320and assistant computing device(s)310via communications link312. According to some of the various embodiments, assistant computing device310may be a distributed computing device configured to handle requests from requesting computing device(s), process and analyze the request(s), co-ordinate the flow of information; provide answers to a requester, combinations thereof, and/or the like. Assistant computing device310may comprise a server, a personal computer, an embedded system, combinations thereof, and/or the like. According to some of the various embodiments, an assistant computing device310may be configured to receive a request from the requesting computing device320via communications link312to query the data set345. The request may be configured to identify the remote computing device340. The assistant computing device310may be configured to communicate with the requesting computing device320via various mechanisms such as, but not limited to: an external network (e.g. Internet), an internal network, a wide area network WAN, a Local Area Network LAN, a virtual private network (VPN), a combination thereof, and/or the like. According to some of the various embodiments, the assistant computing device310may be configured to identify the access credential requirements to allow the requesting computing device320to access the remote computing device340. According to some of the various embodiments, the assistant computing device310may be configured to generate access credentials, employing at least in part, the access credential requirements. 
According to some of the various embodiments, the assistant computing device310may be configured to identify remote processing requirements for the remote computing device340to access the data set345identified in the request. The assistant computing device310may be further configured to generate remote processing instructions, employing at least in part, the remote processing requirements. The remote processing instructions may be configured to be executable by the remote computing device to satisfy the request. Remote processing instructions may comprise data processing instruction set(s) specific to the data source in the remote computing device340that are employed to process and retrieve data from the data set345. According to some of the various embodiments, the assistant computing device310may be configured to encrypt the access credentials to generate encrypted access credentials. Similarly, the assistant computing device310may be configured to encrypt the remote processing instructions to generate encrypted remote processing instructions. According to some of the various embodiments, the assistant computing device310may be configured to electronically communicate the encrypted access credentials and the encrypted remote processing instructions to the requesting computing device. According to some of the various embodiments, the assistant computing device310may be configured to receive at least one set of encrypted results from the requesting computing device. According to some of the various embodiments, the assistant computing device310may be configured to decrypt the encrypted results to obtain results. According to some of the various embodiments, the assistant computing device310may be configured to generate a report of results. According to some of the various embodiments, the assistant computing device310may be configured to communicate the report to the requesting computing device320through network360via links312and322. FIG.4is an example block diagram showing communication flow between components in a system400for remotely accessing data445on a remote secured server440according to some of the various embodiments of the present invention. According to some of the various embodiments, a request450may be made by a requesting computing device420to an assistant computing device410to query a dataset445in communication with a remote computing device440. The remote computing device440may reside in a physically secured data center and may not be directly accessible to the assistant computing device410. According to some of the various embodiments, the assistant computing device410may identify access credential requirements to allow the requesting computing device420to access the remote computing device440identified in the request450. According to some of the various embodiments, the assistant computing device410may identify remote processing requirements for the remote computing device440to access the dataset445identified in the request450. According to some of the various embodiments, the assistant computing device410may generate access credentials, employing at least in part, the access credential requirements. According to some of the various embodiments, the assistant computing device410may generate remote processing instructions, employing at least in part, the remote processing requirements. The remote processing instructions may be configured to be executable by the remote computing device440to satisfy the request450.
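As a rough Python sketch of how an assistant computing device might turn a request into access credentials and remote processing instructions, the helpers below hard-code placeholder requirements and emit a SQL string. The function names, request fields, and credential layout are assumptions for illustration, not the disclosed implementation.

```python
import secrets


def identify_requirements(request):
    """Hypothetical lookup of access-credential and remote-processing requirements.

    In practice these would come from configuration describing the remote
    computing device named in the request; here they are hard-coded placeholders.
    """
    return (
        {"host": request["remote_host"], "port": 5432, "username": "report_user"},
        {"dialect": "sql", "table": request["dataset"], "columns": request["columns"]},
    )


def generate_access_credentials(credential_requirements):
    """Produce credentials the requesting device can present to the remote device."""
    return {**credential_requirements, "password": secrets.token_urlsafe(16)}


def generate_remote_processing_instructions(processing_requirements):
    """Produce an executable instruction (here, a simple SQL string) for the request."""
    cols = ", ".join(processing_requirements["columns"])
    return f"SELECT {cols} FROM {processing_requirements['table']}"


if __name__ == "__main__":
    request = {"remote_host": "remote.example", "dataset": "orders",
               "columns": ["customer", "total"]}
    cred_req, proc_req = identify_requirements(request)
    print(generate_access_credentials(cred_req))
    print(generate_remote_processing_instructions(proc_req))
```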
According to some of the various embodiments, the assistant computing device410may encrypt the access credentials to generate encrypted access credentials460. According to some of the various embodiments, the assistant computing device410may encrypt the remote processing instructions to generate encrypted remote processing instructions470. According to some of the various embodiments, the assistant computing device410may communicate the encrypted access credentials460to the requesting computing device420. According to some of the various embodiments, the assistant computing device410may communicate the encrypted remote processing instructions470to the requesting computing device420. The encrypted access credentials460may be configured to allow the requesting computing device420to access the remote computing device440. The encrypted remote instructions470may comprise remote instructions configured to enable the remote computing device440to execute at least one of the following: at least one data query; and at least one data manipulation. According to some of the various embodiments, the requesting computing device420may decrypt the encrypted access credentials460to obtain access credentials465. According to some of the various embodiments, requesting computing device420may decrypt the encrypted remote instructions470to obtain remote instructions475. The remote computing device440may be behind a firewall430. According to some of the various embodiments, requesting computing device420may access the remote computing device440using the access credentials465. According to some of the various embodiments, requesting computing device420may communicate the remote instructions475to the remote computing device440. According to some of the various embodiments, the remote computing device440may reside in a physically secured data center and not be directly accessible to the assistant computing device410. According to some of the various embodiments, the remote computing device440may receive the remote instructions475. The remote instructions may comprise remote instructions configured to enable the remote computing device440to execute at least one of the following: (1) at least one data query; and (2) at least one data manipulation. According to some of the various embodiments, the remote computing device440may execute the remote instructions475to generate query results480. According to some of the various embodiments, the remote computing device440may communicate the query results480to the requesting device420. At least part of the query results may be configured to be employable by the assistant computing device410to generate a report490. According to some of the various embodiments, the requesting computing device420may receive the query results. According to some of the various embodiments, the requesting computing device420may convert the query results480into a flexible data representation485of the query results480. The conversion may involve encrypting the query results480. According to some of the various embodiments, the requesting computing device420may communicate the flexible data representation485to the assistant computing device410. According to some of the various embodiments, the assistant computing device410may receive the flexible data representation485from the requesting computing device420. According to some of the various embodiments, the assistant computing device410may process the flexible data representation485to obtain the query results480. 
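The two encrypted artifacts460and470can be pictured as small envelopes that travel from the assistant to the requester. The Python sketch below models them as dataclasses and uses Fernet from the third-party `cryptography` package; the class names, the shared symmetric key, and the in-process key handoff are assumptions made for a self-contained demonstration.

```python
import json
from dataclasses import dataclass

from cryptography.fernet import Fernet  # third-party: pip install cryptography


@dataclass
class EncryptedAccessCredentials:   # stand-in for item 460
    ciphertext: bytes


@dataclass
class EncryptedRemoteInstructions:  # stand-in for item 470
    ciphertext: bytes


def assistant_prepares(access_credentials, remote_instructions, key):
    """Assistant side: encrypt both artifacts before sending them to the requester."""
    f = Fernet(key)
    return (
        EncryptedAccessCredentials(f.encrypt(json.dumps(access_credentials).encode())),
        EncryptedRemoteInstructions(f.encrypt(remote_instructions.encode())),
    )


def requester_decrypts(enc_creds, enc_instr, key):
    """Requesting device side: recover credentials 465 and instructions 475."""
    f = Fernet(key)
    return json.loads(f.decrypt(enc_creds.ciphertext)), f.decrypt(enc_instr.ciphertext).decode()


if __name__ == "__main__":
    key = Fernet.generate_key()  # how the key is shared is outside this sketch
    enc_creds, enc_instr = assistant_prepares({"username": "report_user"}, "SELECT 1", key)
    print(requester_decrypts(enc_creds, enc_instr, key))
```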
The processing may involve decrypting the flexible data representation485. According to some of the various embodiments, the assistant computing device410may generate a report of results490employing at least part of the query results480. According to some of the various embodiments, the assistant computing device410may communicate the report490to the requesting computing device420. FIGS.5,6and7are example flow diagrams that together illustrate embodiments where a requesting computing device may access secured data from a remote computing device employing the assistance of an assistant computing device. Specifically,FIG.5illustrates remote access of secured data from the perspective of a requesting computing device,FIG.6illustrates remote access of secured data from the perspective of an assistant computing device, andFIG.7illustrates remote access of secured data from the perspective of a remote computing device. Additionally,FIGS.5,6and7are to be interpreted with respect to the descriptions of various embodiments above of the requesting computing device, remote computing device, the assistant computing device, and their interconnections. FIG.5is an example flow diagram illustrating remote access of secured data from the perspective of a requesting computing device according to some of the various embodiments of the present invention. According to some of the various embodiments, a request may be made from a requesting computing device to an assistant computing device to query a dataset in communication with a remote computing device at510. The remote computing device may reside in a physically secured data center. The remote computing device may not be directly accessible to the assistant computing device. According to some of the various embodiments, encrypted access credentials and encrypted remote instructions may be received at the requesting computing device from the assistant computing device at515. The encrypted access credentials may be configured to allow the requesting computing device to access the remote computing device. The encrypted remote instructions may comprise remote instructions configured to enable the remote computing device to execute at least one of the following: at least one data query; and at least one data manipulation. According to some of the various embodiments, the encrypted access credentials may be decrypted by the requesting computing device to obtain access credentials at520. Similarly, the encrypted remote instructions may be decrypted at the requesting computing device to obtain remote instructions at525. According to some of the various embodiments, the requesting computing device may access the remote computing device using the access credentials at530. The remote instructions may be communicated from the requesting computing device to the remote computing device at535. Query results may be generated by the remote computing device executing the remote instructions. According to some of the various embodiments, query results from the remote computing device may be received at the requesting computing device at540. The requesting computing device may generate encrypted query results by encrypting the query results at545. The encrypted query results may be communicated from the requesting computing device to the assistant computing device at550. At555, the requesting computing device may receive a report from the assistant computing device. The report may comprise, at least in part, a decrypted version of at least a part of the encrypted query results.
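The requesting-device steps510through555can be sketched as a single Python function. The stub classes, method names, shared key, and hard-coded credentials and query below are assumptions used only to make the sketch self-contained; they mirror the step sequence in FIG.5rather than any real API.

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography


class StubAssistant:
    """Minimal in-process stand-in for the assistant computing device."""
    def __init__(self, key):
        self._f = Fernet(key)

    def submit_request(self, request):            # steps 510/515
        creds = self._f.encrypt(b'{"username": "report_user"}')
        instr = self._f.encrypt(b"SELECT 1 AS answer")
        return creds, instr

    def submit_results(self, encrypted_results):  # steps 550/555
        return {"report": self._f.decrypt(encrypted_results).decode()}


class StubRemoteDevice:
    """Minimal stand-in for the remote computing device behind the firewall."""
    def connect(self, credentials):               # step 530
        return self

    def execute(self, instructions):              # steps 535/540
        return "answer=1"


def requesting_device_workflow(assistant, remote, shared_key, request):
    f = Fernet(shared_key)
    enc_creds, enc_instr = assistant.submit_request(request)   # 510, 515
    credentials = f.decrypt(enc_creds).decode()                 # 520
    instructions = f.decrypt(enc_instr).decode()                # 525
    session = remote.connect(credentials)                       # 530
    query_results = session.execute(instructions)               # 535, 540
    encrypted_results = f.encrypt(query_results.encode())       # 545
    return assistant.submit_results(encrypted_results)          # 550, 555


if __name__ == "__main__":
    key = Fernet.generate_key()
    print(requesting_device_workflow(StubAssistant(key), StubRemoteDevice(), key,
                                     {"dataset": "orders"}))
```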
FIG.6is an example flow diagram illustrating remote access of secured data from the perspective of an assistant computing device according to some of the various embodiments of the present invention. According to some of the various embodiments, a request may be received at an assistant computing device over a network from a requesting computing device to query a dataset located on a remote computing device at610. The remote computing device may reside in a physically secured data center. The remote computing device may not be directly accessible to the assistant computing device. According to some of the various embodiments, access credential requirements may be identified to allow the requesting computing device to access the remote computing device identified in the request at615. Similarly, remote processing requirements may be identified for the remote computing device to access the dataset identified in the request at620. According to some of the various embodiments, access credentials may be generated at625employing at least in part, the access credential requirements. Similarly, remote processing instructions may be generated at630employing at least in part, the remote processing requirements. The remote processing instructions may be configured to be executable by the remote computing device to satisfy the request. According to some of the various embodiments, the access credentials may be encrypted at635to generate encrypted access credentials. Similarly, the remote processing instructions may be encrypted at640to generate encrypted remote processing instructions. According to some of the various embodiments, the encrypted access credentials may be communicated to the requesting computing device at640. Similarly, the encrypted remote processing instructions may be communicated to the requesting computing device at645. According to some of the various embodiments, at least one set of encrypted results may be received from the requesting computing device at650. The encrypted results may be decrypted at655to obtain the results. A report of the results may be generated at660. The report may be communicated to the requesting computing device at665. FIG.7is an example flow diagram illustrating remote access of secured data from the perspective of a remote computing device according to some of the various embodiments of the present invention. According to some of the various embodiments, remote instructions may be received at a remote computing device from a requesting device through a firewall at710. The remote computing device may reside in a physically secured data center and not be directly accessible to an assistant computing device. The receiving may be accomplished, at least in part, employing access credentials presented by the requesting device. The encrypted access credentials may be configured to allow the requesting computing device to access the remote computing device. The encrypted remote instructions may comprise remote instructions configured to enable the remote computing device to execute at least one of the following: at least one data query; and at least one data manipulation. According to some of the various embodiments, the remote instructions and access credentials may have been formed by the requesting device as follows. The requesting device may have made a request to the assistant computing device to query a dataset in communication with the remote computing device.
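The assistant-side steps610through665can likewise be sketched in Python. The transport callbacks, placeholder credentials, and generated SQL below are assumptions; only the ordering of the steps follows FIG.6, and passing the symmetric key through the demo callback is purely a convenience, since key exchange is outside this sketch.

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography


def assistant_workflow(request, send_to_requester, receive_encrypted_results):
    """Steps 610-665 from the assistant's point of view (illustrative only)."""
    # 615/620: identify access-credential and remote-processing requirements.
    credential_requirements = {"host": request["remote_host"], "username": "report_user"}
    processing_requirements = {"table": request["dataset"]}
    # 625/630: generate credentials and executable instructions.
    access_credentials = str(credential_requirements)
    remote_instructions = f"SELECT * FROM {processing_requirements['table']}"
    # 635/640: encrypt both artifacts.
    key = Fernet.generate_key()
    f = Fernet(key)
    # 640/645: communicate the encrypted artifacts to the requesting device.
    send_to_requester(key, f.encrypt(access_credentials.encode()),
                      f.encrypt(remote_instructions.encode()))
    # 650/655: receive and decrypt at least one set of encrypted results.
    results = f.decrypt(receive_encrypted_results()).decode()
    # 660/665: generate and return the report for the requesting device.
    return {"report": results}


if __name__ == "__main__":
    state = {}

    def send(key, enc_creds, enc_instr):
        # Demo transport: remember the key so the fake requester can answer.
        state["key"], state["instructions"] = key, enc_instr

    def receive():
        # Pretend the requesting device ran the query and sent back encrypted results.
        return Fernet(state["key"]).encrypt(b"3 rows")

    print(assistant_workflow({"remote_host": "remote.example", "dataset": "orders"},
                             send, receive))
```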
The requesting device may have received encrypted access credentials and encrypted remote instructions from the assistant computing device. The requesting device may have decrypted the encrypted access credentials to obtain access credentials. Similarly, the requesting device may have decrypted the encrypted remote instructions to obtain remote instructions. According to some of the various embodiments, remote computing device may execute the remote instructions to generate query results at720. The query results may be communicated to the requesting device at730. At least part of the query results may be configured to be employable by the assistant computing device to generate a report. FIG.8illustrates an example of a suitable computing system environment800on which aspects of some embodiments may be implemented. The computing system environment800is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the claimed subject matter. Neither should the computing environment800be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment800. Embodiments are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with various embodiments include, but are not limited to, embedded computing systems, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, cloud services, telephony systems, distributed computing environments that include any of the above systems or devices, and the like. Embodiments may be described in the general context of computer-executable instructions, such as program modules, being executed by computing capable devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Some embodiments may be designed to be practiced in distributed computing environments where tasks may be performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices. With reference toFIG.8, an example system for implementing some embodiments includes a computing device810. Components of computer810may include, but are not limited to, a processing unit820, a system memory830, and a system bus821that couples various system components including the system memory to the processing unit820. Computer810typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer810and includes both volatile and nonvolatile media, and removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, and removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. 
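Before the computing-environment details, a short Python sketch of the remote-device side at710through730: check the presented access credentials, execute the remote instructions, and return the query results. The stored credential values, table, and constant-time comparison are assumptions added for illustration.

```python
import hmac
import sqlite3

# Illustrative credential store; real deployments would not keep plaintext secrets.
EXPECTED_CREDENTIALS = {"username": "report_user", "password": "placeholder-secret"}


def verify_credentials(presented):
    """Constant-time check of the credentials presented by the requesting device."""
    return (hmac.compare_digest(presented.get("password", ""),
                                EXPECTED_CREDENTIALS["password"])
            and presented.get("username") == EXPECTED_CREDENTIALS["username"])


def handle_remote_instructions(presented_credentials, remote_instructions, conn):
    """Sketch of steps 710-730: authenticate, execute, return query results."""
    if not verify_credentials(presented_credentials):
        raise PermissionError("access credentials rejected")
    cursor = conn.execute(remote_instructions)  # 720: execute the remote instructions
    return cursor.fetchall()                    # 730: results go back to the requester


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (x INTEGER)")
    conn.execute("INSERT INTO t VALUES (1), (2)")
    print(handle_remote_instructions(EXPECTED_CREDENTIALS, "SELECT SUM(x) FROM t", conn))
```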
Computer storage media includes, but is not limited to, random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer810. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media. The system memory830includes computer storage media in the form of volatile and/or nonvolatile memory such as ROM831and RAM832. A basic input/output system833(BIOS), comprising the basic routines that help to transfer information between elements within computer810, such as during start-up, is typically stored in ROM831. RAM832typically comprises data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit820. By way of example, and not limitation,FIG.8illustrates operating system834, application programs835, other program modules836, and program data837. The computer810may also include other removable/non-removable volatile/nonvolatile computer storage media. By way of example only,FIG.8illustrates a hard disk drive841that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive851that reads from or writes to a removable, nonvolatile magnetic disk852, a flash drive reader857that reads flash drive858, and an optical disk drive855that reads from or writes to a removable, nonvolatile optical disk856such as a CD ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive841is typically connected to the system bus821through a non-removable memory interface such as interface840, and magnetic disk drive851and optical disk drive855are typically connected to the system bus821by a removable memory interface, such as interface850. The drives and their associated computer storage media discussed above and illustrated inFIG.8provide storage of computer readable instructions, data structures, program modules and other data for the computer810. InFIG.8, for example, hard disk drive841is illustrated as storing operating system844, application programs845, program data847, and other program modules846. 
Additionally, for example, non-volatile memory may include instructions to, for example, discover and configure IT device(s), create device neutral user interface command(s), combinations thereof, and/or the like. A user may enter commands and information into the computer810through input devices such as a keyboard862, a microphone863, a camera864, and a pointing device861, such as a mouse, trackball or touch pad. These and other input devices are often connected to the processing unit820through a user input interface860that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A monitor891or other type of display device may also be connected to the system bus821via an interface, such as a video interface890. Other devices, such as, for example, speakers897and printer896, may be connected to the system via peripheral interface895. The computer810is operated in a networked environment using logical connections to one or more remote computers, such as a remote computer880. The remote computer880may be a personal computer, a hand-held device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer810. The logical connections depicted inFIG.8include a local area network (LAN)871and a wide area network (WAN)873, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet. When used in a LAN networking environment, the computer810is connected to the LAN871through a network interface or adapter870. When used in a WAN networking environment, the computer810typically includes a modem872or other means for establishing communications over the WAN873, such as the Internet. The modem872, which may be internal or external, may be connected to the system bus821via the user input interface860, or other appropriate mechanism. The modem872may be wired or wireless. Examples of wireless devices may comprise, but are not limited to: Wi-Fi and Bluetooth. In a networked environment, program modules depicted relative to the computer810, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation,FIG.8illustrates remote application programs885as residing on remote computer880. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used. Additionally, for example, LAN871and WAN873may provide a network interface to communicate with other distributed infrastructure management device(s); with IT device(s); with users remotely accessing the User Input Interface860; combinations thereof, and/or the like. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. In this specification, “a” and “an” and similar phrases are to be interpreted as “at least one” and “one or more.” References to “an” embodiment in this disclosure are not necessarily to the same embodiment.
Many of the elements described in the disclosed embodiments may be implemented as modules. A module is defined here as an isolatable element that performs a defined function and has a defined interface to other elements. The modules described in this disclosure may be implemented in hardware, a combination of hardware and software, firmware, wetware (i.e. hardware with a biological element) or a combination thereof, all of which are behaviorally equivalent. For example, modules may be implemented using computer hardware in combination with software routine(s) written in a computer language (Java, HTML, XML, PHP, Python, ActionScript, JavaScript, Ruby, Prolog, SQL, VBScript, Visual Basic, Perl, C, C++, Objective-C or the like). Additionally, it may be possible to implement modules using physical hardware that incorporates discrete or programmable analog, digital and/or quantum hardware. Examples of programmable hardware include: computers, microcontrollers, microprocessors, application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), and complex programmable logic devices (CPLDs). Computers, microcontrollers and microprocessors are programmed using languages such as assembly, C, C++ or the like. FPGAs, ASICs and CPLDs are often programmed using hardware description languages (HDL) such as VHSIC hardware description language (VHDL) or Verilog that configure connections between internal hardware modules with lesser functionality on a programmable device. Finally, it needs to be emphasized that the above mentioned technologies may be used in combination to achieve the result of a functional module. Some embodiments may employ processing hardware. Processing hardware may include one or more processors, computer equipment, embedded systems, machines a combination thereof, and/or the like. The processing hardware may be configured to execute instructions. The instructions may be stored on a machine-readable medium. According to some embodiments, the machine-readable medium (e.g. automated data medium) may be a medium configured to store data in a machine-readable format that may be accessed by an automated sensing device. Examples of machine-readable media include: magnetic disks, cards, tapes, and drums, flash memory, memory cards, electrically erasable programmable read-only memory (EEPROM), solid state drives, optical disks, barcodes, magnetic ink characters, a combination thereof, and/or the like. While various embodiments have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to persons skilled in the relevant art(s) that various changes in form and detail can be made therein without departing from the spirit and scope. In fact, after reading the above description, it will be apparent to one skilled in the relevant art(s) how to implement alternative embodiments. Thus, the present embodiments should not be limited by any of the above described exemplary embodiments. In particular, it should be noted that, for example purposes, the presently described embodiments are discussed with respect to a data center. However, one skilled in the art will recognize that embodiments may be employed to other collections of IT devices over, for example, a distributed network not confined by a single data center, a small collection of IT devices in an Intranet, combinations thereof, and/or the like. 
In addition, it should be understood that any figures that highlight any functionality and/or advantages, are presented for example purposes only. The disclosed architecture is sufficiently flexible and configurable, such that it may be utilized in ways other than that shown. For example, the steps listed in any flowchart may be re-ordered or only optionally used in some embodiments. Further, the purpose of the Abstract of the Disclosure is to enable the U.S. Patent and Trademark Office and the public generally, and especially the scientists, engineers and practitioners in the art who are not familiar with patent or legal terms or phraseology, to determine quickly from a cursory inspection the nature and essence of the technical disclosure of the application. The Abstract of the Disclosure is not intended to be limiting as to the scope in any way. Finally, it is the applicant's intent that only claims that include the express language “means for” or “step for” be interpreted under 35 U.S.C. 112. Claims that do not expressly include the phrase “means for” or “step for” are not to be interpreted under 35 U.S.C. 112.
63,779
11863537
DETAILED DESCRIPTION For context, in the existing social media paradigm, users utilize free, ad-based apps such as Facebook™, Twitter™, Instagram™, and Linked-In™ to push content (e.g., digital data) out via posts. The content is transferred to a data center which is owned and controlled by the respective social media networks and then pulled back into a user's smart device via feeds. The problem with this model is that once the content is pushed out via posts and stored in a remote data center, users lose ownership of the content and risk the malicious misuse of their shared digital data. Users are typically not allowed to delete their content because the content resides on the social network's data center. Users also relinquish the ability to set and maintain access and distribution to their content. Users do not have control of the feeds that are being returned, or the ability to effectively monitor them. Users have limited exposure to what is actually being posted by friends, followers, connections, and so forth. Also, users cannot aggregate feeds from a plurality of different social networks in a collocated application or service. Additionally, these social networks can sometimes adopt a user's content for their advertising purposes, collect, and sell data about the user and their usage, and serve the user only the content that the social network providers think will help sell paid advertisers' products. Social media networks have faced criticism for their privacy settings, ownership rights, and advertising practices. This has caused user concern for their privacy and safety with respect to recent reports of online trolling, where people are stealing social identities and abusing them; as well as data breaches, where thousands of “deleted” images have been stolen and released. Systems and methods of the present disclosure remedy the deficiencies of social media content being stored and transmitted using remote data centers, where the user loses control of their digital data. Instead, the present technology leverages a Federated Cloud Drive System™ to allow users to share content that already exists on their personal cloud drives or other data storage devices such as residential network-attached storage (NAS) or business storage servers that implement cloud storage web APIs. These are all generally referred to as private user secure data storage or a secure data storage device. Rather than posting digital data onto a social network, where storage of the digital data by the social network can occur, the present technology allows a user to store their digital data in a secure and controlled location as mentioned above. Rather than uploading or posting the digital content to the social network, the present technology generates and transmits hyperlinks, instead of the digital data. Thus, users maintain total ownership of their content. Users can add content with ease and they can also delete it with ease, with absolutely no risk of the data living on. To be sure, the present technology leverages the private storage devices or cloud storage of the user, rather than requiring the user to expose their digital data for unsecured and uncontrolled storage. Because these systems and methods ensure a high level of security, providing the user with the ability to assign access rights (Federated Access Control System™) as well as encrypt data at rest and in transit (Federated Encryption System™), sharing is guaranteed to be as private and secure as users desire and specify.
Users maintain total control of their feeds. Unique grouping functionality enables users to define what they share, who they share it with, as well as what they see and who they see it from. In some embodiments, the present technology employs a subscription based service; there is no advertising, no data mining, and no manipulation or limitation of user feeds to promote products. Users also have new monitoring and sharing capabilities across multiple public social media networks. The present disclosure provides a Federated Public Social Media System™ that aggregates posts and feeds from multiple public social media networks. Users get to personalize how much they want to view and who they want to view it from, without any advertising or filters, as well as monitor activity across multiple accounts. So a parent, for example, can use the systems and methods of the present disclosure to monitor the content being generated and consumed by multiple children on multiple public social media networks as well as share privately or publicly with them all from one centralized location. The present disclosure is a gateway solution that brings existing public social media networks as well as the new private social media network together into one solution. The present disclosure allows users to choose a personalized level of ownership, privacy, control, and monitoring of their social media presence, thereby addressing many of the concerns and criticisms about social media in a very unique and compelling way. All of this is possible with a mobile application supported by the aforementioned Federated Cloud Services. The Federated Cloud Services of the present disclosure, in combination with the mobile application, enables interoperability and information sharing between autonomous and decentralized systems to create a greatly enhanced social media experience based on a massively distributed and secure system. FIG.1illustrates the logical systems architecture of a Federated Cloud Service (service100) and application102. The service100comprises private API servers104, third party APIs105, and SDK components106. The network connections between connected components are made over Secure Sockets Layer (SSL) connections or other similar network connection protocols. The service100comprises a share web API server108(shared message server), a feed web API server110, and a profile web API server112, which provide configuration, deployment and management provisioning services, as well as storage, notification, authorization, and authentication functions for the application102and users. The application102runs on a mobile device or computer and is the central component in the systems architecture as it is where most of the application logic resides. The application102integrates with the illustrated services, secure data storage APIs and SDKs, and public social media Web APIs. This unique systems architecture approach creates significant real world efficiencies, including optimized storage space on our Web API servers, optimized available bandwidth over the network to authorized third parties, and optimized storage space and processing resources on third party devices. FIG.2illustrates the logical systems architecture of a private social media network. It depicts the plurality of Web API servers (108-112), the application102, and several types of secure data storage systems (114A-N) and their relationships.
The diagram also illustrates unique application layer specific functionality that includes sharing, feeds, and profile. As noted previously, the existing social media paradigm is one in which users post content that originates directly from their device within a public social media application (e.g. Facebook™) or through an application that posts content from a storage device using public social media Graph APIs (e.g. OneDrive™ provides an ability to post drive content using the Facebook™ Graph API). Both approaches conclude with the same result being that the user's content is copied to the public social media data center whereupon the user loses control and ownership of their content. In contrast, the systems architecture ofFIGS.1and2, in combination with its server and application software, enables a user to share content from a wide array of secure storage devices (e.g. private cloud, home NAS116, or business secure storage118), in the form of hypermedia. To be sure, hypermedia is generally defined as “an extension of the term hypertext, is a nonlinear medium of information which includes graphics, audio, video, plain text, and hyperlinks.” The present technology leverages hyperlinks to hypermedia stored on the user's secure storage devices (either local or remote). User content is never stored in a data center that is not controllable by the user; instead hyperlinks to hypermedia create a massively distributed and secure system. With the examples of logical architectures set forth above, the following paragraphs illustrate processes facilitated by these example logical architectures.FIG.3illustrates a create and share data activity diagram. The method details a user's actions to add connections, text, content, tags, and geolocation to a shared message, using various tools provided by the system. For example, the system provides an aggregated social network that provides an application that enables users to utilize groups to select connections as well as utilize a mapping functionality to attribute geolocation for a shared message. In one embodiment, users can select multiple connections individually and/or can add connections by selecting multiple groups for a shared message in block302. The application can generate a unique set of connections even if an individual connection is added that is also in a group. In some embodiments, users can add emoticons in addition to the text of a shared message in block304. To be sure, any file type stored on a user's cloud drive can be selected and attached to a shared message in block306. Users can also use the system to manage all their cloud content (e.g., edit, copy, delete, and move file functions) from within one application. According to some embodiments, users can tag other connections for inclusion in a shared message in block308. The tagging functionality can illustrate a connection's profile picture in addition to their profile name. The underlying implementation for accessing a connection's profile hypermedia is through the use of hyperlinks to a connection's text and photos stored on the user's secure data storage device. Users can add a geolocation to the shared message by using the app to add geolocation data from either the device API or a mapping screen, where users can drop a pin to designate a location for the shared message in block310. FIG.4illustrates a create and share context activity diagram having share context settings (e.g., access rights): private, rights, expiration, encrypt, and save.
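To make the shape of such a shared message concrete, the Python sketch below models blocks302through310as a small data structure and shows the unique-connection merge described above. The field names and group layout are assumptions; the disclosure describes the contents of a shared message but not a schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class SharedMessage:
    """Illustrative shape of a shared message built in blocks 302-310."""
    connections: List[str] = field(default_factory=list)          # block 302
    text: str = ""                                                 # block 304
    hyperlinks: List[str] = field(default_factory=list)           # block 306: links, not copies
    tagged_connections: List[str] = field(default_factory=list)   # block 308
    geolocation: Optional[Tuple[float, float]] = None             # block 310: (latitude, longitude)


def add_connections_from_groups(message, groups, individually_selected):
    """Merge group members and individually selected connections into a unique set."""
    combined = set(individually_selected)
    for members in groups.values():
        combined.update(members)
    message.connections = sorted(combined)
    return message


if __name__ == "__main__":
    msg = SharedMessage(text="Family picnic",
                        hyperlinks=["https://nas.example/share/picnic.jpg"])
    groups = {"Immediate Family": ["alice", "bob"]}
    # "alice" appears both individually and in a group, but is only added once.
    print(add_connections_from_groups(msg, groups, ["alice", "carol"]))
```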
The actions provide a robust set of functionality that enables the user to control their digital content that is referenced in a shared message using a URL. Users can instruct the system and application to enable a privacy icon and collapse the shared message in a feed so that the viewer understands that the shared message is meant for their eyes only in block402. Users can also set read, write, download, and/or invite rights to a shared message in block404. The function set prompts users to disambiguate competing group rights depending on which connections and/or groups are selected to receive a shared message. Users can set a time to live that sets a countdown clock until the shared message is no longer available in block406. Users can select extra encryption for their shared message, so that it is not only encrypted while in transit (the default for all shares), but also encrypted while at rest (when stored) in block408. Users can also decide to save the shared message so they have the ability to retrieve it and complete it at a later time in block410. The application will contain the ability to create a list of stored shared messages that can be easily accessed and seen. When a user has completed creating a shared message, the user can actuate a send button to transmit the shared message to the aggregated social networks, or have it pushed to the plurality of individual social networks. Actuating the send button can also trigger a functionality to save (block410) and encrypt hypermedia (the digital data) as well as generate hyperlinks that are then sent to the share server (seeFIGS.1and2) in block408. If the share encryption context is set to false, the encrypted aspects of the activity are still applied to content and hyperlinks associated with the hypermedia. The details of save and encrypt functions are outlined inFIG.5, which illustrates a save and share message activity diagram. Send functions are detailed inFIG.6, which illustrates a send share object to share server activity diagram. For context, the ‘share.hypermedia’ elements will be understood to include the digital data stored by the user in their personal storage space(s). These diagrams illustrate how user controlled use of user owned digital data is actually implemented. Again, the systems allow for the transmission of a hyperlink that references the user's digital data. A list of connections receiving the shared message, which includes the hyperlink, and their respective encrypted share key for accessing the share data and context are also created. Note that while content is encrypted and stored on a user's secure data storage device, not all shared data and context needs to be stored in the shared message file. It is possible for the system to transmit some of the share data and context to a back office to achieve efficiencies where limited device processing power and/or network bandwidth negatively impact application performance and user experience. Shared message data and context that is sent to the back office share servers is encrypted with the share key so that it is stored in an encrypted state. The encryption share key for shared message data and context is unique for each shared message in the system. This feature, in combination with the massively distributed nature of the secure data storage, is a limiting factor in the attack surface area.
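A per-message share key that is wrapped separately for each connection is a standard hybrid-encryption pattern, and the following Python sketch shows one plausible reading of the sharer-side flow using the third-party `cryptography` package (Fernet for the symmetric share key, RSA-OAEP for wrapping it). The dictionary layout and how connection public keys are distributed are assumptions, not the disclosed implementation.

```python
import json

from cryptography.fernet import Fernet  # third-party: pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)


def save_and_encrypt_share(share_data, share_context, connection_public_keys):
    """Encrypt share data/context with a per-message share key and wrap that key
    for each connection using the connection's public key (illustrative only)."""
    share_key = Fernet.generate_key()   # unique for each shared message
    f = Fernet(share_key)
    encrypted_share = {
        "data": f.encrypt(json.dumps(share_data).encode()).decode("ascii"),
        "context": f.encrypt(json.dumps(share_context).encode()).decode("ascii"),
    }
    wrapped_keys = {cid: pub.encrypt(share_key, OAEP)
                    for cid, pub in connection_public_keys.items()}
    return encrypted_share, wrapped_keys


if __name__ == "__main__":
    from cryptography.hazmat.primitives.asymmetric import rsa
    priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    share, keys = save_and_encrypt_share(
        {"hyperlink": "https://nas.example/share/picnic.jpg"},
        {"rights": ["read"], "time_to_live_hours": 24},
        {"alice": priv.public_key()},
    )
    print(list(share), list(keys))
```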
While there is certainly the risk of a private asymmetric key being compromised, a hacker would only be able to access the shared message data and context for which the private key can be used to decrypt symmetric share keys. The attack surface area is further limited by the number of hyperlinks on a given user device. Once the shared message is received by a server in the system, it is stored and processed. For each connection contained in the shared message object, the shared message server posts a copy of the shared message object to the connection's feed storage along with the connection's specific encrypted shared message key. After the feed is stored, the shared message server sends a notification to the connection indicating a new feed is available. Upon a feed refresh, the shared message appears in the connected user's feed. A similar process to the one outlined inFIG.6is followed, but in reverse order, by the user application to compose the shared message in the connection's feed. A method can include each feed object being sent to a connection's device. The feed object comprises a hyperlink to the digital data and that connection's encrypted share key. Using the connection's asymmetric private key, the application decrypts the encrypted share key. The share key is then used to decrypt the hyperlink to the digital data file. The file is then pulled from the user's secure data storage device which hosts the share. The share key is then used to decrypt the digital data file. The file is then deserialized, and the digital data is loaded into the field members of the feed object. Once the feed object is fully hydrated, the application uses the context and data in the object to draw the UI and display the feed. FIG.7illustrates the logical systems architecture of a system that functions as an aggregator of public social media networks. This system can be a particularly purposed configuration of the systems ofFIGS.1and2.FIG.7depicts the profile web API server112, the application102and secure data storage systems114A-N. The system is communicatively coupled with a Facebook™ application, and profile application components, as well as several types of secure data storage systems and their relationships. To be sure, while the Facebook™ application has been discussed, the user can link any number of public social media accounts to the aggregator system. As mentioned previously, systems of the present technology can function as a gateway solution that offers not only a new aggregated and private social media network, but also aggregated access to existing public social media networks such as Facebook™, Instagram™, and Twitter™ with enhanced functionality. Advantageously, the present technology allows users the possibility to aggregate all of their cloud content/hypermedia and share it on a public media site. For example, users can select content and/or create links to content on their secure data storage devices and then copy the content or the hyperlinks to a Facebook™ post. All of the content is accessible and sharable from one application. The systems of the present disclosure allow users to control the amount and type of content viewed, who they want to view the content, and where they want the content viewed, without any marketing or filters usually imposed by public social media networks. For example, users can create groups, put friends in groups, set a default filter to the groups, and then apply the settings to their Facebook™ feeds. 
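The connection-side hydration described above can be sketched in Python as the mirror image of the sharer-side flow: decrypt the share key with the private key, decrypt the hyperlink, pull and decrypt the file, then deserialize it into the feed object. The feed-object fields and the `fetch_from_storage` callable are assumptions used to keep the sketch self-contained.

```python
import json

from cryptography.fernet import Fernet  # third-party: pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)


def hydrate_feed_object(feed_object, private_key, fetch_from_storage):
    """Connection-side sketch: recover the share key, the hyperlink, and the data.

    `fetch_from_storage` stands in for pulling the encrypted file from the
    sharer's secure data storage device given the decrypted hyperlink.
    """
    share_key = private_key.decrypt(feed_object["encrypted_share_key"], OAEP)
    f = Fernet(share_key)
    hyperlink = f.decrypt(feed_object["encrypted_hyperlink"]).decode()
    encrypted_file = fetch_from_storage(hyperlink)
    digital_data = json.loads(f.decrypt(encrypted_file))
    return {**feed_object, "hyperlink": hyperlink, "data": digital_data}


if __name__ == "__main__":
    priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    share_key = Fernet.generate_key()
    f = Fernet(share_key)
    url = "https://nas.example/share/picnic.json"
    feed = {
        "encrypted_share_key": priv.public_key().encrypt(share_key, OAEP),
        "encrypted_hyperlink": f.encrypt(url.encode()),
    }
    stored = {url: f.encrypt(json.dumps({"caption": "picnic"}).encode())}
    print(hydrate_feed_object(feed, priv, lambda link: stored[link]))
```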
Because all settings for posting and viewing feeds are stored off on the profile Web API server112, users will also have a seamless experience across all their devices. To be sure, while certain public social network such as Facebook™ are referred to in certain example use cases, it will be understood that the systems and methods of the present technology can be configured to cooperate with any public social media network using, for example, an API provided by the public social media network. FIGS.8-15are various graphical user interfaces (GUIs) that collectively illustrate features provided by the present technology.FIG.8is a GUI where a user can link one or more private or personal storage devices to the aggregated social network system. The user can select their storage devices/locations during the process of creating a new profile within the system. Again, the aggregated social network is a centralized location where many private storage locations can be linked to provide digital data in shared messages using URL links. The create profile GUI illustrates how a user creates profiles for each of their cloud providers and/or data storage devices. Each cloud provider represents a cloud drive subscription for the user. Examples include Google Drive™, Google+™, Microsoft OneDrive™, Cloud Drive™, and Apple iCloud Drive™. The user can also select other data storage devices used to store content for sharing, such as local storage or network storage. To create a profile, a user taps the plus button, enters a profile name, and then selects a cloud provider from the set up cloud provider drop-down box. Users are then redirected to the cloud provider login screen. To delete a profile the user selects a cloud provider and clicks the X button. Advantageously, parents also use this screen of the application to create cloud provider profiles for their children. This enables parents to aggregate, review, and edit all of their children's social media content in the application. The create profile screen illustrates the Federated Cloud Drive System™ at the application level. The Federated Cloud Drive System™ provides users with complete ownership and control of their content. User content is never stored in the system data center. Thus, it is easy for a user to add and delete without risk of the data living on and/or being hacked. FIG.9illustrates a GUI for linking one or more public social networks. The user can login to the individual public social networks, providing their username and/or password. The system can capture the login credentials and store them for later use when the system pulls or pushes messages for the user. The aggregated social network is a private social network because the user's digital data is not stored on the system that provides the aggregated social network. To reiterate, the digital data resides on a private location for the user so that the public social networks never gain access to the digital data files. To be sure, the aggregated social network creates shared messages that comprise hyperlinks to the digital data. Receiving parties, such as recipients of the shared messages, can access the digital data directly on their end user devices, through an application that resides on their local device, or through a client-facing web interface provided by the system. A user can utilize the GUI ofFIG.9to login to various public social media sites, such as Facebook, Twitter, and Instagram. 
The system can aggregate these feeds into a single feed so that you can see your feeds and posts aggregated from multiple public social media networks. FIG.10is a GUI that allows a user to create groups. The user can select the social media network for which they wish to create a group. The user can then create a new group name, for example “Immediate Family”. The system can search for connections by name within the aggregate social network. A scroll box of names will then appear with check boxes next to the recipients names used to add them to the new group. FIG.11is a GUI that illustrates a homepage that is a page-based (iOS) or hub (Windows) user interface. The homepage is designed so that the user can swipe left to right to cycle through their private social media network feed as well as the feeds of public social media networks such as Facebook, Instagram or Twitter. The private social media network feed screen enables the user to select a group or key words to filter their feeds. The user can also use the thumbtack button to pin a feed to the top of their feed list. The thumbtack icon in the top feed illustrates feed pinning. A user can comment, emote or ignore each feed by selecting the appropriate link. The comment option opens up the comments to that specific feed and allows the user to add their own comments. The emote option enables a user to add an emoticon to the feed. The ignore option enables a user to mark a specific feed as “ignored” so that it does not appear in the feed list any longer. Users can add new shares or delete existing shares by clicking the add or delete button respectively. The refresh button can be used to refresh the feed. The ellipsis button enables users to access advanced settings such as connection (e.g. add or delete to existing shares), group, sort, and feed configuration (e.g. how much data a user wants cached locally on their device), as well as account management. The private social media network feed screen illustrates how users now can create a highly personalized social media experience with a level of ownership, security, control, and monitoring not currently available with public social media networks. FIG.12illustrates a Facebook feed screen within a public social media network feed. The feed screen enables the user to select a group, friend, or keyword to filter their feed without any of the marketing or filters normally imposed by Facebook. Thus, the public social network feed is imported into the private social network system. The user can also use the thumbtack button to pin a feed to the top of their feed list. The thumbtack icon in the top feed illustrates feed pinning. A user can like or comment on each feed by selecting the appropriate link. The comment option opens up the comments to that specific feed and allows the user to add their own comments. The like option enables a user to like a feed. Users can add or delete posts by clicking the add or delete button respectively. The refresh button is used to refresh the feed. The ellipsis button enables users to access advanced settings such as friend, group, sort, and feed configuration (e.g. how much data a user wants cached locally on their device which can significantly reduce wait time vs. a traditional Facebook feed), as well as, account management. The Facebook feed screen illustrates how users can now access, manage, monitor, and even personalize their public social media networks all in one place—without any marketing. 
FIG.13is a GUI in the form of a share screen that illustrates how a user can send a post to any of their public social media networks. The user first selects the badge of the desired social network and then enters the desired text or emoticon into the text box. When posting to a public social media network, the sharing functionality (e.g. what kind of content can be posted, who it is posted to, how the content is transmitted) is limited to and governed by the public social media network selected. When posting to the private social media network, new sharing functionality and controls are available. The user picks the individuals or groups they wish to share their post with. Any file type stored on a user's cloud drive can be selected and attached. The user can decide to tag connections and locations to the post by clicking on the geolocation or tag button. If the share contains sensitive material, the user can click the private/“for your eyes only” button. The user can set share rights (read, write, download, and/or invite others to the share). Should the user deem it useful, they can also choose extra encryption for their share, so that it is not only encrypted while in transit (the default for all shares), but it is also encrypted while at rest. Additionally, the user can decide to save the post by clicking the save button, so that they can come back to the post and share it at a later time. Finally, they can set a time to live that sets a countdown clock until the share is no longer available. Upon completion of the post, the user then clicks the send button, and the post is shared to the desired individuals or group(s). Again, the systems of the present technology function as a gateway solution where users can share on any of their public social media networks all from one application/service, and if they choose to share on the private social media network they can: (1) control who they share with; (2) they can share more types of content; (3) they maintain ownership of that content; (4) they can make the share as private and secure as they want; (5) they can tag or geolocate as desired; (6) they can set an expiration for the share, and (7) they can save for later. The enhanced ownership, control, and security are all unique to the private social networks of the present technology. FIG.14is a GUI of a cloud explorer screen that allows users to view and have access to all their cloud storage providers and data storage options from within one application. Users can navigate between drives by selecting the appropriate icon for the drive that they want to view. Working like a file system, users will be able to browse through their folders, create new folders, view their data (documents, images, videos, anything they have stored), and select data to be included in a shared message. Users will also be able to manage all their data from the private social network application. This includes uploading, reading, editing, and deleting their stored data, as well as transferring data across storage provider platforms, giving users more options to manage and control their data. For example, if a user is running short on disk space on one cloud or storage drive, they can move data over to another cloud or storage drive. FIG.14illustrates how users can now access, create, read, delete, and transfer data across multiple storage drives from one convenient view, as well as share it easily on public social media networks and/or securely on the private social media network.
FIG.15is a GUI that allows a user to manage not only their public social media network accounts, but their children's (or another third party) as well. The groups menu option at the bottom of the screen enables a user to configure groups for public social media network accounts (i.e. create groups and add their friends, followers or connections into groups). These groups can then be used to filter public social media network feeds. The feeds menu option enables users to configure feeds. Users can assign a default group and complex sort order to a public social media network feed. Settings menu option provides users with an array of settings to customize the aggregation of their public social media network accounts. For example, users can specify how many posts or feeds to retrieve at a time or default posting privacy levels such as friends of friends, public or private. The user clicks the check button to return to the previous screen of the application. Referring now toFIG.16, a method is illustrated in a flowchart format. The method involves the storage of digital data, such as a document, image, video, or other electronic file in a private user secure data storage device. This storage space can include user's private device such as a mobile phone and/or private server. In other embodiments, the private user secure data storage device could comprise a cloud based storage space that is dedicated to the user. For example, a user can be provided with a personal storage space within a cloud service such as Dropbox™. The method can include receiving1605a selection of the digital data on a private user secure data storage device from a first user. For example, the user (digital content owner) can select a video that they want to share with other users. Next, the method can include receiving1610a selection of one or more individuals to be given access to the digital data. For example, the user can specify contacts of one or more social networks or other contacts with which they would like to share the selected digital data. In an optional step, the user can select any desired access rights for the digital data that will affect the rights or access to the digital content. For example, the user can specify that the contacts/users can download and view the digital data, but not delete or modify the digital data. Thus, the method can include applying1615access rights for the digital data. To be sure, the user can set these access rights because they own the digital data and the digital data is stored in a private user secure data storage device. The user can access and control the digital data as they desire. In some embodiments, the method can include creating1620a URL that points to a location within the private user secure data storage device where the digital data resides. The URL is a hyperlink pointer that when clicked by another user will launch a browser that accesses the location where the digital data resides. According to some embodiments, the method can include posting1625the URL to a plurality of social networks. As mentioned above, access to the plurality of social networks can be aggregated within a single application or UI by the user. The user can create a single post or message that is pushed to each of the plurality of social networks from this centralized UI. In one embodiment, the message can be posted on an aggregated social network or federated social network that bilaterally communicates with the plurality of social networks specified by the user. 
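Steps1620and1625, creating a URL that points at the digital data and pushing one message containing only that URL to each linked network, can be sketched in Python as follows. The URL layout, the `token` query parameter, and the `post` method on the network clients are assumptions for illustration; the disclosure does not define a concrete URL format or client API.

```python
from urllib.parse import quote


def create_share_url(storage_base_url, file_path, share_token):
    """Build a hyperlink pointing at the digital data on the user's own storage (step 1620)."""
    return f"{storage_base_url}/files/{quote(file_path)}?token={share_token}"


def post_url_to_networks(url, message_text, network_clients):
    """Step 1625: push one message containing only the hyperlink to each linked network.

    The digital data itself is never uploaded to any network.
    """
    post_text = f"{message_text} {url}"
    return {name: client.post(post_text) for name, client in network_clients.items()}


if __name__ == "__main__":
    class FakeNetwork:
        """Stand-in for a public social media client; real APIs differ."""
        def __init__(self, name):
            self.name = name

        def post(self, text):
            return f"posted to {self.name}: {text}"

    url = create_share_url("https://nas.example", "videos/birthday.mp4", "abc123")
    print(post_url_to_networks(url, "Check out this video!",
                               {"facebook": FakeNetwork("facebook"),
                                "twitter": FakeNetwork("twitter")}))
```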
Thus, the user creates one message and that message is shared and proliferated through the plurality of social networks. Next, the method includes receiving1630a request from a second user for the digital data when the second user clicks the URL. For example, the second user is a contact that was selected by the user prior to sharing the URL on the aggregated social network or any of the plurality of social networks where the message is pushed. The request can occur when the second user clicks the URL, which launches a browser on their local device (or within an application that resides on the local device). In some embodiments, the method can comprise serving1635the digital data to the second user directly from the private user secure data storage device without storing the digital data on any of the plurality of social networks. Thus, while the second user receives the message comprising the URL in their social network feed, the digital data is served out-of-band with respect to their social network that serves the social network feed. This prevents the social network from gaining access to the digital data, while allowing the second user to access the digital data. The process of serving the digital data can occur, for example, when the private user secure data storage device provides access to the digital data or when the aggregated social network obtains the digital data from the private user secure data storage device. To be sure, the methods described herein can include additional or fewer steps than those illustrated in the flowcharts provided above. As used herein, the term “engine”, “system”, “client”, “module”, “controller”, or “application” may also refer to any of an application-specific integrated circuit (“ASIC”), an electronic circuit, a processor (shared, dedicated, or group) that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality. FIG.17is a diagrammatic representation of an example machine in the form of a computer system1, within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed. In various example embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a robotic construction marking device, a base station, a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a portable music player (e.g., a portable hard drive audio device such as an Moving Picture Experts Group Audio Layer 3 (MP3) player), a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. 
The example computer system1includes a processor or multiple processors5(e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), and a main memory10and static memory15, which communicate with each other via a bus20. The computer system1may further include a video display35(e.g., a liquid crystal display (LCD)). The computer system1may also include an alpha-numeric input device(s)30(e.g., a keyboard), a cursor control device (e.g., a mouse), a voice recognition or biometric verification unit (not shown), a drive unit37(also referred to as disk drive unit), a signal generation device40(e.g., a speaker), and a network interface device45. The computer system1may further include a data encryption module (not shown) to encrypt data. The drive unit37includes a computer or machine-readable medium50on which is stored one or more sets of instructions and data structures (e.g., instructions55) embodying or utilizing any one or more of the methodologies or functions described herein. The instructions55may also reside, completely or at least partially, within the main memory10and/or within the processors5during execution thereof by the computer system1. The main memory10and the processors5may also constitute machine-readable media. The instructions55may further be transmitted or received over a network via the network interface device45utilizing any one of a number of well-known transfer protocols (e.g., Hyper Text Transfer Protocol (HTTP)). While the machine-readable medium50is shown in an example embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions. The term “computer-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals. Such media may also include, without limitation, hard disks, floppy disks, flash memory cards, digital video disks, random access memory (RAM), read only memory (ROM), and the like. The example embodiments described herein may be implemented in an operating environment comprising software installed on a computer, in hardware, or in a combination of software and hardware. Not all components of the computer system1are required and thus portions of the computer system1can be removed if not needed, such as Input/Output (I/O) devices (e.g., input device(s)30). One skilled in the art will recognize that the Internet service may be configured to provide Internet access to one or more computing devices that are coupled to the Internet service, and that the computing devices may include one or more processors, buses, memory devices, display devices, input/output devices, and the like. Furthermore, those skilled in the art may appreciate that the Internet service may be coupled to one or more databases, repositories, servers, and the like, which may be utilized in order to implement any of the embodiments of the disclosure as described herein. 
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present technology has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the present technology in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the present technology. Exemplary embodiments were chosen and described in order to best explain the principles of the present technology and its practical application, and to enable others of ordinary skill in the art to understand the present technology for various embodiments with various modifications as are suited to the particular use contemplated. Aspects of the present technology are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the present technology. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present technology. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. 
For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular embodiments, procedures, techniques, etc. in order to provide a thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” or “according to one embodiment” (or other phrases having similar import) at various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Furthermore, depending on the context of discussion herein, a singular term may include its plural forms and a plural term may include its singular form. Similarly, a hyphenated term (e.g., “on-demand”) may be occasionally interchangeably used with its non-hyphenated version (e.g., “on demand”), a capitalized entry (e.g., “Software”) may be interchangeably used with its non-capitalized version (e.g., “software”), a plural term may be indicated with or without an apostrophe (e.g., PE's or PEs), and an italicized term (e.g., “N+1”) may be interchangeably used with its non-italicized version (e.g., “N+1”). Such occasional interchangeable uses shall not be considered inconsistent with each other. Also, some embodiments may be described in terms of “means for” performing a task or set of tasks. It will be understood that a “means for” may be expressed herein in terms of a structure, such as a processor, a memory, an I/O device such as a camera, or combinations thereof. Alternatively, the “means for” may include an algorithm that is descriptive of a function or method step, while in yet other embodiments the “means for” is expressed in terms of a mathematical formula, prose, or as a flow chart or signal diagram. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. 
It is noted that the terms “coupled,” “connected”, “connecting,” “electrically connected,” etc., are used interchangeably herein to generally refer to the condition of being electrically/electronically connected. Similarly, a first entity is considered to be in “communication” with a second entity (or entities) when the first entity electrically sends and/or receives (whether through wireline or wireless means) information signals (whether containing data information or non-data/control information) to the second entity regardless of the type (analog or digital) of those signals. It is further noted that various figures (including component diagrams) shown and discussed herein are for illustrative purpose only, and are not drawn to scale. If any disclosures are incorporated herein by reference and such incorporated disclosures conflict in part and/or in whole with the present disclosure, then to the extent of conflict, and/or broader disclosure, and/or broader definition of terms, the present disclosure controls. If such incorporated disclosures conflict in part and/or in whole with one another, then to the extent of conflict, the later-dated disclosure controls. The terminology used herein can imply direct or indirect, full or partial, temporary or permanent, immediate or delayed, synchronous or asynchronous, action or inaction. For example, when an element is referred to as being “on,” “connected” or “coupled” to another element, then the element can be directly on, connected or coupled to the other element and/or intervening elements may be present, including indirect and/or direct variants. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present. The description herein is illustrative and not restrictive. Many variations of the technology will become apparent to those of skill in the art upon review of this disclosure. While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. The descriptions are not intended to limit the scope of the invention to the particular forms set forth herein. To the contrary, the present descriptions are intended to cover such alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims and otherwise appreciated by one of ordinary skill in the art. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments.
47,853
11863538
The Figures described above are a representative set, and are not exhaustive with respect to embodying the invention. DESCRIPTION Disclosed are a system, method, and article of manufacture for generating a symmetric key for mobile device encryption. The following description is presented to enable a person of ordinary skill in the art to make and use the various embodiments. Descriptions of specific devices, techniques, and applications are provided only as examples. Various modifications to the examples described herein will be readily apparent to those of ordinary skill in the art, and the general principles defined herein may be applied to other examples and applications without departing from the spirit and scope of the various embodiments. Reference throughout this specification to “one embodiment,” “an embodiment,” “one example,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment. Furthermore, the described features, structures, or characteristics of the invention may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, machine learning techniques, etc., to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art can recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention. The schematic flow chart diagrams included herein are generally set forth as logical flow chart diagrams. As such, the depicted order and labeled steps are indicative of one embodiment of the presented method. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more steps, or portions thereof, of the illustrated method. Additionally, the format and symbols employed are provided to explain the logical steps of the method and are understood not to limit the scope of the method. Although various arrow types and line types may be employed in the flow chart diagrams, they are understood not to limit the scope of the corresponding method. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the method. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted method. Additionally, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown. DEFINITIONS Key can be a piece of information (e.g. a parameter) that determines the functional output of a cryptographic algorithm and/or cipher. Key server is a computer that receives and then serves existing cryptographic keys to users or other programs.
Mobile device can include smart phones, cell phones, personal digital assistants, tablet computers, wearable computers, smart watches, smart glasses (e.g. Google Glass®), etc. Multifactor authentication (MFA) (e.g. two-factor authentication) can be a user authentication that utilizes the presentation of two or more independent authentication factors: a knowledge factor (“something only the user knows”), a possession factor (“something only the user has”), and an inherence factor (“something only the user is”). After presentation, each factor can be validated by the other party for authentication to occur. Public key can be used to encrypt plaintext (as well as other types of text, images, video, audio, etc.) and/or to verify a digital signature. Exemplary Methods FIG.1illustrates an example process100of creation and distribution of a public key, according to some embodiments. In step102, a user can create a pair of certification or encryption keys with a mobile device application. The pair of certification keys can include a public key. In step104, a public key of the pair of certification keys can be provided to a central repository. The central repository can be a key server. In step106, the central repository can associate the public key with the user's mobile device application. The central repository can authenticate the user and the user's communication identifiers (e.g. the user's mobile device number, email, online social network identifier (e.g. a Facebook® profile, a Twitter® handle, etc.), and the like) with a multifactor user identification authentication. The central repository can use the communication identifier to confirm the user's identity. For example, the central repository can include a text messaging functionality that automatically generates a confirmation text message and communicates it to the user's mobile device. Alternatively, the central repository can automatically generate a confirmation email and communicate it to the user's email account. In step108, at the central repository, one or more of these communication identifiers can also be associated with the user's public key. In this way, another user's mobile device can request the public key based on an authenticated user communication identifier. For example, a second user can generate an encrypted text message for a first user's mobile device using the first user's public key as obtained from the central repository. Accordingly, the second user's mobile device can include an application that requests the first user's public key from the central repository wherein the first user's public key is identified by the first user's mobile device cellular phone number. In this way, a public key can be created and associated with a communication identifier in an accessible central repository of public keys. The central repository can certify that it has authenticated the communication identifiers associated with a particular public key. It is noted that a single public key can have n-number of communication identifiers associated with it. For example, a public key can have a user's former mobile device number and new mobile device number associated with it (as well as one or more emails and/or social network identifiers). It is noted that in other example embodiments, computing devices (e.g. laptops, personal computers, etc.) can be utilized in lieu of a mobile device. FIG.2illustrates an example process200of a second user's application (e.g.
a mobile device application) requesting a public key for a first user from the central repository, according to some embodiments. In step202of process200, a second user's application can send the central repository a communication identifier and request a public key associated with said communication identifier. For example, the second user's application can be an email application that is used to send an encrypted email to a first user. The email application may need the public key of the first user to encrypt the email. The email application can have the first user's email address. The email application can send the first user's email address to the central repository with a request for the public key associated with said email address. The first user's public key (e.g. as created utilizing process100) can then be obtained and used to encrypt the email. In step204, the central repository can look up the public key with the communication identifier. In step206, it can be determined if the matching public key is available. If yes, then process200can proceed to step208. In step208, the relevant public key can be obtained and provided to the requesting second user's application. If no, then process200can proceed to step210, wherein the public key is not provided to the requesting second user's application. FIG.3illustrates a process300of encrypting a message, according to some embodiments. In step302, a user application can generate message content. For example, a text messaging application can be used to generate a text message. An online social networking application can generate an online social network post, status update, microblog post and the like. In step304, the user application can generate a random message key. In step306, the application can encrypt the message content with the message key. In step308, the user application downloads the recipient's public key and encrypts the message content. It is noted that the message(s) may be a one-to-one message or a one-to-many message. The message(s) can be in a plurality of electronic messaging formats (e.g. text messages, online social networking messages, blog posts, emails, etc.). For example, a user can compose a text message and address the text message to three recipients. Accordingly, three public keys (one for each respective recipient) can be obtained from the central repository. In step310, the user application communicates the encrypted message. In step312, the user application communicates the message key to a message key repository. A message key repository can be an online server that stores and manages message keys. A message key can be a key that is required to decrypt a specific message to a particular user's application (e.g. a text message to a specific user, an email to a specified set of users, etc.). The message key can be a symmetric cryptographic key. The message key can be generated using a source of high entropy for its initialization such as a random event detected by a mobile device sensor that is sampled at a randomly selected interval (see infra). When a recipient application would like to decrypt a message, the recipient can request the message key for the specific application from said message key repository. The message key and the recipient's private key can be required to decrypt the message. In step314, it can be determined if the user decides to stop access to the message content. If not, process300waits for the user to decide to stop access to the message content. If yes, process300can proceed to step316.
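As a non-limiting sketch only, the following Python fragment illustrates the message-key handling of process300, including the key deletion of step316described immediately below. Fernet (from the third-party cryptography package) stands in for the random symmetric message key, the repository is reduced to an in-memory dictionary, and the public-key handling of step308is omitted; all names are illustrative.

# A minimal sketch of the message-key flow in process 300 under stated
# assumptions; not the disclosed implementation.  Requires the third-party
# "cryptography" package.
from cryptography.fernet import Fernet

class MessageKeyRepository:
    """Online store of per-message symmetric keys (steps 312 and 316)."""
    def __init__(self):
        self._keys = {}

    def store(self, message_id: str, key: bytes) -> None:
        self._keys[message_id] = key

    def fetch(self, message_id: str) -> bytes:
        return self._keys[message_id]          # raises KeyError once access is revoked

    def revoke(self, message_id: str) -> None:
        self._keys.pop(message_id, None)       # step 316: delete the relevant message key

repo = MessageKeyRepository()

# Steps 304-306: generate a random message key and encrypt the message content.
message_key = Fernet.generate_key()
ciphertext = Fernet(message_key).encrypt(b"meet at 6pm")

# Steps 310-312: send the ciphertext to the recipient and the key to the repository.
repo.store("msg-001", message_key)

# Recipient side: fetch the message key and decrypt.
plaintext = Fernet(repo.fetch("msg-001")).decrypt(ciphertext)
assert plaintext == b"meet at 6pm"

# Steps 314-316: the sender stops access; later key requests fail.
repo.revoke("msg-001")

Deleting the message key from the repository is what revokes access in this sketch: the ciphertext remains wherever it was delivered, but new requests for the key fail, so it can no longer be decrypted.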
In step316, the user application instructs the message key repository to delete the relevant message keys. FIG.4illustrates an example process400for generating a symmetric key (e.g. a message key), according to some embodiments. In step402of process400, a sensor signal can be received/obtained from a sensor of a mobile device. Example sensors that can be used include, inter alia, a microphone, digital cameras, GPS-related data (e.g. GPS signal time stamps, etc.), an accelerometer, a compass, a gyroscope, Wi-Fi data, etc. In step404, one or more random sampling points on said signal can be determined. A randomization algorithm can be utilized to determine one or more random sampling points. In step406, the sensor signal value can be extracted at the random sample point(s). In step408, a symmetric key can be generated from the sampled sensor signal value(s). It is noted that multiple sensor signal values can be obtained by repeating one or more steps of process400. Additionally, a combination of different sensors can be utilized in some example embodiments (e.g. accelerometer data can be combined with microphone data, etc.). Various symmetric key generation processes can be utilized to generate the symmetric key from the sensor sampling value. Exemplary Systems and Computer Architecture FIG.5depicts, in block diagram format, an example system500for increasing message security, according to some embodiments. System500entities communicate electronic messages via various computer and/or cellular data networks502(e.g. the Internet, etc.). System500can include a central repository504. Central repository504can be implemented as a server and/or in a cloud-computing environment. Central repository504can receive a public key, such as a public key generated by secure message application514. Central repository504can include a functionality for automatically authenticating the public key and/or a user of mobile device512. Central repository504can utilize a user's communication identifiers (e.g. the user's mobile device number, email, online social network identifier (e.g. a Facebook® profile, a Twitter® handle, etc.), and the like) in the user authentication process. Central repository504can store various public keys, authentication data (e.g. a cellular phone number of mobile device512, an email address, an IP address of mobile device512, etc.), and other related information to public key database506. Central repository504can include an application programming interface (API) for interacting with secure message application514, as well as other computer applications (e.g. message key repository508, etc.). Central repository504can receive queries from other applications for the public key associated with secure message application514. The queries can include communication identifiers associated with mobile device512and/or a user of mobile device512. Central repository504can identify the relevant public key and communicate it to the requesting mobile device and/or other computer application. Secure message application514can generate public keys, private keys, and symmetric keys (e.g. message keys). Secure message application514can implement various recipient-side mobile device message security protocols (e.g. such as those provided supra) when mobile device512receives secure messages from other mobile devices in system500. For example, secure message application514can prevent cut and paste operations, screen shots, pictures of the displayed image of the secured message, etc.
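The following Python sketch is one illustrative, non-limiting way to realize process400described above: random sampling points are chosen on a buffered sensor signal, the values at those points are extracted, and a 256-bit symmetric key is derived from them. The SHA-256 digest used here stands in for whichever key-generation process an embodiment actually employs, and the reading buffer is hypothetical.

# A minimal sketch of process 400 under stated assumptions: the "sensor signal"
# is a list of numeric readings, and SHA-256 over the randomly sampled values
# stands in for a full key-derivation step.  Illustrative only.
import hashlib
import secrets
import struct
from typing import Sequence

def symmetric_key_from_sensor(signal: Sequence[float], num_samples: int = 16) -> bytes:
    """Steps 402-408: pick random sampling points on the signal, extract the
    values there, and derive a 256-bit symmetric key from them."""
    if len(signal) < num_samples:
        raise ValueError("sensor signal too short to sample")
    # Step 404: a randomization algorithm chooses the sampling points.
    indices = sorted(secrets.randbelow(len(signal)) for _ in range(num_samples))
    # Step 406: extract the sensor signal values at those points.
    sampled = [signal[i] for i in indices]
    # Step 408: hash the sampled values (and their positions) into a key.
    material = b"".join(struct.pack(">If", i, v) for i, v in zip(indices, sampled))
    return hashlib.sha256(material).digest()

# Example: accelerometer-like readings; a microphone buffer or a Wi-Fi signal
# strength series could be used the same way, or several sensors combined.
readings = [0.01, -0.02, 0.98, 1.02, 0.97, -0.05, 0.03, 1.01,
            0.00, -0.01, 0.99, 1.03, 0.96, -0.04, 0.02, 1.00, 0.98, 1.01]
key = symmetric_key_from_sensor(readings)
print(len(key))   # 32 bytes, usable as a message key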
FIG.6depicts an example computing system600that can be configured to perform any one of the processes provided herein. In this context, computing system600may include, for example, a processor, memory, storage, and I/O devices (e.g., monitor, keyboard, disk drive, Internet connection, etc.). However, computing system600may include circuitry or other specialized hardware for carrying out some or all aspects of the processes. In some operational settings, computing system600may be configured as a system that includes one or more units, each of which is configured to carry out some aspects of the processes either in software, hardware, or some combination thereof. FIG.6depicts computing system600with a number of components that may be used to perform any of the processes described herein. The main system602includes a motherboard604having an I/O section606, one or more central processing units (CPU)608, and a memory section610, which may have a flash memory card612related to it. The I/O section606can be connected to a display614, a keyboard and/or other user input (not shown), a disk storage unit616, and a media drive unit618. The media drive unit618can read/write a computer-readable medium620, which can contain programs622and/or data. Computing system600can include a web browser. Moreover, it is noted that computing system600can be configured to include additional systems in order to fulfill various functionalities. Computing system600can communicate with other computing devices based on various computer communication protocols such as Wi-Fi protocols, Bluetooth® (and/or other standards for exchanging data over short distances, including those using short-wavelength radio transmission), cellular data network protocols, short messaging system protocols, TCP/HTTP protocols, etc. FIG.7is a block diagram of a sample computing environment700that can be utilized to implement some embodiments. The system700further illustrates a system that includes one or more client(s)702. The client(s)702can be hardware and/or software (e.g., threads, processes, computing devices). The system700also includes one or more server(s)704. The server(s)704can also be hardware and/or software (e.g., threads, processes, computing devices). One possible communication between a client702and a server704may be in the form of a data packet adapted to be transmitted between two or more computer processes. The system700includes a communication framework710that can be employed to facilitate communications between the client(s)702and the server(s)704. The client(s)702are connected to one or more client data store(s)706that can be employed to store information local to the client(s)702. Similarly, the server(s)704are connected to one or more server data store(s)708that can be employed to store information local to the server(s)704. FIG.8illustrates an example process800of various safeguards that can be implemented on a receiving user's mobile device to prevent unauthorized viewing, copying, and/or forwarding of a message, according to some embodiments. In step802, a process/application can be implemented in the recipient-side mobile device that prevents screen shots, ‘cut and paste’ operations, and/or forwarding of a decrypted message. Furthermore, in step804, the application can access the video feed of a user-facing camera on the mobile device. The feed can be analyzed to identify various entities in the user-facing camera's field of view.
In one example, a facial detection algorithm can be used to determine if a face is present (or if a percentage of a face is present). In step806, if a face is not present, the application can remove the message from the mobile device's display. In another example, step806can optionally include using an object identification algorithm to determine if another camera is facing the mobile device's display. If another camera is detected, then the application can remove the message from the mobile device's display. It is noted that, in some embodiments, the key repository (e.g. operating in a central repository server) and its associated methods/processes (e.g. as provided supra) can be integrated into third-party infrastructures to secure the messages and/or contents in order to add a new layer of security and sharing control on those infrastructures. For example, the key repository (e.g. operating in a central repository server) and its associated methods/processes can be utilized in a text messaging system to add an additional layer of security to the text messages exchanged between users. It is noted that, in some embodiments, the processes and/or systems provided supra can be used to mutually authenticate two (2) computing devices (e.g. mobile devices, etc.). Accordingly, the processes and/or systems provided supra can be the guarantor of the identity of the devices. For example, the processes and/or systems provided supra can be utilized in securely signing electronic documents, signing electronic contracts, and executing financial transactions in a digital format. The processes and/or systems provided supra can be used to guarantee that the transaction was executed by a particular device (e.g. by using biometric authentication to verify the identity of the signer, etc.). This information can then be used to ensure later non-repudiation of the various electronic documents, signatures, legal obligations, etc. It is noted that, in some embodiments, a message (e.g. an electronic message) can be a text message (e.g. SMS, MMS, etc.), a voice-phone call, a word document, a voice message, an image, a virtual-reality message, etc. FIG.9illustrates an example process900for generating a symmetric key, according to some embodiments. In step902, process900can obtain a sensor signal from a sensor. In step904, process900can determine one or more sampling points on said sensor signal. In step906, process900can extract a sensor signal value at the sampling points. The sampling points of the sensor signal are randomly selected. In step908, process900can generate the symmetric key from the sampled sensor signal value. In one example, the sampling points are specified based on a type of sensor. In one example, the sensor signal value can be the signal strength of a WIFI, CELL, or BLUETOOTH® signal, as well as any data signal on those protocols. The sampling points can be set at a specified vertical sample rate of the sensor signal or a specified horizontal sample rate of the sensor signal. The sampling points can be set at a specified range at the beginning of the sensor signal. CONCLUSION Although the present embodiments have been described with reference to specific example embodiments, various modifications and changes can be made to these embodiments without departing from the broader spirit and scope of the various embodiments. For example, the various devices, modules, etc.
described herein can be enabled and operated using hardware circuitry, firmware, software or any combination of hardware, firmware, and software (e.g., embodied in a machine-readable medium). In addition, it can be appreciated that the various operations, processes, and methods disclosed herein can be embodied in a machine-readable medium and/or a machine accessible medium compatible with a data processing system (e.g., a computer system), and can be performed in any order (e.g., including using means for achieving the various operations). Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. In some embodiments, the machine-readable medium can be a non-transitory form of machine-readable medium.
21,599
11863539
DETAILED DESCRIPTION Examples are described herein in the context of systems and methods for encryption-based device enrollment. Those of ordinary skill in the art will realize that the following description is illustrative only and is not intended to be in any way limiting. Reference will now be made in detail to implementations of examples as illustrated in the accompanying drawings. The same reference indicators will be used throughout the drawings and the following description to refer to the same or like items. In the interest of clarity, not all of the routine features of the examples described herein are shown and described. It will, of course, be appreciated that in the development of any such actual implementation, numerous implementation-specific decisions must be made in order to achieve the developer's specific goals, such as compliance with application- and business-related constraints, and that these specific goals will vary from one implementation to another and from one developer to another. One example method includes a device management system detecting an attempt to access a user account by an unenrolled device. In an illustrative example, a user uses a new mobile device to login to a video conference provider. A device management system of the video conference provider accepts account credentials of the user, but detects that the mobile device has previously not been used. In some instances, the device management system accesses a signature chain of the user account to determine that the device has not been enrolled yet. The device management system identifies another device that the user has previously used and enrolled—a first enrolled device corresponding to the user account—by accessing the signature chain of the user account. In some instances, the signature chain includes a first sequential record identifying the first enrolled device. The device management system then facilitates a transmission of an enrollment request and a corresponding cryptographic signature from the unenrolled device to the first enrolled device. For example, the device management system can notify the unenrolled device to transmit the enrollment request to the first enrolled device. In some instances, the cryptographic signature of the enrollment request is generated by using a private cryptographic key of the unenrolled device, and the enrollment request includes a set of long-term cryptographic keys that the device management system will add as a new sequential record in the signature chain for the unenrolled device. Continuing with the above example, the device management system searches through the signature chain to identify a laptop device of the user that had previously been enrolled. The device management system then notifies the mobile device to initiate a device enrollment process, in which the mobile device generates an enrollment request. The mobile device then transmits the enrollment request and its corresponding cryptographic signature to the laptop device. In this example, the first enrolled device (e.g., the laptop device) is configured to cryptographically validate the enrollment request at least by decrypting the cryptographic signature of the enrollment request using a public cryptographic key of the unenrolled mobile device. This can include the laptop device comparing a hash generated from the received request and another hash obtained by decrypting the cryptographic signature using the public cryptographic key of the mobile device. 
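As a non-limiting illustration, the following Python sketch (using the third-party cryptography package) shows the enrollment-request validation just described. Ed25519 signatures stand in for the device's cryptographic keys, and the verify() call plays the role of decrypting the signature with the unenrolled device's public key and comparing hashes; the request fields are hypothetical.

# A minimal sketch, under stated assumptions, of signing and validating an
# enrollment request; not the disclosed implementation.  Requires the
# third-party "cryptography" package.
import json
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives import serialization
from cryptography.exceptions import InvalidSignature

# Unenrolled device: generate its long-term key pair and sign the enrollment request.
device_private = ed25519.Ed25519PrivateKey.generate()
device_public_bytes = device_private.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)

enrollment_request = json.dumps({
    "account": "user@example.com",
    "device_name": "new-mobile",
    "long_term_public_key": device_public_bytes.hex(),   # keys to be added to the chain
}).encode()
signature = device_private.sign(enrollment_request)

# First enrolled device (e.g. the laptop): cryptographically validate the request
# using the unenrolled device's public key before continuing the enrollment flow.
def validate_enrollment(request: bytes, sig: bytes, public_key_bytes: bytes) -> bool:
    public_key = ed25519.Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        public_key.verify(sig, request)
        return True
    except InvalidSignature:
        return False

assert validate_enrollment(enrollment_request, signature, device_public_bytes)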
After verifying the enrollment request, the laptop device displays a 4-digit passcode for the user, and the user inputs the 4-digit passcode into the mobile device. In addition, the laptop device generates an attestation message that is signed using its private cryptographic key and then encrypted based on a symmetric cryptographic key derived from the 4-digit passcode. The mobile device directly receives the encrypted attestation message and decrypts it based on a matching symmetric key derived from the inputted 4-digit passcode. As a result, the mobile device obtains a decrypted attestation message that is cryptographically signed by the laptop device. The device management system can receive a decrypted attestation message from the unenrolled device. In some instances, the decrypted attestation message is the encrypted attestation message that was decrypted based on the passcode being correctly inputted into the unenrolled device. The device management system updates the signature chain to include the new sequential record for the unenrolled device. In some instances, the new sequential record includes the decrypted attestation message (which in turn includes the set of long-term cryptographic keys), and the new sequential record indicates that the unenrolled device has been associated with the user account as a new enrolled device. Continuing with the above example, the mobile device transmits the decrypted attestation message to the device management system, which enrolls the mobile device by adding a new sequential record to the signature chain of the user account. Once enrolled, the device management system allows the mobile device to access services provided by the videoconference provider. In future login attempts, the device management system can confirm that the mobile device is enrolled by searching through the signature chain. Certain embodiments described herein provide an improved technique for device enrollment, which can prevent unauthorized users from enrolling devices. For example, the present techniques can prevent MITM attacks, because, even if the encrypted attestation message had been intercepted, the unauthorized users cannot decrypt the attestation message since the passcode for deriving the symmetric key is not transferred between devices (rather, the passcode is shown for direct input via a user interface). On the other hand, the present techniques can also prevent unauthorized users who obtain the displayed passcode from enrolling their own devices, since the encrypted attestation message only approves the specific unenrolled device to be enrolled in association with the user account. The above advantages facilitate a secure method of enrolling devices in the user account and address potential security vulnerabilities that can be present in conventional techniques. This illustrative example is given to introduce the reader to the general subject matter discussed herein and the disclosure is not limited to this example. The following sections describe various additional non-limiting examples of systems and methods for hiding private user data in public signature chains. I. Example Computing Environment for Encryption-Based Device Enrollment Referring now toFIG.1,FIG.1shows an example system100that provides videoconferencing functionality to various client devices.
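Before turning to the computing environment of FIG.1, the following Python sketch, offered for illustration only, walks through the passcode-protected attestation exchange and signature-chain update described above. PBKDF2 derives the symmetric key from the displayed 4-digit passcode, Fernet (from the third-party cryptography package) stands in for the encryption, the attestation signing step is elided, and the signature chain is reduced to a list of records; none of these choices is mandated by the embodiments.

# A minimal sketch, under stated assumptions, of the passcode-protected
# attestation exchange and signature-chain update; illustrative only.
# Requires the third-party "cryptography" package.
import base64, hashlib, json, secrets
from cryptography.fernet import Fernet

def key_from_passcode(passcode: str, salt: bytes) -> bytes:
    raw = hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 600_000)
    return base64.urlsafe_b64encode(raw)          # Fernet expects a base64-encoded 32-byte key

# Enrolled laptop: display a passcode, build an attestation (its signing is
# elided here), and encrypt it under the passcode-derived key.
passcode = f"{secrets.randbelow(10_000):04d}"     # shown on the laptop's screen
salt = secrets.token_bytes(16)                    # sent alongside the ciphertext
attestation = json.dumps({"approves": "new-mobile",
                          "long_term_public_key": "<new device public key>"}).encode()
encrypted_attestation = Fernet(key_from_passcode(passcode, salt)).encrypt(attestation)

# Unenrolled mobile device: the user types the same passcode, so the matching
# key can be derived locally and the attestation decrypted.
typed = passcode                                  # user input in a real flow
decrypted = Fernet(key_from_passcode(typed, salt)).decrypt(encrypted_attestation)

# Device management system: append the decrypted attestation as a new sequential
# record, marking the device as enrolled for the user account.
signature_chain = [{"seq": 0, "device": "laptop", "record": "initial enrollment"}]
signature_chain.append({"seq": len(signature_chain), "device": "new-mobile",
                        "record": decrypted.decode()})
print(signature_chain[-1]["device"])              # "new-mobile" is now enrolled

Because the passcode itself is never transmitted in this sketch, an interceptor of the encrypted attestation message cannot derive the matching key, which is the property relied on above.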
The system100includes a video conference provider110that is connected to multiple communication networks120,130, through which various client devices140-180can participate in video conferences hosted by the video conference provider110. For example, the video conference provider120can be located within a private network to provide video conferencing services to devices within the private network, or it can be connected to a public network, e.g., the internet, so it may be accessed by anyone. Some examples may even provide a hybrid model in which a video conference provider120may supply components to enable a private organization to host private internal video conferences or to connect its system to the video conference provider120over a public network. The system optionally also includes one or more user identity providers, e.g., user identity provider115, which can provide user identity services to users of the client devices140-160and may authenticate user identities of one or more users to the video conference provider110. In this example, the user identity provider115is operated by a different entity than the video conference provider110, though in some examples, they may be the same entity. Video conference provider110allows clients to create videoconference meetings (or “meetings”) and invite others to participate in those meetings, as well as to perform other related functionality, such as recording the meetings, generating transcripts from meeting audio, managing user functionality in the meetings, enabling text messaging during the meetings, creating and managing breakout rooms from the main meeting, etc.FIG.2, described below, provides a more detailed description of the architecture and functionality of the video conference provider110. Meetings in this example video conference provider110are provided in virtual “rooms” to which participants are connected. The room in this context is a construct provided by a server that provides a common point at which the various video and audio data is received before being multiplexed and provided to the various participants. While a “room” is the label for this concept in this disclosure, any suitable functionality that enables multiple participants to participate in a common videoconference may be used. Further, in some examples, and as alluded to above, a meeting may also have “breakout” rooms. Such breakout rooms may also be rooms that are associated with a “main” videoconference room. Thus, participants in the main videoconference room may exit the room into a breakout room, e.g., to discuss a particular topic, before returning to the main room. The breakout rooms in this example are discrete meetings that are associated with the meeting in the main room. However, to join a breakout room, a participant must first enter the main room. A room may have any number of associated breakout rooms according to various examples. To create a meeting with the video conference provider110, a user may contact the video conference provider110using a client device140-180and select an option to create a new meeting. Such an option may be provided in a webpage accessed by a client device140-160or a client application executed by a client device140-160. For telephony devices, the user may be presented with an audio menu that they may navigate by pressing numeric buttons on their telephony device.
To create the meeting, the video conference provider110may prompt the user for certain information, such as a date, time, and duration for the meeting, a number of participants, a type of encryption to use, whether the meeting is confidential or open to the public, etc. After receiving the various meeting settings, the video conference provider may create a record for the meeting and generate a meeting identifier and, in some examples, a corresponding meeting password or passcode (or other authentication information), all of which meeting information is provided to the meeting host. After receiving the meeting information, the user may distribute the meeting information to one or more users to invite them to the meeting. To begin the meeting at the scheduled time (or immediately, if the meeting was set for an immediate start), the host provides the meeting identifier and, if applicable, corresponding authentication information (e.g., a password or passcode). The video conference system then initiates the meeting and may admit users to the meeting. Depending on the options set for the meeting, the users may be admitted immediately upon providing the appropriate meeting identifier (and authentication information, as appropriate), even if the host has not yet arrived, or the users may be presented with information indicating that the meeting has not yet started or the host may be required to specifically admit one or more of the users. During the meeting, the participants may employ their client devices140-180to capture audio or video information and stream that information to the video conference provider110. They also receive audio or video information from the video conference provider210, which is displayed by the respective client device140to enable the various users to participate in the meeting. At the end of the meeting, the host may select an option to terminate the meeting, or it may terminate automatically at a scheduled end time or after a predetermined duration. When the meeting terminates, the various participants are disconnected from the meeting and they will no longer receive audio or video streams for the meeting (and will stop transmitting audio or video streams). The video conference provider110may also invalidate the meeting information, such as the meeting identifier or password/passcode. To provide such functionality, one or more client devices140-180may communicate with the video conference provider110using one or more communication networks, such as network120or the public switched telephone network (“PSTN”)130. The client devices140-180may be any suitable computing or communications device that have audio or video capability. For example, client devices140-160may be conventional computing devices, such as desktop or laptop computers having processors and computer-readable media, connected to the video conference provider110using the internet or other suitable computer network. Suitable networks include the internet, any local area network (“LAN”), metro area network (“MAN”), wide area network (“WAN”), cellular network (3G, 4G, 4G LTE, 5G, etc.), or any combination of these. Other types of computing devices may be used instead or as well, such as tablets, smartphones, and dedicated video conferencing equipment. Each of these devices may provide both audio and video capabilities and may enable one or more users to participate in a video conference meeting hosted by the video conference provider110. 
In addition to the computing devices discussed above, client devices140-180may also include one or more telephony devices, such as cellular telephones (e.g., cellular telephone170), internet protocol (“IP”) phones (e.g., telephone180), or conventional telephones. Such telephony devices may allow a user to make conventional telephone calls to other telephony devices using the PSTN, including the video conference provider110. It should be appreciated that certain computing devices may also provide telephony functionality and may operate as telephony devices. For example, smartphones typically provide cellular telephone capabilities and thus may operate as telephony devices in the example system100shown inFIG.1. In addition, conventional computing devices may execute software to enable telephony functionality, which may allow the user to make and receive phone calls, e.g., using a headset and microphone. Such software may communicate with a PSTN gateway to route the call from a computer network to the PSTN. Thus, telephony devices encompass any devices that can make conventional telephone calls and is not limited solely to dedicated telephony devices like conventional telephones. Referring again to client devices140-160, these devices140-160contact the video conference provider110using network120and may provide information to the video conference provider110to access functionality provided by the video conference provider110, such as access to create new meetings or join existing meetings. To do so, the client devices140-160may provide user identification information, meeting identifiers, meeting passwords or passcodes, etc. In examples that employ a user identity provider115, a client device, e.g., client devices140-160, may operate in conjunction with a user identity provider115to provide user identification information or other user information to the video conference provider110. A user identity provider115may be any entity trusted by the video conference provider110that can help identify a user to the video conference provider110. For example, a trusted entity may be a server operated by a business or other organization and with whom the user has established their identity, such as an employer or trusted third-party. The user may sign-in to the user identity provider115, such as by providing a username and password, to access their identity at the user identity provider115. The identity, in this sense, is information established and maintained at the user identity provider115that can be used to identify a particular user, irrespective of the client device they may be using. An example of an identity may be an email account established at the user identity provider110by the user and secured by a password or additional security features, such as biometric authentication, two-factor authentication, etc. However, identities may be distinct from functionality such as email. For example, a health care provider may establish identities for its patients. While such identities may have associated email accounts, the identity is distinct from those email accounts. Thus, a user's “identity” relates to a secure, verified set of information that is tied to a particular user and should be accessible only by that user. By accessing the identity, the associated user may then verify themselves to other computing devices or services, such as the video conference provider110. 
When the user accesses the video conference provider110using a client device, the video conference provider110communicates with the user identity provider115using information provided by the user to verify the user's identity. For example, the user may provide a username or cryptographic signature associated with a user identity provider115. The user identity provider115then either confirms the user's identity or denies the request. Based on this response, the video conference provider110either provides or denies access to its services, respectively. For telephony devices, e.g., client devices170-180, the user may place a telephone call to the video conference provider110to access video conference services. After the call is answered, the user may provide information regarding a video conference meeting, e.g., a meeting identifier (“ID”), a passcode or password, to allow the telephony device to join the meeting and participate using audio devices of the telephony device, e.g., microphone(s) and speaker(s), even if video capabilities are not provided by the telephony device. Because telephony devices typically have more limited functionality than conventional computing devices, they may be unable to provide certain information to the video conference provider110. For example, telephony devices may be unable to provide user identification information to identify the telephony device or the user to the video conference provider110. Thus, the video conference provider110may provide more limited functionality to such telephony devices. For example, the user may be permitted to join a meeting after providing meeting information, e.g., a meeting identifier and passcode, but they may be identified only as an anonymous participant in the meeting. This may restrict their ability to interact with the meetings in some examples, such as by limiting their ability to speak in the meeting, hear or view certain content shared during the meeting, or access other meeting functionality, such as joining breakout rooms or engaging in text chat with other participants in the meeting. It should be appreciated that users may choose to participate in meetings anonymously and decline to provide user identification information to the video conference provider110, even in cases where the user has an authenticated identity and employs a client device capable of identifying the user to the video conference provider110. The video conference provider110may determine whether to allow such anonymous users to use services provided by the video conference provider110. Anonymous users, regardless of the reason for anonymity, may be restricted as discussed above with respect to users employing telephony devices, and in some cases may be prevented from accessing certain meetings or other services, or may be entirely prevented from accessing the video conference provider110. Referring again to video conference provider110, in some examples, it may allow client devices140-160to encrypt their respective video and audio streams to help improve privacy in their meetings. Encryption may be provided between the client devices140-160and the video conference provider110or it may be provided in an end-to-end configuration where multimedia streams transmitted by the client devices140-160are not decrypted until they are received by another client device140-160participating in the meeting. 
Encryption may also be provided during only a portion of a communication; for example, encryption may be used for otherwise unencrypted communications that cross international borders. Client-to-server encryption may be used to secure the communications between the client devices140-160and the video conference provider110, while allowing the video conference provider110to access the decrypted multimedia streams to perform certain processing, such as recording the meeting for the participants or generating transcripts of the meeting for the participants. End-to-end encryption may be used to keep the meeting entirely private to the participants without any worry about a video conference provider110having access to the substance of the meeting. Any suitable encryption methodology may be employed, including key-pair encryption of the streams. For example, to provide end-to-end encryption, the meeting host's client device may obtain public keys for each of the other client devices participating in the meeting and securely exchange a set of keys to encrypt and decrypt multimedia content transmitted during the meeting. Thus, the client devices140-160may securely communicate with each other during the meeting. Further, in some examples, certain types of encryption may be limited by the types of devices participating in the meeting. For example, telephony devices may lack the ability to encrypt and decrypt multimedia streams. Thus, while encrypting the multimedia streams may be desirable in many instances, it is not required as it may prevent some users from participating in a meeting. By using the example system shown inFIG.1, users can create and participate in meetings using their respective client devices140-180via the video conference provider110. Further, such a system enables users to use a wide variety of different client devices140-180, from traditional standards-based video conferencing hardware to dedicated video conferencing equipment to laptop or desktop computers to handheld devices to legacy telephony devices. Referring now toFIG.2,FIG.2shows an example system200in which a video conference provider210provides videoconferencing functionality to various client devices220-250. The client devices220-250include two conventional computing devices220-230, dedicated equipment for a video conference room240, and a telephony device250. Each client device220-250communicates with the video conference provider210over a communications network, such as the internet for client devices220-240or the PSTN for client device250, generally as described above with respect toFIG.1. The video conference provider210is also in communication with one or more user identity providers215, which can authenticate various users to the video conference provider210generally as described above with respect toFIG.1. In this example, the video conference provider210employs multiple different servers (or groups of servers) to provide different aspects of video conference functionality, thereby enabling the various client devices to create and participate in video conference meetings. The video conference provider210uses one or more real-time media servers212, one or more network services servers214, one or more video room gateways216, and one or more telephony gateways218. Each of these servers212-218is connected to one or more communications networks to enable them to collectively provide access to and participation in one or more video conference meetings to the client devices220-250.
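Before turning to the individual servers ofFIG.2, the following is a minimal sketch of one way the end-to-end key exchange described above with respect toFIG.1could work: the host wraps a per-meeting content key for each participant's public key. The sketch assumes the third-party Python “cryptography” package and X25519, HKDF, and AES-GCM as primitives; the function and variable names are illustrative only and are not the video conference provider110's actual protocol.

# Minimal sketch, under assumptions: a meeting host wraps a per-meeting
# content key for each participant's public key. Names are illustrative.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def wrap_meeting_key(meeting_key: bytes, participant_pub: X25519PublicKey):
    """Encrypt (wrap) the meeting key so only the holder of the matching
    private key can recover it."""
    ephemeral = X25519PrivateKey.generate()
    shared = ephemeral.exchange(participant_pub)              # ECDH shared secret
    wrap_key = HKDF(algorithm=hashes.SHA256(), length=32,
                    salt=None, info=b"meeting-key-wrap").derive(shared)
    nonce = os.urandom(12)
    ciphertext = AESGCM(wrap_key).encrypt(nonce, meeting_key, None)
    return ephemeral.public_key(), nonce, ciphertext

# Host side: one symmetric key per meeting, wrapped once per participant.
meeting_key = AESGCM.generate_key(bit_length=256)
participants = [X25519PrivateKey.generate() for _ in range(3)]   # stand-ins
wrapped = [wrap_meeting_key(meeting_key, p.public_key()) for p in participants]

Because the provider never sees the unwrapped meeting key in such a configuration, it cannot decrypt the multimedia streams, which is the trade-off against server-side recording and transcription noted above.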
The real-time media servers212provide multiplexed multimedia streams to meeting participants, such as the client devices220-250shown inFIG.2. While video and audio streams typically originate at the respective client devices, they are transmitted from the client devices220-250to the video conference provider210via one or more networks where they are received by the real-time media servers212. The real-time media servers212determine which protocol is optimal based on, for example, proxy settings and the presence of firewalls. For example, the client device might select among UDP, TCP, TLS, or HTTPS for audio and video, and UDP for content screen sharing. The real-time media servers212then multiplex the various video and audio streams based on the target client device and communicate multiplexed streams to each client device. For example, the real-time media servers212receive audio and video streams from client devices220-240and only an audio stream from client device250. The real-time media servers212then multiplex the streams received from devices230-250and provide the multiplexed streams to client device220. The real-time media servers212are adaptive, for example, reacting to real-time network and client changes, in how they provide these streams. For example, the real-time media servers212may monitor parameters such as a client's bandwidth, CPU usage, memory, and network I/O as well as network parameters such as packet loss, latency, and jitter to determine how to modify the way in which streams are provided. The client device220receives the stream, performs any decryption, decoding, and demultiplexing on the received streams, and then outputs the audio and video using the client device's video and audio devices. In this example, the real-time media servers do not multiplex client device220's own video and audio feeds when transmitting streams to it. Instead, each client device220-250only receives multimedia streams from other client devices220-250. For telephony devices that lack video capabilities, e.g., client device250, the real-time media servers212only deliver multiplexed audio streams. The client device220may receive multiple streams for a particular communication, allowing the client device220to switch between streams to provide a higher quality of service. In addition to multiplexing multimedia streams, the real-time media servers212may also decrypt incoming multimedia streams in some examples. As discussed above, multimedia streams may be encrypted between the client devices220-250and the video conference system210. In some such examples, the real-time media servers212may decrypt incoming multimedia streams, multiplex the multimedia streams appropriately for the various clients, and encrypt the multiplexed streams for transmission. In some examples, to provide multiplexed streams, the video conference provider210may receive multimedia streams from the various participants and publish those streams to the various participants to subscribe to and receive. Thus, the video conference provider210notifies a client device, e.g., client device220, about various multimedia streams available from the other client devices230-250, and the client device220can select which multimedia stream(s) to subscribe to and receive. In some examples, the video conference provider210may provide to each client device the available streams from the other client devices, but not from the respective client device itself, though in other examples it may provide all available streams to all available client devices.
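The stream-selection behavior described above can be summarized in a short sketch: for each receiving client, forward the other participants' streams, never the client's own, and send audio only to clients that cannot render video (e.g., telephony devices). The data shapes and names below are assumptions for illustration, not the real-time media servers212' actual implementation.

# Sketch only: per-client stream selection. Each client receives the other
# participants' streams, never its own; audio-only clients get no video.
from dataclasses import dataclass

@dataclass
class Participant:
    client_id: str
    sends_video: bool      # telephony devices send audio only
    wants_video: bool      # telephony devices cannot render video

def streams_for(receiver: Participant, participants: list[Participant]) -> list[str]:
    selected = []
    for p in participants:
        if p.client_id == receiver.client_id:
            continue                              # never echo a client's own feed
        selected.append(f"audio:{p.client_id}")
        if p.sends_video and receiver.wants_video:
            selected.append(f"video:{p.client_id}")
    return selected

room = [Participant("220", True, True), Participant("230", True, True),
        Participant("240", True, True), Participant("250", False, False)]
print(streams_for(room[0], room))   # audio+video from 230/240, audio only from 250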
Using such a multiplexing technique, the video conference provider210may enable multiple different streams of varying quality, thereby allowing client devices to change streams in real-time as needed, based on network bandwidth, latency, etc. As mentioned above with respect toFIG.1, the video conference provider210may provide certain functionality with respect to unencrypted multimedia streams at a user's request. For example, the meeting host may be able to request that the meeting be recorded or that a transcript of the audio streams be prepared, which may then be performed by the real-time media servers212using the decrypted multimedia streams, or the recording or transcription functionality may be off-loaded to a dedicated server (or servers), e.g., cloud recording servers, for recording the audio and video streams. In some examples, the video conference provider210may allow a meeting participant to notify it of inappropriate behavior or content in a meeting. Such a notification may trigger the real-time media servers212to record a portion of the meeting for review by the video conference provider210. Still other functionality may be implemented to take actions based on the decrypted multimedia streams at the video conference provider, such as monitoring video or audio quality, adjusting or changing media encoding mechanisms, etc. It should be appreciated that multiple real-time media servers212may be involved in communicating data for a single meeting and multimedia streams may be routed through multiple different real-time media servers212. In addition, the various real-time media servers212may not be co-located, but instead may be located at multiple different geographic locations, which may enable high-quality communications between clients that are dispersed over wide geographic areas, such as being located in different countries or on different continents. Further, in some examples, one or more of these servers may be co-located on a client's premises, e.g., at a business or other organization. For example, different geographic regions may each have one or more real-time media servers212to enable client devices in the same geographic region to have a high-quality connection into the video conference provider210via local servers212to send and receive multimedia streams, rather than connecting to a real-time media server located in a different country or on a different continent. The local real-time media servers212may then communicate with physically distant servers using high-speed network infrastructure, e.g., internet backbone network(s), that otherwise might not be directly available to client devices220-250themselves. Thus, routing multimedia streams may be distributed throughout the video conference system210and across many different real-time media servers212. Turning to the network services servers214, these servers214provide administrative functionality to enable client devices to create or participate in meetings, send meeting invitations, create or manage user accounts or subscriptions, and other related functionality. Further, these servers may be configured to perform different functionalities or to operate at different levels of a hierarchy, e.g., for specific regions or localities, to manage portions of the video conference provider under a supervisory set of servers. When a client device220-250accesses the video conference provider210, it will typically communicate with one or more network services servers214to access their account or to participate in a meeting.
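As a rough illustration of the geographic routing described above, the following sketch prefers a real-time media server in the client's own region when one is available and otherwise falls back; the structures, field names, and load-based tie-breaking are hypothetical.

# Hypothetical sketch of region-aware media-server selection.
from collections import defaultdict

def pick_media_server(client_region: str, servers: list[dict],
                      default_region: str = "us-east") -> dict:
    by_region = defaultdict(list)
    for server in servers:
        by_region[server["region"]].append(server)
    candidates = (by_region.get(client_region)
                  or by_region.get(default_region)
                  or servers)
    # Choose the least-loaded candidate as a simple stand-in for real routing.
    return min(candidates, key=lambda s: s["load"])

servers = [{"id": "rtms-1", "region": "eu-west", "load": 0.4},
           {"id": "rtms-2", "region": "us-east", "load": 0.2}]
print(pick_media_server("eu-west", servers)["id"])   # rtms-1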
When a client device220-250first contacts the video conference provider210in this example, it is routed to a network services server214. The client device may then provide access credentials for a user, e.g., a username and password or single sign-on credentials, to gain authenticated access to the video conference provider210. This process may involve the network services servers214contacting a user identity provider215to verify the provided credentials. Once the user's credentials have been accepted, the client device220-250may perform administrative functionality, like updating user account information, if the user has an identity with the video conference provider210, or scheduling a new meeting, by interacting with the network services servers214. In some examples, users may access the video conference provider210anonymously. When communicating anonymously, a client device220-250may communicate with one or more network services servers214but only provide information to create or join a meeting, depending on what features the video conference provider allows for anonymous users. For example, an anonymous user may access the video conference provider using client device220and provide a meeting ID and passcode. The network services server214may use the meeting ID to identify an upcoming or on-going meeting and verify the passcode is correct for the meeting ID. After doing so, the network services server(s)214may then communicate information to the client device220to enable the client device220to join the meeting and communicate with appropriate real-time media servers212. In cases where a user wishes to schedule a meeting, the user (anonymous or authenticated) may select an option to schedule a new meeting and may then select various meeting options, such as the date and time for the meeting, the duration for the meeting, a type of encryption to be used, one or more users to invite, privacy controls (e.g., not allowing anonymous users, preventing screen sharing, manually authorizing admission to the meeting), meeting recording options, etc. The network services servers214may then create and store a meeting record for the scheduled meeting. When the scheduled meeting time arrives (or within a threshold period of time in advance), the network services server(s)214may accept requests to join the meeting from various users. To handle requests to join a meeting, the network services server(s)214may receive meeting information, such as a meeting ID and passcode, from one or more client devices220-250. The network services server(s)214locate a meeting record corresponding to the provided meeting ID and then confirm whether the scheduled start time for the meeting has arrived, whether the meeting host has started the meeting, and whether the passcode matches the passcode in the meeting record. If the request is made by the host, the network services server(s)214activates the meeting and connects the host to a real-time media server212to enable the host to begin sending and receiving multimedia streams. Once the host has started the meeting, subsequent users requesting access will be admitted to the meeting if the meeting record is located and the passcode matches the passcode supplied by the requesting client device220-250. In some examples, additional access controls may be used as well.
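The join-request checks described above can be sketched as follows; the meeting-record fields, return values, and function names are illustrative assumptions rather than the network services servers214' actual schema.

# Hedged sketch of the join checks described above.
import hmac
from dataclasses import dataclass

@dataclass
class MeetingRecord:
    meeting_id: str
    passcode: str
    host_id: str
    started: bool = False

def handle_join(records: dict, meeting_id: str, passcode: str, user_id: str) -> str:
    record = records.get(meeting_id)
    if record is None:
        return "deny: no meeting record for this meeting ID"
    if not hmac.compare_digest(record.passcode.encode(), passcode.encode()):
        return "deny: passcode mismatch"
    if user_id == record.host_id:
        record.started = True          # the host activates the meeting
        return "admit host; connect to a real-time media server"
    if not record.started:
        return "wait: the host has not started the meeting yet"
    return "admit participant; connect to a real-time media server"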
If the network services server(s)214determines to admit the requesting client device220-250to the meeting, the network services server214identifies a real-time media server212to handle multimedia streams to and from the requesting client device220-250and provides information to the client device220-250to connect to the identified real-time media server212. Additional client devices220-250may be added to the meeting as they request access through the network services server(s)214. After joining a meeting, client devices will send and receive multimedia streams via the real-time media servers212, but they may also communicate with the network services servers214as needed during meetings. For example, if the meeting host leaves the meeting, the network services server(s)214may appoint another user as the new meeting host and assign host administrative privileges to that user. Hosts may have administrative privileges to allow them to manage their meetings, such as by enabling or disabling screen sharing, muting or removing users from the meeting, creating sub-meetings or “break-out” rooms, recording meetings, etc. Such functionality may be managed by the network services server(s)214. For example, if a host wishes to remove a user from a meeting, they may identify the user and issue a command through a user interface on their client device. The command may be sent to a network services server214, which may then disconnect the identified user from the corresponding real-time media server212. If the host wishes to create a break-out room for one or more meeting participants to join, such a command may also be handled by a network services server214, which may create a new meeting record corresponding to the break-out room and then connect one or more meeting participants to the break-out room similarly to how it originally admitted the participants to the meeting itself. In addition to creating and administering on-going meetings, the network services server(s)214may also be responsible for closing and tearing-down meetings once they have completed. For example, the meeting host may issue a command to end an on-going meeting, which is sent to a network services server214. The network services server214may then remove any remaining participants from the meeting, communicate with one or more real-time media servers212to stop streaming audio and video for the meeting, and deactivate, e.g., by deleting a corresponding passcode for the meeting from the meeting record, or delete the meeting record(s) corresponding to the meeting. Thus, if a user later attempts to access the meeting, the network services server(s)214may deny the request. Depending on the functionality provided by the video conference provider, the network services server(s)214may provide additional functionality, such as by providing private meeting capabilities for organizations, special types of meetings (e.g., webinars), etc. Such functionality may be provided according to various examples of video conferencing providers according to this description. Referring now to the video room gateway servers216, these servers216provide an interface between the video conference provider210and dedicated video conferencing hardware, such as may be used in dedicated video conferencing rooms. Such video conferencing hardware may include one or more cameras and microphones and a computing device designed to receive video and audio streams from each of the cameras and microphones and connect with the video conference provider210.
For example, the video conferencing hardware may be provided by the video conference provider to one or more of its subscribers, which may provide access credentials to the video conferencing hardware to use to connect to the video conference provider210. The video room gateway servers216provide specialized authentication and communication with the dedicated video conferencing hardware that may not be available to other client devices220-230,250. For example, the video conferencing hardware may register with the video conference provider210when it is first installed and the video room gateway servers216may authenticate the video conferencing hardware using such registration as well as information provided to the video room gateway server(s)216when dedicated video conferencing hardware connects to it, such as device ID information, subscriber information, hardware capabilities, hardware version information, etc. Upon receiving such information and authenticating the dedicated video conferencing hardware, the video room gateway server(s)216may interact with the network services servers214and real-time media servers212to allow the video conferencing hardware to create or join meetings hosted by the video conference provider210. Referring now to the telephony gateway servers218, these servers218enable and facilitate telephony devices' participation in meetings hosted by the video conference provider210. Because telephony devices communicate using the PSTN and not using computer networking protocols, such as TCP/IP, the telephony gateway servers218act as an interface between the PSTN and the networking system used by the video conference provider210. For example, if a user uses a telephony device to connect to a meeting, they may dial a phone number corresponding to one of the video conference provider's telephony gateway servers218. The telephony gateway server218will answer the call and generate audio messages requesting information from the user, such as a meeting ID and passcode. The user may enter such information using buttons on the telephony device, e.g., by sending dual-tone multi-frequency (“DTMF”) audio signals to the telephony gateway server218. The telephony gateway server218determines the numbers or letters entered by the user and provides the meeting ID and passcode information to the network services servers214, along with a request to join or start the meeting, generally as described above. Once the telephony client device250has been accepted into a meeting, the telephony gateway server218is joined to the meeting on the telephony device's behalf. After joining the meeting, the telephony gateway server218receives an audio stream from the telephony device and provides it to the corresponding real-time media server212; likewise, the telephony gateway server218receives audio streams from the real-time media server212, decodes them, and provides the decoded audio to the telephony device. Thus, the telephony gateway servers218operate essentially as client devices, while the telephony device operates largely as an input/output device, e.g., a microphone and speaker, for the corresponding telephony gateway server218, thereby enabling the user of the telephony device to participate in the meeting despite not using a computing device or video. It should be appreciated that the components of the video conference provider210discussed above are merely examples of such devices and an example architecture.
Some video conference providers may provide more or less functionality than described above and may not separate functionality into different types of servers as discussed above. Instead, any suitable servers and network architectures may be used according to different examples.
II. Device Management System
A. Components
FIG.3shows a schematic diagram of a system300for encryption-based device enrollment, according to some embodiments. Referring toFIG.3, a device management system302can include one or more components to facilitate secure enrollment of new devices into a user account, which can prevent malicious attackers from compromising the user account through man-in-the-middle (“MITM”) attacks, for example. In some instances, the device management system302includes a user-authentication subsystem304, a cryptographic-key manager316, a signature-chain manager318, and an attestation-verification subsystem320. The user-authentication subsystem304can retrieve authentication information records from a database308that include individual records corresponding with user accounts. The database308can be a server that provides database records to the device management system302, including providing access to cryptographic keys or signature chains associated with the user accounts. In some instances, each of these database records includes a plurality of account credentials for various services (e.g., a video-conferencing service) that are associated with a respective user account. Those account credentials may be used to provide a login service for the respective user. For example, a user can use one of the enrolled devices310a-cto log into a video-conference provider312using a username and password for an account that is maintained on the user-authentication subsystem304. In some instances, the user-authentication subsystem304accesses a signature chain associated with the user from the database308and determines whether a device identifier of an account-accessing device (e.g., the enrolled device310b) exists in the signature chain. If so, the user-authentication subsystem304indicates that the login is successful, and the user can begin accessing services provided by the videoconference provider312. The user-authentication subsystem304can also detect an unenrolled device314that is used to access the user account. In some instances, the attempt to access the user account includes a user inputting a user identifier and a password of the user account via the unenrolled device314. In response, the user-authentication subsystem304can determine a device identifier of the unenrolled device314, access a signature chain associated with the user, and determine whether the device identifier of the unenrolled device314exists in the signature chain. If the device identifier of the unenrolled device314does not exist in the signature chain, the user-authentication subsystem304initiates the device enrollment process for the unenrolled device314. The cryptographic-key manager316can be implemented in the device management system302in order to create, import, and rotate cryptographic keys generated by each of the enrolled devices310a-c. For example, the cryptographic-key manager316can retrieve cryptographic keys associated with an enrolled device (e.g., the enrolled device310a), and use the cryptographic keys such that data transmitted between the enrolled device and the videoconference provider312are encrypted.
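Returning to the user-authentication subsystem304, a minimal sketch of the device check described above might look like the following; the link shapes are assumptions modeled on the signature-chain records discussed later in this disclosure.

# Sketch only: a login succeeds when the device identifier appears as an
# active entry in the account's signature chain; otherwise enrollment starts.
def active_device_ids(sigchain: list) -> set:
    active = set()
    for link in sigchain:                    # replay the chain in order
        if link["linkType"] == "DeviceAdd":
            active.add(link["deviceID"])
        elif link["linkType"] == "DeviceRevoke":
            active.discard(link["deviceID"])
    return active

def handle_login(sigchain: list, device_id: str) -> str:
    if device_id in active_device_ids(sigchain):
        return "login successful"
    return "device not in signature chain; start device enrollment"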
In some instances, the enrolled device310aobtains, from the cryptographic-key manager316, public keys for each of the other client devices participating in a videoconference meeting and securely exchanges a set of keys to encrypt and decrypt multimedia content transmitted during the meeting. In some instances, the cryptographic-key manager316accesses the cryptographic keys of the enrolled device from a corresponding sequential record of the signature chain associated with the user. The signature-chain manager318can manage a signature chain for each user account of the video conference provider312. The signature chain can be a cryptographically verifiable ledger to track various transactions performed for the user account. The signature chain can include multiple linked, or “chained” sequential records. In some instances, the sequential record includes identification information corresponding to the user associated with the user account, a device identifier of a corresponding enrolled device, a set of cryptographic keys for encrypting data, a timestamp identifying when the device was added into the signature chain, a sequence number of the sequential record, and the like. The sequential record can also include hashed information linking it to a previous sequential record in the signature chain. The sequential records of the signature chain can reflect various transactions performed for the user account, and the hashed information can ensure integrity of such sequential records. Additional aspects of the signature chains are described in Section III provided herein. The attestation-verification subsystem320can receive and process attestation messages provided by one of the enrolled devices310a-c, indicating that the unenrolled device314can be enrolled into the user account. The attestation message can refer to a machine-readable, programmatically provable statement by which the enrolled device attests that the unenrolled device314has been authenticated and should be enrolled into the user account. In some instances, a cryptographic signature of the attestation message is generated using a private cryptographic key of a corresponding enrolled device, and the attestation-verification subsystem320can then verify the attestation message by decrypting the cryptographic signature using a public cryptographic key of the enrolled device stored in the signature chain. In some instances, the attestation-verification subsystem320compares a timestamp associated with the attestation message with a time at which the attestation message was received, in order to filter stale attestation messages that may have been compromised by malicious attacks.
B. Methods for Encryption-Based Device Enrollment
FIG.4illustrates a process400for encryption-based device enrollment, according to some embodiments. For illustrative purposes, the process400is described with reference to the components illustrated inFIG.3, though other implementations are possible. For example, the program code for a device management system302ofFIG.3, which is stored in a non-transitory computer-readable medium, is executed by one or more processing devices to cause a server system to perform one or more operations described herein. At step402, a device management system (e.g., the device management system302ofFIG.3) detects an attempt to access a user account by an unenrolled device. The device management system can initially accept account credentials of the user account, including a user identifier and a password.
This can allow the device management system to identify the enrolled devices to be used for the encryption-based device enrollment process. In some instances, the device management system accesses a signature chain associated with the user account from the database and detects that a device identifier of the unenrolled device does not exist in the signature chain. At step404, the device management system identifies enrolled devices corresponding to the user account by accessing the signature chain of the user account. In some instances, the signature chain includes a sequential record for each of the enrolled devices, in which the sequential record can identify a device identifier of the enrolled device and its respective cryptographic keys. The sequential record can also include hashed information linking it to a previous sequential record in the signature chain. The sequential records of the signature chain can reflect various transactions performed for the user account, and the hashed information can ensure integrity of such sequential records. At step406, the device management system facilitates a transmission of an enrollment request and a corresponding cryptographic signature from the unenrolled device to each of the enrolled devices. In some instances, the cryptographic signature is generated using a private cryptographic key of the unenrolled device, to provide an assurance that the enrollment request is not compromised or tampered with by another user. The enrollment request can include a set of long-term cryptographic keys to be added into a sequential record in the signature chain for the unenrolled device. In some instances, a public cryptographic key of the unenrolled device is also transmitted to the enrolled devices. The enrolled devices can be configured to cryptographically validate the enrollment request at least by decrypting the cryptographic signature of the enrollment request using the public cryptographic key of the unenrolled device. In response to cryptographically validating the enrollment request, each of the enrolled devices can allow the user to confirm whether the unenrolled device should be enrolled into the user account (such as with a pop-up message). Once the user confirms through one of the enrolled devices that the unenrolled device should be enrolled, the selected enrolled device can cause a passcode to be displayed and can generate an encrypted attestation message. The encrypted attestation message can indicate that the unenrolled device has been authenticated by the selected enrolled device. In some instances, the encrypted attestation message includes the set of long-term cryptographic keys of the unenrolled device. The encrypted attestation message can be encrypted using a symmetric cryptographic key derived from the passcode. In some instances, the symmetric cryptographic key is derived using a password-based key derivation function (e.g., PBKDF1, PBKDF2). The password-based key derivation function can include applying a hash-based message authentication code (HMAC) to the passcode for a predefined number of times, until the symmetric cryptographic key is generated. The selected enrolled device can transmit the encrypted attestation message to the unenrolled device. At step408, the device management system receives a decrypted attestation message from the unenrolled device. In particular, the unenrolled device can receive the encrypted attestation message transmitted from the selected enrolled device.
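A minimal sketch of the passcode-based key derivation and attestation encryption described above follows. PBKDF2-HMAC follows the text; AES-GCM, the salt handling, and the iteration count are assumptions rather than documented choices, and the third-party Python “cryptography” package is assumed to be available.

# Sketch, under assumptions: derive a symmetric key from the passcode with
# PBKDF2-HMAC and use it to encrypt/decrypt the attestation message.
import hashlib, json, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def key_from_passcode(passcode: str, salt: bytes, iterations: int = 100_000) -> bytes:
    # Repeated HMAC applications via PBKDF2, as described above.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, iterations, dklen=32)

def encrypt_attestation(attestation: dict, passcode: str) -> dict:
    salt, nonce = os.urandom(16), os.urandom(12)
    key = key_from_passcode(passcode, salt)
    ciphertext = AESGCM(key).encrypt(nonce, json.dumps(attestation).encode(), None)
    return {"salt": salt, "nonce": nonce, "ciphertext": ciphertext}

def decrypt_attestation(blob: dict, passcode: str) -> dict:
    key = key_from_passcode(passcode, blob["salt"])
    plaintext = AESGCM(key).decrypt(blob["nonce"], blob["ciphertext"], None)
    return json.loads(plaintext)

With an authenticated cipher such as AES-GCM, an incorrect passcode derives the wrong key and decryption simply fails, which mirrors the statement below that incorrect passcodes cannot decrypt the attestation message.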
The unenrolled device can decrypt the encrypted attestation message in response to the user inputting the passcode displayed on the enrolled device into the unenrolled device via user-interface input. The inputted passcode can be derived into another symmetric cryptographic key that corresponds to the symmetric cryptographic key used by the selected enrolled device to encrypt the attestation message. The unenrolled device can then use the other symmetric cryptographic key to decrypt the encrypted attestation message. Using symmetric keys derived from the passcode prevents other devices from decrypting the attestation message using incorrect passcodes. At step410, the device management system updates the signature chain to include a new sequential record for the unenrolled device. In some instances, the new sequential record of the unenrolled device includes the decrypted attestation message and the set of long-term cryptographic keys thereof. In addition, the new sequential record indicates that the unenrolled device has been associated with the user account as a new enrolled device. Process400terminates thereafter.
III. Signature Chain
Both accounts and users have states that change over time. For example, a user can change their email address or add and remove devices. To keep track of these states that change over time, the sequence of changes can be recorded in a data structure called a signature chain (alternatively referred to as a “sigchain”). With sigchains, the only valid changes to the chain are extensions of the sequence. Since changes cannot be “forgotten,” the device management system cannot rewrite portions of the sequence. In some examples, a sigchain is a sequence of statements or records (alternatively referred to as “links”), where each sequential record includes a collision-resistant hash of the previous link. These sequential records can be considered state transitions that modify an object (e.g., the sigchain state). For a user sigchain, the sigchain state would contain the list of active devices, list of revoked devices, the trust graph, and the list of email addresses and accounts historically associated with the user. In some instances, a transition is accepted as valid if it satisfies several conditions, including that:
1. The link is of a known type.
2. The link has the correct fields for that type.
3. The transition is admissible given the current state.
4. The link correctly includes the hash of the previous link.
5. Some links require cryptographic signatures by the devices authorizing the transition to be considered valid. In these cases, the signatures are encoded as part of the links to compute link hashes.
Examples of admissibility rules for a user sigchain include that a device can only be revoked if it was in an active state, and that signatures over revocation links must be by another device in the active state. Since each of the links in a sigchain contains a hash of the previous link, the hash of the last link can include a hash of the entire sigchain state. Each sigchain link also contains an incrementing sequence number. For example, a sequential record may be a data structure that identifies a sigchain type, the previous sequential record's sequence number, and the previous sequential record's hash as the sigchain tail:
{
“sigchainType”: “User”,
“lastSequenceNumber”: 15,
“lastLinkHash”: “484ad7 . . . ”
}
The sigchain can then be queried for new links, and the querying device can ensure that the first new link contains a previous-link hash matching the cached tail.
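A minimal sketch of the link-validity conditions listed above follows. SHA-256 over a canonical JSON encoding is an assumption, and only the known-type, previous-hash, and sequence-number checks are modeled here; field checks, admissibility rules, and signature checks are omitted.

# Sketch only: append a link after checking its type, previous-link hash,
# and sequence number. Field names mirror the examples but are assumptions.
import hashlib, json

KNOWN_TYPES = {"DeviceAdd", "DeviceRevoke", "EmailChange"}   # illustrative

def link_hash(link: dict) -> str:
    return hashlib.sha256(json.dumps(link, sort_keys=True).encode()).hexdigest()

def append_link(chain: list, new_link: dict) -> None:
    if new_link["linkType"] not in KNOWN_TYPES:
        raise ValueError("unknown link type")
    if chain:
        tail = chain[-1]
        if new_link["lastLinkHash"] != link_hash(tail):
            raise ValueError("previous-link hash mismatch")
        if new_link["sequenceNumber"] != tail["sequenceNumber"] + 1:
            raise ValueError("sequence number must increment by one")
    chain.append(new_link)

Because each link commits to the hash of its predecessor, rewriting or removing an earlier link changes every later hash, which is what prevents the chain from being silently rewritten.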
The example sigchain is encoded in JSON, but the actual implementation may use different application encodings and data structures that will be apparent to those skilled in the art without departing from the spirit and scope of the disclosure. In some instances, different applications require different levels of access to sigchains. For example, although a user should be able to fully audit the history of past email addresses stored in their sigchain, other users (e.g., meeting participants) may only need to see the most recent email address in the sigchain to display it in the UI. In this example, rather than being directly encoded, sensitive information on a sigchain link can be obfuscated. An example sigchain link may thus include:
COMMIT([email protected])=HMAC(randomKey; [email protected])
Continuing with the above example, an identity provider (e.g., the user identity provider115) provides the entire sigchain link to users, so that they can check the validity of its signatures and hashes but would not retrieve the plaintext email address (e.g., randomKey is not transmitted). By contrast, authorized users (e.g., Alice's devices) can obtain the sigchain with the plaintext email addresses and 32-byte random keys corresponding to previous email addresses. COMMIT can also be used to selectively delete parts of links such as device names. For example, the identity provider can throw away the random HMAC key as well as the plaintext data, although the signature over the sigchain link will still verify.
A. Types of Signature Chains
Devices, users, and accounts identified in the signature chain can each be internally represented by unique immutable identifiers (e.g., deviceID, userID, accountID). In some instances, each of the devices, users, and accounts is also associated with more user-friendly but mutable identifiers (e.g., device names, email addresses, and account domain names). In some instances, different types of sigchains are used to represent various different components associated with the user of a service provider (e.g., the video conference provider110) and their relationships. For example, the following types of signature chains can be considered:
1. For each user identifier (e.g., userID), user sigchains store information related to that user's identity, including the user's email address, account identifiers, and the set of their devices and their trust relationships;
2. For each email address, email sigchains store the associated user identifiers;
3. For each account identifier, account sigchains store both the account domain name (“ADN”) and identity provider associated with the account;
4. For each domain name, ADN sigchains keep track of the account identifier to which the domain is associated; and
5. For each account identifier, membership sigchains keep track of the user identifiers associated with the account.
Continuing with the above example, some of the information stored on the above sigchains can be redundant. For instance, a mapping between an email and the corresponding user identifier is recorded both in a user sigchain and in an email sigchain. This can prevent the identity provider from claiming that two separate user identifiers are associated with the same email address at the same time. In effect, some operations will cause two or more signature chains to be updated at the same time. Additionally or alternatively, a user can be associated with a subset of the set of signature chains based on an extent and type of information available for the user.
For example, if a particular user account only includes a single user and does not have an ADN, then there will be no corresponding account or membership sigchains until the account either obtains another user or an ADN.
B. Adding Sequential Records in Signature Chains
In addition to recording changes to user information, signature chains can be used to enroll one or more devices (e.g., client devices140-160ofFIG.1) under the user account. A sequential record of a device can be added into the signature chain in response to authenticating the device and confirming that the device enrollment has been requested by the authorized user. In some instances, a sequential record for an enrolled device includes a set of long-term public device keys. An illustrative example shows a sequential record being represented by the following data structure:
{
“sigchainType”: “User”,
“linkType”: “DeviceAdd”,
. . .
“deviceID”: “ebc0d2 . . . ”,
“deviceName”: COMMIT({“name”: “Alice's Work Smartphone”, “version”: 1}),
“ed25519PublicKey”: “ce8564 . . . ”,
“x25519PublicKey”: “ad7913 . . . ”,
“perUserX25519PublicKey”: “c2cce1 . . . ”,
“emailChange”: . . . ,
“accountChange”: . . . ,
“revokeDeviceIDs”: [“ac98ad . . . ”, . . . ]
}
Continuing with the above example, the sequential record includes a device identifier field (e.g., “deviceID”), a signing public cryptographic key field (e.g., “ed25519PublicKey”), an encryption public cryptographic key field (e.g., “x25519PublicKey”), and a device name field (e.g., “deviceName”). For each cryptographic key field, the signature chain can also identify a type of algorithm associated with the public cryptographic key. Further, the device name can be obfuscated to prevent the device name from being revealed to other users. In order to support reuse of device names, the device name can be associated with an incrementing version component which will be visible in a user interface. Device names allow the users to have a human-readable, unambiguous way to distinguish their devices. In some instances, the sequential record also includes a new per-user public key field (e.g., “perUserX25519PublicKey”). The per-user keys (PUKs) can be symmetric cryptographic keys that can facilitate syncing encrypted data between the user's devices. Devices use the latest per-user key to encrypt all content, but the previous per-user keys are still useful for decrypting older data. In some instances, per-user keys can be asymmetric cryptographic keys that can be used to encrypt data for other users.
C. Other Types of Operations Involving Sequential Records
In addition to enrolling new devices, the device management system can revoke or otherwise remove devices that were previously added into the signature chain by adding a corresponding sequential record. For example, when a device is stolen, lost, or no longer used for the video conference provider, the user can revoke the device from being associated with the user account. If one of the user's valid devices performs the revocation, it can also rotate the PUKs in remaining enrolled devices and sign the sequential record to guarantee integrity of the new PUKs. In addition to revoking devices, devices can be renamed if desired, in which case the changes are signed with the corresponding device's public key. Additionally or alternatively, if the user suspects one of its enrolled devices was temporarily compromised, or if they have institutional key rotation policies, the user can rotate the device keys and PUKs of each enrolled device.
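Returning to the DeviceAdd example above, the following sketch assembles such a record, generating fresh Ed25519 (signing) and X25519 (encryption) keys with the third-party Python “cryptography” package and obfuscating the device name with an HMAC-based COMMIT as described earlier; the hex serialization, field layout, and helper names are assumptions, not the provider's actual format.

# Sketch only: build a DeviceAdd-style record with freshly generated keys and
# a COMMIT(...) = HMAC(randomKey; value) device-name commitment.
import hashlib, hmac, json, secrets
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def commit(value: dict) -> tuple:
    """Obfuscate a value; the 32-byte random key is shared only with the
    user's own devices, which can later reveal and verify the value."""
    random_key = secrets.token_bytes(32)
    digest = hmac.new(random_key, json.dumps(value, sort_keys=True).encode(),
                      hashlib.sha256).hexdigest()
    return random_key, digest

def public_hex(private_key) -> str:
    return private_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw).hex()

def build_device_add(device_name: str, version: int = 1) -> dict:
    signing_key = Ed25519PrivateKey.generate()       # ed25519 signing key
    encryption_key = X25519PrivateKey.generate()     # x25519 encryption key
    per_user_key = X25519PrivateKey.generate()       # new per-user key (PUK)
    _, name_commitment = commit({"name": device_name, "version": version})
    return {
        "sigchainType": "User",
        "linkType": "DeviceAdd",
        "deviceID": secrets.token_hex(16),
        "deviceName": name_commitment,
        "ed25519PublicKey": public_hex(signing_key),
        "x25519PublicKey": public_hex(encryption_key),
        "perUserX25519PublicKey": public_hex(per_user_key),
    }

print(build_device_add("Alice's Work Smartphone"))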
This key-rotation operation can maintain the same device identifiers but select new signing and encryption keys, as well as a new set of PUKs. After keys are rotated, only signatures and ciphertexts from the new public keys can be used for the corresponding devices. The key rotation can be initiated by adding a sequential record signed by the device's previous public key in addition to the new key.
IV. Authenticating Enrollment Requests
FIG.5illustrates a swim lane diagram for a process500for authenticating enrollment requests, according to some embodiments. The process500includes an unenrolled device502accessing a user account associated with a device management system504(step510). In some instances, the unenrolled device502accesses the user account by inputting account credentials of the user account via its user interface. At step512, the device management system504detects that the unenrolled device502is not associated with the user account. In particular, the device management system504accesses a signature chain associated with the user account from the database and detects that a device identifier of the unenrolled device does not exist in the signature chain. At step514, the device management system504identifies one or more enrolled devices that are enrolled in the user account. In some instances, the one or more enrolled devices include a first enrolled device506and a second enrolled device508. The device management system can determine the one or more enrolled devices that are identified in the signature chain of the user account. The signature chain can include a sequential record for each of the one or more enrolled devices, in which the sequential record can be used to verify that a corresponding enrolled device is associated with the user account. In some instances, the sequential record includes a device identifier of the corresponding enrolled device. At step516, the device management system504instructs the unenrolled device to generate and transmit an enrollment request and a corresponding cryptographic signature to the one or more enrolled devices (e.g., the first enrolled device506, the second enrolled device508). For example, the device management system504causes a prompt to be presented on the unenrolled device requesting whether the user would like to enroll the device. In response to the user confirming the prompt, the unenrolled device generates the enrollment request (step518), which can be transmitted to each of the first and second enrolled devices506and508. The cryptographic signature can be generated by the unenrolled device502generating a hash of the enrollment request and encrypting the hash using a private cryptographic key of the unenrolled device. The hash representing the enrollment request is unique to the enrollment request. The enrollment request can include a set of long-term cryptographic keys of the unenrolled device that are to be added into a new sequential record of the signature chain. In some instances, the enrollment request further includes a reported location of the unenrolled device. The reported location can be further used to authenticate the enrollment request. For example, the enrolled device can authenticate the enrollment request by comparing the reported location of the unenrolled device and an estimated location of the unenrolled device. The estimated location is determined based on an IP address of the unenrolled device, from which the enrollment request was transmitted.
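As a sketch of the request signing described above (and of the validation the enrolled devices perform at the next step), the unenrolled device below signs the enrollment request with its private key and an enrolled device verifies it with the matching public key. The text frames the signature as an encrypted hash, which matches RSA-style signatures; this sketch instead uses Ed25519, which signs the message directly, to match the key types shown in the signature-chain records, and all names are illustrative.

# Sketch, under assumptions: sign an enrollment request on the unenrolled
# device and verify it on an enrolled device. Requires the "cryptography" package.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_enrollment_request(request: dict, device_key: Ed25519PrivateKey) -> bytes:
    return device_key.sign(json.dumps(request, sort_keys=True).encode())

def validate_enrollment_request(request: dict, signature: bytes,
                                unenrolled_public_key) -> bool:
    try:
        unenrolled_public_key.verify(signature,
                                     json.dumps(request, sort_keys=True).encode())
        return True
    except InvalidSignature:
        return False

# Unenrolled device side, followed by an enrolled device's check:
unenrolled_key = Ed25519PrivateKey.generate()
request = {"deviceID": "new-device", "longTermKeys": ["..."], "reportedLocation": "..."}
sig = sign_enrollment_request(request, unenrolled_key)
assert validate_enrollment_request(request, sig, unenrolled_key.public_key())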
At step520, the unenrolled device transmits the enrollment request to each of the first and second enrolled devices506and508. The unenrolled device can also transmit or otherwise make available a public cryptographic key of the unenrolled device that can be used for cryptographically verifying the enrollment request. In some instances, the device management system504receives the enrollment request and then forwards the enrollment request to each of the first and second enrolled devices506and508. At step522, each of the first and second enrolled devices506and508can cryptographically validate the enrollment request at least by decrypting the cryptographic signature of the enrollment request using the public cryptographic key of the unenrolled device. In some instances, each of the first and second enrolled devices506and508generates their own respective hash of the enrollment request and decrypts the cryptographic signature using the public cryptographic key of the unenrolled device. The enrolled devices then compare the respective hashes of the enrollment request against another hash generated by decrypting the cryptographic signature. If the hashes match, it can be confirmed that the enrollment request has not been modified and the unenrolled device is authenticated. As a result, the one or more enrolled devices can determine that the enrollment request is valid.
V. Processing of an Encrypted Attestation Message
FIG.6illustrates a swim lane diagram for a process600for processing of an encrypted attestation message, according to some embodiments. Process600may be executed after the enrolled devices cryptographically validate the enrollment request (e.g., step522ofFIG.5). At step610, each of the enrolled devices606and608displays a prompt requiring the user to confirm whether the unenrolled device602should be enrolled into the user account. At step612, the user selects the enrolled device606. The enrolled device606is selected by the user by responding to the prompt displayed on the enrolled device606(e.g., pressing a “Yes” button of the prompt via the user-interface of the enrolled device606). In some embodiments, only one of the enrolled devices is selected. As a result, each of the remaining enrolled devices can close its respective prompt in response to the selection of the enrolled device606. For example, the unselected enrolled device608can be disengaged and may no longer participate in the device enrollment process. At step614, the selected enrolled device displays a passcode. For example, the passcode can include a set of characters (e.g., a 6-digit numerical passcode), which the user can retrieve and enter into the unenrolled device. At step616, the selected enrolled device606generates an encrypted attestation message. The selected enrolled device606can initially generate an attestation message, which can include information attesting that the unenrolled device has been authenticated by the selected enrolled device606. The selected enrolled device606can also generate a cryptographic signature of the attestation message using a private cryptographic key of the selected enrolled device606. The cryptographic signature of the attestation message can be generated before the attestation message is encrypted. In some instances, the selected enrolled device606encrypts the attestation message using a symmetric cryptographic key derived from the passcode. In some instances, the symmetric cryptographic key is derived using a password-based key derivation function (e.g., PBKDF1, PBKDF2).
The password-based key derivation function can include applying a hash-based message authentication code (HMAC) to the passcode for a predefined number of times, until the symmetric cryptographic key is generated. As such, in addition to the passcode being inputted to enroll the unenrolled device into the user account, the passcode can be used by the selected enrolled device for encrypting attestation messages. The encryption can prevent unauthorized users from enrolling other devices into the user account simply by intercepting the passcode (e.g., by looking over the user's shoulder). In some instances, the encrypted attestation message includes a timestamp identifying a time at which the encrypted attestation message was generated. The timestamp can be used by the device management system to confirm whether the encrypted attestation message was received during a time period in which the encrypted attestation message can be received and processed. In addition, the encrypted attestation message can include the set of long-term cryptographic keys of the unenrolled device. At step618, the selected enrolled device606then transmits the encrypted attestation message to the unenrolled device602. In some instances, the encrypted attestation message is transmitted using a communication path that is only accessible by the selected enrolled device606and the unenrolled device602. As such, other devices, including the device management system604, cannot access the communication path. As a result, unauthorized users accessing the user account cannot manipulate the encrypted attestation message being transmitted to the unenrolled device. The selected enrolled device606can also transmit the corresponding cryptographic signature of the encrypted attestation message. At step620, the unenrolled device602can decrypt the encrypted attestation message. Specifically, the user can input the passcode displayed on the enrolled device into the unenrolled device602via user-interface input. The inputted passcode can be derived into another symmetric cryptographic key. If the passcode is correct, the other symmetric key would correspond to the symmetric cryptographic key used by the selected enrolled device to encrypt the attestation message. The unenrolled device can then use the other symmetric cryptographic key to decrypt the encrypted attestation message. Using symmetric keys derived from the passcode prevents other devices from decrypting the attestation message using incorrect passcodes. At step622, the unenrolled device602can transmit the decrypted attestation message to the device management system604. In some instances, the unenrolled device602also transmits the cryptographic signature of the attestation message. As described herein below, the device management system can add the decrypted attestation message as a new sequential record of the signature chain associated with the user account.
VI. Enrolling Unenrolled Device Using Signature Chain
FIG.7illustrates a process700for enrolling an unenrolled device using signature chains, according to some embodiments. Process700may be executed after the enrolled devices cryptographically validate the enrollment request (e.g., step522ofFIG.5). For illustrative purposes, the process700is described with reference to the components illustrated inFIG.3, though other implementations are possible.
For example, the program code for a device management system302ofFIG.3, which is stored in a non-transitory computer-readable medium, is executed by one or more processing devices to cause a server system to perform one or more operations described herein. At step702, a device management system (e.g., the device management system302ofFIG.3) receives the decrypted attestation message from an unenrolled device. As described herein, the decrypted attestation message is generated by the unenrolled device decrypting the encrypted attestation message based on an input of a passcode shown on a selected enrolled device (e.g., the selected enrolled device606ofFIG.6). The decrypted attestation message can be used to verify that the unenrolled device can be added into the user account. In some instances, the device management system also receives the cryptographic signature of the attestation message, to verify that the message was transmitted by the selected enrolled device. At step704, the device management system determines whether contents of the decrypted attestation message can be cryptographically verified. For example, the device management system can generate a hash of the decrypted attestation message and decrypt the cryptographic signature using the public cryptographic key of the selected enrolled device. The device management system can compare the generated hash against another hash generated by decrypting the cryptographic signature. If the hashes do not match, the device management system exits the device-enrollment process (step706). If the hashes match, the device management system can confirm that the attestation message is authentic and the process continues to step708. At step708, to verify whether the decrypted attestation message was timely received, the device management system can determine an elapsed time between a timestamp of the encrypted attestation message and a time at which the decrypted attestation message was received from the unenrolled device. In some instances, the timestamp identifies a time at which the encrypted attestation message was generated. At step710, the device management system compares the elapsed time against a threshold value. If the elapsed time does not exceed the threshold value, the device management system proceeds with the process700to update the signature chain to include the new sequential record. If the elapsed time exceeds the threshold value, the device management system exits the device-enrollment process by aborting the update of the signature chain to include the new sequential record (step706). At step712, the device management system updates the signature chain to include the new sequential record for the unenrolled device. In some instances, the new sequential record includes the decrypted attestation message and the long-term cryptographic keys of the unenrolled device. Additionally or alternatively, the device management system can output a message that the unenrolled device has been successfully associated with the user account as a newly enrolled device. Process700terminates thereafter. After the new enrollment of the device, the device management system can restrict the newly enrolled device from performing one or more user-account operations for a predetermined period of time.
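The decision flow of steps704through712can be sketched compactly as follows. Ed25519 verification (in place of the hash-comparison framing above), the 120-second threshold, and the record layout are assumptions, and the third-party Python “cryptography” package is assumed to be available.

# Compact sketch of steps 704-712: verify the attestation signature, check
# freshness against a threshold, and only then append the new record.
import json, time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

THRESHOLD_SECONDS = 120   # assumed freshness window

def try_enroll(attestation: dict, signature: bytes,
               attester_public_key: Ed25519PublicKey,
               signature_chain: list) -> str:
    payload = json.dumps(attestation, sort_keys=True).encode()
    try:
        attester_public_key.verify(signature, payload)        # step 704
    except InvalidSignature:
        return "abort: attestation signature did not verify"  # step 706
    elapsed = time.time() - attestation["timestamp"]          # step 708
    if elapsed > THRESHOLD_SECONDS:                           # step 710
        return "abort: attestation message is stale"          # step 706
    signature_chain.append({                                  # step 712
        "linkType": "DeviceAdd",
        "attestation": attestation,
        "longTermKeys": attestation.get("longTermKeys", []),
    })
    return "device enrolled"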
Restricting the newly enrolled device in this way allows the device management system to provide an additional layer of security by preventing unauthorized users from compromising the user account, even if an unauthorized unenrolled device was somehow successfully enrolled (e.g., the user account had been hacked and the enrolled device had been obtained by the same malicious attacker). In some instances, the restricted user-account operations include enrolling additional devices and/or removing enrolled devices from the signature chain. By contrast, the enrolled devices can remove the newly enrolled device from the user account, to allow the user to maintain control of the user account in the event of a malicious attack.
VII. Additional Considerations
FIG.8shows an example computing device800suitable for use in example systems or methods for encryption-based device enrollment, according to some embodiments. The example computing device800includes a processor810which is in communication with the memory820and other components of the computing device800using one or more communications buses802. The processor810is configured to execute processor-executable instructions stored in the memory820to perform one or more methods for encryption-based device enrollment according to different examples, such as part or all of the example methods described above with respect toFIGS.4,5,6, and7. The computing device800, in this example, also includes one or more user input devices850, such as a keyboard, mouse, touchscreen, microphone, etc., to accept user input. The computing device800also includes a display840to provide visual output to a user. In addition, the computing device800includes video conference software860to enable a user to join and participate in a video conference, such as a conventional meeting or webinar, by receiving multimedia streams from a video conference provider, sending multimedia streams to the video conference provider, joining and leaving breakout rooms, such as described throughout this disclosure, etc. The computing device800also includes a communications interface830. In some examples, the communications interface830may enable communications using one or more networks, including a local area network (“LAN”); wide area network (“WAN”), such as the Internet; metropolitan area network (“MAN”); point-to-point or peer-to-peer connection; etc. Communication with other devices may be accomplished using any suitable networking protocol. For example, one suitable networking protocol may include the Internet Protocol (“IP”), Transmission Control Protocol (“TCP”), User Datagram Protocol (“UDP”), or combinations thereof, such as TCP/IP or UDP/IP. While some examples of methods and systems herein are described in terms of software executing on various machines, the methods and systems may also be implemented as specifically-configured hardware, such as a field-programmable gate array (FPGA) configured specifically to execute the various methods according to this disclosure. For example, examples can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in a combination thereof. In one example, a device may include a processor or processors. The processor comprises a computer-readable medium, such as a random access memory (RAM) coupled to the processor. The processor executes computer-executable program instructions stored in memory, such as executing one or more computer programs.
Such processors may comprise a microprocessor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), field programmable gate arrays (FPGAs), and state machines. Such processors may further comprise programmable electronic devices such as PLCs, programmable interrupt controllers (PICs), programmable logic devices (PLDs), programmable read-only memories (PROMs), electronically programmable read-only memories (EPROMs or EEPROMs), or other similar devices. Such processors may comprise, or may be in communication with, media, for example one or more non-transitory computer-readable media, that may store processor-executable instructions that, when executed by the processor, can cause the processor to perform methods according to this disclosure as carried out, or assisted, by a processor. Examples of non-transitory computer-readable medium may include, but are not limited to, an electronic, optical, magnetic, or other storage device capable of providing a processor, such as the processor in a web server, with processor-executable instructions. Other examples of non-transitory computer-readable media include, but are not limited to, a floppy disk, CD-ROM, magnetic disk, memory chip, ROM, RAM, ASIC, configured processor, all optical media, all magnetic tape or other magnetic media, or any other medium from which a computer processor can read. The processor, and the processing, described may be in one or more structures, and may be dispersed through one or more structures. The processor may comprise code to carry out methods (or parts of methods) according to this disclosure. The foregoing description of some examples has been presented only for the purpose of illustration and description and is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Numerous modifications and adaptations thereof will be apparent to those skilled in the art without departing from the spirit and scope of the disclosure. Reference herein to an example or implementation means that a particular feature, structure, operation, or other characteristic described in connection with the example may be included in at least one implementation of the disclosure. The disclosure is not restricted to the particular examples or implementations described as such. The appearance of the phrases “in one example,” “in an example,” “in one implementation,” or “in an implementation,” or variations of the same in various places in the specification does not necessarily refer to the same example or implementation. Any particular feature, structure, operation, or other characteristic described in this specification in relation to one example or implementation may be combined with other features, structures, operations, or other characteristics described in respect of any other example or implementation. Use herein of the word “or” is intended to cover inclusive and exclusive OR conditions. In other words, A or B or C includes any or all of the following alternative combinations as appropriate for a particular usage: A alone; B alone; C alone; A and B only; A and C only; B and C only; and A and B and C.